Column schema (name | dtype | min value or min length | max value or max length):
Q_Id | int64 | 2.93k | 49.7M
CreationDate | stringlengths | 23 | 23
Users Score | int64 | -10 | 437
Other | int64 | 0 | 1
Python Basics and Environment | int64 | 0 | 1
System Administration and DevOps | int64 | 0 | 1
DISCREPANCY | int64 | 0 | 1
Tags | stringlengths | 6 | 90
ERRORS | int64 | 0 | 1
A_Id | int64 | 2.98k | 72.5M
API_CHANGE | int64 | 0 | 1
AnswerCount | int64 | 1 | 42
REVIEW | int64 | 0 | 1
is_accepted | bool | 2 classes
Web Development | int64 | 0 | 1
GUI and Desktop Applications | int64 | 0 | 1
Answer | stringlengths | 15 | 5.1k
Available Count | int64 | 1 | 17
Q_Score | int64 | 0 | 3.67k
Data Science and Machine Learning | int64 | 0 | 1
DOCUMENTATION | int64 | 0 | 1
Question | stringlengths | 25 | 6.53k
Title | stringlengths | 11 | 148
CONCEPTUAL | int64 | 0 | 1
Score | float64 | -1 | 1.2
API_USAGE | int64 | 1 | 1
Database and SQL | int64 | 0 | 1
Networking and APIs | int64 | 0 | 1
ViewCount | int64 | 15 | 3.72M

Records:
Q_Id: 49,074,360 | CreationDate: 2018-03-02T17:30:00.000 | Users Score: 1 | Tags: python,ubuntu,server,mod-wsgi
Other: 1 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 1 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,112,016 | AnswerCount: 1 | is_accepted: true
Answer:
Eventually Figured it out.The problem was that I had 2 versions of Python 2.7 installed in my server(2.7.12 and 2.7.13) so the definitions of one were conflicting with the other.Solved it when I completely removed Python 2.7.13 from the server.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I'm trying to install the mod_wsgi module in apache in my Ubuntu server but I need it to be specifically for version 2.7.13 in Python.For whatever reason every time I run sudo apt-get install libapache2-mod-wsgi it installs the mod_wsgi module for Python 2.7.12.I'm doing all of this because I'm running into a weird python version issue.When I run one of my python Scripts in my server terminal it works perfectly with version 2.7.13.In Apache however the script doesn't work.I managed to figure out that my Apache is running version 2.7.12 and I think this is the issue.Still can't figure out how to change that apache python version yet though.
Title: How to install mod_wsgi to specific python version in Ubuntu?
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 623

Q_Id: 49,083,639 | CreationDate: 2018-03-03T11:28:00.000 | Users Score: 0 | Tags: python,python-2.7,kivy
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 1
A_Id: 49,100,661 | AnswerCount: 1 | is_accepted: true
Answer:
Loading all your images in memory will be a problem when you have a lot of images in the folder, but you could have a hidden image with the next image as source (it's not even needed to add the Image to the widget tree, you could just keep it in an attribute of your app), so everytime the user load the next image, it's displayed instantly, since it's cached already, and while the user is looking at this image, the second image widget, which stays invisible, would start loading the next image. Of course, if you want to load more than 1 image, you'll have to do something more clever, you could have a list of Image widgets in memory, and always replace the currently displayed source with the next in line for pre-fetching).
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
This may be a basic question, but I'm still learning Kivy and I'm not sure how to do this. The program that I'm writing with Python 2.7 and Kivy reads a folder full of images, and then will display them one at a time as the user clicks through. Right now, I'm calling a function that reads the next image on the click of a button. This means that I have a bit of lag between each image. I'd like to load all the images in the beginning, or at least some of them, so that there isn't a lag as I click through the images. I'm not sure if this is done on the Python side or the Kivy side, but I appreciate any help!
Title: How can I pre-load or cache images with Python 2.7 and Kivy
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 485

Q_Id: 49,095,353 | CreationDate: 2018-03-04T12:23:00.000 | Users Score: 2 | Tags: python,nfc
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,120,363 | AnswerCount: 1 | is_accepted: false
Answer:
Nfcpy only supports the standardized NFC Forum Type 1, 2, 3, and 4 Tags. Mifare 1K Classic uses a proprietary communication format and requires reader hardware with NXP Crypto-1 support.
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I bought the card reader ACR122U and try to read mifare 1k classic cards with nfcpy. So my question is, how can i read or write on a mifare 1k classic card using nfcpy?
Title: How to use nfcpy for MiFare 1k classic
CONCEPTUAL: 0 | Score: 0.379949 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 526

Q_Id: 49,099,308 | CreationDate: 2018-03-04T19:10:00.000 | Users Score: 1 | Tags: python,django,django-models,django-templates,django-views
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 49,099,402 | AnswerCount: 1 | is_accepted: false
Answer:
You should use some queuing mechanism like worker and consumer to avoid this problem. For example Celery. Steps to do for sending email: 1. Add email and info to the queue referred as task 2. Consume the queue. (It runs in the different process may be parallel too) You can also use Channels newly added in Django family of apps. This will provide you an asynchronous way to handle email/any other deferred task.
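A minimal sketch of the queue-and-consume pattern the answer describes, using Celery's shared_task and Django's send_mail; the task name, addresses, and the gspread step are illustrative assumptions, not code from the question.

```python
# tasks.py -- runs in a Celery worker process, so the view can return immediately.
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_and_log(recipient):
    send_mail(
        subject="Hello",                      # illustrative content
        message="Thanks for signing up.",
        from_email="noreply@example.com",
        recipient_list=[recipient],
    )
    # ... append the address to the Google Sheet with gspread here ...

# views.py -- enqueue instead of doing the slow work before rendering:
# send_and_log.delay(form.cleaned_data["email"])
```

The template renders right away; the email and the spreadsheet write happen later in the worker.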
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I am using form.py and the user is typing some Email-id, let us say I want to send an email to that particular email and write all that email into google sheet using gspread, I am able to do this in my views.py, but the problem is it's taking a lot of time to write which slow down the rendering process. Is there any other way I can use my logic after rendering the template.
Title: how to call some logic in my views after rendering template in django
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 67

Q_Id: 49,106,413 | CreationDate: 2018-03-05T08:42:00.000 | Users Score: 2 | Tags: python,plotly-dash
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,108,001 | AnswerCount: 1 | is_accepted: false
Answer:
I have similar experience. A lot said python is more readable, while I agree, however, I don't find it as on par with R or Shiny in their respective fields yet.
Available Count: 1 | Q_Score: 3 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
I have used Shiny for R and specifically the Shinydashboard package to build easily navigatable dashboards in the past year or so. I have recently started using the Python, pandas, etc ecosystem for doing data analysis. I now want to build a dashboard with a number of inputs and outputs. I can get the functionality up running using Dash, but defining the layout and look of the app is really time consuming compared to using the default layout from the shinydashboard's package in R. The convenience that Shiny and Shinydashboard provides is: Easy layout of components because it is based on Bootstrap A quite nice looking layout where skinning is build in. A rich set of input components where the label/title of the input is bundled together with the input. My question is now this: Are there any extensions to Dash which provides the above functionality, or alternatively some good examples showing how to do the above?
Title: Building a dashboard in Dash
CONCEPTUAL: 0 | Score: 0.379949 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 836

Q_Id: 49,108,596 | CreationDate: 2018-03-05T10:43:00.000 | Users Score: 1 | Tags: python,python-3.x,python-2.7,pandas
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 1 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,135,392 | AnswerCount: 2 | is_accepted: true
Answer:
Not exactly a solution but more of a workaround. I simply read the files in their corresponding Python versions and saved them as a CSV file, which can then be read any version of Python.
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
I wrote a dataframe in Python 2.7 but now I need to open it in Python 3.6, and vice versa (I want to compare two dataframes written in both versions). If I open a Python2.7-generated HDF file using pandas in Python 3.6, this is the error produced: UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 1: ordinal not in range(128) If I open a Python3.6-generated HDF file using pandas in Python 2.7, this is the error: ValueError: unsupported pickle protocol: 4 For both cases I simply saved the file by df.to_hdf. Does anybody have a clue how to go about this?
Title: How do I read/convert an HDF file containing a pandas dataframe written in Python 2.7 in Python 3.6?
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 348

Q_Id: 49,112,945 | CreationDate: 2018-03-05T14:41:00.000 | Users Score: 0 | Tags: python,django,apache,server,mod-wsgi
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 49,113,295 | AnswerCount: 2 | is_accepted: false
Answer:
I'm not familiar with Linode restrictions, but if you have control over your Apache files then you could certainly do it with name-based virtual hosting. Set up two VirtualHost containers with the same IP address and port (and this assumes that both www.example.com and django2.example.com resolve to that IP address) and then differentiate requests using the ServerName setting in the container. In Apache 2.4 name-based virtual hosting is automatic. In Apache 2.2 you need the NameVirtualHost directive.
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
Is it possible to set two different django projects on the same IP address/server (Linode in this case)? For exmaple, django1_project running on www.example.com and django2_project on django2.example.com. This is preferable, but if this is not possible then how to make two djangos, i.e. one running on www.example.com/django1 and the second on www.example.com/django2? Do I need to adapt the settings.py, wsgi.py files or apache files (at /etc/apache2/sites-available) or something else? Thank you in advance for your help!
Title: Two django project on the same ip address (server)
CONCEPTUAL: 1 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 591

Q_Id: 49,112,945 | CreationDate: 2018-03-05T14:41:00.000 | Users Score: 2 | Tags: python,django,apache,server,mod-wsgi
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 49,113,277 | AnswerCount: 2 | is_accepted: false
Answer:
Yes that's possible to host several Python powered sites with Apache + mod_wsgi from one host / Apache instance. The only constraint : all apps / sites must be powered by the same Python version, though each app may have (or not) its own virtualenv (which is strongly recommended). It is also recommended to use mod_wsgi daemon mode and have each Django site run in separate daemon process group.
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
Is it possible to set two different django projects on the same IP address/server (Linode in this case)? For exmaple, django1_project running on www.example.com and django2_project on django2.example.com. This is preferable, but if this is not possible then how to make two djangos, i.e. one running on www.example.com/django1 and the second on www.example.com/django2? Do I need to adapt the settings.py, wsgi.py files or apache files (at /etc/apache2/sites-available) or something else? Thank you in advance for your help!
Title: Two django project on the same ip address (server)
CONCEPTUAL: 1 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 591

Q_Id: 49,119,793 | CreationDate: 2018-03-05T21:29:00.000 | Users Score: 0 | Tags: python,python-3.x,python-asyncio,coroutine
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,226,915 | AnswerCount: 2 | is_accepted: false
Answer:
asyncio use a loop to run everything, await would yield back the control to the loop so it can arrange the next coroutine to run.
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I was wondering how concurrency works in python 3.6 with asyncio. My understanding is that when the interpreter executing await statement, it will leave it there until the awaiting process is complete and then move on to execute the other coroutine task. But what I see here in the code below is not like that. The program runs synchronously, executing task one by one. What is wrong with my understanding and my impletementation code? import asyncio import time async def myWorker(lock, i): print("Attempting to attain lock {}".format(i)) # acquire lock with await lock: # run critical section of code print("Currently Locked") time.sleep(10) # our worker releases lock at this point print("Unlocked Critical Section") async def main(): # instantiate our lock lock = asyncio.Lock() # await the execution of 2 myWorker coroutines # each with our same lock instance passed in # await asyncio.wait([myWorker(lock), myWorker(lock)]) tasks = [] for i in range(0, 100): tasks.append(asyncio.ensure_future(myWorker(lock, i))) await asyncio.wait(tasks) # Start up a simple loop and run our main function # until it is complete loop = asyncio.get_event_loop() loop.run_until_complete(main()) print("All Tasks Completed") loop.close()
Title: Understanding Python Concurrency with Asyncio
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 448

Q_Id: 49,119,793 | CreationDate: 2018-03-05T21:29:00.000 | Users Score: 2 | Tags: python,python-3.x,python-asyncio,coroutine
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,119,860 | AnswerCount: 2 | is_accepted: true
Answer:
Invoking a blocking call such as time.sleep in an asyncio coroutine blocks the whole event loop, defeating the purpose of using asyncio. Change time.sleep(10) to await asyncio.sleep(10), and the code will behave like you expect.
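A sketch of the question's worker with the suggested fix applied (await asyncio.sleep instead of time.sleep); it also uses async with to hold the lock, which is a small modernisation of the question's "with await lock" line rather than part of the answer.

```python
import asyncio

async def my_worker(lock, i):
    print("Attempting to attain lock {}".format(i))
    async with lock:
        print("Currently Locked")
        await asyncio.sleep(10)   # yields to the event loop instead of blocking it
        print("Unlocked Critical Section")

async def main():
    lock = asyncio.Lock()
    tasks = [asyncio.ensure_future(my_worker(lock, i)) for i in range(100)]
    await asyncio.wait(tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
print("All Tasks Completed")
loop.close()
```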
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I was wondering how concurrency works in python 3.6 with asyncio. My understanding is that when the interpreter executing await statement, it will leave it there until the awaiting process is complete and then move on to execute the other coroutine task. But what I see here in the code below is not like that. The program runs synchronously, executing task one by one. What is wrong with my understanding and my impletementation code? import asyncio import time async def myWorker(lock, i): print("Attempting to attain lock {}".format(i)) # acquire lock with await lock: # run critical section of code print("Currently Locked") time.sleep(10) # our worker releases lock at this point print("Unlocked Critical Section") async def main(): # instantiate our lock lock = asyncio.Lock() # await the execution of 2 myWorker coroutines # each with our same lock instance passed in # await asyncio.wait([myWorker(lock), myWorker(lock)]) tasks = [] for i in range(0, 100): tasks.append(asyncio.ensure_future(myWorker(lock, i))) await asyncio.wait(tasks) # Start up a simple loop and run our main function # until it is complete loop = asyncio.get_event_loop() loop.run_until_complete(main()) print("All Tasks Completed") loop.close()
Title: Understanding Python Concurrency with Asyncio
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 448

Q_Id: 49,129,451 | CreationDate: 2018-03-06T11:12:00.000 | Users Score: 1 | Tags: python,websocket,localhost,ngrok,serve
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 52,701,751 | AnswerCount: 1 | is_accepted: false
Answer:
You can use ngrok http 8000 to access it. It will work. Although, ws is altogether a different protocol than http but ngrok handles it internally.
Available Count: 1 | Q_Score: 2 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I want to share my local WebSocket on the internet but ngrok only support HTTP but my ws.py address is ws://localhost:8000/ it is good working on localhost buy is not know how to use this on the internet?
Title: how to use ws(websocket) via ngrok
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 1 | ViewCount: 2,555

Q_Id: 49,158,613 | CreationDate: 2018-03-07T18:09:00.000 | Users Score: 0 | Tags: python
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,158,696 | AnswerCount: 2 | is_accepted: false
Answer:
There's no conflict with import & write. Once the import is done, you have all the needed information held locally. You can overwrite the file without disturbing the values you hold in your run-time space.
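A small sketch of the read-then-overwrite flow the answer describes, assuming file2.py holds simple integer assignments; the variable name counter is made up for illustration.

```python
# file1.py
import file2                          # after this, the values live in memory

counter = file2.counter + 1           # increment the imported value

# Overwriting file2.py does not disturb the values already imported above.
with open("file2.py", "w") as f:
    f.write("counter = {}\n".format(counter))
```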
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
file2.py is just variables I want file1.py to import those variables (import file2) increment them, truncate file2.py and rewrite it with the newly incremented variables I know how to increment them I'm just not sure how I would rewrite a python file with another python file while that file is also being imported... Thanks!
Title: How do I rewrite python file with a python script
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 777

Q_Id: 49,171,782 | CreationDate: 2018-03-08T11:15:00.000 | Users Score: 4 | Tags: python,oracle,ubuntu,32bit-64bit,cx-oracle
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0 | ERRORS: 1 | API_CHANGE: 1 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,172,856 | AnswerCount: 1 | is_accepted: true
Answer:
I am a bit confused about your question but this should give some clarification: A 32-bit client can connect to a 64-bit Oracle database server - and vice versa You can install and run 32-bit applications on a 64-bit machine - this is at least valid for Windows, I don't know how it works on Linux. Your application (the python in your case) must have the same "bitness" as installed Oracle Client.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I am trying to set up a cronjob that executes a python (3.6) script every day at a given time that connects to an oracle 12g database with a 32 bit client (utilizing the cx_Oracle and sqlalchemy libs). The code itself was developed on a win64 bit machine. However, when trying to deploy the script onto an Ubuntu 16.04 server, I run into a dilemma when it comes to 32 vs 64 bit architectures. The server is based on a 64 bit architecture The oracle db is accessible via a 32 bit client my current python version on ubuntu is based on 64 bit and I spent about an hour of how to get a 32 bit version running on a 64 bit linux machine without much success. The error I receive at this moment when trying to run the python script refers to the absence of an oracle client (DPI-1047). However, I already encountered a similar problem in windows when it was necessary to switch the python version to the 32 bit version and to install a 32 bit oracle client. Is this also necessary in the ubuntu case or are there similar measurements needed to be taken? and if so, how do I get ubuntu to install and run python3.6 in 32 bit as well as the oracle client in 32 bit?
Title: Running a Python Script in 32 Bit on 64 linux machine to connect to oracle DB with 32 bit client
CONCEPTUAL: 1 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 1 | Networking and APIs: 0 | ViewCount: 698

Q_Id: 49,177,246 | CreationDate: 2018-03-08T15:53:00.000 | Users Score: 0 | Tags: python,google-sheets,airflow
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,181,497 | AnswerCount: 1 | is_accepted: true
Answer:
As far as I know there is no gsheet hook or operator in airflow at the moment. If security is not a concern you could publish it to the web and pull it in airflow using the SimpleHttpOperator. If security is a concern I recommend going the PythonOperator route and use df2gspread library. Airflow version >= 1.9 can help obtaining credentials for df2gspread
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 1
Question:
I'm new to Airflow and Python. I'm trying to connect Airflow with Google Sheets and although I have no problem connecting with Python, I do not know how I could do it from Airflow. I have searched for information everywhere but I only find Python information with gspread or with BigQuery, but not with Google Sheets. I would appreciate any advice or link.
Title: Airflow and Google Sheets
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 2,341

Q_Id: 49,185,114 | CreationDate: 2018-03-09T01:14:00.000 | Users Score: 1 | Tags: pythonanywhere
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 49,196,785 | AnswerCount: 1 | is_accepted: false
Answer:
You need to actually serve the files. On your local machine, Django is serving static files for you. On PythonAnywhere, it is not. There is extensive documentation on the PythonAnywhere help pages to get you started with configuring static files.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I am using Django1.8 and I need help. how to display images and files on pythonanywhere by using model filefield and imagefield. on my development server everything is ok.but during de production I have donne everything these two field.the parodox is bootstrap is well integread. my project is on githb: Geyd/eces_edu.git help me !!!
Title: how to diplay file field and image field on pythonaywhere
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 39

Q_Id: 49,188,928 | CreationDate: 2018-03-09T07:46:00.000 | Users Score: 0 | Tags: python-3.x,neural-network,keras,multiclass-classification,activation-function
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,190,269 | AnswerCount: 1 | is_accepted: false
Answer:
First of all you simply should'nt use them in your output layer. Depending on your loss function you may even get an error. A loss function like mse should be able to take the ouput of tanh, but it won't make much sense. But if were talking about hidden layers you're perfectly fine. Also keep in mind, that there are biases which can train an offset to the layer before giving the ouput of the layer to the activation function.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
Tanh activation functions bounds the output to [-1,1]. I wonder how does it work, if the input (features & Target Class) is given in 1-hot-Encoded form ? How keras (is managing internally) the negative output of activation function to compare them with the class labels (which are in one-hot-encoded form) -- means only 0's and 1's (no "-"ive values) Thanks!
Title: Keras "Tanh Activation" function -- edit: hidden layers
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 641

Q_Id: 49,195,008 | CreationDate: 2018-03-09T13:34:00.000 | Users Score: 1 | Tags: python,python-3.x,algorithm,machine-learning
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,195,249 | AnswerCount: 1 | is_accepted: false
Answer:
The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) You can store in whatever format you like. Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to given that one new piece of data is added every time the algorithm is used?). This depends very much on what algorithm you use. Some algorithms can easily be implemented to learn in an incremental manner. For example, Linear/Logistic Regression implemented with Stochastic Gradient Descent could easily just run a quick update on every new instance as it gets added. For other algorithms, full re-trains are the only option (though you could of course elect not to always do them over and over again for every new instance; you could, for example, simply re-train once per day at a set point in time).
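A sketch of the incremental-update option mentioned in the answer, using scikit-learn's SGDClassifier as one concrete choice (the question names no library); the data here is random filler.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log")   # logistic regression via SGD ("log_loss" in newer scikit-learn)
classes = np.array([0, 1])

X_hist = np.random.rand(100, 4)            # the data you already have stored
y_hist = np.random.randint(0, 2, 100)
clf.partial_fit(X_hist, y_hist, classes=classes)

x_new = np.random.rand(1, 4)               # one new row added by a user
y_new = np.array([1])
clf.partial_fit(x_new, y_new)              # quick update, no full re-train
```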
Available Count: 1 | Q_Score: 3 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
This may be a stupid question, but I am new to ML and can't seem to find a clear answer. I have implemented a ML algorithm on a Python web app. Right now I am storing the data that the algorithm uses in an offline CSV file, and every time the algorithm is run, it analyzes all of the data (one new piece of data gets added each time the algorithm is used). Apologies if I am being too vague, but I am wondering how one should generally go about implementing the data and algorithm properly so that: The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to given that one new piece of data is added every time the algorithm is used?).
Title: Preprocessing machine learning data
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 92

Q_Id: 49,198,057 | CreationDate: 2018-03-09T16:24:00.000 | Users Score: 2 | Tags: python,amazon-web-services,amazon-route53,health-monitoring
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 49,201,020 | AnswerCount: 1 | is_accepted: true
Answer:
Make up a filename. Let's say healthy.txt. Put that file on your web server, in the HTML root. It doesn't really matter what's in the file. Verify that if you go to your site and try to download it using a web browser, it works. Configure the Route 53 health check as HTTP and set the Path for the check to use /healthy.txt. To make your server "unhealthy," just delete the file. The Route 53 health checker will get a 404 error -- unhealthy. To make the server "healthy" again, just re-create the file.
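A minimal sketch of the file-based switch the answer describes, to be run after each scrape; the path and the threshold test are placeholders for your own web root and logic.

```python
import os

HEALTH_FILE = "/var/www/html/healthy.txt"   # assumed web server document root

def set_health(healthy):
    """Create or delete the file Route 53 probes at /healthy.txt."""
    if healthy:
        with open(HEALTH_FILE, "w") as f:
            f.write("ok\n")                  # health check sees HTTP 200
    elif os.path.exists(HEALTH_FILE):
        os.remove(HEALTH_FILE)               # health check sees HTTP 404

scraped_value = 0.42                         # placeholder for the scraper's result
set_health(scraped_value >= 0.5)
```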
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have a query as to whether what I want to achieve is doable, and if so, perhaps someone could give me some advice on how to achieve this. So I have set up a health check on Route 53 for my server, and I have arranged so that if the health check fails, the user will be redirected to a static website I have set up at a backup site. I also have a web scraper running regularly collecting data, and my question is, would their be a way to use the data I have collected, and depending on its value, either pass or fail the heath check, therefore determining what site the user would be diverted to. I have discussed with AWS support and they have said that their policies and conditions are there by design, and long story short would not support what I am trying to achieve. I'm a pretty novice programmer so I'm not sure if it's possible to work this, but this is my final hurdle so any advice or help would be hugely appreciated. Thanks!
Title: Intentionally Fail Health Check using Route 53 AWS
CONCEPTUAL: 1 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 1 | ViewCount: 42

Q_Id: 49,199,787 | CreationDate: 2018-03-09T18:14:00.000 | Users Score: -1 | Tags: python,libgdx,blender,index-error
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 1 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 58,185,138 | AnswerCount: 1 | is_accepted: false
Answer:
Go into Object Mode before calling that function. bpy.ops.object.mode_set(mode='OBJECT', toggle=False)
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I try write a game with GoranM/bdx plugin. When i create plate with texture and try export to code I get fatal error. Traceback (most recent call last): File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\ops\exprun.py", line 225, in execute export(self, context, bpy.context.scene.bdx.multi_blend_export, bpy.context.scene.bdx.diff_export) File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\ops\exprun.py", line 123, in export bpy.ops.export_scene.bdx(filepath=file_path, scene_name=scene.name, exprun=True) File "C:\Program Files\Blender Foundation\Blender\2.79\scripts\modules\bpy\ops.py", line 189, in call ret = op_call(self.idname_py(), None, kw) RuntimeError: Error: Traceback (most recent call last): File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 903, in execute return export(context, self.filepath, self.scene_name, self.exprun, self.apply_modifier) File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 829, in export "models": srl_models(objects, apply_modifier), File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 117, in srl_models verts = vertices(mesh) File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 53, in vertices vert_uv = list(uv_layer[li].uv) IndexError: bpy_prop_collection[index]: index 0 out of range, size 0 location: C:\Program Files\Blender Foundation\Blender\2.79\scripts\modules\bpy\ops.py:189 location: :-1 Maybe someone had same problem and you know how to fix it?
Title: Blender IndexError: bpy_prop_collection
CONCEPTUAL: 0 | Score: -0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,262

Q_Id: 49,201,628 | CreationDate: 2018-03-09T20:25:00.000 | Users Score: 0 | Tags: docker,python-import,docker-volume
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,201,697 | AnswerCount: 1 | is_accepted: false
Answer:
It looks like there is an issue with your volume mapping. The volume mapping syntax is of format "-v {local volume}:{directory inside container} So you would have to create that particular directory in your image before mapping it.
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have a docker volume that I created by running volume create my_volume and have been running my docker image with the command docker run -v my_volume:/Volumes/docker-volume/ my_image. Within the docker-volume directory I have a python file that I would like to import, but I can't figure out how to do so. Everything I've tried results in a ModuleNotFound error. It feels like there's some fundamental issue, perhaps relating to how a docker image interacts with a volume, that I'm missing. Any help would be greatly appreciated!
Title: import python file from docker volume
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 700

Q_Id: 49,203,197 | CreationDate: 2018-03-09T22:34:00.000 | Users Score: 0 | Tags: python,macos,32-bit,canopy
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,203,609 | AnswerCount: 1 | is_accepted: false
Answer:
Sorry, Canopy on Mac has not provided 32-bit Python since January 2015. If you've got a really old (32-bit) version of OSX, then you're out of luck. Otherwise (you've got a recent OSX but just want to run 32-bit Python for some reason (WHY?) then... I'm not clear from your question whether you have a 32-bit / IPython available. If so, then from a terminal where that is your default Python, you can start a 32-bit kernel with ipython kernel. If that version of Ipython is not too old (sorry, not sure what exactly that means), then you should then be able to connect to that kernel from Canopy's Run menu ("Connect to existing kernel"). Not super convenient, as you'd need to redo both steps every time you wanted to do this.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
ive been having trouble using a 32 bit python for canopy on mac. I dont know how to import a external version of python. Ive tried sites, but they all are from 2013-14. They just say to download the v1 with 32 bit python. I want any version of python that is 32 bit to work with canopy, I hope someone knows how, thanks.
Title: Canopy python 32 bit Mac os x
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 38

Q_Id: 49,203,567 | CreationDate: 2018-03-09T23:15:00.000 | Users Score: 0 | Tags: python,windows,python-idle
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 1 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 56,514,353 | AnswerCount: 2 | is_accepted: false
Answer:
I work with over 30 Python developers and without fail when this happens they were behind a proxy / vpn. Turn off your proxy / vpn and it will work. Must have had this happen hundreds of times and this solution always worked.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I am new to Python and recently installed Python 3.6 on Windows 10. When I try to open IDLE, Python's IDE, I keep getting a message saying that it can not establish a subprocess. I have tried uninstalling and installing several times. I have seen several forums which say that there could be a .py file that is in the directory that is messing up IDLE. This is not my case, as I have not even been able to start using Python and I do not have a firewall either. Can someone tell me how I can get IDLE to work?
Title: Windows 10: IDLE can't establish a subprocess
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,726

Q_Id: 49,206,319 | CreationDate: 2018-03-10T07:01:00.000 | Users Score: 2 | Tags: python,django
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 49,206,357 | AnswerCount: 2 | is_accepted: true
Answer:
The simplest option would be a view function (i.e. a function linked to a URL that receives a GET or POST request) in your app which does the scraping and immediately returns the results by rendering a template. For example you could have a starting page with a form and when that form is submitted that will create a POST request which will contain details that the view can use to decide which page to scrape and so on. This doesn't require Javascript or database models. If you're not comfortable with Django yet, consider starting with Flask instead as it's simpler to get going.
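A bare-bones sketch of the single-view approach from the answer; run_scraper, the form field, and the template name are placeholders standing in for the asker's existing Selenium code.

```python
# views.py
from django.shortcuts import render

def run_scraper(city):
    # Placeholder: call the already-written scraping script here and
    # return something like [{"price": ..., "link": ...}, ...].
    return []

def rooms_view(request):
    rooms = []
    if request.method == "POST":                 # the page's single button posts here
        rooms = run_scraper(request.POST.get("city", ""))
    return render(request, "rooms.html", {"rooms": rooms})
```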
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
As an exercise, I came up with an idea of the following Django project: a web app with literally one button to scrape room data from Airbnb and one text area to display the retrieved data in a sorted manner. Preferably, for scraping I would like to use Selenium, as there is no API for this page. So the button would somehow need to launch the browser automation. So question number one is: is it possible to launch selenium from a web app? Furthermore, I already have the working script for collecting the data, however I dont't know how to fit it in a Django project: models, views, separate script? My initial idea was to launch the scraping script on button click, then dump retrieved room-related data to database (updating model's Room attributes like "price" and "link" for example) and display the data back in the text area mentioned before. So question two is: is it possbile to launch Python script in a web app on button click, for example by nesting in a Django template? Or would other technologies be required, such as Javascript? I know my question is general, but I am also looking for general advice, not a ready code sample. I am also open to other approach if what I just wrote doesn't make any sense.
Title: How to scrape data from inside Django app
CONCEPTUAL: 1 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 740

Q_Id: 49,206,319 | CreationDate: 2018-03-10T07:01:00.000 | Users Score: 2 | Tags: python,django
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 50,174,783 | AnswerCount: 2 | is_accepted: false
Answer:
Django follows MVT i.e Model (part where you write things related to the database ) , View (the logic analogous to what we did in controller - ref. Java) , Template(things that you'll actually see) . As suggested by Alex you can have some inputs collected on your home page and using that data to scrape desired pages. Coming to your next question, yes you can launch the script on button click and basic working knowledge of JS would do good. This is like a very general answer synonymous to how general the question is so please feel free to get more specific requests if needed.
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
As an exercise, I came up with an idea of the following Django project: a web app with literally one button to scrape room data from Airbnb and one text area to display the retrieved data in a sorted manner. Preferably, for scraping I would like to use Selenium, as there is no API for this page. So the button would somehow need to launch the browser automation. So question number one is: is it possible to launch selenium from a web app? Furthermore, I already have the working script for collecting the data, however I dont't know how to fit it in a Django project: models, views, separate script? My initial idea was to launch the scraping script on button click, then dump retrieved room-related data to database (updating model's Room attributes like "price" and "link" for example) and display the data back in the text area mentioned before. So question two is: is it possbile to launch Python script in a web app on button click, for example by nesting in a Django template? Or would other technologies be required, such as Javascript? I know my question is general, but I am also looking for general advice, not a ready code sample. I am also open to other approach if what I just wrote doesn't make any sense.
Title: How to scrape data from inside Django app
CONCEPTUAL: 1 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 740

Q_Id: 49,214,989 | CreationDate: 2018-03-10T23:14:00.000 | Users Score: 8 | Tags: python,macos,io
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,215,074 | AnswerCount: 2 | is_accepted: false
Answer:
External drives can be found under /Volumes on macOS. If you provide the full path and have read access you should be able to read in your csv.
Available Count: 1 | Q_Score: 6 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have a Python file in /Users/homedir/... and I want it to access a csv file on an external hard drive. Does anyone know how to do this? I only need reading permission.
Title: Access file in external hard drive using python on mac
CONCEPTUAL: 0 | Score: 1 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 7,016

Q_Id: 49,227,490 | CreationDate: 2018-03-12T02:48:00.000 | Users Score: 2 | Tags: python,python-3.x,pandas,neural-network,decision-tree
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,227,672 | AnswerCount: 1 | is_accepted: false
Answer:
Yes, in my opinion, encoding yes/no to 1/0 would be the right approach for you. Python's sklearn requires features in numerical arrays. There are various ways of encoding : Label Encoder; One Hot Encoder. etc However, since your variable only has 2 levels of categories, it wouldnt make much difference if you go for LabelEncoder or OneHotEncoder.
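A small sketch of the yes/no encoding the answer endorses, shown both with a plain pandas mapping and with LabelEncoder; the column name is invented.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"smoker": ["yes", "no", "no", "yes"]})

df["smoker_manual"] = df["smoker"].map({"no": 0, "yes": 1})     # you choose the coding
df["smoker_le"] = LabelEncoder().fit_transform(df["smoker"])    # "no"->0, "yes"->1 (alphabetical)

print(df)
```

For a two-level category both routes carry the same information, which matches the answer's point that the choice of encoder makes little difference here.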
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
My dataset has few features with yes/no (categorical data). Few of the machine learning algorithms that I am using, in python, do not handle categorical data directly. I know how to convert yes/no, to 0/1, but my question is - Is this a right approach to go about it? Can these values of no/yes to 0/1, be misinterpreted by algorithms ? The algorithms I am planning to use for my dataset are - Decision Trees (DT), Random Forests (RF) and Neural Networks (NN).
Title: Categorical Data yes/no to 0/1 python - is it a right approach?
CONCEPTUAL: 0 | Score: 0.379949 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,491

Q_Id: 49,231,589 | CreationDate: 2018-03-12T09:14:00.000 | Users Score: 3 | Tags: python,cron,sleep,schedule
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,231,744 | AnswerCount: 1 | is_accepted: false
Answer:
sleep will mark the process (thread) for being inactive until the given time is up. During this time the kernel will simply not schedule this process (thread). It will not waste resources. Hard disks typically have spin-down policies based solely on their usage. If they aren't accessed for a specific time, they will spin down. They will spin up as soon as some process (thread) is accessing them again. This means that letting a process (thread) sleep for some time gives the hard disk a chance to spin down (especially if the sleep duration is large, say, more than some minutes).
Available Count: 1 | Q_Score: 3 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
What exactly is happening when I call time.sleep(5) in a python script? Is the program using a lot of resources from the computer? I see people using the sleep function in their programs to schedule tasks, but this requires you leave your hard drive running the whole time right? That would be taking for you computer over the long haul right? I'm trying to figure out what's to run programs at specific times remotely, but I haven't found an explanation of how to do this that is very intuitive. Any suggestions?
Title: What is happening when you use the python sleep module?
CONCEPTUAL: 0 | Score: 0.53705 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 101

Q_Id: 49,235,894 | CreationDate: 2018-03-12T12:59:00.000 | Users Score: 0 | Tags: python-3.x,odoo,point-of-sale,odoo-11
Other: 1 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 52,870,087 | AnswerCount: 1 | is_accepted: false
Answer:
You can create a wizard at the time of validation of POS order which popup after validating order. In that popup enter mail id of customer and by submit that receipt is directly forwarded to that customer.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have required to send POS Receipt to customer while validating POS order, the challenge is ticket is defined in point_of_sale/xml/pos.xml receipt name is <t t-name="PosTicket"> how can i send this via email to customer.
Title: Send POS Receipt Email to Customer While Validating POS Order
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 1 | ViewCount: 205

Q_Id: 49,248,489 | CreationDate: 2018-03-13T04:47:00.000 | Users Score: 1 | Tags: python,pip,mariadb,centos7
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 1 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,254,109 | AnswerCount: 2 | is_accepted: false
Answer:
You must not name your script mysql.py — in that case Python tries to import mysql from the script — and fails. Rename your script /root/Python_environment/my_Scripts/mysql.py to something else.
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have installed MySQL connector for python 3.6 in centos 7 If I search for installed modules with below command it's showing as below pip3.6 freeze mysql-connector==2.1.6 mysql-connector-python==2.1.7 pymongo==3.6.1 pip3.6 search mysql-connector mysql-connector-python (8.0.6) -MYSQL driver written in Python INSTALLED: 2.1.7 LATEST: 8.0.6 mysql-connector (2.1.6) - MySQL driver written in Python INSTALLED: 2.1.6 (latest) MySQL connector installed.But when trying to run the program using MySQL connector then its showing error no module installed MySQL connector.I am using MariaDB 10.0 python3.6 mysql1.py Traceback (most recent call last): File "mysql1.py", line 2, in import mysql.connector as mariadb File "/root/Python_environment/my_Scripts/mysql.py", line 2, in import mysql.connector ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package can any one know how to resolve
Title: Mysql Connector issue in Python
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 1 | Networking and APIs: 0 | ViewCount: 455

Q_Id: 49,248,489 | CreationDate: 2018-03-13T04:47:00.000 | Users Score: 0 | Tags: python,pip,mariadb,centos7
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 1 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,376,529 | AnswerCount: 2 | is_accepted: true
Answer:
This is the problem I faced in Environment created by python.Outside the python environment i am able to run the script.Its running succefully.In python environment i am not able run script i am working on it.if any body know can give suggestion on this
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have installed MySQL connector for python 3.6 in centos 7 If I search for installed modules with below command it's showing as below pip3.6 freeze mysql-connector==2.1.6 mysql-connector-python==2.1.7 pymongo==3.6.1 pip3.6 search mysql-connector mysql-connector-python (8.0.6) -MYSQL driver written in Python INSTALLED: 2.1.7 LATEST: 8.0.6 mysql-connector (2.1.6) - MySQL driver written in Python INSTALLED: 2.1.6 (latest) MySQL connector installed.But when trying to run the program using MySQL connector then its showing error no module installed MySQL connector.I am using MariaDB 10.0 python3.6 mysql1.py Traceback (most recent call last): File "mysql1.py", line 2, in import mysql.connector as mariadb File "/root/Python_environment/my_Scripts/mysql.py", line 2, in import mysql.connector ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package can any one know how to resolve
Title: Mysql Connector issue in Python
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 1 | Networking and APIs: 0 | ViewCount: 455

Q_Id: 49,254,062 | CreationDate: 2018-03-13T10:42:00.000 | Users Score: 4 | Tags: python,qt,pyqt,pyqt5,qtabwidget
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 1
A_Id: 49,256,651 | AnswerCount: 1 | is_accepted: false
Answer:
Add a generic QWidget as the corner widget. Give it a QHBoxLayout. Add your buttons to the layout. I use this frequently, often by subclassing QTabWidget and creating accessor functions that return the individual buttons. Adding signals like buttonClicked(int) with the index and buttonClicked(QAbstractButton) with the button itself are helpful, too.
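A runnable sketch of the container-widget approach from the answer, using PyQt5 names; the button labels are arbitrary stand-ins for custom minimize/maximize/close buttons.

```python
import sys
from PyQt5.QtWidgets import (QApplication, QTabWidget, QWidget,
                             QHBoxLayout, QPushButton, QLabel)

app = QApplication(sys.argv)

tabs = QTabWidget()
tabs.addTab(QLabel("content"), "Tab 1")

corner = QWidget()                        # one generic widget becomes the corner widget
layout = QHBoxLayout(corner)
layout.setContentsMargins(0, 0, 0, 0)
for text in ("_", "[]", "X"):             # minimise / maximise / close stand-ins
    layout.addWidget(QPushButton(text))

tabs.setCornerWidget(corner)              # top-right corner by default
tabs.show()
sys.exit(app.exec_())
```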
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I want to remove the actual Close, minimize and maximize buttons of a window and create my own custom buttons, just like in chrome. I therefore want to add corner widgets to my tabwidget. Is there a way so that I can add three buttons as corner widgets of a QTabWidget? Is it somehow possible to achieve using the QHBoxLayout ? The setCornerWidget function just takes one widget as its input.
Title: PyQt QTabWidget Multiple Corner WIdgets
CONCEPTUAL: 1 | Score: 0.664037 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 545

Q_Id: 49,273,536 | CreationDate: 2018-03-14T09:08:00.000 | Users Score: 0 | Tags: python,cluster-analysis,k-means
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,294,559 | AnswerCount: 1 | is_accepted: false
Answer:
If you don't have data on a word, then skip it. You could try to compute a word vector on the fly based on the context, but that essentially is the same as just skipping it.
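A sketch of the skip-the-word approach: rows whose embedding is NaN are dropped before clustering and reported separately instead of being forced into an outlier cluster; the words and vectors here are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

words = ["alpha", "beta", "gamma", "delta"]
vectors = np.random.rand(4, 300)
vectors[2, :] = np.nan                        # "gamma" missing from the embeddings

mask = ~np.isnan(vectors).any(axis=1)         # keep only fully known vectors
labels = KMeans(n_clusters=2, random_state=0).fit_predict(vectors[mask])

clustered = dict(zip([w for w, m in zip(words, mask) if m], labels))
skipped = [w for w, m in zip(words, mask) if not m]
print(clustered, "skipped:", skipped)         # the skipped words form their own bucket
```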
Available Count: 1 | Q_Score: 2 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
I am trying to cluster a number of words using the KMeans algorithm from scikit learn. In particular, I use pre-trained word embeddings (300 dimensional vectors) to map each word with a number vector and then I feed these vectors to KMeans and provide the number of clusters. My issue is that there are certain words in my input corpus which I can not find in the pretrained word embeddings dictionary. This means that in these cases, instead of a vector, I get a numpy array full of nan values. This does not work with the kmeans algorithm and therefore I have to exclude these arrays. However, I am interested in seeing all these cases that were not found in the word embeddings and what is more, if possible throw them inside a separate cluster that will contain only them. My idea at this point is to set a condition that if the word is returned with a nan-values array from the embeddings index, then assign an arbitrary vector to it. Each dimension of the embeddings vector lie within [-1,1]. Therefore, if I assign the following vector [100000]*300 to all nan words, I have created a set of outliers. In practice, this works as expected, since this particular set of vectors are forced in a separate cluster. However, the initialization of the kmeans centroids is affected by these outlier values and therefore all the rest of my clusters get messed up as well. As a remedey, I tried to initiate the kmeans using init = k-means++ but first, it takes significantly longer to execute and second the improvement is not much better. Any suggestions as to how to approach this issue? Thank you.
Title: Python KMeans Clustering - Handling nan Values
CONCEPTUAL: 1 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 2,184

Q_Id: 49,289,969 | CreationDate: 2018-03-15T01:00:00.000 | Users Score: 0 | Tags: python,string,replace,whitespace
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,290,127 | AnswerCount: 2 | is_accepted: false
Answer:
I also tried ' \d+ ' and that works! probably not "pythonic" though...
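A small sketch of the space-delimited match: lookarounds keep the surrounding spaces instead of consuming them the way a literal ' \d+ ' pattern does.

```python
import re

text = "dishwasher 10 pods from url 500px order 7 now"
print(re.sub(r"(?<=\s)\d+(?=\s)", "NUMB", text))
# -> dishwasher NUMB pods from url 500px order NUMB now
```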
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
the code below replaces numbers with the token NUMB: raw_corpus.loc[:,'constructed_recipe']=raw_corpus['constructed_recipe'].str.replace('\d+','NUMB') It works fine if the numbers have a space before and a space after, but creates a problem if the numbers are included in another string. How do I modify the code so that it only replaces numbers with NUMB if the numbers are surrounded by a space on both sides? e.g. do not modify this string: "from url 500px", but do modify this string: "dishwasher 10 pods" to "dishwasher NUMB pods". I'm not sure how to modify '\d+' to make this happen. Any ideas?
Title: replace numbers with token if numbers have whitespace on both side
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 415

Q_Id: 49,325,336 | CreationDate: 2018-03-16T16:19:00.000 | Users Score: 0 | Tags: python,python-3.x,orm,sqlalchemy,ponyorm
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 51,031,086 | AnswerCount: 1 | is_accepted: false
Answer:
For the record I've ended up with a hybrid approach using sqlalchemy. Sqlalchemy was not flexible enough to do everything I wanted out of the box in a non verbose fashion, but had the required functionality to get a fair bit of the way along if one took the pain of writing explicitely everything needed. So I wrote a program that generates about 6000 lines of sqlalchemy code in order to have a 1 to 1 mapping between sqlalchemy objects to tables in the way required (basically defining everything explicitely for sqla). Sqlalchemy has a lot of hooks during autoload, but I have found it hard/impossible to leverage different hooks and set fine grained behaviour at each hook at the same time, that's why I went the automated explicit way. On top of these sqlalchemy objects, I've written objects that wraps them to hide the "which table" traffic control things. A bit of a kludge and I think that I could have done something with type heritance and sqlachemy objects, but time was passing and I only needed very little functionality or maintainability in that layer, so just charged ahead.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have just inherited an extreemly legacy application (built on windows 95 - Magic7 for the connoisseurs) now backed against a recentish mssql db (2012). That's not the db system it was first designed on, and it thus comes with some seriously odd design for tables. I'm looking for a python ORM to help me talk to this thing easily. Namely, I'm after an ORM that can easily, for instance, merge 2 tables as if they were one. For instance I may have tables BILLS and BILLS_HISTORY, with different column names, and perhaps even different column types, so different strictly speaking, but sementically containing the same information (same number of columns, sementically identical values). I'm looking for an ORM that lets me define only one Bill object, that maps to both tables, and that gives me the right hooks to decide where things go, and how to write them when tweaks are needed. Another Example : say I have an object called a good. If a good is finished, it goes in the GOODS table, if it is not finished, it goes in the GOODS_UNFINISHED table. I'm looking for a goods object that can read both tables, and give me a finished property set to the right value depending which table it comes from (and with the hooks to change it from one table to the other if the property is set in some way). I'm fine with python, but I have not done much such db work before so my knowledge is limited there. I could, and might end up writing my own tailor made ORM, but that seems like a waste of time for something that will be thrown away in 6 months when the full transition is done to something new. Does anyone know of an ORM with such capabilities ? I'm planning to study ponyORM and SQLAlchemy, but I have a feeling it will take me a few days to come to a conclusion wether they are suitable for my use case. So I thought I'd ask the community too ... Cheers
Title: Looking for a particular python ORM
CONCEPTUAL: 1 | Score: 0 | API_USAGE: 1 | Database and SQL: 1 | Networking and APIs: 0 | ViewCount: 235

Q_Id: 49,326,559 | CreationDate: 2018-03-16T17:36:00.000 | Users Score: 1 | Tags: python,python-3.x,python-2.7,github,version
Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,326,823 | AnswerCount: 1 | is_accepted: false
Answer:
I think number 1. and 2. should both work as long as you provide enough details in the README.md file in the repository. In case of option 1, you should ask your collaborator to add you as a collaborator to the repository. In case of number 2, you should definitely cross-link each other's repositories in your README-files respectively. I'd definitely add a requirements.txt file for each python version so that the users can conveniently install your dependencies with pip install -r requirements.txt or pip3 install -r requirements.txt.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
A collaborator of mine wrote a software package in Python 2.7, took advantage of it to run some tests and obtain some scientific results. We wrote together a paper about the methods he developed and the results he obtained. Everything worked out well so he recently put this package publically available on his GitHub webpage. Then I thought it would have been useful to have a Python 3.5 version of that package. Therefore I downloaded the original software and made the proper changes to have it working on Python 3.5. Now I don't know how to properly release this Python 3.5 package. I envision three possibilities: Should I put it on his original GitHub project repository? This option would lead to some confusion, because people would have to download both the Python 2.7 and the Python 3.5 code. Should I create a new repository only for my Python 3.5 package, in addition to the Python 2.7 one released by my collaborator? This option would lead to the existence of two running code repositories, and to some confusion as well, because people might not know which is the "official one" to use. Should I create a new repository only for my Python 3.5 package, and ask my collaborator to delete his Python 2.7 repository? This option would make our paper inconsistent, because it states that tests were done with Python 2.7. Do you envision any other option I did not include? Do you have any suggestion?
Title: What's the best way to release two versions of the same software package on GitHub?
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 44

Q_Id: 49,340,877 | CreationDate: 2018-03-17T19:12:00.000 | Users Score: 0 | Tags: python,sqlite
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,361,558 | AnswerCount: 2 | is_accepted: false
Answer:
This is worked but when using c.fetchall it didn't work.shows error saying TypeError: expected string or buffer import sqlite3 import nltk from nltk.tokenize import sent_tokenize, word_tokenize conn = sqlite3.connect('ACE.db') c = conn.cursor() c.execute('SELECT can_answer FROM Can_Answer') rows = c.fetchone() for row in rows: print(row) print(word_tokenize(row))
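A sketch of the fetchall variant: fetchall returns a list of tuples, so the TypeError most likely came from passing a whole row tuple to word_tokenize; indexing row[0] avoids it. The database and column names are taken from the answer above.

```python
import sqlite3
from nltk.tokenize import word_tokenize

conn = sqlite3.connect("ACE.db")
c = conn.cursor()
c.execute("SELECT can_answer FROM Can_Answer")

for row in c.fetchall():          # each row is a 1-tuple like ('some text',)
    text = row[0]
    print(word_tokenize(text))

conn.close()
```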
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I want assign the data which is retrieve from database (sqlit3) particular column for a variable and call that variable for word tokenize. please help with this I know tokenize part but I want to know how to assign the db value to a variable in python.
Title: how to assign db column value to variable and call it to tokenize in python
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 1 | Networking and APIs: 0 | ViewCount: 143

Q_Id: 49,356,867 | CreationDate: 2018-03-19T06:18:00.000 | Users Score: 1 | Tags: python,django,django-permissions,django-oauth
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 1 | GUI and Desktop Applications: 0
A_Id: 53,847,566 | AnswerCount: 1 | is_accepted: false
Answer:
Actually there is a difference between Django permissions and OAuth token scope, Django permissions use for define access level to your endpoint addresses like when you want just authenticated user see some data but OAuth token scope is for time you want to have third-party login and you define when somebody login what access he/she has, like when you authenticate from Gmail in scope Gmail, for example, says read and you just have read access when you login . and I didn't get you concern number 2
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I'm building a dedicated OAuth2 as a service for my application, where users will be both authenticating and authorizing themselves. I've the following concerns 1) Is OAuth2 TokenScope similar to Django Permissions? 2) If I want to make role-level hierarchy application, how do I go about building one with OAuth2?
Title: Is OAuth2 TokenScope similar to Django Permissions?
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 59

Q_Id: 49,383,201 | CreationDate: 2018-03-20T11:36:00.000 | Users Score: 0 | Tags: python,packages
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,383,321 | AnswerCount: 3 | is_accepted: false
Answer:
Try to: Open Anaconda Prompt and then do: pip install whatever - to install wheels If you want to install spyder the open Anaconda Navigator - and you should be in the home tab - then highlat spyder and press install - thats all.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I am new to configuring and setting up Python from scratch. I have installed Anaconda and I plan to use Spyder for python development. I also have a older version of Python installed on the same machine elsewhere. I needed to get my hands on a package to use in Spyder which I needed to download and install. I downloaded and installed pip directly from the website and then I used this in the command line of the older python install to obtain the package I required. However I don't understand how I go about making this available to Spyder. I believe it works on a folder structure within it's own directory and I am unsure how to change this to get the package I have already downloaded. I thought I might be able to copy it across, or point it at the directory where the package was downloaded to but I cannot work out how to do this. I also tried using pip from within Spyder to work but it cannot find it. Can you please let me know what I need to check?
Title: Install Python packages and directories (windows)
CONCEPTUAL: 1 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 68

Q_Id: 49,386,402 | CreationDate: 2018-03-20T14:04:00.000 | Users Score: 0 | Tags: python,cloud
Other: 1 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,386,557 | AnswerCount: 2 | is_accepted: false
Answer:
You can try Heroku. It's free and they got their own tutorials. But it's good enough only if you will use it for studying. AWS, Azure or google cloud are much better for production.
Available Count: 1 | Q_Score: 3 | Data Science and Machine Learning: 0 | DOCUMENTATION: 1
Question:
I have a python code which is quite heavy, my computer cant run it efficiently, therefore i want to run the python code on cloud. Please tell me how to do it ? any step by step tutorial available thanks
Title: How to run python code on cloud
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 3,979

Q_Id: 49,390,281 | CreationDate: 2018-03-20T17:08:00.000 | Users Score: 1 | Tags: python,maya,maya-api
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,390,999 | AnswerCount: 1 | is_accepted: true
Answer:
you have to use createNode : node = cmds.createNode('blinn', name='yipikai')
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 1
Question:
Hi everyone I am trying to write some script to automate my work in Maya. Right now I am looking for the way to add materials to the hypershade. I can't see anything on console (Script editor) so I can't se what python api I should use. I know that maya treat materials as sets, and to assign a material to polygon I need to put it in this set, but I don't know how to create a new set. So my question is: How I add a material to the scene using python maya-api?
Title: How to add an material to the maya scene?
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 469

Q_Id: 49,408,048 | CreationDate: 2018-03-21T13:47:00.000 | Users Score: 1 | Tags: python,python-3.x,python-2.7,spyder
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,424,870 | AnswerCount: 2 | is_accepted: false
Answer:
So, when you create a new environment with: conda create --name python36 python=3.6 anaconda This will create an env. called python36 and the package to be installed is anaconda (which basically contains everything you'll need for python). Be sure that your new env. actually is running the ecorrect python version by doing the following: activate python environmentwith: active python36 then type: python this will indicate what python version is running in your env. It turns out, for some reason, my environment was running python2.7 and not 3.6 The cool thing is that anaconda distribution comes with spyder. Just be sure that you run Spyder from within your environment. So to do this: activate python36 then type: spyder It will automatically open spyder3 for python3. My initial issue was therefore that even though i created a python3 environment, it was still running python2.7. But after removing the old python3 environment and creating a new python3 env. and installing the desired libraries/packages it now works perfect. I have a 2.7 and 3.6 environment which can both be edited with spyder2 and spyder3 IDE
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have Python 2.7 installed (as default in Windows 7 64bit) and also have Python 3 installed in an environment (called Python3). I would like to use Spyder as my IDE. I have installed Spyder3 in my Python3 environment, but when I open Spyder 3 (from within my Python 3 env), then it opens Spyder for python 2.7 and not python 3.5 as I would've hoped for.). I don't know why. I have done TOOLS--Preferences--Python Interpreter -- Use the following Python interpreter: C:\Users\16082834\AppData\Local\Continuum\Anaconda2\envs\Python3\python.exe, but this didn't work either. Many of us are running multiple python environments; I am sure some of you might have managed to use Spyder for these different environments. Please tell me how I can get Python 3 using this method.
Title: How to install Spyder for Python 2 and Python 3 and get Python 3 in my Spyder env?
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 4,452

Q_Id: 49,408,048 | CreationDate: 2018-03-21T13:47:00.000 | Users Score: 2 | Tags: python,python-3.x,python-2.7,spyder
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,408,240 | AnswerCount: 2 | is_accepted: false
Answer:
One possible way is to run activate Python3 and then run pip install Spyder.
Available Count: 2 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have Python 2.7 installed (as default in Windows 7 64bit) and also have Python 3 installed in an environment (called Python3). I would like to use Spyder as my IDE. I have installed Spyder3 in my Python3 environment, but when I open Spyder 3 (from within my Python 3 env), then it opens Spyder for python 2.7 and not python 3.5 as I would've hoped for.). I don't know why. I have done TOOLS--Preferences--Python Interpreter -- Use the following Python interpreter: C:\Users\16082834\AppData\Local\Continuum\Anaconda2\envs\Python3\python.exe, but this didn't work either. Many of us are running multiple python environments; I am sure some of you might have managed to use Spyder for these different environments. Please tell me how I can get Python 3 using this method.
Title: How to install Spyder for Python 2 and Python 3 and get Python 3 in my Spyder env?
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 4,452

Q_Id: 49,419,095 | CreationDate: 2018-03-22T01:28:00.000 | Users Score: 0 | Tags: python,python-3.6
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,436,806 | AnswerCount: 2 | is_accepted: false
Answer:
Thanks for the reply !!! The command works but it is adding extra characters import subprocess subprocess.check_output("ps -ef | grep pmon | grep orcl | grep -v grep | awk '{print $2}'", stderr=subprocess.STDOUT, shell=True) b'21648\n'
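The extra characters are just the bytes literal and trailing newline that check_output returns; a small sketch of cleaning them up:

```python
import subprocess

cmd = "ps -ef | grep pmon | grep orcl | grep -v grep | awk '{print $2}'"
out = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)
pid = out.decode().strip()        # b'21648\n' -> '21648'
print(pid)
```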
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I am trying to find pid of a oracle process by using below command ps -ef | grep pmon | grep orcl | grep -v grep When trying to use python oracle_pid = os.system("echo ps -ef | grep pmon | grep %s | grep -v grep | awk '{print $2}'" %(oracle_sid)) print(oracle_pid) it is printing 0 as value Any suggestions on how to achieve just the pid as output? Regards
Title: Find pid using python by grepping two variables
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 182

Q_Id: 49,438,230 | CreationDate: 2018-03-22T20:51:00.000 | Users Score: 0 | Tags: python,virtual-machine
Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,438,506 | AnswerCount: 1 | is_accepted: false
Answer:
You could possible map the directory to the vm as a network resource?
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I have a Python script deployed and running in azure virtual machine, however in the script, I have to read a local CSV file for further processing, that leads a failure, server came back with notice that: "file does not exist " May I ask how can I running Python script in VM but read and save on my computer local file ? If not possible, may I ask how to overcome this problem with cloud storage, particularly with Azure?
Title: How to running Python script in virtual machine but read and save on computer local csv file
CONCEPTUAL: 1 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,594

Q_Id: 49,458,945 | CreationDate: 2018-03-23T22:11:00.000 | Users Score: 1 | Tags: python-3.x,ldap
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0 | ERRORS: 0 | API_CHANGE: 0 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 49,470,034 | AnswerCount: 2 | is_accepted: false
Answer:
The easiest way is to save the raw byte value in a file and open it with a picture editor. The photo is probably a jpeg, but it can be in any format.
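A sketch of saving the raw attribute bytes to an image file with ldap3; the server address, credentials, search base/filter, and the .jpg guess (based on the JFIF header in the question) are all assumptions.

```python
from ldap3 import Server, Connection

conn = Connection(Server("ldap.example.com"),
                  user="cn=reader,dc=example,dc=com", password="secret",
                  auto_bind=True)
conn.search("dc=example,dc=com", "(sAMAccountName=jdoe)",
            attributes=["thumbnailPhoto"])

photo = conn.entries[0].thumbnailPhoto.value    # the raw b'\xff\xd8\xff\xe0...' bytes
with open("thumbnail.jpg", "wb") as f:
    f.write(photo)
# The bottle page can then reference thumbnail.jpg like any other static image.
```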
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question:
I'm using ldap3. I can connect and read all attributes without any issue, but I don't know how to display the photo of the attribute thumbnailPhoto. If I print(conn.entries[0].thumbnailPhoto) I get a bunch of binary values like b'\xff\xd8\xff\xe0\x00\x10JFIF.....'. I have to display it on a bottle web page. So I have to put this value in a jpeg or png file. How can I do that?
Title: how to get and display photo from ldap
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 1 | ViewCount: 4,282

Q_Id: 49,477,640 | CreationDate: 2018-03-25T15:41:00.000 | Users Score: 0 | Tags: python,python-3.x,tensorflow
Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 1 | ERRORS: 1 | API_CHANGE: 1 | REVIEW: 0 | Web Development: 0 | GUI and Desktop Applications: 0
A_Id: 52,206,018 | AnswerCount: 2 | is_accepted: false
Answer:
This looks like it is an issue with Numpy, which is a dependency of Tensorflow. Did you try upgrading your version of numpy using pip or conda? Like such: pip install --ignore-installed --upgrade numpy
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question:
I get the following error when trying to use tensorflow importError Traceback (most recent call last) in () ----> 1 import tensorflow as tf ~\Anaconda3\lib\site-packages\tensorflow__init__.py in () 22 23 # pylint: disable=wildcard-import ---> 24 from tensorflow.python import * 25 # pylint: enable=wildcard-import 26 ~\Anaconda3\lib\site-packages\tensorflow\python__init__.py in () 54 # imported using tf.load_op_library() can access symbols defined in 55 # _pywrap_tensorflow.so. ---> 56 import numpy as np 57 try: 58 if hasattr(sys, 'getdlopenflags') and hasattr(sys, 'setdlopenflags'): ~\Anaconda3\lib\site-packages\numpy__init__.py in () 140 return loader(*packages, **options) 141 --> 142 from . import add_newdocs 143 all = ['add_newdocs', 144 'ModuleDeprecationWarning', ~\Anaconda3\lib\site-packages\numpy\add_newdocs.py in () 11 from future import division, absolute_import, print_function 12 ---> 13 from numpy.lib import add_newdoc 14 15 ############################################################################### ~\Anaconda3\lib\site-packages\numpy\lib__init__.py in () 6 from numpy.version import version as version 7 ----> 8 from .type_check import * 9 from .index_tricks import * 10 from .function_base import * ~\Anaconda3\lib\site-packages\numpy\lib\type_check.py in () 9 'common_type'] 10 ---> 11 import numpy.core.numeric as _nx 12 from numpy.core.numeric import asarray, asanyarray, array, isnan, zeros 13 from .ufunclike import isneginf, isposinf ~\Anaconda3\lib\site-packages\numpy\core__init__.py in () 36 from . import numerictypes as nt 37 multiarray.set_typeDict(nt.sctypeDict) ---> 38 from . import numeric 39 from .numeric import * 40 from . import fromnumeric ~\Anaconda3\lib\site-packages\numpy\core\numeric.py in () 1818 1819 # Use numarray's printing function -> 1820 from .arrayprint import array2string, get_printoptions, set_printoptions 1821 1822 ~\Anaconda3\lib\site-packages\numpy\core\arrayprint.py in () 42 from .umath import absolute, not_equal, isnan, isinf, isfinite, isnat 43 from . import multiarray ---> 44 from .multiarray import (array, dragon4_positional, dragon4_scientific, 45 datetime_as_string, datetime_data, dtype, ndarray, 46 set_legacy_print_mode) This error occured after I tried to upgrade TF from version 1.1 to the latest version. So I dont know what current TF version I am using. I am using Windows 10 without a GPU. Do you know how to fix it?
TensorFlow: ImportError: cannot import name 'dragon4_positional'
0
0
1
0
0
1,449
49,480,796
2018-03-25T20:50:00.000
1
0
0
0
1
django,python-3.x
0
49,482,862
0
1
0
true
1
0
Either approach will work. If it makes sense for your services to share the same code base, you can create a single project and use separate apps for each service and separate settings files for each deployment. The settings file would activate the desired app by listing it in INSTALLED_APPS, and would include settings specific to that service. Or, if you don't need the services to be coupled in that way, you could certainly make each one its own project.
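A rough sketch of the settings-per-deployment idea; the module and app names (settings_base, service_a) are assumptions for illustration, not part of the question:

# settings_service_a.py -- loaded only on the server that runs service A
from .settings_base import *                             # shared settings assumed to live in settings_base.py

INSTALLED_APPS = list(INSTALLED_APPS) + ["service_a"]    # activate only this service's app
ALLOWED_HOSTS = ["service-a.example.com"]                # host specific to this deployment

Each server then starts Django with DJANGO_SETTINGS_MODULE pointing at its own settings module.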
1
0
0
0
To start with: I am only learning Python and Django, so I'm still a newbie. I need to build a microservice architecture so that I can run each of my services on a separate server machine. In Django I need to create an environment, a project and apps. So, can I then run these apps on different servers? If not, how can I do it with Django? Do I need to create a separate project for each service? P.S. If my question is stupid, please explain where I am wrong. I come from the Java Spring world, where I just needed to create a new app for each service.
Different IPs are for different apps
1
1.2
1
0
0
36
49,481,114
2018-03-25T21:26:00.000
1
0
0
0
0
python-3.x,audio,scipy
0
49,957,782
0
1
0
true
0
0
wavfile.read() returns two things: data: This is the data from your wav file which is the amplitude of the audio taken at even intervals of time. sample rate: How many of those intervals make up one second of audio.
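A small illustrative snippet, assuming a mono file named example.wav; an FFT is one common way to turn those amplitude samples into frequencies:

import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("example.wav")          # sample rate (Hz) and amplitude samples
spectrum = np.abs(np.fft.rfft(data))              # magnitude of each frequency component
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)  # frequency (Hz) of each component
print(freqs[np.argmax(spectrum)])                 # dominant frequency in the clip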
1
0
1
0
I have never worked with audio before. For a monophonic wav file read() returns an 1-D array of integers. What do these integers represent? Are they the frequencies? If not how do I use them to get the frequencies?
What is returned by scipy.io.wavefile.read()?
0
1.2
1
0
0
152
49,484,820
2018-03-26T05:57:00.000
0
0
0
0
0
python,nltk,text-mining,naivebayes
0
49,485,045
0
1
0
true
0
0
Just saving the model will not help. You should also save your vector model (like the TfidfVectorizer or CountVectorizer, whatever you used for fitting the train data). You can save those the same way using pickle. Also save all the models you used for pre-processing the train data, like normalization/scaling models, etc. For the test data, repeat the same steps by loading the pickled models that you saved, transform the test data into the format used for model building, and then you will be able to classify.
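A hedged sketch of that workflow, assuming a scikit-learn-style vectorizer and classifier were fitted earlier (the variable names are placeholders):

import pickle

# at training time: persist the fitted vectorizer together with the trained model
with open("vectorizer.pkl", "wb") as f:
    pickle.dump(vectorizer, f)
with open("model.pkl", "wb") as f:
    pickle.dump(classifier, f)

# later, for a random sentence outside the dataset
with open("vectorizer.pkl", "rb") as f:
    vectorizer = pickle.load(f)
with open("model.pkl", "rb") as f:
    classifier = pickle.load(f)

features = vectorizer.transform(["Ronaldo have scored 2 goals against Egypt"])
print(classifier.predict(features))  # e.g. ['sport']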
1
0
1
0
I have been using NLTK packages and trained a model using Naive Bayes. I have saved the model to a file using the pickle package. Now I wonder how I can use this model to test a random text that is not in the dataset, so that the model tells me which category the sentence belongs to. For example, my idea is: I have a sentence "Ronaldo have scored 2 goals against Egypt", I pass it to the model file, and it returns the category "sport".
Text Categorization Test NLTK python
1
1.2
1
0
0
147
49,491,650
2018-03-26T12:36:00.000
-1
0
1
0
0
python,import,python-internals
0
49,491,846
0
1
0
false
0
0
First of all, a module is a Python file that contains classes and functions. When you say from A import B, Python searches for A (a module) in the standard Python library and, if it finds A, imports B (the function or class) from it. If it doesn't, it goes out and starts searching in the directory where packages are stored, looks for the package name (A) and then, if it finds it, imports the module name (B). If it fails in both processes it returns an error. Hope this helps.
1
0
0
0
If I understand correctly, the python syntax from ... import ... can be used in two ways from package-name import module-name from module-name import function-name I would like to know a bit of how Python internally treats the two different forms. Imagine, for example, that the interpreter gets "from A import B", does the interpreter actually try to determine whether A is a package-name/ module-name, or does it internally treat packages and modules as the same class of objects (something like Linux treats files and directories very similarly)?
How does Python internally distinguish "from package import module" from "from module import function"
1
-0.197375
1
0
0
175
49,494,093
2018-03-26T14:38:00.000
-1
0
0
0
0
python,scrapy,web-crawler
0
49,523,000
0
3
1
false
1
0
I'm no expert but I would say that your speed is pretty slow. I just went to google, typed in the word "hats", pressed enter and: about 650,000,000 results (0.63 seconds). That's gonna be tough to compete with. I'd say that there's plenty of room to improve.
2
5
0
0
I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.
What is a good crawling speed rate?
0
-0.066568
1
0
1
453
49,494,093
2018-03-26T14:38:00.000
0
0
0
0
0
python,scrapy,web-crawler
0
70,224,507
0
3
1
false
1
0
It really depends but you can always check your crawling benchmarks for your hardware by typing scrapy bench on your command line
2
5
0
0
I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.
What is a good crawling speed rate?
0
0
1
0
1
453
49,504,741
2018-03-27T05:06:00.000
0
0
0
0
0
python,python-2.7
0
49,505,061
0
1
0
false
0
0
If you have access to both machines, then one way could be to leverage python's sockets. The client on the local machine would send a request to the server on the remote machine, then the server would do os.path.ismount('/path') and send back the return value to the client.
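If SSH access to the remote host is available, a lighter variant of the same idea (not the socket approach described above) is to run the check remotely; the user, host and path below are placeholders:

import subprocess

cmd = "python -c \"import os; print(os.path.ismount('/path'))\""
result = subprocess.check_output(["ssh", "user@yy.yy.yyy", cmd], text=True)
print(result.strip() == "True")   # True if /path is a mountpoint on the remote machine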
1
0
0
0
os.path.ismount() will verify whether the given path is mounted on the local Linux machine. Now I want to verify whether the path is mounted on a remote machine. Could you please help me understand how to achieve this? For example: my dev machine is xx:xx:xxx and I want to verify whether '/path' is mounted on yy:yy:yyy. How can I achieve this using the os.path.ismount() function?
Verify mountpoint in the remote server
0
0
1
0
1
128
49,523,349
2018-03-27T22:23:00.000
0
0
0
0
0
python,llvm,header-files
0
49,525,779
0
1
0
false
0
0
llvmlite is a python binding for LLVM, which is independent from C or C++ or any other language. To parse C or C++, one option is to use the python binding for libclang.
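A minimal sketch using the libclang Python binding; the header name and compiler flags are placeholders, and the binding itself must be installed separately (e.g. via pip):

import clang.cindex

index = clang.cindex.Index.create()
tu = index.parse("header.h", args=["-x", "c++", "-std=c++11"])  # parse the header as C++
for node in tu.cursor.walk_preorder():
    if node.kind == clang.cindex.CursorKind.FUNCTION_DECL:
        print(node.spelling)  # names of functions declared in the header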
1
0
0
0
I'd like to parse a c and/or c++ header file in python using llvmlite. Is this possible? And if so, how do I create an IR representation of the header's contents?
How to parse a c/c++ header with llvmlite in python
1
0
1
0
0
216
49,526,618
2018-03-28T05:18:00.000
1
1
1
0
0
python,frameworks,libraries
0
49,530,470
0
1
0
true
0
0
First of all, you should try to be comfortable with every Python mechanism (classes, recursion, functions... everything you usually find in any book or complete tutorial). That will be useful for any problem you want to solve. Then, you should start your own project using the suitable libraries and frameworks. You must set a clear goal: do you want to build a website or a piece of software? You won't use the same libraries/frameworks for every purpose. Some of them are used very often, so you could start by reading their documentation. Anyhow, to answer your question, frameworks and libraries are not the most important bit of coding. They are just your tools, whereas the way you think to solve problems and build your algorithms is your art. The most important thing in being a painter is not knowing how to use a brush (even if, of course, it's really useful).
1
1
0
0
Coding is entirely new to me. Right now, I am teaching myself Python. As of now, I am only going over algorithms. I watched a few crash courses online about the language. Based on that, I don't feel like I am able to code any sort of website or software, which leads me to wonder whether the libraries and frameworks of any programming language are the most important bit. Should I spend more time teaching myself how to code with frameworks and libraries? Thanks
Are framework and libraries the more important bit of coding?
0
1.2
1
0
0
43
49,550,182
2018-03-29T07:20:00.000
-4
0
0
0
1
python,keras
0
51,157,225
0
9
0
false
0
0
For 1), I think you may build another model with the right name and the same structure as the existing one, then set the weights from the layers of the existing model onto the layers of the new model.
2
27
1
0
1) I am trying to rename a model and the layers in Keras with the TF backend, since I am using multiple models in one script. The Model class seems to have the property model.name, but when changing it I get "AttributeError: can't set attribute". What is the problem here? 2) Additionally, I am using the Sequential API and I want to give a name to layers, which seems to be possible with the Functional API, but I found no solution for the Sequential API. Does anyone know how to do it for the Sequential API? UPDATE TO 2): Naming the layers works, although it seems to be undocumented. Just add the argument name, e.g. model.add(Dense(..., ..., name="hiddenLayer1")). Watch out: layers with the same name share weights!
Keras rename model and layers
0
-1
1
0
0
40,090
49,550,182
2018-03-29T07:20:00.000
10
0
0
0
1
python,keras
0
63,853,924
0
9
0
false
0
0
To rename a keras model in TF2.2.0: model._name = "newname" I have no idea if this is a bad idea - they don't seem to want you to do it, but it does work. To confirm, call model.summary() and you should see the new name.
2
27
1
0
1) I am trying to rename a model and the layers in Keras with the TF backend, since I am using multiple models in one script. The Model class seems to have the property model.name, but when changing it I get "AttributeError: can't set attribute". What is the problem here? 2) Additionally, I am using the Sequential API and I want to give a name to layers, which seems to be possible with the Functional API, but I found no solution for the Sequential API. Does anyone know how to do it for the Sequential API? UPDATE TO 2): Naming the layers works, although it seems to be undocumented. Just add the argument name, e.g. model.add(Dense(..., ..., name="hiddenLayer1")). Watch out: layers with the same name share weights!
Keras rename model and layers
0
1
1
0
0
40,090
49,561,062
2018-03-29T16:33:00.000
0
0
0
0
0
python,heroku
1
49,571,369
0
2
0
false
1
0
If we need to import a function from fileName into main.py, write "from .fileName import functionName". That way we don't need to add any dependency to the requirements file.
1
0
0
0
I'm developing a chatbot using Heroku and Python. I have a file fetchWelcome.py in which I have written a function. I need to import the function from fetchWelcome into my main file, so I wrote "from fetchWelcome import fetchWelcome" in the main file. But because we need to mention all the dependencies in the requirements file, it shows an error. I don't know how to mention a user-defined requirement. How can I import the function from another file into the main file? Both files (main.py and fetchWelcome.py) are in the same folder.
Heroku Python import local functions
0
0
1
0
1
694
49,561,882
2018-03-29T17:21:00.000
1
0
0
0
0
python,pandas,machine-learning,scikit-learn,svm
0
49,562,017
0
2
0
false
0
0
For me personally, I set random_state to a specific number (usually 42) so that if I see variation in my program's accuracy I know it was not caused by how the data was split. However, this can lead to my network overfitting on that specific split, i.e. I tune my network so it works well with that split, but not necessarily on a different split. Because of this, I think it's best to use a random seed when you submit your code, so the reviewer knows you haven't overfit to that particular state. To do this with sklearn's train_test_split you can simply not provide a random_state and it will pick one randomly using np.random.
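For illustration, both variants side by side; X and y stand for whatever features and labels are in use:

from sklearn.model_selection import train_test_split

# fixed seed: the same split every run, so accuracy changes come from the model only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# no random_state: a different split every run, useful for averaging accuracy over runs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)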
1
5
1
0
I understand how random state is used to randomly split data into training and test set. As Expected, my algorithm gives different accuracy each time I change it. Now I have to submit a report in my university and I am unable to understand the final accuracy to mention there. Should I choose the maximum accuracy I get? Or should I run it with different RandomStates and then take its average? Or something else?
How to choose RandomState in train_test_split?
0
0.099668
1
0
0
5,696
49,572,547
2018-03-30T10:14:00.000
0
0
0
1
1
python,websocket,gevent
1
49,618,957
0
1
1
false
0
0
Your answer may be as simple as adding timeouts to some of your spawns or gevent calls. Gevent is still single threaded, and so if an IO bound resource hangs, it can't context switch until it's been received. Setting a timeout might help bypass these issues and move your app forward?
1
0
0
0
I have a Python app that uses websockets and gevent. It's quite a big application in my personal experience. I've encountered a problem with it: when I run it on Windows (with 'pipenv run python myapp'), it can (suddenly but very rarily) freeze, and stop accepting messages. If I then enter CTRL+C in cmd, it starts reacting to all the messages, that were issued when it was hanging. I understand, that it might block somewhere, but I don't know how to debug theses types of errors, because I don't see anything in the code, that could do it. And it happens very rarily on completely different stages of the application's runtime. What is the best way to debug it? And to actually see what goes behind the scenes? My logs show no indication of a problem. Could it be an error with cmd and not my app?
Python application freezes, only CTRL-C helps
0
0
1
0
0
453
49,576,487
2018-03-30T14:45:00.000
1
0
1
0
0
sql,python-3.x,pandas
0
49,577,047
0
2
0
true
0
0
Once you are able to get the month out into a variable mon, you can compute the quarter as (mon - 1)//3 + 1. For example, for mon in range(1, 13): print((mon - 1)//3 + 1) would return: for months 1 - 3 : 1, for months 4 - 6 : 2, for months 7 - 9 : 3, for months 10 - 12 : 4.
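In pandas specifically, one way to make the comparison directly is to convert the dates to quarterly periods; df and the column name review_date are placeholders for the actual DataFrame:

import pandas as pd

df["review_date"] = pd.to_datetime(df["review_date"])
quarters = df["review_date"].dt.to_period("Q")   # e.g. 2016-10-21 -> 2016Q4
selected = df[quarters >= pd.Period("2016Q4")]   # rows in or after 2016 Q4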
1
1
1
0
I am writing a sql query using pandas within python. In the where clause I need to compare a date column (say review date 2016-10-21) with this value '2016Q4'. In other words if the review dates fall in or after Q4 in 2016 then they will be selected. Now how do I convert the review date to something comparable to 'yyyyQ4' format. Is there any python function for that ? If not, how so I go about writing one for this purpose ?
How to compare date (yyyy-mm-dd) with year-Quarter (yyyyQQ) in python
0
1.2
1
0
0
798
49,584,153
2018-03-31T04:10:00.000
0
1
0
0
0
python,3d,computational-geometry,bin-packing
0
49,597,211
0
2
0
true
0
0
A sample-based approach is what I'd try first. Generate a bunch of points in the unioned bounding AABB, and divide the number of points in A and B by the number of points in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a given point is in a given volume, use a crossing number test, which you can Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples necessary to benefit overall from building the acceleration structure. As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.
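A rough Monte Carlo sketch of that estimate; contains_a and contains_b stand for whatever point-in-mesh test is used (e.g. a crossing-number test), so they are assumptions here rather than library calls:

import numpy as np

def overlap_ratio(contains_a, contains_b, lo, hi, n=10000):
    pts = np.random.uniform(lo, hi, size=(n, 3))   # samples in the union bounding box
    in_a = np.array([contains_a(p) for p in pts])
    in_b = np.array([contains_b(p) for p in pts])
    both = np.logical_and(in_a, in_b).sum()
    either = np.logical_or(in_a, in_b).sum()
    return both / either if either else 0.0        # roughly |A and B| / |A or B|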
2
3
1
0
I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar(not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate. Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively I can estimate volume with a sample based method that sample points inside one object and test the percentage of points that exist in another object. But I don't know how computational expensive it is to sample points inside a complex 3d shape as well as to test if a point is enclosed by such a shape. I will really appreciate any advices, codes, or equations on this matter. Also if you can suggest any libraries (preferably python library) that accept .obj, .ply...etc files and perform 3D geometry computation that will be great! I will also post here if I find out a good method. Update: I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric points sampling within one mesh and test points containment within another mesh. I found surface point sampling and containment testing(sort of surface intersection) and the grid approach to be the fastest.
Measurement for intersection of 2 irregular shaped 3d object
0
1.2
1
0
0
1,361
49,584,153
2018-03-31T04:10:00.000
0
1
0
0
0
python,3d,computational-geometry,bin-packing
0
49,688,037
0
2
0
false
0
0
By straight voxelization: If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel. Then for every vertex of the mesh, set the bit of the cell it is included in (this just takes a truncation of the coordinates). By doing this, you will obtain the boundary of the object as a connected surface. You will obtain an estimate of the volume by means of a 3D flood filling algorithm, either from an inside or an outside pixel. (Outside will be easier but be sure to leave a one voxel margin around the object.) Estimating the volumes of both objects as well as intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.
2
3
1
0
I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar(not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate. Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively I can estimate volume with a sample based method that sample points inside one object and test the percentage of points that exist in another object. But I don't know how computational expensive it is to sample points inside a complex 3d shape as well as to test if a point is enclosed by such a shape. I will really appreciate any advices, codes, or equations on this matter. Also if you can suggest any libraries (preferably python library) that accept .obj, .ply...etc files and perform 3D geometry computation that will be great! I will also post here if I find out a good method. Update: I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric points sampling within one mesh and test points containment within another mesh. I found surface point sampling and containment testing(sort of surface intersection) and the grid approach to be the fastest.
Measurement for intersection of 2 irregular shaped 3d object
0
0
1
0
0
1,361
49,585,758
2018-03-31T08:19:00.000
1
0
1
0
0
python,machine-learning,pycharm
0
49,585,883
0
2
0
false
0
0
You can just import numpy to activate scientific mode: import numpy as np
2
1
0
0
In Spyder you can execute code blocks; how can I do this in PyCharm in scientific mode? In Spyder you use # In[]. How can I do this in PyCharm?
How executed code block Science mode in Pycharm
0
0.099668
1
0
0
94
49,585,758
2018-03-31T08:19:00.000
1
0
1
0
0
python,machine-learning,pycharm
0
49,585,817
0
2
0
true
0
0
PyCharm uses code cells. You can do this with '#%%'.
2
1
0
0
In Spyder you can execute code blocks; how can I do this in PyCharm in scientific mode? In Spyder you use # In[]. How can I do this in PyCharm?
How executed code block Science mode in Pycharm
0
1.2
1
0
0
94
49,586,831
2018-03-31T10:41:00.000
1
0
0
1
0
python,emulation,sdn,mininet,openflow
0
49,602,289
0
1
0
false
0
0
Yes, you can do this with 6LowPAN.py. You then add the switches and the controller to the topology, with their links.
1
0
0
0
In mininet-wifi examples, I found a sample (6LowPAN.py) that creates a simple topology contains 3 nodes. Now, I intend to create another topology as follows: 1- Two groups of sensor nodes such that each group connects to a 'Sink node' 2- Connect each 'Sink node' to an 'ovSwitch' 3- Connect the two switches to a 'Controller' Is that doable using mininet-wifi? Any tips how to do it?? Many thanks in advance :)
Building WSN topology integrated with SDN controller (mininet-wifi)
0
0.197375
1
0
0
444
49,593,985
2018-04-01T01:28:00.000
1
0
0
0
0
python,tensorflow,machine-learning,neural-network,deep-learning
0
49,594,057
0
1
1
true
0
0
A large number of features makes it easier to parallelize the normalization of the dataset. This is not really an issue. Normalization on large datasets would be easily GPU accelerated, and it would be quite fast. Even for large datasets like you are describing. One of my frameworks that I have written can normalize the entire MNIST dataset in under 10 seconds on a 4-core 4-thread CPU. A GPU could easily do it in under 2 seconds. Computation is not the problem. While for smaller datasets, you can hold the entire normalized dataset in memory, for larger datasets, like you mentioned, you will need to swap out to disk if you normalize the entire dataset. However, if you are doing reasonably large batch sizes, about 128 or higher, your minimums and maximums will not fluctuate that much, depending upon the dataset. This allows you to normalize the mini-batch right before you train the network on it, but again this depends upon the network. I would recommend experimenting based on your datasets, and choosing the best method.
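A tiny sketch of the per-mini-batch variant discussed above (min-max scaling; the epsilon guards against constant features):

import numpy as np

def normalize_minibatch(batch, eps=1e-8):
    lo = batch.min(axis=0)                  # per-feature minimum within this batch
    hi = batch.max(axis=0)                  # per-feature maximum within this batch
    return (batch - lo) / (hi - lo + eps)   # scaled to roughly [0, 1], batch by batch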
1
0
1
0
It is a common practice to normalize input values (to a neural network) to speed up the learning process, especially if features have very large scales. In its theory, normalization is easy to understand. But I wonder how this is done if the training data set is very large, say for 1 million training examples..? If # features per training example is large as well (say, 100 features per training example), 2 problems pop up all of a sudden: - It will take some time to normalize all training samples - Normalized training examples need to be saved somewhere, so that we need to double the necessary disk space (especially if we do not want to overwrite the original data). How is input normalization solved in practice, especially if the data set is very large? One option maybe is to normalize inputs dynamically in the memory per mini batch while training.. But normalization results will then be changing from one mini batch to another. Would it be tolerable then? There is maybe someone in this platform having hands on experience on this question. I would really appreciate if you could share your experiences. Thank you in advance.
Neural Network - Input Normalization
0
1.2
1
0
0
1,045
49,599,452
2018-04-01T15:08:00.000
0
0
1
1
0
python,eyed3
0
49,675,091
0
2
0
false
0
0
Gave up...waste of my time and everyone else's sorry. What I apparently needed was the eyed3 (lowercase 'd') non-python utility.
2
2
0
0
I just installed the abcde CD utility but it's complaining that it can't find eyeD3, the Python ID3 program. This appears to be a well-known and unresolved deficiency in the abcde dependencies, and I'm not a Python programmer, so I'm clueless. I have the Python 2.7.12 that came with Mint 18, and something called python3 (3.5.2). If I try to install eyeD3 with pip (presumably acting against 2.7.12), it says it's already installed (in /usr/lib/python2.7/dist-packages/eyeD3). I don't know how to force pip to install under python3. If I do a find / -name eyeD3, the only other thing it turns up is /usr/share/pyshared/eyeD3. But both of those are only directories, and both just contain Python libraries, not executables. There isn't any other file called eyeD3 anywhere on disk. Does anyone know what it's supposed to be called, where it's supposed to live, and how I can install it? P
Finding the eyeD3 executable
0
0
1
0
0
1,262
49,599,452
2018-04-01T15:08:00.000
1
0
1
1
0
python,eyed3
0
49,600,355
0
2
0
false
0
0
"I don't know how to force pip to install under python3." -- python3 -m pip install eyeD3 will install it for Python 3.
2
2
0
0
I just installed the abcde CD utility but it's complaining that it can't find eyeD3, the Python ID3 program. This appears to be a well-known and unresolved deficiency in the abcde dependencies, and I'm not a Python programmer, so I'm clueless. I have the Python 2.7.12 that came with Mint 18, and something called python3 (3.5.2). If I try to install eyeD3 with pip (presumably acting against 2.7.12), it says it's already installed (in /usr/lib/python2.7/dist-packages/eyeD3). I don't know how to force pip to install under python3. If I do a find / -name eyeD3, the only other thing it turns up is /usr/share/pyshared/eyeD3. But both of those are only directories, and both just contain Python libraries, not executables. There isn't any other file called eyeD3 anywhere on disk. Does anyone know what it's supposed to be called, where it's supposed to live, and how I can install it? P
Finding the eyeD3 executable
0
0.099668
1
0
0
1,262
49,619,655
2018-04-02T22:28:00.000
0
0
1
0
0
python,csv,text
0
49,623,125
0
3
0
false
0
0
First, get the distinct breakfast items and persons. Pseudo code like below: iterate through each line, collect item and person in 2 different lists, do a set on those 2 lists (say persons and items), then Counter = 1; for person in persons: for item in items: print "breakfast_item", Counter; print person, item. A runnable sketch follows below.
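A runnable version of that pseudo code, assuming the CSV has headers col1 and col2 as in the question (the file names are placeholders):

import csv

with open("breakfast.csv") as f:
    rows = list(csv.DictReader(f))

items = sorted({r["col1"] for r in rows if r["col1"]})    # distinct breakfast items
persons = sorted({r["col2"] for r in rows if r["col2"]})  # distinct persons

counter = 1
with open("out.txt", "w") as out:
    for person in persons:
        for item in items:
            out.write("breakfast_%d\nbreakfast_item %s\nperson %s\n" % (counter, item, person))
            counter += 1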
1
0
1
0
Trying to take data from a csv like this: col1 col2 eggs sara bacon john ham betty The number of items in each column can vary and may not be the same. Col1 may have 25 and col2 may have 3. Or the reverse, more or less. And loop through each entry so its output into a text file like this breakfast_1 breakfast_item eggs person sara breakfast_2 breakfast_item bacon person sara breakfast_3 breakfast_item ham person sara breakfast_4 breakfast_item eggs person john breakfast_5 breakfast_item bacon person john breakfast_6 breakfast_item ham person john breakfast_7 breakfast_item eggs person betty breakfast_8 breakfast_item bacon person betty breakfast_9 breakfast_item ham person betty So the script would need to add the "breakfast" number and loop through each breakfast_item and person. I know how to create one combo but not how to pair up each in a loop? Any tips on how to do this would be very helpful.
python pair multiple field entries from csv
0
0
1
0
0
50
49,624,485
2018-04-03T07:25:00.000
2
0
0
1
0
python,static-ip-address
0
49,625,033
0
1
0
false
0
0
I don't know of a Python netsh API. But it should not be hard to do with a pair of subprocess calls. First issue netsh interface show interface, parse the output you get back, then issue your set address command. Or am I missing the point?
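A sketch of that pair of calls; the interface name and addresses are the examples from the question, not values discovered programmatically:

import subprocess

listing = subprocess.check_output(["netsh", "interface", "show", "interface"], text=True)
print(listing)  # parse this output to find the interface name, e.g. "Wireless Network Connection"

subprocess.check_call(["netsh", "interface", "ip", "set", "address",
                       "Wireless Network Connection", "static",
                       "192.168.1.3", "255.255.255.0", "192.168.1.1", "1"])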
1
0
0
0
Windows command netsh interface show interface shows all network connections and their names. A name could be Wireless Network Connection, Local Area Network or Ethernet etc. I would like to change an IP address with netsh interface ip set address "Wireless Network Connection" static 192.168.1.3 255.255.255.0 192.168.1.1 1 with Python script, but I need a network interface name. Is it possible to have this information like we can have a hostname with socket.gethostname()? Or I can change an IP address with Python in other way?
How to find out Windows network interface name in Python?
1
0.379949
1
0
1
2,892
49,629,518
2018-04-03T11:57:00.000
0
0
1
1
1
python,python-3.x,cmd,module
0
49,639,764
0
1
0
false
0
0
On Windows systems, third-party modules (single files containing one or more functions or classes) and third-party packages (folders [a.k.a. directories] that contain more than one module, and sometimes other folders/directories) are usually kept in one of two places: c:\Program Files\Python\Lib\site-packages\ and c:\Users\[you]\AppData\Roaming\Python\. The location in Program Files is usually not accessible to normal users, so when pip installs new modules/packages on Windows it places them in the user-accessible folder in the Users location indicated above. You have direct access to that, though by default the AppData folder is "hidden" -- not displayed in the File Explorer list unless you set File Explorer to show hidden items (which is a good thing to do anyway, IMHO). You can put the module you're working on in the AppData\Roaming\Python\ folder. You still need to make sure the folder you put it in is on Python's module search path, which you can control with the PYTHONPATH environment variable; it tells Python where to look for needed files, in this case the module you're working on. Google "set windows environment variable" to find out how to check and set it, then just go ahead and put your module in a folder that's listed there. Of course, since you can add any folder/directory you want to PYTHONPATH, you could put your module anywhere you wanted -- including leaving it on the Desktop -- as long as the location is included. You could, for instance, have a folder such as Documents\Programming\Python\Lib to put your personal modules in, and use Documents\Programming\Python\Source for your Python programs. You'd just need to include the library folder in the PYTHONPATH variable. FYI: personally, I don't like the way Python is installed on Windows by default (because I don't have easy access to c:\Program Files), so I installed Python in a folder off the drive root: c:\Python36. In this way, I have direct access to the \Lib\site-packages\ folder.
1
0
0
0
I'm reading Head First Python and have just completed the section where I created a module for printing nested list items. I've created the code and the setup file and placed them in a folder labeled "Nester" that is sitting on my desktop. The book is now asking me to install this module onto my local copy of Python. The thing is, in the example he is using the Mac terminal, and I'm on Windows. I tried to Google it, but I'm still a novice and a lot of the explanations just go over my head. Can someone give me a clear, thorough guide?
How do I install my module onto my local copy of Python on Windows?
0
0
1
0
0
888
49,637,924
2018-04-03T19:38:00.000
1
0
0
0
0
python,django,django-admin,django-authentication,django-permissions
0
49,638,105
0
1
0
true
1
0
Information about UserItemExpiryDate has to be stored in a separate table (model). I would recommend implementing the logic in your Django code. There are a few scenarios to consider: 1) A new user is created, and he/she should have access to items. In this case, you add entries to UserItemExpiry with the new User<>Item combination (as key) and an expiry date. Then, for the logged-in user, you look for Items whose User<>Item entry in UserItemExpiry has an expiry date in the future. 2) A new item is created, and it has to be added to existing users. In that case, you add entries to UserItemExpiry with the ALL users <> new Item combinations (as keys) and expiry dates. The logic for "selecting" valid items is the same as in point 1. Best of luck, Radek Szwarc
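A hypothetical sketch of that table and query; the model and field names are assumptions for illustration, with Item standing in for the question's existing model and request.user for the logged-in user inside a view:

from django.conf import settings
from django.db import models
from django.utils import timezone

class UserItemExpiry(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    item = models.ForeignKey("Item", on_delete=models.CASCADE)
    expires_at = models.DateTimeField()

    class Meta:
        unique_together = ("user", "item")   # one grant per User<>Item pair

# in a view: items the current user may still see
visible_items = Item.objects.filter(useritemexpiry__user=request.user,
                                    useritemexpiry__expires_at__gt=timezone.now())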
1
1
0
0
I am fairly new to Django and could not figure out by reading the docs or by looking at existing questions. I looked into Django permissions and authentication but could not find a solution. Let's say I have a Detail View listing all instances of a Model called Item. For each Item, I want to control which User can view it, and for how long. In other words, for each User having access to the Item, I want the right/permission to view it to expire after a specified period of time. After that period of time, the Item would disapear from the list and the User could not access the url detailing the Item. The logic to implement is pretty simple, I know, but the "per user / per object" part confuses me. Help would be much appreciated!
Django : how to give user/group permission to view model instances for a specified period of time
0
1.2
1
0
0
469
49,652,693
2018-04-04T13:46:00.000
1
0
0
0
0
excel,python-3.x,pandas,import
0
49,656,081
0
2
0
false
0
0
Try converting the file from .xlsx to .csv. I had the same problem with text columns, so I tried converting to CSV (comma delimited) and it worked. Not very helpful, but worth a try.
1
0
1
0
I am working on an Excel file with large text data. Two columns have a lot of text data, like descriptions and job duties. When I import my file in Python with df=pd.read_excel("form1.xlsx"), it shows the columns with text data as NaN. How do I import all the text in the columns? I want to do analysis on job title, description and job duties. Descriptions and Job Title are long text. I have over 150 rows.
how to read text from excel file in python pandas?
0
0.099668
1
1
0
2,378
49,656,877
2018-04-04T17:20:00.000
2
0
0
0
0
python,html,web,flask
0
49,656,968
0
2
0
false
1
0
Yes, you can. Just like you said, you can use uwsgi to run your site efficiently. There are other web servers like uwsgi: I usually use Gunicorn. But note that Flask can run without any of these, it will simply be less efficient (but if it is just for you then it should not be a problem). You can find tutorials on the net with a few keywords like "serving flask app". If you want to access your site from the internet (outside of your local network), you will need to configure your firewall and router/modem to accept connections on port 80 (HTTP) or 443 (HTTPS). Good luck :)
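For a quick test on the local network, the built-in server can simply be bound to all interfaces (for real deployments a WSGI server such as Gunicorn or uWSGI in front of the app is the usual choice); app is assumed to be the existing Flask application object:

if __name__ == "__main__":
    # reachable from other machines on the LAN via http://<your-pc-ip>:5000/
    app.run(host="0.0.0.0", port=5000)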
1
2
0
0
I want to know if I can make a web server with Flask on my PC, like XAMPP/Apache (PHP), so that afterwards I can access the page from other places across the internet, or even on my local network through the Wi-Fi or LAN Ethernet connection. Is it possible? I saw some ways to do this, like using "uwsgi"... something like that... but I could never do it. NOTE: I already have a complete Flask application, with databases and everything working. The only problem is that I don't know how to start the server and access it from the other PCs.
make a web server in localhost with flask
0
0.197375
1
0
0
5,337
49,658,301
2018-04-04T18:48:00.000
1
0
0
0
0
python,python-3.x,user-interface,tkinter
0
49,658,628
0
2
0
false
0
1
For the same reason that you can't write to a database without using a database module, you can't create GUIs without a GUI module. There simply is no way to draw directly on the screen in a cross-platform way without a module. Writing GUIs is very complex. These modules exist to reduce the complexity.
2
0
0
0
I have been wanting to know how to make a GUI without using a module in Python. I have looked into GUIs in Python, but everything leads to Tkinter or other Python GUI modules. The reason I do not want to use Tkinter is because I want to understand how to do it myself. I have looked at the Tkinter module's files, but it imports something like 4 other modules. I don't mind modules like sys, os or math, just not modules which I will use and not understand. If you do decide to answer my question, please include as much detail and information on the matter as you can. Thanks -- Darrian Penman
Python - How do I make a window along with widgets without using modules like Tkinter?
0
0.099668
1
0
0
1,223
49,658,301
2018-04-04T18:48:00.000
1
0
0
0
0
python,python-3.x,user-interface,tkinter
0
49,658,596
0
2
0
true
0
1
You cannot write a GUI in Python without importing either a GUI module or importing ctypes. The latter would require calling OS-specific graphics primitives, and would be far worse than doing the same thing in C. (EDIT: see Roland comment below for X11 systems.) The python-coded tkinter mainly imports the C-coded _tkinter, which interfaces to the tcl- and C- coded tk GUI package. There are separate versions of tcl/tk for Windows, *nix, and MacOS.
2
0
0
0
I have been wanting to know how to make a GUI without using a module in Python. I have looked into GUIs in Python, but everything leads to Tkinter or other Python GUI modules. The reason I do not want to use Tkinter is because I want to understand how to do it myself. I have looked at the Tkinter module's files, but it imports something like 4 other modules. I don't mind modules like sys, os or math, just not modules which I will use and not understand. If you do decide to answer my question, please include as much detail and information on the matter as you can. Thanks -- Darrian Penman
Python - How do I make a window along with widgets without using modules like Tkinter?
0
1.2
1
0
0
1,223
49,660,637
2018-04-04T21:19:00.000
0
0
1
0
0
python,regex
0
49,660,779
0
3
0
false
0
0
^(th\w*) gives you a match where the string begins with th. If there is more than one word in the string you will only get the first. (^|\s)(th\w*) will give you all the words beginning with th, even if there is more than one word beginning with th.
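A small illustration against a word-per-line dictionary file (the file name is a placeholder):

import re

with open("dictionary.txt") as f:
    words = [w.strip() for w in f]

th_words = [w for w in words if re.match(r"th\w*$", w)]
print(th_words)  # e.g. ['the', 'their', 'them', 'they']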
1
0
0
0
I'm working on a cipher in Python. I'm confused about how to use a regular expression to find matching words in a text dictionary. For example, there is a dictionary.txt with many English words in it. I need to find words that start with "th", like they, them, the, their... What kind of regular expression should I use to find "th" at the beginning? Thank you!
Regular Expression in python how to find paired words
0
0
1
0
0
68
49,661,492
2018-04-04T22:36:00.000
1
0
1
0
0
python,ide,pycharm,copy-paste
0
49,679,290
0
1
0
true
0
0
I figured it out: it's caused by the copy-on-select setting of my linux system. To turn it off, go to mobax-settings-configurations-x11-clipboard-disable 'copy on select'
1
0
0
0
This has been a very annoying problem for me and I couldn't find any keymaps or settings that could cause this behavior. Setup: Pycharm Professional 2018.1 installed on redhat linux I remote into the linux machine using mobaX and launch pycharm with window forwarding Scenario 1: I open a browser on windows, copy some text, go to editor or console, paste it somewhere without highlighting any text, hit ctrl+v, it pastes fine Scenario 2: I open a browser on windows, copy some text, go to editor or console, highlight some text there, hit ctrl+v in attempt to replace the highlighted text with what's in my clipboard. The text didn't change. I leave pycharm and paste somewhere else, the text in clipboard has now become the text I highlighted. Edit: ok I just realized this: as soon as I highlight the text, it gets copied...I've turned this feature off for terminal, but couldn't find a global settings for the editor etc. Anyone know how?
pycharm ctrl+v copies the item in console instead paste when highlighted
0
1.2
1
0
0
360
49,662,869
2018-04-05T01:37:00.000
1
0
0
0
1
python,tensorflow,keras
0
49,662,938
0
3
0
false
0
0
At the moment you are returning a 3D array. Add a Flatten() layer to convert the array to 2D, and then add a Dense(1). This should output (batch_size, 1).
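A rough sketch of that fix, assuming inner_model is the existing Conv2D sub-model and that the dimension values below are placeholders for the real ones:

from keras.models import Sequential
from keras.layers import TimeDistributed, Flatten, Dense

samples, rows, cols, channels = 8, 32, 32, 3   # placeholder dimensions

model = Sequential([
    TimeDistributed(inner_model, input_shape=(samples, rows, cols, channels)),
    Flatten(),   # (batch_size, samples, features) -> (batch_size, samples * features)
    Dense(1),    # -> (batch_size, 1)
])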
1
1
1
0
I have a model that starts with a Conv2D layer and so it must take input of shape (samples, rows, cols, channels) (and the model must ultimately output a shape of (1)). However, for my purposes one full unit of input needs to be some (fixed) number of samples, so the overall input shape sent into this model when given a batch of input ends up being (batch_size, samples, rows, cols, channels) (which is expected and correct, but...). How do I send each item in the batch through this model so that I end up with an output of shape (batch_size, 1)? What I have tried so far: I tried creating an inner model containing the Conv2D layer et al then wrapping the entire thing in a TimeDistributed wrapper, followed by a Dense(units=1) layer. This compiled, but resulted in an output shape of (batch_size, samples, 1). I feel like I am missing something simple...
In Keras, how to send each item in a batch through a model?
0
0.066568
1
0
0
587
49,665,757
2018-04-05T06:45:00.000
2
0
0
0
0
python,tensorflow,keras,gpu
1
64,231,036
0
4
0
false
0
0
OOM means out of memory. Maybe it is using more memory at that time. Decrease batch_size significantly. I set it to 16, and then it worked fine.
1
27
1
1
I'm trying to train a neural net on a GPU using Keras and am getting a "Resource exhausted: OOM when allocating tensor" error. The specific tensor it's trying to allocate isn't very big, so I assume some previous tensor consumed almost all the VRAM. The error message comes with a hint that suggests this: Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. That sounds good, but how do I do it? RunOptions appears to be a Tensorflow thing, and what little documentation I can find for it associates it with a "session". I'm using Keras, so Tensorflow is hidden under a layer of abstraction and its sessions under another layer below that. How do I dig underneath everything to set this option in such a way that it will take effect?
How to add report_tensor_allocations_upon_oom to RunOptions in Keras
1
0.099668
1
0
0
26,528
49,672,291
2018-04-05T12:22:00.000
0
0
0
0
1
python,django
0
49,672,440
0
2
0
false
1
0
You should encode all data as UTF-8, which is a Unicode encoding.
1
0
0
0
I have a problem with multilanguage text in various character encodings. The project uses OpenGraph and it will save some information from websites in a MySQL database, but the database has a problem with character encoding. I tried encoding the text to bytes. That is a problem, because in the admin panel the text shows up as bytes and is not readable. Please help me. How can I save multilanguage text in the database, and if I need to encode it to bytes, how can I correctly decode it in the admin panel and in views?
Django multilanguage text and saving it on mysql
0
0
1
1
0
89
49,679,283
2018-04-05T18:38:00.000
0
0
1
0
0
python,python-2.7,intellij-idea,virtualenv
0
49,679,964
0
1
0
false
0
0
Was able to install it by doing: Activating the virtualenv in the 'Terminal' tool window: source <virtualenv dir>/bin/activate Executing a pip install requests[security]
1
1
0
0
I'm using python 2.7.10 virtualenv when running python codes in IntelliJ. I need to install requests[security] package. However I'm not sure how to add that [security] option/config when installing requests package using the Package installer in File > Project Structure settings window.
How to Install requests[security] in virtualenv in IntelliJ
0
0
1
0
1
852
49,687,824
2018-04-06T07:42:00.000
3
0
1
0
0
python,ontology,reasoner,owlready,hermit
0
49,688,765
0
1
0
true
0
0
You do not need to implement the reasoner. The sync_reasoner() function already calls HermiT internally and does the reasoning for you. A reasoner will reclassify individuals and classes for you which means it creates a parent-child hierarchy of classes and individuals. When you load an ontology only explicit parent-child relations are represented. However, when you call the reasoner, the parent-child hierarchy is updated to include inferred relations as well. An example of this is provided in Owlready2-0.5/doc/intro.rst. Before calling sync_reasoner() calling test_pizza.__class__ prints onto.Pizza, which is explicit information. However, after calling sync_reasoner() calling test_pizza.__class__ prints onto.NonVegetarianPizza, which is the inferred information.
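A minimal usage sketch, following the Owlready2 pizza example mentioned above; the ontology path is a placeholder and test_pizza is assumed to be an individual defined in that ontology:

from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file://pizza_onto.owl").load()

print(onto.test_pizza.__class__)   # explicit class, e.g. onto.Pizza

with onto:
    sync_reasoner()                # runs HermiT and updates the class hierarchy

print(onto.test_pizza.__class__)   # inferred class, e.g. onto.NonVegetarianPizza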
1
5
0
0
We have an ontology but we need to use the reasoner HermiT to infer the sentiment of a given expression. We have no idea how to use and implement a reasoner in python and we could not find a good explanation on the internet. We found that we can use sync_reasoner() for this, but what does this do exactly? And do we have to call the reasoner manually everytime or does it happen automatically?
Use HermiT in Python
0
1.2
1
0
0
736
49,698,480
2018-04-06T17:50:00.000
1
0
1
0
0
python,macos,pyinstaller,dylib
0
49,698,626
0
1
0
true
0
0
"I was wondering how to edit the code in the library so that when it runs, it saves data to inside the app" -- Don't do that. This isn't standard practice in macOS applications, and it will fail in some standard system configurations. For example, it will fail if the application is used by a non-administrator user, or if the application is run from a read-only disk image or network share. More importantly, it'll also make it difficult or impossible to sign the application bundle with a developer certificate.
1
1
0
0
I am using Pyinstaller to create my Python app from a set of scripts. This script uses a library that saves downloaded data to the '~/' directory (using the os.join function). I was wondering how to edit the code in the library so that when it runs, it saves data to inside the app (like in the package, the Contents/Resources maybe)?
Saving data to MacOS python application
1
1.2
1
0
0
148
49,724,954
2018-04-09T02:59:00.000
6
0
0
0
0
python,python-3.x,rust,pytorch,tensor
0
49,734,613
0
2
0
true
0
0
Contiguous array The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other. The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the "storage" would be 27 items long, but the interpretation of "coordinates" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension. This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.
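A small Python illustration of that coordinate-to-offset mapping for a row-major, contiguous layout (the Rust version is the same arithmetic):

def offset(coords, dims):
    # row-major: the last coordinate varies fastest
    idx = 0
    for c, d in zip(coords, dims):
        idx = idx * d + c
    return idx

dims = (3, 3, 3)
storage = list(range(27))                 # 27 values stored one after another
print(storage[offset((2, 1, 0), dims)])   # the element addressed as [2, 1, 0]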
1
14
1
0
I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation. What is the most efficient way to store tensors programmatically, but, specifically, in a strongly typed language like Rust? Are there any resources that provide good insights into how this is done? I am currently building a contiguous array, so that, given dimensions of 3 x 3 x 3, my array would just have 3^3 elements in it, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder. The dimension of the tensor should be dynamic, so that I could have a tensor with n dimensions.
How are PyTorch's tensors implemented?
0
1.2
1
0
0
1,685