Columns (one field per line in each record, in this order):
Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
49,058,503
2018-03-01T20:52:00.000
0
0
1
0
python,dictionary,try-catch
49,061,427
4
false
0
0
Sorry for the confusion, guys. I know that I can simply write key_name in dict.keys(). But I would have to search all the keys to know whether that key exists. So I was thinking about try/except like the above. My question is whether it is good practice to use try/except, and is there any other way that serves the same purpose of not visiting all the keys?
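A minimal sketch of the alternatives being discussed here (membership testing with `in` is a single hash lookup, so no scan over all keys is needed):

```python
d = {"a": 1, "b": 2}

# Preferred: membership test (O(1) average, no key scan)
if "a" in d:
    value = d["a"]

# Or fetch with a default in one step
value = d.get("c", None)

# try/except also works and is idiomatic when the key is usually present
try:
    value = d["c"]
except KeyError:
    value = None
```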
1
2
0
I was wondering about the best way to find whether a key exists in a Python dictionary without visiting the whole list of keys again and again. I am thinking of try/except. Is there any better way?
Using try except to find if a key exists in a dictionary
0
0
0
313
49,059,089
2018-03-01T21:37:00.000
1
0
0
0
python,vector,semantics,word2vec,word-embedding
49,081,754
1
false
0
0
Averaging of word vectors can actually be done in two ways: the mean of the word vectors without tf-idf weights, or the mean of the word vectors each multiplied by its tf-idf weight. The second option will solve your problem of word importance.
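A small illustration of the second option with NumPy; `vectors` and `weights` are placeholder arrays standing in for the word embeddings and their tf-idf (or other importance) weights:

```python
import numpy as np

vectors = np.array([[0.1, 0.3], [0.2, 0.0], [0.4, 0.5]])  # one row per word
weights = np.array([3.0, 1.0, 1.0])                        # e.g. tf-idf weights

# Weighted mean: sum(w_i * v_i) / sum(w_i)
weighted_mean = np.average(vectors, axis=0, weights=weights)
```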
1
2
1
Given a list of word embedding vectors I'm trying to calculate an average word embedding where some words are more meaningful than others. In other words, I want to calculate a semantically weighted word embedding. All the stuff I found is on just finding the mean vector (which is quite trivial of course) which represents the average meaning of the list OR some kind of weighted average of words for document representation, however that is not what I want. For example, given word vectors for ['sunglasses', 'jeans', 'hats'] I would like to calculate such a vector which represents the semantics of those words BUT with 'sunglasses' having a bigger semantic impact. So, when comparing similarity, the word 'glasses' should be more similar to the list than 'pants'. I hope the question is clear and thank you very much in advance!
Semantically weighted mean of word embeddings
0.197375
0
0
937
49,059,772
2018-03-01T22:34:00.000
1
0
0
0
python,django
49,060,276
2
false
1
0
Find the DJANGO_SETTINGS_MODULE environment variable for your project. Its value is in Python path syntax; use it to locate the settings file, since that is the one Django is actually using, and make sure that file sets DEBUG = False. Additionally, you could add a print statement there to give some visual feedback on server start. After you have saved the configuration, restart your development server by executing python manage.py runserver.
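For reference, a minimal sketch of what the settings module that DJANGO_SETTINGS_MODULE points to would need; the domain names are the ones from the question:

```python
# settings.py (the module named by DJANGO_SETTINGS_MODULE)
DEBUG = False

# List every host name the site is served under; a leading dot acts as a
# subdomain wildcard.
ALLOWED_HOSTS = ['domain.com', 'www.domain.com', '.domain.com']

# Visual feedback on server start, as suggested above
print("Loaded settings:", __name__, "with DEBUG =", DEBUG)
```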
1
0
0
I'm trying to develop custom 404 and 500 error templates for my Django project. When I change DEBUG to False, Django always returns: You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 404 page. After changing ALLOWED_HOSTS to ['.domain.com', 'www.domain.com'] I get: Invalid HTTP_HOST header: 'domain.com'. You may need to add u'domain.com' to ALLOWED_HOSTS. What am I doing wrong? Why doesn't Django recognize the variable?
Django doesn't recognize DEBUG = False
0.099668
0
0
4,805
49,062,709
2018-03-02T04:12:00.000
0
0
1
0
python,unique,combinations,probability
49,063,034
4
false
0
0
With only 4 total events, you can use a simple algorithm. Arrange all the girls arbitrarily into a matrix with 4 columns and 42 rows. Each row is a group for the first event. Then, shift column 2 down one row, shift column 3 down 2 rows, and shift column 4 down by 3 rows. You now have a new grouping for the 2nd event. Repeat two more times to get the groupings for the 3rd and 4th events. Since each column is shifted by a different amount, and no column is ever shifted by the full 42 rows, no two girls will ever share the same row twice. (Obviously this strategy will not generate the maximum number of valid groupings, but it will certainly give you 4 of them.)
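A short Python sketch of this column-shift idea (the names and the 42 x 4 arrangement are just illustrative):

```python
girls = ["girl_{}".format(i) for i in range(168)]           # hypothetical roster
rows = [girls[r * 4:(r + 1) * 4] for r in range(42)]        # 42 rows x 4 columns

def groups_for_event(rows, t):
    n = len(rows)
    # In event t, column c is shifted down by c * t rows (column 0 stays put),
    # so group r takes the girl originally in row (r - c * t) of column c.
    return [[rows[(r - c * t) % n][c] for c in range(4)] for r in range(n)]

events = [groups_for_event(rows, t) for t in range(4)]      # 4 weekly events
```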
1
1
0
So my girlfriend was asked to group the girls in her sorority into groups of 4. She needs to do this for 4 different events (1 event per week). There are 168 girls total (convenient that it fits into groups of 4 evenly). The caveat is that no girl can be in a group with a girl that they've already been in a group with before. When she told me this problem I told her I could code her a little script that would do this, no problem. I didn't think it would be this challenging... Initially I thought I would write a little script in python that would use a random number generator to randomly select girls from the list of girls and place them into groups of 4. Each group would have an incrementing ID starting with 1, and NOT resetting for new events (so the first group for event 2 would have an ID of 43). Each girl would would be an object that on top of her name, would also contain the IDs of the groups that she's been in already. For future iterations/events, I would again randomly select girls from the list and put them into groups of 4, but would have checks to make sure that there are no overlaps in their previous group IDs. If a girl fails the check, re-seed the number generator and keep generating a random number until it matches the index of a girl who passes the check. I still think this would work (unless this problem is mathematically impossible), but it would be SOOOOO slow and inefficient. I know there must be a more elegant solution than brute-forcing combinations of groups. How could I accomplish this task?
How can I rearrange the items in a list into groups of 4 so that no item in a group has been in a group with the other items in that group before?
0
0
0
188
49,062,970
2018-03-02T04:45:00.000
0
0
0
0
python,machine-learning
49,065,671
1
true
0
0
For the first question: you don't strictly need to convert the test targets, but it makes evaluation on the test set easier. Your classifier will output one-hot encoded values, which you can convert back to strings and evaluate, but I think having the test targets encoded as 0/1s as well would help. For the second question: fit the StandardScaler on the training set only, and use it to transform the test set.
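A minimal scikit-learn sketch of the second point, using toy arrays in place of the real feature matrices:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # toy numeric features
X_test = np.array([[2.5, 250.0]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on the train set only
X_test_scaled = scaler.transform(X_test)        # reuse those statistics on the test set
```

The same fit-on-train / transform-on-test pattern applies to whatever encoder you use for the string targets.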
1
0
1
I am new to Machine Learning. I am currently solving a classification problem which has strings as its target. I have split the test and training sets and I have dealt with the string attributes by converting them by OneHotEncoder and also, I am using StandardScaler to scale the numerical features of the training set. My question is for the test set, do I need to convert the test set targets which are still in string format such as I did with training set's string targets using the OneHotEncoder, or do I leave the test set alone as it is and the Classifier will do the job itself? Similarly for the numerical attributes do I have to use StandardScaler to scale the numerical attributes in the test set or the Classifier will do this itself once the training is done on the training set?
Training a categorical classification example
1.2
0
0
86
49,063,689
2018-03-02T06:13:00.000
0
0
0
0
python,windows,active-window
49,073,983
1
false
0
0
It seems win32gui wasn't recognized in the interactive shell I was using to test whether it worked; however, after simply importing it in an actual Python script, it works just fine.
1
0
0
I'm bored and just making something for the heck of it in Python. I saw someone typing with spaces between all their letters and decided to make a python script that does this. It was pretty easy, but then I wanted to take it a step further, because copy/pasting from console takes time, so I want to have this script put spaces after every keyboard press but only when I have Discord as the active window. The only things I could find that could give you the active window are from 5-15 years ago, and are all outdated. They say use win32gui, and I pipinstalled it, but it doesn't seem to work. EDIT: For clarification, I ran "pip install win32gui" and it installed, I opened a python shell and typed "import win32gui" and it said no such module I looked through the modules and found win32 and according to the help command win32gui is part of its package, so I tried win32.win32gui and it says there's no such attribute I'm new to coding, I'm not entirely sure what I'm doing.
get active window in python
0
0
0
443
49,063,955
2018-03-02T06:38:00.000
1
0
0
0
python-3.x
49,070,495
2
false
1
0
Selenium cannot see or interact with native context menus.
1
0
0
I would like to use the ActionChains functionality of Selenium. Below is my code, but it does not work once the right-click menu is open: the ARROW_DOWN and ENTER are applied to the main window, not the right-click menu. How can the ARROW_DOWN and ENTER keys be sent to the right-click menu? Browser = webdriver.Chrome() actionChain = ActionChains(Browser) actionChain.context_click(myselect[0]).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ENTER).perform()
(Python Selenium with Chrome) How to click in the right click menu list
0.099668
0
1
913
49,065,126
2018-03-02T08:14:00.000
0
0
0
0
python,excel,openpyxl
52,547,698
1
false
0
0
Use openpyxl version 2.5 or above; the ability to keep the pivot table format is only available from the 2.5 release onward.
1
2
0
I have an xlsx file with two worksheets. The first sheet contains an Excel pivot table, and the second one holds the data source for the pivot table. I would like to modify the data source via Python while keeping the pivot structure in the other sheet. When I open the workbook with openpyxl I lose the pivot table; does anyone know if there is an option to avoid this behaviour?
Keeping excel-like pivot with openpyxl
0
1
0
607
49,065,866
2018-03-02T09:06:00.000
0
0
1
0
python,symlink
49,066,270
3
false
0
0
Create a list and save the paths of the interesting files to it. If you don't know them, you could compare the list of files in the relevant locations before and after running the program. Then read the files from that list in a loop and check whether any of them are symlinks whose target is a path from the list. One caveat when reading symlinks: opening a link follows it to whatever it points at. Use os.readlink(), which returns the link's own stored target path (for a self-pointing link, the path to the link itself) without following it.
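A rough sketch of detecting links that point back at themselves with os.path.islink()/os.readlink(); the directory name is hypothetical:

```python
import os

def is_self_symlink(path):
    """Return True if `path` is a symlink whose target resolves to the link itself."""
    if not os.path.islink(path):
        return False
    target = os.readlink(path)  # raw target stored in the link, possibly relative
    resolved = os.path.normpath(os.path.join(os.path.dirname(path), target))
    return os.path.abspath(resolved) == os.path.abspath(path)

output_dir = "program_output"  # hypothetical location of the program's output
self_links = [
    os.path.join(root, name)
    for root, _, files in os.walk(output_dir)
    for name in files
    if is_self_symlink(os.path.join(root, name))
]
```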
1
0
0
I need to process the output of a certain program in Python. The output of that program contains several symlinks which point to themselves, which can be a pain to work with. I want to detect, from a list of files, which ones are symlinks pointing to themselves so that I can delete them.
List all symbolic links to themselves
0
0
0
2,024
49,067,354
2018-03-02T10:33:00.000
2
0
1
0
python
49,067,377
1
false
0
0
im_pixel[j,i] just means that the key being passed to im_pixel is the tuple j, i. This will call whatever im_pixel has defined for __getitem__, with this tuple as the parameter. What this does depends on the type of im_pixel. For example, if im_pixel were a dictionary, it would fetch the key (j, i). Anything immutable and hashable is allowed to be a dictionary key in Python, and a tuple is both immutable and hashable, so this is allowed for a dictionary type. As Duncan mentions, the whole key must be immutable, so the individual elements of the tuple must be immutable as well.
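A tiny example of both cases mentioned, assuming nothing about what im_pixel actually is:

```python
# A dict keyed by tuples
im_pixel = {(0, 0): 255, (1, 0): 128}
print(im_pixel[1, 0])        # same as im_pixel[(1, 0)] -> 128

# A custom class: the comma-separated indices arrive as one tuple in __getitem__
class Grid:
    def __init__(self, data):
        self.data = data
    def __getitem__(self, key):
        x, y = key           # key is the tuple (j, i)
        return self.data[y][x]

g = Grid([[1, 2], [3, 4]])
print(g[1, 0])               # data[0][1] -> 2
```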
1
0
0
I'm a Python newbie and couldn't understand the usage of pix = im_pixel[j, i]. There is a comma (,) inside the []. Is this valid syntax?
Python: what does "pix = im_pixel[j, i]" mean?
0.379949
0
0
83
49,067,976
2018-03-02T11:13:00.000
0
1
0
1
python-3.x,package,atom-editor
49,252,287
1
true
0
0
I solved it by going into the script 3.17.3 package code: script --> lib --> grammars --> python.coffee. In there I changed "command: 'python'" to "command: 'py'" on lines 3 and 7. This should work for whatever command your Windows setup uses to run Python.
1
0
0
So I'm a new intern at a company and they gave me practically a blank computer. I needed to install some programs from their Git to run python and the only available python editors are Idle and Atom. I personally prefer atom, so I installed with the package "script" that should run python scripts. However I can't run python 3.4 and I get this error " 'python' is not recognized as an internal or external command, operable program or batch file." Which I get since on cmd typing 'python' does not launch the python only typing 'py'. Since I am an intern I have no control over the Environmental variables so I can't change py command to python. How can I change atom script package to use "py" command instead of "python".
running Python commands on Atom
1.2
0
0
246
49,071,433
2018-03-02T14:43:00.000
0
0
0
0
python,flask,web-deployment
49,099,488
1
false
1
0
Looks like Docker might be my best bet: Have Nginx running on the host, and the application running in container A with Gunicorn. Nginx directs traffic to container A. Before starting the file sync, tear down container A and start up container B, which listens on the same local port. Container B can be a maintenance page or a copy of the application. Start file sync and wait for it to finish. When done, tear down container B, and start container A again.
1
4
0
I'm looking for high-level insight here, as someone coming from the PHP ecosystem. What's the common way to deploy updates to a live Flask application that's running on a single server (no load balancing nodes), served by some WSGI server like Gunicorn behind Nginx? Specifically, when you pull updates from a git repository or rsync files to the server, I'm assuming this leaves a small window where a request can come through to the application while its files are changing. I've mostly deployed Laravel applications for production, so to prevent this I use php artisan down to throw up a maintenance page while files copy, and php artisan up to bring the site back up when it's all done. What's the equivalent with Flask, or is there some other way of handling this (Nginx config)? Thanks
Deploying updates to a Flask application
0
1
0
360
49,074,246
2018-03-02T17:23:00.000
0
0
0
0
python,pandas,csv
49,074,441
1
false
0
0
The expression sorted(glob.glob("DailyDownload/*/*_YEH.csv"))[-1] will return one file from the most recent day's downloads. This might work for you if you are certain that only one file per day will be downloaded. A better solution might be to grab all the files (glob.glob("DailyDownload/*/*_YEH.csv")) and then somehow mark them as you process them. Perhaps store the list of processed files in a database? Or delete each file as you complete the processing?
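A minimal sketch of the first suggestion (the glob pattern comes from the question; the pandas usage is assumed):

```python
import glob
import pandas as pd

# Date-stamped folder names sort lexicographically, so the last entry is the newest
latest = sorted(glob.glob("DailyDownload/*/*_YEH.csv"))[-1]
df = pd.read_csv(latest)
```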
1
0
1
Quick question here I'd like to use pandas read_csv to bring in a file for my python script but it is a daily drop and both the filename and file location changes each day... My first thought is to get around this by prompting the user for the path? Or is there a more elegant solution that can be coded? The filepath (with name) is something like this: DailyDownload>20180301>[randomtags]_YEH.csv
Read_CSV() with a non-constant file location
0
0
0
34
49,074,360
2018-03-02T17:30:00.000
1
1
0
1
python,ubuntu,server,mod-wsgi
49,112,016
1
true
0
0
Eventually figured it out. The problem was that I had two versions of Python 2.7 installed on my server (2.7.12 and 2.7.13), so the definitions of one were conflicting with the other. Solved it by completely removing Python 2.7.13 from the server.
1
0
0
I'm trying to install the mod_wsgi module in Apache on my Ubuntu server, but I need it built specifically for Python 2.7.13. For whatever reason, every time I run sudo apt-get install libapache2-mod-wsgi it installs the mod_wsgi module for Python 2.7.12. I'm doing all of this because I'm running into a weird Python version issue. When I run one of my Python scripts in my server terminal it works perfectly with version 2.7.13. In Apache, however, the script doesn't work. I managed to figure out that my Apache is running version 2.7.12 and I think this is the issue. Still can't figure out how to change that Apache Python version, though.
How to install mod_wsgi to specific python version in Ubuntu?
1.2
0
0
623
49,075,601
2018-03-02T18:55:00.000
0
0
0
0
python-3.x,odoo-11
49,604,167
1
false
1
0
In Odoo, ACLs only apply to regular models and don't need to be defined for Abstract or Transient models; if defined, they will be disregarded and a warning message will be written to the server log.
1
0
0
I defined access rules that don't permit read, but the model remained readable. So object-level permissions don't work for Transient models?
The access rules for wizards don't work in Odoo 11
0
0
0
561
49,079,990
2018-03-03T02:18:00.000
1
0
1
0
python,linux,tensorflow
49,102,970
3
true
0
0
Compiling tensorflow from source solved the problem, so it seems my system wasn't supported.
1
3
1
I installed tensorflow following the instructions on their site, but when I try to run import tensorflow as tf I get the following error: Illegal instruction (core dumped). I tried this with the CPU and GPU versions, using Virtualenv and "native" pip, but the same error occurs in every case. The parameters of my PC: OS: Linux Mint 18.3 CPU: AMD Athlon Dual Core 4450e GPU: GTX 1050 Ti I found that some people experienced this error when they compiled tensorflow from source and misconfigured some flags. Could it be that my CPU is too old and not supported? Is it possible that compiling from source solves this issue?
Error while importing TensorFlow: Illegal instruction (core dumped)
1.2
0
0
3,188
49,083,639
2018-03-03T11:28:00.000
0
0
0
0
python,python-2.7,kivy
49,100,661
1
true
0
1
Loading all your images into memory will be a problem when you have a lot of images in the folder, but you could have a hidden Image with the next image as its source (it's not even necessary to add that Image to the widget tree; you could just keep it in an attribute of your app). That way, every time the user loads the next image it's displayed instantly, since it's already cached, and while the user is looking at it, the second, invisible Image widget starts loading the one after it. Of course, if you want to pre-load more than one image, you'll have to do something more clever: you could keep a list of Image widgets in memory and always replace the currently displayed source with the next in line for pre-fetching.
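A rough sketch of that idea, relying on Kivy's image cache; the class and attribute names here are made up:

```python
from kivy.uix.image import Image

class ImageBrowser(object):
    """Keeps one visible Image plus a hidden one that pre-fetches the next file."""
    def __init__(self, paths):
        self.paths = paths
        self.index = 0
        self.current = Image(source=paths[0])         # the widget you actually display
        # Off-tree Image: setting its source makes Kivy load and cache the file
        self.prefetch = Image(source=paths[1]) if len(paths) > 1 else None

    def show_next(self):
        if self.index + 1 >= len(self.paths):
            return
        self.index += 1
        self.current.source = self.paths[self.index]  # already cached, shows instantly
        upcoming = self.index + 1
        if self.prefetch is not None and upcoming < len(self.paths):
            self.prefetch.source = self.paths[upcoming]
```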
1
0
0
This may be a basic question, but I'm still learning Kivy and I'm not sure how to do this. The program that I'm writing with Python 2.7 and Kivy reads a folder full of images, and then will display them one at a time as the user clicks through. Right now, I'm calling a function that reads the next image on the click of a button. This means that I have a bit of lag between each image. I'd like to load all the images in the beginning, or at least some of them, so that there isn't a lag as I click through the images. I'm not sure if this is done on the Python side or the Kivy side, but I appreciate any help!
How can I pre-load or cache images with Python 2.7 and Kivy
1.2
0
0
485
49,083,727
2018-03-03T11:40:00.000
-1
0
1
0
python,geocoding,str-replace
49,207,037
1
false
0
0
The coordinates from Google geocode, (-1.2890979, 36.90147), returned as latitude, longitude, are not a string but a tuple. That means you cannot call the replace method on them directly. Instead, convert the tuple using str(location), where location is your variable, and then use replace to remove the unnecessary characters.
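For example (the variable name location is the one from the question):

```python
location = (-1.2890979, 36.90147)   # a tuple, not a string

# Either format the elements directly...
coords = "{},{}".format(location[0], location[1])

# ...or stringify the tuple and strip the unwanted characters
coords = str(location).replace("(", "").replace(")", "").replace(" ", "")
```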
1
1
0
I need help with the string.replace(" ", "") I am using google geocode and it is returning the geocode as (-1.2890979, 36.90147) --------- return latitude,longitude I have stored the geocode in a variable called location I want to send the coodinates without the brackets and without the space but location.replace give a 500 Internal server error. Any suggestions on what i can do to correct this?
python geocode.replace(" ", "") Internal server error
-0.197375
0
0
89
49,085,633
2018-03-03T15:14:00.000
1
0
1
1
python,python-3.x,macos,pip,homebrew
60,066,221
3
false
0
0
In case this helps: I had a similar problem where a Homebrew upgrade made python3 seem to disappear. Running brew install python3 told me python3 was already installed and just needed to be linked with brew link python. I ran that, and the symlinks it created seem to have resolved the issue.
2
10
0
I had python2 installed on my macOS and I parallel installed python3 (without homebrew). It worked perfect and I could use python3 and pip3 from every directory without problems parallel to python and pip for version 2. Some days ago I did not find pip3 and I had to set an alias to python3 -m ... (I thought I didn't use it before but I had!). Today in the morning I worked with python3 without problems and now python3 got a command not found but I cannot find it on my directories, too. Where is my python3? And why it is gone? The only reason I see is that I installed homebrew about a week ago for installing mutt. Is it possible that the brew upgrade function has destroyed paths or even installations? Would be great to get help from you! Thanks a lot.
Python3 is suddenly gone (on macOS) - used it for at least a year
0.066568
0
0
4,348
49,085,633
2018-03-03T15:14:00.000
5
0
1
1
python,python-3.x,macos,pip,homebrew
49,093,701
3
true
0
0
Maybe someone else has the same problem, so here are the steps of my solution: the which command showed me the directories the versions are linked from everywhere in the system; the --version argument gave me an overview of where the different versions of python and vim are found (I checked vim too, for more information about the directory changes); and looking at echo $PATH and ls -lha /etc/paths* told me more about the current search order of installation directories and about the changes (brew saved the original file as /etc/paths~orig). With this information I first upgraded with brew upgrade python3 (my installed libraries seem to have stayed as they were before the mysterious loss of python3), and then I could fix the paths and add some aliases to get the environment I want to work with. Now everything seems to be like it was before the problems. If I notice any further changes I now have the knowledge to solve them within a few minutes. Good feeling! It remains unclear why brew downgraded the python3 installation, because I'm sure I didn't install it myself back when I added python3 alongside python2, but that isn't very important. Thanks to the helpers - especially @avigil.
2
10
0
I had python2 installed on my macOS and I parallel installed python3 (without homebrew). It worked perfect and I could use python3 and pip3 from every directory without problems parallel to python and pip for version 2. Some days ago I did not find pip3 and I had to set an alias to python3 -m ... (I thought I didn't use it before but I had!). Today in the morning I worked with python3 without problems and now python3 got a command not found but I cannot find it on my directories, too. Where is my python3? And why it is gone? The only reason I see is that I installed homebrew about a week ago for installing mutt. Is it possible that the brew upgrade function has destroyed paths or even installations? Would be great to get help from you! Thanks a lot.
Python3 is suddenly gone (on macOS) - used it for at least a year
1.2
0
0
4,348
49,087,309
2018-03-03T17:51:00.000
0
0
0
0
python,pyqt
49,087,417
2
false
0
1
You need to connect the stateChanged signal (for QCheckBox; emitted every time you check/uncheck the box) or currentIndexChanged signal (for QComboBox; emitted every time you select a different item in the combo box) to a slot (you can also use a lambda here). In that slot all you need to do is call the QLineEdit's show() or hide() method to toggle the visibility of the line edit.
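A minimal sketch of the checkbox case; PyQt5 is assumed, and the combo-box case is analogous with currentIndexChanged:

```python
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QCheckBox, QLineEdit

app = QApplication(sys.argv)
window = QWidget()
layout = QVBoxLayout(window)

checkbox = QCheckBox("Show line edit")
line_edit = QLineEdit()
line_edit.hide()                      # hidden until the box is checked

# stateChanged fires on every check/uncheck; toggle the line edit's visibility
checkbox.stateChanged.connect(lambda state: line_edit.setVisible(bool(state)))

layout.addWidget(checkbox)
layout.addWidget(line_edit)
window.show()
sys.exit(app.exec_())
```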
1
0
0
I am trying to show/hide a QLineEdit (or some other widget) using QCheckBox or QComboBox.
Change widget visibility using QCheckBox or QComboBox in PyQt
0
0
0
1,223
49,088,925
2018-03-03T20:31:00.000
0
0
0
1
python,xbee
52,888,170
2
false
0
0
There is a way to send a command to a remoted xbee: First, connect to the local XBee and then send a command to the local Xbee so the local Xbee can send a remote_command to the remoted XBee. Here are the details: Create a bytearray of the command. For e.g: My command is: 7E 00 10 17 01 00 13 A2 00 41 47 XX XX FF FE 02 50 32 05 C5, generated using XCTU. It is a remote AT command to set the pin DIO12 of the remoted XBee to digital out, high [5]. Create a raw bytearray of it. raw = bytearray([0x7E,0x00,0x10,0x17,0x01,0x00,0x13,0xA2,0x00,0x41,0x47,0xXX,0xXX, 0xFF,0xFE,0x02,0x50,0x32,0x05,0xC5]) Create a packet: using from digi.xbee.packets.common import RemoteATCommandPacket ATpacket = RemoteATCommandPacket.create_packet(raw, OperatingMode.API_MODE) Send the packet to the local XBee: device.send_packet(ATpacket) Bonus: A more simple way to create a packet: ATpacket = RemoteATCommandPacket(1,XBee16BitAddress.from_hex_string("0013A2004147XXXX"),XBee16BitAddress.from_hex_string("FFFE"),2,"P2",bytearray([0x05]))
2
0
0
as the title said, i'm looking for a way to send an AT command to a remote xbee and read the response. my code is in python and i'm using digi-xbee library. another question: my goal of using AT command is to get the node ID of that remote xbee device when this last one send me a message, i don't want to do a full scan of the network, i just want to get its node id and obviously the node id doens't come within the frame. so i had to send to it an AT command so he send me back its node ID. if you have any suggestions that may help, please tell me, i'm open to any helpful idea. PS. i tried to use read_device_info() within the callback function that launch when a data received but it didn't work. it works outside the function but inside no! thanks in advance
how to send remote AT command to xbee device using python digi-xbee library
0
0
1
777
49,088,925
2018-03-03T20:31:00.000
0
0
0
1
python,xbee
49,214,426
2
false
0
0
When you receive a message you get an xbee_message object; first you must define a data receive callback function and add it to the device. On that message you call remote_device.get_64bit_addr().
2
0
0
as the title said, i'm looking for a way to send an AT command to a remote xbee and read the response. my code is in python and i'm using digi-xbee library. another question: my goal of using AT command is to get the node ID of that remote xbee device when this last one send me a message, i don't want to do a full scan of the network, i just want to get its node id and obviously the node id doens't come within the frame. so i had to send to it an AT command so he send me back its node ID. if you have any suggestions that may help, please tell me, i'm open to any helpful idea. PS. i tried to use read_device_info() within the callback function that launch when a data received but it didn't work. it works outside the function but inside no! thanks in advance
how to send remote AT command to xbee device using python digi-xbee library
0
0
1
777
49,090,954
2018-03-04T01:12:00.000
1
0
0
1
python,jupyter,aws-glue
49,095,523
2
true
1
0
I think it should be possible, if you can setup a Jupyter notebook locally, and enable SSH tunneling to the AWS Glue. I do see some reference sites for setting up local Jupyter notebook, enable SSH tunneling, etc, though not AWS Glue specific.
1
3
0
I got started using AWS Glue for my data ETL. I've pulled in my data sources into my AWS data catalog, and am about to create a job for the data from one particular Postgres database I have for testing. I have read online that when authoring your own job, you can use a Zeppelin notebook. I haven't used Zeppelin at all, but have used Jupyter notebook heavily as I'm a python developer, and was using it a lot for data analytics, and machine learning self learnings. I haven't been able to find it anywhere online, so my question is this "Is there a way to use Jupyter notebook in place of a Zeppelin notebook when authoring your own AWS Glue jobs?"
Is it possible to use Jupyter Notebook for AWS Glue instead of Zeppelin
1.2
1
0
4,551
49,090,986
2018-03-04T01:19:00.000
1
0
0
1
python,pip
49,090,992
1
false
0
0
Is it a typo? The command should be sudo easy_install pip
1
0
0
I want to learn Python. Based on my tutorial, I understand I need to install "pip", with the following line in terminal : sudo easy install pip But I get "command not found" Looking quickly online, I understand it may have to do with my "$PATH" ? I don't know what it means, for for info my $PATH seems to be : /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/mysql/bin What would I need to change (and how) to be able to install pip ? thanks very much in advance
"command not found" when doing “sudo easy install pip”​
0.197375
0
0
1,427
49,091,459
2018-03-04T02:49:00.000
0
1
0
0
python,json,protocol-buffers,grpc
50,177,889
1
false
0
0
Fixed the issue by picking only the fields that are needed when deserializing the data, rather than deserializing all the data returned from the server.
1
0
0
We are trying to convert a gRPC protobuf message to finally be a json format object for processing in python. The data sent across from server in serialized format is around 35MB and there is around 15K records. But when we convert protobuf message into string (using MessageToString) it is around 135 MB and when we convert protobuf message into a JSON string (using MessageToJson) it is around 140MB. But the time taken for conversion is around 5 minutes for each. It does not add any value wherein we take so much time to convert data on the client side. Any thoughts or suggestion or caveats that we are missing would be helpful. Thanks.
Converting gRPC protobuf message to json runs for long
0
0
1
1,175
49,095,353
2018-03-04T12:23:00.000
2
0
0
0
python,nfc
49,120,363
1
false
0
0
Nfcpy only supports the standardized NFC Forum Type 1, 2, 3, and 4 Tags. Mifare 1K Classic uses a proprietary communication format and requires reader hardware with NXP Crypto-1 support.
1
1
0
I bought the card reader ACR122U and try to read mifare 1k classic cards with nfcpy. So my question is, how can i read or write on a mifare 1k classic card using nfcpy?
How to use nfcpy for MiFare 1k classic
0.379949
0
0
526
49,096,206
2018-03-04T13:56:00.000
0
0
0
0
python,algorithm,performance
49,102,029
1
true
0
0
You may check whether the algorithm outlined below is fast enough. Sort the numbers in the 3D array that are in the given range and keep track of the indexes. Now do a nested loop where the outer loop finds candidates for the smaller number and the inner one for the larger. The inner loop starts with the next number in the list and terminates as soon as you find numbers that correspond to non-overlapping sub-fields (the first number that satisfies condition 2; all remaining numbers fail condition 3) or the difference is greater than for the best pair of numbers already found (this and all remaining numbers fail condition 3). Update the information for the best candidate pair, if appropriate, when the inner loop terminates.
1
0
1
I have to write an algorithm that will find two numbers in a 3D array (nested lists) that: Are in a given range (min < num1, num2 < max) Do not overlap Are as close in value as possible (abs(num1 - num2) is minimal) If there exist more pairs of numbers that satisfy 1), 2) and 3), pick the one whose sum is maximal Original data is an N x N field consisting of elementary squares that each have a single random number in them. The problem is to find two sub-fields whose sums satisfy the 4 conditions written. I calculate all possible sums and store them in the 3D array sums[i][j][k] with coordinates of the starting point (i, j) and its size (k). I need to keep track of indexes to ensure that fields do not overlap. Right now I am doing this using 6 nested for loops (one for each index, 3 indexes per number) and lots of if statements (to check that sums are in range and fields do not overlap) and then simply iterating over every possible combination, which is really slow. Is there a faster way to do it (maybe without so many loops)? Only standard libraries are allowed
How can I find pairs of numbers in a matrix without using so many nested loops?
1.2
0
0
149
49,099,308
2018-03-04T19:10:00.000
1
0
0
0
python,django,django-models,django-templates,django-views
49,099,402
1
false
1
0
You should use a queuing mechanism (a worker/consumer setup) to avoid this problem, for example Celery. Steps for sending the email: 1. Add the email address and related info to the queue as a task. 2. Consume the queue (this runs in a different process, possibly in parallel); see the sketch below. You can also use Channels, newly added to the Django family of apps, which provides an asynchronous way to handle email or any other deferred task.
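A minimal sketch of the Celery approach; this assumes Celery is already configured for the project, and the task and field names are made up:

```python
# tasks.py
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_signup_email(address):
    # Runs in a Celery worker process, so the view returns immediately
    send_mail("Welcome", "Thanks for signing up.", "noreply@example.com", [address])
```

In the view you would then call send_signup_email.delay(address) instead of sending the mail inline, so the request finishes without waiting on the email or the Google-sheet write.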
1
0
0
I am using form.py, and the user types in some email id. I want to send an email to that particular address and also write it into a Google sheet using gspread. I am able to do this in my views.py, but the problem is that it takes a lot of time to write, which slows down the rendering process. Is there any way I can run my logic after rendering the template?
how to call some logic in my views after rendering template in django
0.197375
0
0
67
49,099,537
2018-03-04T19:32:00.000
1
0
0
1
python,linux,python-idle
49,099,794
1
true
0
0
If you can start python2.7 in a terminal, you should be able to run its version of IDLE by adding the option -m idlelib.idle to the startup command. Details depend on your system. If you can start IDLE with an icon, you should be able to copy the icon and change the Python version from 3.6 to 2.7 in the icon properties. Again, details depend on your system.
1
0
0
When I open IDLE it is version 3.4.0. I know that I have Python 2.7.6 installed on my system as well. How can I open the IDLE interface for this version?
How do I open different version of IDLE (2.7.6)? linux
1.2
0
0
284
49,101,093
2018-03-04T22:17:00.000
0
1
0
0
python,api,amazon-web-services,lambda,whoosh
53,909,533
2
true
0
0
The answer seems to be no. Serverless environments by default are ephemeral and don't support persistent data storage which is needed for something like storing an index that Whoosh generates.
1
2
0
I'm trying to set up Whoosh search in a serverless environment (aws lambda hosted api) and having trouble with Whoosh since it hosts the index on the local filesystem. That becomes an issue with containers that aren't able to update and reference a single index. Does anyone know if there is a solution to this problem. I am able to select the location that the directory is hosted but it has to be on the local filesystem. Is there a way to represent an s3 file as a local file? I'm currently having to reindex every time the app is initialized and while it works it's clearly an expensive and terrible workaround.
Is it possible to use Whoosh search in a serverless environment?
1.2
0
0
266
49,102,284
2018-03-05T01:27:00.000
0
0
1
0
python,class
49,122,277
1
false
0
0
Well, that took me a bit, and this will probably now be forgotten, but maybe it will help a random Python newbie in the future. Anyway, I didn't realise that Python lists are heterogeneous and can store instances of a class. If you're reading this you should be able to figure it out from there.
1
0
0
So I'm trying to create instances of a class procedurally for a program that is going to run in Minecraft: Pi Edition. The only problem I'm having is that I (think I) need to make a different instance of a class for each mob/block, but I can't find any way to create class instances procedurally, so that it makes instance "Mob0" for the first one and then "Mob1" for the second. Using variables to combine Mob and a number does work, however, as it just sets the variable to a class instance. Any help is appreciated, or other ways to do the same thing.
How to procedurally create instances of a class
0
0
0
83
49,102,424
2018-03-05T01:51:00.000
0
0
0
0
python,pandas,sas
50,023,202
1
false
0
0
I have run into the same error, but with SQL server data, not SAS. I believe Python Pandas may be trying to store this as a pandas datetime, which stores its values in nanoseconds. 9999DEC31 has far too many nanoseconds, as you might expect, for it to handle. You could try reading it in as an integer of days since the SAS epoch, or a string, and use the datetime module (datetime.date) to store it. Or read in the year, month, day separately and recombine using class datetime.date(year, month, day). Whatever gives you the least amount of grief. I can confirm that datetime.date CAN handle 9999DEC31. Because datetime.date is not a native Pandas class, your column would be stored by Pandas as dtype "object". But never fear, if you've done it right, every single element would be a datetime.date. Please note: If you need to work with those datetime.dates in Pandas, you would have to use the methods and objects provided in datetime, which differ from the pandas.datetime. I hope that helps. Let me know if you need more info.
1
0
1
I am doing something very simple but it seems that it does not work. I am importing a SAS table into pandas's dataframe. for the date column. I have NA which is actually using '9999dec31'd to represent it, which is 2936547 in numeric value. Python Pandas.read_sas() cant work with this value because it is too big. Any workaround? Thank you,
Python Pandas , Date '9999dec31'd
0
0
0
305
49,103,531
2018-03-05T04:32:00.000
0
0
0
0
python,tensorflow,deep-learning,keras,lstm
49,103,642
1
false
0
0
From an implementation perspective, the short answer would be yes. However, I believe your question could be more specific, maybe what you mean is whether you could do it with tf.estimator?
1
0
1
Can any one please help me out? I am working on my thesis work. Its about Predicting Parkinson disease, Since i want to build an LSTM model to adapt independent of patients. Currently i have implemented it using TensorFlow with my own loss function. Since i am planning to introduce both labeled train and unlabeled train data in every batch of data to train the model. I want to apply my own loss function on this both labeled and unlabeled train data and also want to apply cross entropy loss only on labeled train data. Can i do this in tensorflow? So my question is, Can i have combination of loss functions in a single model training on different set of train data?
Tensorflow: Combining Loss Functions in LSTM Model for Domain Adaptation
0
0
0
297
49,103,709
2018-03-05T04:57:00.000
6
0
1
0
python,c++,synchronization,multiprocessing
49,118,076
2
true
0
1
Perhaps shmget and shmat are not necessarily the most appropriate interfaces for you to be using. In a project I work on, we provide access to a daemon via a C and Python API using memory mapped files, which gives us a very fast way of accessing data The order of operations goes somewhat like this: the client makes a door_call() to tell the daemon to create a shared memory region the daemon securely creates a temporary file the daemon open()s and then mmap()s that file the daemon passes the file descriptor back to the client via door_return() the client mmap()s the file descriptor and associates consecutively-placed variables in a structure with that fd the client does whatever operations it needs on those variables - when it needs to do so. the daemon reads from the shared region and does its own updates (in our case, writes values from that shared region to a log file). Our clients make use of a library to handle the first 5 steps above; the library comes with Python wrappers using ctypes to expose exactly which functions and data types are needed. For your problem space, if it's just the python app which writes to your output queue then you can track which frames have been processed just in the python app. If both your python and c++ apps are writing to the output queue then that increases your level of difficulty and perhaps refactoring the overall application architecture would be a good investment.
1
12
0
I am trying to modify a python program to be able to communicate with a C++ program using shared memory. The main responsibility of the python program is to read some video frames from an input queue located in shared memory, do something on the video frame and write it back to the output queue in shared memory. I believe there are few things I need to achieve and it would be great if someone can shed some light on it: Shared memory: In C/C++, you can use functions like shmget and shmat to get the pointer to the shared memory. What is the equivalent way to handle this in python so both python and C++ program can use the same piece of shared memory? Synchronization: Because this involves multi-processing, we need some sort of locking mechanism for the shared memory in both C++ and python programs. How can I do this in python? Many thanks!
How to use shared memory in python and C/C++
1.2
0
0
7,802
49,104,023
2018-03-05T05:32:00.000
0
0
0
0
python,opencv
50,983,852
2
false
0
0
I don't know why it doesn't work, but to solve your problem I would suggest implementing a comparison function that returns true even if there is a small difference in each pixel's colour value. With an appropriate threshold, you should be able to exclude false negatives.
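A small sketch of such a tolerant comparison; frame1 and frame2 are assumed to be same-sized BGR arrays from cv2.VideoCapture, and the threshold of 5 is an arbitrary choice:

```python
import cv2
import numpy as np

def frames_match(frame1, frame2, tolerance=5):
    if frame1.shape != frame2.shape:
        return False
    diff = cv2.absdiff(frame1, frame2)      # per-pixel absolute difference
    return int(np.max(diff)) <= tolerance   # allow small codec/compression noise
```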
1
3
1
I've been trying to compare frames taken from a video using OpenCV's VideoCapture() in Python. I took the first frame from a video, call it frame1; then I saved the video and grabbed the same first frame again, call it frame2. Comparing frame1 and frame2 returns false when I expected true. I also saved the frame as an image in PNG (a lossless format), saved the video, and grabbed the same first frame again, but they still don't match. How can I get the same frame every time when dealing with videos in OpenCV/Python?
Compare frames from videos opencv python
0
0
0
1,712
49,104,553
2018-03-05T06:20:00.000
2
0
1
0
python
49,104,626
1
false
0
0
!= must be written as a unit - ! = is a SyntaxError.
1
0
0
I use the code as follows: However, I got the following error: File “”, line 5 if student not in unique_engagement_students and enrollment[‘join_date’] != enrollment[‘cancel_date’]: ^ SyntaxError: invalid syntax
Python Invalid Syntax with my “if not in and statement”
0.379949
0
0
391
49,106,413
2018-03-05T08:42:00.000
2
0
0
0
python,plotly-dash
49,108,001
1
false
0
0
I have had a similar experience. Many say Python is more readable, and while I agree, I don't find Dash on a par with R and Shiny in their respective fields yet.
1
3
1
I have used Shiny for R and specifically the Shinydashboard package to build easily navigatable dashboards in the past year or so. I have recently started using the Python, pandas, etc ecosystem for doing data analysis. I now want to build a dashboard with a number of inputs and outputs. I can get the functionality up running using Dash, but defining the layout and look of the app is really time consuming compared to using the default layout from the shinydashboard's package in R. The convenience that Shiny and Shinydashboard provides is: Easy layout of components because it is based on Bootstrap A quite nice looking layout where skinning is build in. A rich set of input components where the label/title of the input is bundled together with the input. My question is now this: Are there any extensions to Dash which provides the above functionality, or alternatively some good examples showing how to do the above?
Building a dashboard in Dash
0.379949
0
0
836
49,108,596
2018-03-05T10:43:00.000
1
0
0
0
python,python-3.x,python-2.7,pandas
49,135,392
2
true
0
0
Not exactly a solution, but more of a workaround: I simply read the files in their corresponding Python versions and saved them as CSV files, which can then be read by any version of Python.
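For reference, the round-trip looks roughly like this; the file names are arbitrary, and this assumes the HDF store holds a single dataframe written with df.to_hdf:

```python
import pandas as pd

# In the Python version that wrote the HDF file:
df = pd.read_hdf("data_py27.h5")      # works without a key when the store has one object
df.to_csv("data.csv", index=False)

# In the other Python version:
df = pd.read_csv("data.csv")
```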
1
1
1
I wrote a dataframe in Python 2.7 but now I need to open it in Python 3.6, and vice versa (I want to compare two dataframes written in both versions). If I open a Python2.7-generated HDF file using pandas in Python 3.6, this is the error produced: UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 1: ordinal not in range(128) If I open a Python3.6-generated HDF file using pandas in Python 2.7, this is the error: ValueError: unsupported pickle protocol: 4 For both cases I simply saved the file by df.to_hdf. Does anybody have a clue how to go about this?
How do I read/convert an HDF file containing a pandas dataframe written in Python 2.7 in Python 3.6?
1.2
0
0
348
49,109,365
2018-03-05T11:24:00.000
1
0
1
0
python,line
49,109,422
1
true
0
0
What you're asking is somewhat terminal-specific. However, the following approach should work on both Linux and Windows: write \r to return to the beginning of the current line; write as many spaces as needed to "cover" any previous content on the line; write \r again to return to the beginning of the line; then write the new text for this line.
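A small example of that pattern; the width of 40 is an arbitrary guess at the longest line that needs covering:

```python
import sys
import time

for i in range(5):
    sys.stdout.write("\r" + " " * 40)                   # blank out the previous content
    sys.stdout.write("\rprocessing item {}".format(i))  # rewrite the line in place
    sys.stdout.flush()
    time.sleep(0.5)
print()  # move to a fresh line when done
```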
1
0
0
How can I delete a complete line in the output screen of python? Can I use the escape sequence '\b' for this?
Delete a complete line in python 3 output
1.2
0
0
34
49,112,866
2018-03-05T14:37:00.000
0
0
1
0
installation,python-3.5,python-3.6,uninstallation
49,112,995
1
false
0
0
Some steps: Open Control Panel. Click "Uninstall a Program." Scroll down to Python and click uninstall for each version you don't want anymore. This works on Windows 7, no additional programs or scripts required. Hope this helps.
1
0
0
I have both the versions of python 3.5.2 and python 3.6.3, but I want to remove the 3.6.3 version (as all the dependencies are installed in 3.5.2) and then upgrade the 3.5.2 version to the latest one. Please suggest a way to do so without damaging the OS(I am using Ubuntu 16.04).
Want to remove one of the two versions of python installed on my machine
0
0
0
416
49,112,945
2018-03-05T14:41:00.000
0
0
0
0
python,django,apache,server,mod-wsgi
49,113,295
2
false
1
0
I'm not familiar with Linode restrictions, but if you have control over your Apache files then you could certainly do it with name-based virtual hosting. Set up two VirtualHost containers with the same IP address and port (and this assumes that both www.example.com and django2.example.com resolve to that IP address) and then differentiate requests using the ServerName setting in the container. In Apache 2.4 name-based virtual hosting is automatic. In Apache 2.2 you need the NameVirtualHost directive.
2
0
0
Is it possible to set up two different Django projects on the same IP address/server (Linode in this case)? For example, django1_project running on www.example.com and django2_project on django2.example.com. This is preferable, but if it is not possible, then how can I make two Djangos, i.e. one running on www.example.com/django1 and the second on www.example.com/django2? Do I need to adapt the settings.py or wsgi.py files, or the Apache files (at /etc/apache2/sites-available), or something else? Thank you in advance for your help!
Two django project on the same ip address (server)
0
0
0
591
49,112,945
2018-03-05T14:41:00.000
2
0
0
0
python,django,apache,server,mod-wsgi
49,113,277
2
false
1
0
Yes, it's possible to host several Python-powered sites with Apache + mod_wsgi from one host/Apache instance. The only constraint: all apps/sites must be powered by the same Python version, though each app may (and should) have its own virtualenv. It is also recommended to use mod_wsgi daemon mode and have each Django site run in a separate daemon process group.
2
0
0
Is it possible to set up two different Django projects on the same IP address/server (Linode in this case)? For example, django1_project running on www.example.com and django2_project on django2.example.com. This is preferable, but if it is not possible, then how can I make two Djangos, i.e. one running on www.example.com/django1 and the second on www.example.com/django2? Do I need to adapt the settings.py or wsgi.py files, or the Apache files (at /etc/apache2/sites-available), or something else? Thank you in advance for your help!
Two django project on the same ip address (server)
0.197375
0
0
591
49,116,070
2018-03-05T17:20:00.000
0
0
0
0
python-3.x,pandas,numpy,random,scikit-learn
49,116,554
1
false
0
0
Oh, there is an easy way! Create a list/array of the unique group_ids, create a random mask for this list, and use the mask to split the data.
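A short sketch of that approach with pandas/NumPy; the column name group_id and the 80/20 split are assumptions, and scikit-learn's GroupShuffleSplit offers a ready-made alternative:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"group_id": [1, 1, 2, 2, 3, 3, 4], "x": range(7)})  # toy data

groups = df["group_id"].unique()
mask = np.random.rand(len(groups)) < 0.8          # random mask over groups, not rows
train_groups = set(groups[mask])

train_df = df[df["group_id"].isin(train_groups)]
test_df = df[~df["group_id"].isin(train_groups)]  # no group_id appears in both sets
```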
1
1
1
I have a DataFrame where multiple rows share group_id values (very large number of groups). Is there an elegant way to randomly split this data into training and test data in a way that the training and test sets do not share group_id? The best process I can come up with right now is - create mask from msk = np.random.rand() - apply it to the DataFrame - check test file for rows that share group_id with training set and move these rows to training set. This is clearly non-elegant and has multiple issues (including the possibility that the test data ends up empty). I feel like there must be a better way, is there? Thanks
randomly split DataFrame by group?
0
0
0
324
49,118,216
2018-03-05T19:40:00.000
1
1
0
0
python,python-3.x,binary-data
52,750,509
1
false
0
0
Seeking, possibly several times, within just 50 kB is probably not worthwhile: system calls are expensive. Instead, read each message into one bytes and use slicing to “seek” to the offsets you need and get the right amount of data. It may be beneficial to wrap the bytes in a memoryview to avoid copying, but for small individual reads it probably doesn’t matter much. If you can use a memoryview, definitely try using mmap, which exposes a similar interface over the whole file. If you’re using struct, its unpack_from can already seek within a bytes or an mmap without wrapping or copying.
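A rough sketch of the struct.unpack_from/mmap combination; the file name, offsets, and record format here are made up:

```python
import mmap
import struct

with open("messages.bin", "rb") as f:                      # hypothetical input file
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Read one header field (a 4-byte little-endian offset) without copying the file
    (payload_offset,) = struct.unpack_from("<I", mm, 16)
    # Then pull just the fields you need, relative to that offset
    timestamp, value = struct.unpack_from("<dI", mm, payload_offset)
```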
1
3
0
I'm currently reading binary files that are 150,000 kb each. They contain roughly 3,000 structured binary messages and I'm trying to figure out the quickest way to process them. Out of each message, I only need to actually read about 30 lines of data. These messages have headers that allow me to jump to specific portions of the message and find the data I need. I'm trying to figure out whether it's more efficient to unpack the entire message (50 kb each) and pull my data from the resulting tuple that includes a lot of data I don't actually need, or would it cost less to use seek to go to each line of data I need for every message and unpack each of those 30 lines? Alternatively, is this something better suited to mmap?
Efficiently processing large binary files in python
0.197375
0
0
218
49,118,277
2018-03-05T19:44:00.000
12
0
1
1
python,macos,installation,conda
60,176,143
4
true
0
0
brew cask install anaconda, then export PATH="/usr/local/anaconda3/bin:$PATH"
1
11
0
What is the recommended approach for installing Anaconda on Mac? I tried with brew cask install anaconda which after a while returns anaconda was successfully installed!. After that - trying conda command returns command not found: conda. Is there any post step installation that needs to be done? And what is recommended way to install Conda on MacOS?
What is the best way to Install Conda on MacOS (Apple/Mac)?
1.2
0
0
18,606
49,119,297
2018-03-05T20:53:00.000
0
0
0
0
python,django,django-logging
49,120,892
1
false
1
0
The gunicorn logs will give you a report on how the workers are being spawned and closed, and their socket errors if any. Assuming you are asking about the Django app logs, where you actually decide what you want to log and to which specific file: the workers write the logs one after the other, i.e. it is thread-safe, and they will never write over one another. But this has its issues if you are trying to rotate your logs, e.g. https://justinmontgomery.com/rotating-logs-with-multiple-workers-in-django
1
0
0
I have a Django/Python application deployed(using gunicorn) with 9 workers with 5 thread each. Let's say if on a given time 45 requests are getting processed, Each thread is writing a lot of logs. How Django avoids writing logs for multiple threads at the same time? And how the file open and close works for each request(does this happen at each request, if yes, any other efficient way of logging)?
django/python logging with gunicorn: how does django logging work with gunicorn
0
0
0
427
49,119,718
2018-03-05T21:24:00.000
0
0
0
0
python,windows,pyspark
49,120,540
2
false
0
0
I don't work with Python on Windows, so this answer will be very vague, but maybe it will guide you in the right direction. Sometimes there are cross-platform errors due to one module still not being updated for the OS, frequently when another related module gets an update. I recall something happened to me with a django application which required somebody more familiar with Windows to fix it for me. Maybe you could try with an environment using older versions of your modules until you find the culprit.
2
0
1
I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. Built a deep learning model using Sknn.mlp package. I have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my windows packages. 'NoneType' object is not callable I verified input data. It is numpy.array and it does not have null value. Its dimension is same as training one and all attributed are the same. Not sure what it can be.
Error in prediction using sknn.mlp
0
0
0
52
49,119,718
2018-03-05T21:24:00.000
0
0
0
0
python,windows,pyspark
49,144,887
2
false
0
0
I finally solved the problem on windows. Here is the solution in case you face it. The Theano package was faulty. I installed the latest version from github and then it threw another error as below: RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environment. In order to solve this, I created a variable named MKL_Threading_Layer under user environment variable and passed GNU. Reset the kernel and it was working. Hope it helps!
2
0
1
I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. Built a deep learning model using Sknn.mlp package. I have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my windows packages. 'NoneType' object is not callable I verified input data. It is numpy.array and it does not have null value. Its dimension is same as training one and all attributed are the same. Not sure what it can be.
Error in prediction using sknn.mlp
0
0
0
52
49,119,793
2018-03-05T21:29:00.000
0
0
1
0
python,python-3.x,python-asyncio,coroutine
49,226,915
2
false
0
0
asyncio uses an event loop to run everything; await yields control back to the loop so it can schedule the next coroutine to run.
2
0
0
I was wondering how concurrency works in python 3.6 with asyncio. My understanding is that when the interpreter executing await statement, it will leave it there until the awaiting process is complete and then move on to execute the other coroutine task. But what I see here in the code below is not like that. The program runs synchronously, executing task one by one. What is wrong with my understanding and my impletementation code? import asyncio import time async def myWorker(lock, i): print("Attempting to attain lock {}".format(i)) # acquire lock with await lock: # run critical section of code print("Currently Locked") time.sleep(10) # our worker releases lock at this point print("Unlocked Critical Section") async def main(): # instantiate our lock lock = asyncio.Lock() # await the execution of 2 myWorker coroutines # each with our same lock instance passed in # await asyncio.wait([myWorker(lock), myWorker(lock)]) tasks = [] for i in range(0, 100): tasks.append(asyncio.ensure_future(myWorker(lock, i))) await asyncio.wait(tasks) # Start up a simple loop and run our main function # until it is complete loop = asyncio.get_event_loop() loop.run_until_complete(main()) print("All Tasks Completed") loop.close()
Understanding Python Concurrency with Asyncio
0
0
0
448
49,119,793
2018-03-05T21:29:00.000
2
0
1
0
python,python-3.x,python-asyncio,coroutine
49,119,860
2
true
0
0
Invoking a blocking call such as time.sleep in an asyncio coroutine blocks the whole event loop, defeating the purpose of using asyncio. Change time.sleep(10) to await asyncio.sleep(10), and the code will behave like you expect.
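For illustration, the worker with the blocking call replaced; async with is the usual way to take an asyncio.Lock (the `with await lock:` form from the question also works on 3.6):

```python
import asyncio

async def my_worker(lock, i):
    print("Attempting to attain lock {}".format(i))
    async with lock:               # acquire the asyncio.Lock without blocking the loop
        print("Currently Locked")
        await asyncio.sleep(10)    # yields to the event loop; time.sleep() would not
        print("Unlocked Critical Section")
```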
2
0
0
I was wondering how concurrency works in python 3.6 with asyncio. My understanding is that when the interpreter executing await statement, it will leave it there until the awaiting process is complete and then move on to execute the other coroutine task. But what I see here in the code below is not like that. The program runs synchronously, executing task one by one. What is wrong with my understanding and my impletementation code? import asyncio import time async def myWorker(lock, i): print("Attempting to attain lock {}".format(i)) # acquire lock with await lock: # run critical section of code print("Currently Locked") time.sleep(10) # our worker releases lock at this point print("Unlocked Critical Section") async def main(): # instantiate our lock lock = asyncio.Lock() # await the execution of 2 myWorker coroutines # each with our same lock instance passed in # await asyncio.wait([myWorker(lock), myWorker(lock)]) tasks = [] for i in range(0, 100): tasks.append(asyncio.ensure_future(myWorker(lock, i))) await asyncio.wait(tasks) # Start up a simple loop and run our main function # until it is complete loop = asyncio.get_event_loop() loop.run_until_complete(main()) print("All Tasks Completed") loop.close()
Understanding Python Concurrency with Asyncio
1.2
0
0
448
49,122,144
2018-03-06T01:38:00.000
6
0
1
0
python,visual-studio-code
49,126,446
1
true
0
0
From the VS Code settings help: Path to folder with a list of Virtual Environments (e.g. ~/.pyenv, ~/Envs, ~/.virtualenvs). You should put: "python.venvPath": "/home/jfb/EVs" Then reload the VS Code window and try again.
1
3
0
After setting python.venvPath to: "python.venvPath": "/home/jfb/EVs/env/bin/python3" and restarting VS Code, the virtual environment does not show up in the list of Python interpreters. All the search results I have read seem to say this is all that is needed to have the virtual environment version of Python show up in the list of interpreters. I am using VS Code version 1.20.1 on Mint 18. It seems so simple, so what am I missing? Regards, Jim
Setting python.venvPath has no effect
1.2
0
0
5,425
49,122,455
2018-03-06T02:21:00.000
1
0
1
0
python,gluon,mxnet
49,142,254
1
true
0
0
Unfortunately, it is not possible at the moment, but that might change later.
1
0
0
How can I set the number of CPUs to use in MXNET Gluon to a certain number, say 12? I don't see the answer in the documentation anywhere and by default MXNET uses all the CPUs.
Limit number of CPUs using MXNET
1.2
0
0
204
49,123,727
2018-03-06T05:05:00.000
0
0
1
0
python
49,125,303
1
false
0
0
Even after re-installing, the error persisted, but I accidentally found a solution by restarting and re-installing Anaconda.
1
0
0
I'm trying to install Anaconda, so I'm uninstalling all Python installations for a clean install. I can't uninstall using Apps & features since that requires different admin rights, and I can only uninstall using the uninstaller that comes along with Python, located in the Python folder. My problem is that I can't find the folder where the 'Python Launcher' is located, but it is still shown in the Apps & features list. It also doesn't show up when searched.
Found python launcher 3.5 in Apps & feature but can't find location
0
0
0
34
49,125,345
2018-03-06T07:19:00.000
0
0
0
0
python,iframe,wkhtmltopdf,wkhtmltoimage
49,125,491
1
false
1
0
Converting HTML to PDF may not work here. Try capturing snapshots of the web pages in PNG/JPEG format instead; the FireShot Chrome extension is one option. I am not sure whether it will work, but it is worth trying.
1
0
0
I want to translate a site using Google Websites Translate and then download it as a PDF or JPG. I tried to use wkhtmltopdf, but Google Websites Translate returns the result in a frame. Thus, if I take a screenshot (PDF or JPG) of the translated page, I get an empty PDF.
How to take a screenshot of site which is translated using Google Websites Translate?
0
0
0
95
49,126,007
2018-03-06T08:04:00.000
0
0
0
0
python,computer-vision,deep-learning,keras,conv-neural-network
52,918,725
3
false
0
0
Since you haven't mentioned it in the details, the following suggestions (if you haven't implemented them already) could help: 1) Normalize the input data (for example, if you are working with input images, x_train = x_train/255 before feeding the input to the layer) 2) Try a linear activation for the last output layer 3) Run the fit over more epochs and experiment with different batch sizes
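A minimal sketch of suggestions 1) and 2) applied to the architecture described in the question (standalone Keras API assumed; x_train stands for the asker's Train_X array, and the 3x3 kernel size is an assumption, since the question only gives the number of filters):

```python
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# 1) normalize inputs to [0, 1] before training
x_train = x_train / 255.0

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(256, 256, 3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
# 2) linear output for a regression target (the x, y center coordinates)
model.add(Dense(2, activation='linear'))

model.compile(loss='mse', optimizer='sgd')
```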
1
2
1
I have a small dataset of ~150 images. Each image has an object (a rectangular box with white and black color) placed on the floor. The object is the same in all images but the pattern of the floor is different. The objective is to train a network to find the center of the object. Each image is of dimension 256x256x3. Train_X is of size 150x256x256x3 and Train_y is of size 150x2 (150 here indicates the total number of images) I understand 150 images is too small a dataset, but I am OK giving up some accuracy, so I trained on conv nets. Here is the architecture of the convnet I used Conv2D layer (filter size of 32) Activation Relu Conv2D layer (filter size of 64) Activation Relu Flatten layer Dense(64) layer Activation Relu Dense(2) Activation Softmax model.compile(loss='mse', optimizer='sgd') Observation: The trained model always returns the normalized center of the image, 0.5,0.5, as the center of the 'object', even on the training data. I was hoping to get the center of the rectangular object rather than the center of the image when I run the predict function on train_X. Am I getting this output because of my conv layer selections?
Object center detection using Convnet is always returning center of image rather than center of object
0
0
0
599
49,129,451
2018-03-06T11:12:00.000
1
0
0
0
python,websocket,localhost,ngrok,serve
52,701,751
1
false
0
0
You can use ngrok http 8000 to access it; it will work. Although ws is a different protocol than http, ngrok handles it internally.
1
2
0
I want to share my local WebSocket server on the internet, but ngrok only supports HTTP, while my ws.py address is ws://localhost:8000/. It works fine on localhost, but I do not know how to use it on the internet.
how to use ws(websocket) via ngrok
0.197375
0
1
2,555
49,130,905
2018-03-06T12:30:00.000
0
0
0
0
nlp,python-3.5,spacy
49,131,939
1
true
0
0
Paragraphs should be fine. Could you give an example input data point?
1
1
1
I am trying to train a new Spacy model to recognize references to law articles. I start using a blank model, and train the ner pipe according to the example given in the documentation. The performance of the trained model is really poor, even with several thousands on input points. I am tryong to figure out why. One possible answer is that I am giving full paragraphs to train on, instead of sentences that are in the examples. Each of these paragraphs can have multiple references to law articles. Is this a possible issue? Turns out I was making a huge mistake in my code. There is nothing wrong with paragraphs. As long as your code actually supplies them to spacy.
Do I need to provide sentences for training Spacy NER or are paragraphs fine?
1.2
0
0
444
49,131,554
2018-03-06T13:03:00.000
0
0
1
1
python,python-3.x,pip,homebrew
49,654,021
1
false
0
0
You may have a folder /usr/local/lib/python2.7/site-packages with your old packages... In that case you can list the installed packages with ls.
1
0
0
I recently updated my packages on macOS High Sierra using brew update && brew upgrade, and now python and pip are symlinks to python3 and pip3. From brew info python: Unversioned symlinks python, python-config, pip etc. pointing to python3, python3-config, pip3 etc., respectively, have been installed. I tend to install all my packages within my $HOME directory by using: pip install --user <package>, so my first instinct to reinstall the packages was to do a pip freeze to get the list of packages and then just try to install them using pip3, but after the upgrade I noticed I don't have pip2 anymore. Is there a way, without having to install python2, to list the user-installed packages, something like pip freeze, so that later I could just reinstall them using pip (now pip3)? (I still have the $HOME/Library/Python/2.7 directory with all its contents)
how to migrate all pip2 packages to pip3 not having anymore pip2
0
0
0
395
49,136,190
2018-03-06T16:58:00.000
0
0
0
1
python,python-3.x,google-app-engine,importerror
49,155,149
1
true
1
0
I needed to include an additional file, ./setup.cfg, containing: [install] prefix=
1
0
0
I am deploying a Python3 app to Google App Engine Flexible Environment. I have all my dependencies listed in the requirements.txt file. During deployment I received messages indicating the Google libraries have been deployed. But, when the service is started it fails with from google.cloud import storage ModuleNotFoundError: No module named 'google' This runs fine locally.
Google App Engine Flexible Python service fails with "ModuleNotFoundError: No module named 'google'"
1.2
0
0
205
49,136,417
2018-03-06T17:11:00.000
2
0
0
1
python,google-app-engine,google-cloud-datastore
49,137,161
1
false
1
0
Not really an answer, just some considerations. I see a few difficulties to consider if using GCS for storing the shared models: importing the models in your app code would be a bit more complex, you'll need to use GCS libraries to read the file(s) for dynamic importing as they would not be available in the local filesystem. As a side effect of the dynamic importing you may lose some development capabilities in your IDE (like auto-completion, object structure verifications, etc). Preserving them might be possible, but probably not trivial. splitting the model definitions across model files (for partial reuse, inheritance and/or inter-model references for example) would not be a simple task. The point above would need to be addressed in the model files as well, in addition to the application code. deploying the app code on GAE and the models on GCP will always be non-atomic; extra care in coordinating the deployments and probably backward/forward compatibility would be needed to minimize/eliminate transient failures. IMHO the symlinks would be a simpler approach.
1
3
0
If App Engine service A and service B both depend on a datastore model, is there an effective way to share that model between both services without either duplicating the model or symlinking the file the model class definition is declared in? Would like to hear anyone's experience with this. Maybe storing the shared dependencies in Cloud Storage and pulling the relevant files from there?
Reuse datastore models across services without symlink
0.379949
0
0
42
49,138,355
2018-03-06T19:14:00.000
0
0
1
0
python,arrays
49,138,408
3
false
0
0
I think you want a set, e.g. set(arr2).issubset(arr1).
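For completeness, a short sketch of both the built-in set and an explicit dictionary used as the hash table the question describes:

```python
arr1 = [11, 1, 13, 21, 3, 7]
arr2 = [11, 3, 7, 1]

# built-in set
print(set(arr2).issubset(arr1))            # True

# explicit hash table (dict), mirroring the C++ method in the question
lookup = {x: True for x in arr1}           # step 1: hash every element of arr1
is_subset = all(x in lookup for x in arr2) # steps 2-3: probe each element of arr2
print(is_subset)                           # True
```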
1
1
0
I want to find whether an array is a subset of another array or not, and one of the methods I can think of is using a hashtable, but I want to implement it in Python. Attached in the thread is the C++ implementation. I'm not looking for built-in functions here like set etc. Python only has the concept of a dictionary in terms of hashtables, but I'm not sure how to proceed from here. Any suggestions would help me solve it. Below are a couple of lists: arr1[] = [11, 1, 13, 21, 3, 7] arr2[] = [11, 3, 7, 1] Method (C++, use hashing) 1) Create a hash table for all the elements of arr1[]. 2) Traverse arr2[] and search for each element of arr2[] in the hash table. If an element is not found then return 0. 3) If all elements are found then return 1. Lists can be millions of numbers as well, so a scalable and efficient solution is expected.
Find whether an array is a subset of another array the hashtable way (Python)
0
0
0
748
49,142,567
2018-03-07T01:07:00.000
2
0
1
0
python,anaconda
57,350,144
2
false
0
0
It happened to me as well. I created a new env and was able to switch to it using the command conda activate. But once I was in the new env I was not able to use the conda command at all, even to deactivate the env. I just opened a new Windows command prompt, switched to the new env again, and was then able to use the conda command with no issues.
1
1
0
conda command was working fine from Anaconda prompt. I created a new environment for tensorflow after which it says - 'conda' is not recognized as an internal or external command,operable program or batch file. I have checked all my PATH variables, and root, scripts and lib folder paths are added to the PATH. It just does not recognize any commands - conda, activate, deactivate, any of these.
Conda not recognized as internal or external after creating new environment
0.197375
0
0
1,460
49,143,803
2018-03-07T03:50:00.000
-1
0
1
1
python,macos,path,pycharm,anaconda
49,167,249
3
false
0
0
Add the path in ~/.bashrc, such as export PATH="/anaconda3/bin:$PATH" Then PyCharm CE can work with Anaconda.
1
1
0
PyCharm CE and Anaconda have been installed. I know I should create a symlink to $HOME/.anaconda, but what is the command on macOS/Linux? Or any other solution? Thx
PyCharm: Anaconda installation is not found on macOS
-0.066568
0
0
5,981
49,148,607
2018-03-07T09:40:00.000
0
0
0
0
python,selenium,drag-and-drop,webdriver
49,151,725
1
false
1
0
I solved the problem by creating a script in AutoIT.
1
0
0
I want to put a JPG into a dropzone from another window. Can I do that? In my test I open a new window (my HTML page with the JPG) and I want to drag and drop it to the dropzone on my main window. I get the error: Message: stale element reference: element is not attached to the page document. Maybe there is another solution for placing this file, e.g. from disk? I've tried several ways, including loading a file from disk and sending it using send keys.
Can i use drag and drop from other window? Python Selenium
0
0
1
87
49,149,442
2018-03-07T10:20:00.000
0
0
1
0
python,mongodb,pymongo
49,155,384
1
true
0
0
Ok, I figured this one out. Because I was getting the proper format returned, that means that I was trying to communicate with the db, but something was amiss with the Python/JavaScript communication. I'm pretty sure that Python does not understand JavaScript. That is why everything must be explicitly stated as a string. For example, with: db["collection"].update({u'foo' : bar},{'$push':{u'baz' : foobaz}}). Notice how I must use 'foo' instead of foo. The same goes for the values. I am updating the values by a variable passed into a function. In order for pymongo to correctly convert this into JavaScript to communicate with my MongoDB database, I must also explicitly convert it into a string with str(bar) and str(foobaz). None of this would be possible with Python unless I was using pymongo. So in order for pymongo to do its magic, I must give it the proper format. This doesn't seem to apply to all versions of Python and pymongo, as I've searched this problem and none of the people using pymongo had to use this specific conversion. This may also have something to do with the fact that I'm using Python 2.7. I know that the typechecking/conversion is a little different in Python 3 in some instances. If someone can explain this in a more lucid way (using the correct Python and JavaScript terminology) that would be great. Update: This also seems to hold true for any type, e.g. in order to increment using the $inc operator with 1, I must also use int(1) to convert my int into an int explicitly. (Even though this is kind of weird, because int should be primitive, but maybe it's an object in Python).
1
1
0
I have written a Python program that creates a MongoDB database. I have written a function that is supposed to update this database. I've searched through many, many forum posts for similar problems, but none of them seem to address this exact one. Basically, my objects are very simple, like so: object{ 'foo' : 'bar' 'baz' : [foobar,foobarbaz,] } I basically create these objects, then, if they are repeated I update them with a function like this: db["collection"].update({u'foo' : bar},{'$push':{u'baz' : foobaz}}) I am trying to append a string to the list which is the value for the field name 'baz'. However, I keep getting this object in return: {'updatedExisting': False, u'nModified': 0, u'ok': 1.0, u'n': 0} I've tried replacing update with update_one. I am using Python 2.7, Ubuntu 16.04, pymongo 2.7.2, mongodb 3.6.3 Thanks!
MongoDB / PyMongo / Python update not working
1.2
1
0
610
49,150,948
2018-03-07T11:36:00.000
0
1
0
0
python,encoding,exception-handling,python-2.6,shutil
49,153,288
3
false
0
0
Well, the output is perfectly correct. 'Г' is the Unicode character U+0413 (CYRILLIC CAPITAL LETTER GHE) and its UTF-8 encoding is the two bytes '\xd0' and '\x93'. You simply have to read the log file with a UTF-8-enabled text editor (gvim and Notepad++ are, for example), or if you have to process it in Python, make sure to read it as a UTF-8 encoded file.
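For the Python route, a small sketch for Python 2.6 using codecs; the log path is a made-up example:

```python
import codecs

# read the log back as UTF-8 so the Cyrillic path is decoded correctly
log = codecs.open('/var/log/copy_errors.log', 'r', encoding='utf-8')
for line in log:
    print(line.rstrip().encode('utf-8'))  # re-encode explicitly for the console
log.close()
```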
2
0
0
Python 2.6, Linux (CentOS). I am using the shutil.copyfile() function to copy a file. I write an exception message to the log file if the file doesn't exist. However, I get a message with wrong symbols, because my file path contains Russian characters. For example: original file path - "/PNG/401/405/018_/01200Г/osv_1.JPG" ('Г' is a Russian symbol) file path in message - "/PNG/401/405/018_/01200\xd0\x93/osv_1.JPG" I tried to use this code print(str(error).decode('utf-8')) but it doesn't work. But this code print(os.listdir(r'/PNG/401/405/018_/')[0].decode('utf-8')) works pretty well. Any ideas?
IOError return message with broken encoding
0
0
0
84
49,150,948
2018-03-07T11:36:00.000
0
1
0
0
python,encoding,exception-handling,python-2.6,shutil
49,230,851
3
true
0
0
print(str(error).decode('string-escape')) works for me.
2
0
0
Python 2.6, Linux (CentOS). I am using the shutil.copyfile() function to copy a file. I write an exception message to the log file if the file doesn't exist. However, I get a message with wrong symbols, because my file path contains Russian characters. For example: original file path - "/PNG/401/405/018_/01200Г/osv_1.JPG" ('Г' is a Russian symbol) file path in message - "/PNG/401/405/018_/01200\xd0\x93/osv_1.JPG" I tried to use this code print(str(error).decode('utf-8')) but it doesn't work. But this code print(os.listdir(r'/PNG/401/405/018_/')[0].decode('utf-8')) works pretty well. Any ideas?
IOError return message with broken encoding
1.2
0
0
84
49,152,116
2018-03-07T12:34:00.000
-2
0
0
0
python,matplotlib,matplotlib-basemap,contourf
49,181,090
3
false
0
0
Given that the question has not been updated to clarify the actual problem, I will simply answer the question as it is: no, there is no way to make contourf not interpolate, because the whole concept of a contour plot is to interpolate the values.
1
3
1
I am trying to plot some 2D values in a Basemap with contourf (matplotlib). However, contourf by default interpolates intermediate values and gives a smoother image of the data. Is there any way to make contourf to stop interpolating between values? I have tried by adding the keyword argument interpolation='nearest' but contourf does not use it. Other option would be to use imshow, but there are some functionalities of contourf that do not work with imshow. I am using python 3.6.3 and matplotlib 2.1.2
Stop contourf interpolating values
-0.132549
0
0
6,384
49,155,292
2018-03-07T15:15:00.000
0
0
0
0
rethinkdb,rethinkdb-javascript,rethinkdb-python
49,210,904
1
true
0
0
I am not aware of a way to do this with RethinkDB. However, I know that it is not a feature that would scale well in a cluster of DB servers, as it would introduce a bottleneck in the insert commands. It is always possible to do it by hand, however, simply by providing an id field on the documents you are inserting into the tables.
1
0
0
Is there a way to make RethinkDB generate primary keys automatically and ensure the keys are in increasing order, like say 1 to n? I know that when we insert a row into RethinkDB it automatically generates a primary key and returns a variable generated_keys, but I want a primary key that increases in a linear fashion, say starting from 4000 to n or 5000 to n, and so on.
RethinkDB - Automatically generating primary keys that are linear
1.2
0
0
77
49,156,297
2018-03-07T16:02:00.000
0
0
0
1
python,amazon-ec2,windows-task-scheduler
49,156,596
1
true
0
0
'schtasks' is just an executable. You can always launch it from within a Python script, using subprocess for example. You need to keep the following in mind, though: It could require elevation. If your script runs in an ordinary process, you could get a UAC prompt and will need a way to deal with it. If you'd like to create a task that runs without a user logging on, you need to manage the user/password. If you create a bunch of tasks, you need to manage them: query/pause/stop/delete. That's all I can think of right now...
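A hedged sketch of launching schtasks from Python with subprocess; the task name, interpreter path, script path and schedule are made-up example values:

```python
import subprocess

# create a daily task that runs a script at 09:00 (example values)
cmd = [
    "schtasks", "/Create",
    "/TN", "MyPythonJob",
    "/TR", r"C:\Python36\python.exe C:\jobs\my_script.py",
    "/SC", "DAILY",
    "/ST", "09:00",
]
subprocess.check_call(cmd)  # raises CalledProcessError if schtasks returns non-zero
```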
1
0
0
I have set up a Windows server on AWS and have set it up to run Python. I'm trying to get this to run on a regular basis, but I'm not sure if schtasks will work to run the Python code. Could you give some advice on this please? Also, the reason I didn't set this up on a natively Python-friendly OS is because I was having some issues installing the libraries I needed. Any help or advice is hugely appreciated.
Is it Possible to use schtasks on Python Code?
1.2
0
0
325
49,158,613
2018-03-07T18:09:00.000
0
0
1
0
python
49,158,696
2
false
0
0
There's no conflict with import & write. Once the import is done, you have all the needed information held locally. You can overwrite the file without disturbing the values you hold in your run-time space.
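A minimal sketch of that idea, assuming file2.py holds two integer variables named counter_a and counter_b (hypothetical names):

```python
# file1.py
import file2

# the imported values are already held in memory
new_a = file2.counter_a + 1
new_b = file2.counter_b + 1

# overwrite file2.py with the incremented values
with open('file2.py', 'w') as f:
    f.write('counter_a = {}\n'.format(new_a))
    f.write('counter_b = {}\n'.format(new_b))
```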
1
0
0
file2.py is just variables. I want file1.py to import those variables (import file2), increment them, truncate file2.py, and rewrite it with the newly incremented variables. I know how to increment them; I'm just not sure how I would rewrite a Python file from another Python file while that file is also being imported... Thanks!
How do I rewrite a Python file with a Python script
0
0
0
777
49,162,276
2018-03-07T22:29:00.000
1
0
0
0
python,tensorflow,neural-network,artificial-intelligence
49,273,055
1
true
0
0
I don't know if and how this would work in TF. But specific "dynamic" deep learning libraries exist that might be a better fit for your use case. PyTorch, for example.
1
0
1
I'm investigating using TensorFlow for an experimental AI algorithm using dynamic neural nets, allowing the system to scale (remove and add) layers and the width of layers. How should one go about this? The follow-up is that I also want to make the nets hierarchical so that they converge to two values (the classifier and the estimate of how sure it is). E.g. if there is a great variance not explained by the neural net, it might give a 0.4 out of 1 as a classifier but also a "sure" value indicating how good the neural net "feels" about the estimate. To compare to us humans: we can grasp a concept and also grade how confident we are. I also want the hierarchical structure to be dynamic, connecting subnets together, disconnecting them, and also removing them entirely from the system. My main question: is this an experiment I should do in TensorFlow? I understand this is not a true technical question, but if you feel it is out of bounds, please try to edit it into a more objective question.
How to make dynamic hierarchical TensorFlow neural net?
1.2
0
0
195
49,164,507
2018-03-08T02:39:00.000
0
0
1
0
python,word
49,164,608
2
false
0
0
The len() function in Python, when applied to a string, gives you the number of characters in that string, not the number of words. If you want to know the number of words in a string, you need to decide how words are defined - for normal English that could, for example, be splitting on spaces, and you could use len(a.split(' ')). For mixed-language strings including Unicode characters you'll need to define custom rules, including separating out cases where each character is a word vs. where words are separated by spaces - in your example you'd need to count the English words separately from the Chinese, Korean and emoji.
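A rough sketch of such custom rules, under the assumption that space-separated Latin words, individual Chinese ideographs and individual Hangul syllables each count as one word; emoji would need extra character ranges and are left out here:

```python
import re

a = '여보세요,我是Jason. Nice to meet you☺❤'

latin_words = re.findall(r'[A-Za-z]+', a)          # Jason, Nice, to, meet, you
cjk_chars = re.findall(r'[\u4e00-\u9fff]', a)      # Chinese ideographs: 我, 是
hangul_chars = re.findall(r'[\uac00-\ud7a3]', a)   # Hangul syllables: 여, 보, 세, 요

print(len(latin_words) + len(cjk_chars) + len(hangul_chars))  # 11 (emoji not counted)
```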
1
1
0
I have a sentence mixed with Chinese, Korean and English words. I used the len() function in Python but it gave me the wrong answer. For example, we have the string a = '여보세요,我是Jason. Nice to meet you☺❤' The correct word number, excluding punctuations, is 13, but len(a) = 32 How to count the number of words correctly? Thank you very much.
How to count the number of Chinese, Korean and English words
0
0
0
1,833
49,165,004
2018-03-08T03:40:00.000
0
0
1
0
python,django,windows
49,166,062
3
false
1
0
You should use pip after activating your virtualenv Python environment.
1
0
0
Any Windows python developers here? I created a virtualenv, activated it and installed django. If I run pip3 list, it shows Django (2.0.3) as installed. The problem is when I try and use django, it never works, it just returns "no module django." When I try pip3 install django again, it says it's already installed at myname\envs...\site-packages. But when I use the django command it never looks at this path, it looks at appdata/local/programs/python/python36-32/python.exe (i.e. not the virtualenv but the python installation itself). Anyone have any ideas?
How to setup Django inside virtualenv on Windows?
0
0
0
993
49,165,884
2018-03-08T05:14:00.000
0
0
0
0
python,pitch-detection
49,166,187
1
false
0
0
According to the first result when I google "python audio processing", you can do real-time audio input/output using PyAudio. PyAudio is a wrapper around PortAudio and provides cross-platform audio recording/playback in a nice, Pythonic way.
1
0
0
I believe a lot of the libraries I have looked into require loading an audio file. Is this possible to do with audio that is being recorded live? If so, what library should I use?
How can I estimate the pitch of audio that I am currently recording in python?
0
0
0
77
49,166,657
2018-03-08T06:22:00.000
0
0
0
0
python,cluster-analysis
49,189,473
3
false
0
0
Most evaluation methods need a distance matrix. They will then work with mixed data, as long as you have a distance function that helps solve your problem. But they will not be very scalable.
2
3
1
I am trying to cluster some big data using the k-prototypes algorithm. I am unable to use the k-means algorithm as I have both categorical and numeric data. With the k-prototypes clustering method I have been able to create clusters if I define what k value I want. How do I find the appropriate number of clusters for this? Will the popular methods available (like the elbow method and the silhouette score method), which use only numerical data, work for mixed data?
How to find the optimal number of clusters using k-prototype in python
0
0
0
8,033
49,166,657
2018-03-08T06:22:00.000
1
0
0
0
python,cluster-analysis
49,523,832
3
false
0
0
Yes, the elbow method is good enough to get the number of clusters, because it is based on the total sum of squares.
2
3
1
I am trying to cluster some big data using the k-prototypes algorithm. I am unable to use the k-means algorithm as I have both categorical and numeric data. With the k-prototypes clustering method I have been able to create clusters if I define what k value I want. How do I find the appropriate number of clusters for this? Will the popular methods available (like the elbow method and the silhouette score method), which use only numerical data, work for mixed data?
How to find the optimal number of clusters using k-prototype in python
0.066568
0
0
8,033
49,170,157
2018-03-08T09:53:00.000
0
0
1
1
python
49,170,817
1
false
0
0
That is unrelated to Python; it is about the filesystem security provided by the OS. The key is that permissions are not given to programs but to the user under which they run. Windows provides the command runas that allows you to run a command (whatever language it uses) under a different user. There is even a /savecred option that lets you avoid providing the password on each invocation and instead save it in the current user's profile. So if you set up a dedicated user to run the script, give it only read permissions on the server folder, and run the script under that user, then even a bug in the script could not tamper with that folder. BTW, if the script is run as a scheduled task, you can directly say which user should be used and give its password at configuration time.
1
0
0
I'm writing a python script which copies files from a server, performs a few operations on them, and delete the files locally after processing. The script is not supposed to modify the files on the server in any way. However, since bugs may occur, I would like to make sure that I'm not modifying\deleting the original server files. Is there a way to prevent a python script from having writing permissions to a specific folder? I work on Windows OS.
Making sure that a script does not modify files in specific folder
0
0
0
38
49,171,782
2018-03-08T11:15:00.000
4
0
0
1
python,oracle,ubuntu,32bit-64bit,cx-oracle
49,172,856
1
true
0
0
I am a bit confused about your question, but this should give some clarification: A 32-bit client can connect to a 64-bit Oracle database server - and vice versa. You can install and run 32-bit applications on a 64-bit machine - this is at least valid for Windows; I don't know how it works on Linux. Your application (the Python interpreter in your case) must have the same "bitness" as the installed Oracle Client.
1
0
0
I am trying to set up a cronjob that executes a python (3.6) script every day at a given time that connects to an oracle 12g database with a 32 bit client (utilizing the cx_Oracle and sqlalchemy libs). The code itself was developed on a win64 bit machine. However, when trying to deploy the script onto an Ubuntu 16.04 server, I run into a dilemma when it comes to 32 vs 64 bit architectures. The server is based on a 64 bit architecture The oracle db is accessible via a 32 bit client my current python version on ubuntu is based on 64 bit and I spent about an hour of how to get a 32 bit version running on a 64 bit linux machine without much success. The error I receive at this moment when trying to run the python script refers to the absence of an oracle client (DPI-1047). However, I already encountered a similar problem in windows when it was necessary to switch the python version to the 32 bit version and to install a 32 bit oracle client. Is this also necessary in the ubuntu case or are there similar measurements needed to be taken? and if so, how do I get ubuntu to install and run python3.6 in 32 bit as well as the oracle client in 32 bit?
Running a Python Script in 32 Bit on 64 linux machine to connect to oracle DB with 32 bit client
1.2
1
0
698
49,175,681
2018-03-08T14:36:00.000
1
0
1
0
python,pandas
49,182,785
1
true
0
0
If you have only one HDD (not even an SSD drive), then disk IO is your bottleneck and you'd better write to it sequentially instead of writing in parallel. The disk head needs to be positioned before writing, so trying to write in parallel will most probably be slower compared to one writer process. It would make sense if you had multiple disks...
1
0
1
Is it possible to write multiple CSVs out simultaneously? At the moment, I do a listdir() on an outputs directory, and iterate one-by-one through a list of files. I would ideally like to write them all at the same time. Has anyone had any experience in this before?
Parallelize Pandas CSV Writing
1.2
0
0
61
49,177,214
2018-03-08T15:51:00.000
1
0
1
1
python,linux,python-3.x,python-venv,virtual-environment
63,673,178
2
false
0
0
You need to have the binary already available to get venv to use it. The environment gets the interpreter that you run the venv module with, so if you want a python3.6 binary (and the python3.6 symlink) in the environment, create it with that interpreter, e.g.: python3.6 -m venv ~/envs/py36
1
3
0
When I run python -m venv, the virtual environment directory that venv creates includes a binary named python and another named python3 which is just a link to python. (In my installation, python is Python 3.6 and python2 is Python 2.7.) My problem is, sometimes (and I can't understand what's the difference between subsequent invocations) it also creates another symlink python3.6 pointing to python, but sometimes it doesn't. I need this symlink (actually, tox needs it). The binaries pip3.6 and easy_install-3.6 are always installed in the virtualenv. Is there any way I can make sure that python -m venv creates a symlink python3.6? (Disclaimer: I'm using pyenv to manage my Python installation, but I can reproduce the behavior above using /usr/bin/python -m venv)
How can I make venv install a python3.6 binary?
0.099668
0
0
2,203
49,177,246
2018-03-08T15:53:00.000
0
0
0
0
python,google-sheets,airflow
49,181,497
1
true
0
0
As far as I know there is no Google Sheets hook or operator in Airflow at the moment. If security is not a concern, you could publish the sheet to the web and pull it into Airflow using the SimpleHttpOperator. If security is a concern, I recommend going the PythonOperator route and using the df2gspread library. Airflow version >= 1.9 can help with obtaining credentials for df2gspread.
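A hedged sketch of the PythonOperator route for Airflow 1.x; fetch_sheet is a hypothetical callable where the gspread/df2gspread logic would go:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def fetch_sheet(**context):
    # hypothetical: pull the Google Sheet here, e.g. with gspread or df2gspread
    pass

dag = DAG('gsheet_pull', start_date=datetime(2018, 3, 1), schedule_interval='@daily')

pull_task = PythonOperator(
    task_id='pull_gsheet',
    python_callable=fetch_sheet,
    provide_context=True,
    dag=dag,
)
```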
1
0
0
I'm new to Airflow and Python. I'm trying to connect Airflow with Google Sheets and although I have no problem connecting with Python, I do not know how I could do it from Airflow. I have searched for information everywhere but I only find Python information with gspread or with BigQuery, but not with Google Sheets. I would appreciate any advice or link.
Airflow and Google Sheets
1.2
0
0
2,341
49,179,845
2018-03-08T18:14:00.000
1
0
1
0
java,python,multithreading,concurrency
49,180,737
1
false
1
0
Instead of reasoning about performance, I highly recommend to measure it for your application. Don't risk thread problems for a performance improvement that you most probably won't ever notice. So: write thread-safe code without any performance-tricks, use a decent profiler to find the percentage of time spent inside the data structure access, and then decide if that part is worth any improvement. I bet there will be other bottlenecks, not the shared data structure. If you like, come back to us with your code and the profiler results.
1
0
0
Use case: a single data structure (hashtable, array, etc.) whose members are accessed frequently by multiple threads and modified infrequently by those same threads. How do I maintain performance while guaranteeing thread safety (i.e., preventing dirty reads)? Java: Concurrent version of the data structure (ConcurrentHashMap, Vector, etc.). Python: No need if only threads are accessing it, because of the GIL. If it's multiple processes that will be reading and updating the data structure, then use threading.Lock. Force each process's code to acquire the lock before, and release the lock after, accessing the data structure. Does that sound reasonable? Will Java's concurrent data structures impose too much of a penalty on read speed? Is there a higher-level concurrency mechanism in Python?
Thread safety vs performance in Java and Python
0.197375
0
0
113
49,182,502
2018-03-08T21:09:00.000
1
0
1
0
python,ipython,jupyter-notebook,google-colaboratory
59,556,545
3
false
0
0
I see it under Tools -> Settings -> Editor (as of 1/1/2020).
3
37
0
Normally it is possible in Jupyter or IPython notebooks to show line numbers for a cell; however, I don't see where in Google Colaboratory (Colab).
How to show line numbers in Google Colaboratory?
0.066568
0
0
25,208
49,182,502
2018-03-08T21:09:00.000
62
0
1
0
python,ipython,jupyter-notebook,google-colaboratory
49,183,101
3
true
0
0
Yep, the shortcut (Ctrl + M + L) works; another option is to use the menu bar, at Tools -> Preferences -> show line numbers. Update: new path: Tools -> Settings -> Editor -> show line numbers.
3
37
0
Normally it is possible in Jupyter or IPython notebooks to show line numbers for a cell; however, I don't see where in Google Colaboratory (Colab).
How to show line numbers in Google Colaboratory?
1.2
0
0
25,208
49,182,502
2018-03-08T21:09:00.000
5
0
1
0
python,ipython,jupyter-notebook,google-colaboratory
49,182,947
3
false
0
0
Holding Ctrl and pressing M then L (one after the other) switches line numbers on/off in the cells containing code.
3
37
0
Normally it is possible in Jupyter or IPython notebooks to show line numbers for a cell; however, I don't see where in Google Colaboratory (Colab).
How to show line numbers in Google Colaboratory?
0.321513
0
0
25,208
49,185,114
2018-03-09T01:14:00.000
1
0
0
0
pythonanywhere
49,196,785
1
false
1
0
You need to actually serve the files. On your local machine, Django is serving static files for you. On PythonAnywhere, it is not. There is extensive documentation on the PythonAnywhere help pages to get you started with configuring static files.
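As a rough sketch of what that usually involves (paths are examples and must match the static-file mappings you add on the PythonAnywhere Web tab):

```python
# settings.py (example values)
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')

MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
```

After that, run python manage.py collectstatic and map the /static/ and /media/ URLs to those directories on the Web tab.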
1
0
0
I am using Django 1.8 and I need help displaying images and files on PythonAnywhere using the model FileField and ImageField. On my development server everything is OK, but in production it does not work even though I have done everything for these two fields. The paradox is that Bootstrap is well integrated. My project is on GitHub: Geyd/eces_edu.git. Help me!
How to display FileField and ImageField on PythonAnywhere
0.197375
0
0
39
49,185,963
2018-03-09T03:05:00.000
0
0
1
0
python
49,185,973
1
false
0
0
-10 = 3(-4) + 2, hence the remainder is 2 mathematically. Alternatively, notice that 10 % 3 is equal to 1. Hence the remainder of -10 % 3 should be -1, which is equal to 3 - 1 = 2 modulo 3. By the rule that a % b follows the sign of b and |a % b| < |b|, 2 is returned as the answer.
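A quick check of that rule in the interpreter:

```python
print(-10 % 3)         # 2   (result takes the sign of the divisor)
print(divmod(-10, 3))  # (-4, 2), since -10 == 3 * (-4) + 2
print(-10 % -3)        # -1  (negative divisor, so the result is negative)
```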
1
0
0
For instance, -10 % 3 = 2; this does not make any sense to me, as the definition of % is remainder. Thanks.
I cannot understand the remainder operator for negative values in python
0
0
0
99
49,187,548
2018-03-09T06:05:00.000
1
0
1
0
python-3.6
49,200,143
1
false
0
0
Thanks. Got it: from datetime import datetime date_1 = datetime.strptime('10:06 AM - 26 Feb 2015', '%I:%M %p - %d %b %Y') date_2 = datetime.strptime('9:38 AM - 4 Mar 2015', '%I:%M %p - %d %b %Y') diff = date_2 - date_1 print(diff)
1
1
0
I have a JSON file where I have the publish time as one entry, and I want to find the difference between two times. "pubTime": "9:38 AM - 4 Mar 2015" "pubTime": "12:52 AM - 4 Mar 2015", "pubTime": "5:03 PM - 3 Mar 2015",
How can I calculate the duration between the two times '9:38 AM - 4 Mar 2015' and '10:06 AM - 26 Feb 2015' in Python
0.197375
0
0
21
49,188,928
2018-03-09T07:46:00.000
0
0
0
0
python-3.x,neural-network,keras,multiclass-classification,activation-function
49,190,269
1
false
0
0
First of all, you simply shouldn't use them in your output layer. Depending on your loss function you may even get an error. A loss function like mse should be able to take the output of tanh, but it won't make much sense. But if we're talking about hidden layers, you're perfectly fine. Also keep in mind that there are biases, which can train an offset into the layer before the layer's output is passed to the activation function.
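A small sketch of the usual arrangement for one-hot labels: tanh in the hidden layers, softmax on the output (layer sizes are arbitrary examples):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='tanh', input_shape=(20,)))  # hidden layers may use tanh
model.add(Dense(64, activation='tanh'))
model.add(Dense(10, activation='softmax'))                  # output matches one-hot labels

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```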
1
0
1
The tanh activation function bounds the output to [-1, 1]. I wonder how it works if the input (features & target class) is given in one-hot-encoded form. How does Keras internally manage the negative output of the activation function to compare it with the class labels, which are in one-hot-encoded form - meaning only 0's and 1's (no negative values)? Thanks!
Keras "Tanh Activation" function -- edit: hidden layers
0
0
0
641
49,193,808
2018-03-09T12:25:00.000
0
0
0
0
python,tensorflow,cuda,ubuntu-16.04,cudnn
49,312,396
1
true
0
0
Thanks to @Robert Crovella, who gave me the helpful solution to my question! When I tried a different way, pip install tensorflow-gpu==1.4, to install again, it found my older version of TensorFlow 1.5 and uninstalled it to install the new TensorFlow, but pip install --ignore-installed --upgrade https://URL... couldn't find it. So I guess different commands in the terminal bring different TensorFlow versions to my system. Thank you again.
1
0
1
I want to install TensorFlow 1.2 on Ubuntu 16.04 LTS. After installing with pip, I test it with import tensorflow as tf in the terminal, and the error shows: ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory. It seems that TensorFlow needs a higher CUDA version, but my TensorFlow version is 1.2, so I think my CUDA version is high enough. Is CUDA 9.0 too high for TensorFlow 1.2? By the way, I found other people can run TensorFlow 1.2 using CUDA 8.0 and cuDNN 5.1, so can you help me solve this problem? Thank you very much!
Install tensorflow1.2 with CUDA8.0 and cuDNN5.1 shows 'ImportError: libcublas.so.9.0'
1.2
0
0
341
49,195,008
2018-03-09T13:34:00.000
1
0
0
0
python,python-3.x,algorithm,machine-learning
49,195,249
1
false
0
0
The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) You can store in whatever format you like. Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to given that one new piece of data is added every time the algorithm is used?). This depends very much on what algorithm you use. Some algorithms can easily be implemented to learn in an incremental manner. For example, Linear/Logistic Regression implemented with Stochastic Gradient Descent could easily just run a quick update on every new instance as it gets added. For other algorithms, full re-trains are the only option (though you could of course elect not to always do them over and over again for every new instance; you could, for example, simply re-train once per day at a set point in time).
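As a hedged illustration of the incremental option, scikit-learn's SGD-based models expose partial_fit; X_initial, y_initial, X_new and y_new are placeholders for your own feature arrays:

```python
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss='log')   # logistic regression trained with SGD
classes = [0, 1]                  # every label must be declared on the first call

# initial training batch
clf.partial_fit(X_initial, y_initial, classes=classes)

# later: update on just the new observation instead of retraining on everything
clf.partial_fit(X_new, y_new)
```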
1
3
1
This may be a stupid question, but I am new to ML and can't seem to find a clear answer. I have implemented a ML algorithm on a Python web app. Right now I am storing the data that the algorithm uses in an offline CSV file, and every time the algorithm is run, it analyzes all of the data (one new piece of data gets added each time the algorithm is used). Apologies if I am being too vague, but I am wondering how one should generally go about implementing the data and algorithm properly so that: The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to given that one new piece of data is added every time the algorithm is used?).
Preprocessing machine learning data
0.197375
0
0
92
49,198,057
2018-03-09T16:24:00.000
2
0
0
0
python,amazon-web-services,amazon-route53,health-monitoring
49,201,020
1
true
1
0
Make up a filename. Let's say healthy.txt. Put that file on your web server, in the HTML root. It doesn't really matter what's in the file. Verify that if you go to your site and try to download it using a web browser, it works. Configure the Route 53 health check as HTTP and set the Path for the check to use /healthy.txt. To make your server "unhealthy," just delete the file. The Route 53 health checker will get a 404 error -- unhealthy. To make the server "healthy" again, just re-create the file.
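A minimal sketch of how the scraper could toggle that file; the path is a placeholder and the pass/fail decision is up to your own logic:

```python
import os

HEALTH_FILE = '/var/www/html/healthy.txt'  # wherever your web root is

def update_health(is_ok):
    if is_ok:
        # (re)create the file so the Route 53 check passes
        with open(HEALTH_FILE, 'w') as f:
            f.write('ok')
    elif os.path.exists(HEALTH_FILE):
        # remove it so the check fails and traffic fails over
        os.remove(HEALTH_FILE)
```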
1
0
0
I have a query as to whether what I want to achieve is doable, and if so, perhaps someone could give me some advice on how to achieve it. I have set up a health check on Route 53 for my server, and I have arranged it so that if the health check fails, the user will be redirected to a static website I have set up at a backup site. I also have a web scraper running regularly collecting data, and my question is: would there be a way to use the data I have collected and, depending on its value, either pass or fail the health check, therefore determining which site the user would be diverted to? I have discussed this with AWS support and they have said that their policies and conditions are there by design, and, long story short, they would not support what I am trying to achieve. I'm a pretty novice programmer so I'm not sure if it's possible to make this work, but this is my final hurdle, so any advice or help would be hugely appreciated. Thanks!
Intentionally Fail Health Check using Route 53 AWS
1.2
0
1
42
49,199,748
2018-03-09T18:11:00.000
0
0
0
1
python-3.x,scrapy
49,200,138
1
false
1
0
You will either need to add the path to where scrapy is located to your Path Environment Variable (if you're on Windows) or you could run it from your c:\Users\User\Anaconda3\Scripts and run the scrapy command with relative paths to your quotes_spider subfolder.
1
0
0
I'm learning Scrapy from a Udemy tutorial. I've installed Scrapy to the following path: c:\Users\User\Anaconda3\Scripts. Afterwards I enter the "startproject" command, which creates the project "quotes_spider" in a new folder with this path: c:\Users\User\Anaconda3\Scripts\quotes_spider. In the tutorial, the instructor changes his directory to that subfolder and is able to call scrapy from there. When I try to do that I get the error: 'scrapy' is not recognized as an internal or external command, operable program or batch file. How am I able to call scrapy from this subfolder?
run python in sub folder created after "startproject" command
0
0
0
62
49,199,787
2018-03-09T18:14:00.000
-1
0
0
0
python,libgdx,blender,index-error
58,185,138
1
false
1
0
Go into Object Mode before calling that function. bpy.ops.object.mode_set(mode='OBJECT', toggle=False)
1
1
0
I try write a game with GoranM/bdx plugin. When i create plate with texture and try export to code I get fatal error. Traceback (most recent call last): File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\ops\exprun.py", line 225, in execute export(self, context, bpy.context.scene.bdx.multi_blend_export, bpy.context.scene.bdx.diff_export) File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\ops\exprun.py", line 123, in export bpy.ops.export_scene.bdx(filepath=file_path, scene_name=scene.name, exprun=True) File "C:\Program Files\Blender Foundation\Blender\2.79\scripts\modules\bpy\ops.py", line 189, in call ret = op_call(self.idname_py(), None, kw) RuntimeError: Error: Traceback (most recent call last): File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 903, in execute return export(context, self.filepath, self.scene_name, self.exprun, self.apply_modifier) File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 829, in export "models": srl_models(objects, apply_modifier), File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 117, in srl_models verts = vertices(mesh) File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 53, in vertices vert_uv = list(uv_layer[li].uv) IndexError: bpy_prop_collection[index]: index 0 out of range, size 0 location: C:\Program Files\Blender Foundation\Blender\2.79\scripts\modules\bpy\ops.py:189 location: :-1 Maybe someone had same problem and you know how to fix it?
Blender IndexError: bpy_prop_collection
-0.197375
0
0
1,262
49,199,818
2018-03-09T18:16:00.000
0
0
1
0
python,numpy,anaconda
57,648,777
3
false
0
0
If you are using PyCharm, kindly perform 'Invalidate Caches / Restart'. There is no need to uninstall numpy or run any command.
2
9
1
I'm quite new to Python/Anaconda, and I'm facing an issue that I couldn't solve on my own or googling. When I'm running Python on cmd I can import and use numpy. Working fine. When I'm running scripts on Spyder, or just trying to import numpy on Anaconda Prompt this error message appears: ImportError: Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try git clean -xdf (removes all files not under version control). Otherwise reinstall numpy. Original error was: cannot import name 'multiarray' I don't know if there are relations to it, but I cannot update conda, as well. When I try to update I receive Permission Errors. Any ideas?
Importing the multiarray numpy extension module failed (Just with Anaconda)
0
0
0
9,555
49,199,818
2018-03-09T18:16:00.000
0
0
1
0
python,numpy,anaconda
49,199,982
3
false
0
0
I feel like I would have to know a little more, but it seems that you need to reinstall numpy and check that the install completed successfully. Keep in mind that Anaconda is a closed environment, so you don't have as much control. With regard to the permissions issue, you may have installed it as a superuser/admin. That would mean that in order to update, you would have to update as that superuser/admin.
2
9
1
I'm quite new to Python/Anaconda, and I'm facing an issue that I couldn't solve on my own or googling. When I'm running Python on cmd I can import and use numpy. Working fine. When I'm running scripts on Spyder, or just trying to import numpy on Anaconda Prompt this error message appears: ImportError: Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try git clean -xdf (removes all files not under version control). Otherwise reinstall numpy. Original error was: cannot import name 'multiarray' I don't know if there are relations to it, but I cannot update conda, as well. When I try to update I receive Permission Errors. Any ideas?
Importing the multiarray numpy extension module failed (Just with Anaconda)
0
0
0
9,555
49,200,056
2018-03-09T18:31:00.000
0
0
0
1
python,ubuntu,ssl,virtualbox,pycurl
59,869,746
2
false
0
0
sudo yum install openssl-devel for Red Hat and CentOS, etc.
2
0
0
I'm having issues with link-time and compile-time ssl backend errors when trying to import pycurl into a python script. This script ran fine on OSX, however not on my current setup. I am running Ubuntu 16.04LTS on VirtualBox, Host OS Win10, Python 2.7.12, using pycurl-7.43.0.1. The error I receive is ImportError: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (nss). I've tried uninstalling and then executing export PYCURL_SSL_LIBRARY=openssl before reinstalling again with no success. Any help is appreciated, let me know if any more information is needed.
Pycurl Import Error: SSL Backend Mismatch
0
0
0
651
49,200,056
2018-03-09T18:31:00.000
1
0
0
1
python,ubuntu,ssl,virtualbox,pycurl
49,224,400
2
true
0
0
I was missing dependencies; this was remedied with sudo apt-get install libssl-dev.
2
0
0
I'm having issues with link-time and compile-time ssl backend errors when trying to import pycurl into a python script. This script ran fine on OSX, however not on my current setup. I am running Ubuntu 16.04LTS on VirtualBox, Host OS Win10, Python 2.7.12, using pycurl-7.43.0.1. The error I receive is ImportError: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (nss). I've tried uninstalling and then executing export PYCURL_SSL_LIBRARY=openssl before reinstalling again with no success. Any help is appreciated, let me know if any more information is needed.
Pycurl Import Error: SSL Backend Mismatch
1.2
0
0
651