Available Count | AnswerCount | GUI and Desktop Applications | Users Score | Q_Score | Python Basics and Environment | Score | Networking and APIs | Question | Database and SQL | Tags | CreationDate | System Administration and DevOps | Q_Id | Answer | Data Science and Machine Learning | ViewCount | is_accepted | Web Development | Other | Title | A_Id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I would like to run all my tests (or a subset of them) with a single keyboard shortcut to achieve a faster test cycle.
So what I currently do is press ctrl+shift+R (on OSX) to open the Run... dialog and then select the test run configuration, but that requires two strokes and the mental load of selecting the appropriate configuration.
Is there a way for me to run my tests quickly, like how I can run my app (a single stroke of ctrl+R)? | 0 | python,pycharm | 2015-12-22T05:59:00.000 | 0 | 34,409,373 | Once you have selected that test-oriented run configuration, you can subsequently just press Ctrl-R, which re-runs the most recently used run configuration. | 0 | 40 | false | 0 | 1 | Set different keyboard shortcut for running unittest on Pycharm | 34,421,968
2 | 4 | 0 | 14 | 416 | 1 | 1 | 0 | I recently discovered pytest. It seems great. However, I feel the documentation could be better.
I'm trying to understand what conftest.py files are meant to be used for.
In my (currently small) test suite I have one conftest.py file at the project root. I use it to define the fixtures that I inject into my tests.
I have two questions:
Is this the correct use of conftest.py? Does it have other uses?
Can I have more than one conftest.py file? When would I want to do that? Examples will be appreciated.
More generally, how would you define the purpose and correct use of conftest.py file(s) in a py.test test suite? | 0 | python,testing,pytest | 2015-12-25T20:08:00.000 | 0 | 34,466,027 | I use the conftest.py file to define the fixtures that I inject into my tests; is this the correct use of conftest.py?
Yes, a fixture is usually used to get data ready for multiple tests.
Does it have other uses?
Yes, a fixture is a function that is run by pytest before, and sometimes
after, the actual test functions. The code in the fixture can do whatever you
want it to. For instance, a fixture can be used to get a data set for the tests to work on, or a fixture can also be used to get a system into a known state before running a test.
Can I have more than one conftest.py file? When would I want to do that?
First, it is possible to put fixtures into individual test files. However, to share fixtures among multiple test files, you need to use a conftest.py file somewhere centrally located for all of the tests. Fixtures can be shared by any test. They can be put in individual test files if you want the fixture to only be used by tests in that file.
Second, yes, you can have other conftest.py files in subdirectories of the top tests directory. If you do, fixtures defined in these lower-level conftest.py files will be available to tests in that directory and subdirectories.
Finally, putting fixtures in the conftest.py file at the test root will make them available in all test files. | 0 | 147,291 | false | 0 | 1 | In pytest, what is the use of conftest.py files? | 51,718,551 |
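The shared-fixture workflow described in the answer above can be sketched with a minimal, hypothetical conftest.py; the fixture name sample_user and its data are invented for illustration and are not from the question:

```python
# conftest.py -- fixtures defined here are visible to every test file
# in this directory and its subdirectories, with no import needed.
import pytest

def make_sample_user():
    # plain helper, so the setup logic is also reusable outside pytest
    return {"name": "alice", "id": 1}

@pytest.fixture
def sample_user():
    # runs before each test that lists `sample_user` as an argument
    return make_sample_user()

# A test file in the same directory (say test_users.py) would then
# request the fixture simply by naming it as a parameter:
#
#   def test_user_has_id(sample_user):
#       assert sample_user["id"] == 1
```

Moving this fixture into a conftest.py deeper in the tree would, as the answer notes, narrow its visibility to that subdirectory.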
2 | 4 | 0 | 17 | 416 | 1 | 1 | 0 | I recently discovered pytest. It seems great. However, I feel the documentation could be better.
I'm trying to understand what conftest.py files are meant to be used for.
In my (currently small) test suite I have one conftest.py file at the project root. I use it to define the fixtures that I inject into my tests.
I have two questions:
Is this the correct use of conftest.py? Does it have other uses?
Can I have more than one conftest.py file? When would I want to do that? Examples will be appreciated.
More generally, how would you define the purpose and correct use of conftest.py file(s) in a py.test test suite? | 0 | python,testing,pytest | 2015-12-25T20:08:00.000 | 0 | 34,466,027 | In a broad sense, conftest.py is a local per-directory plugin: here you define directory-specific hooks and fixtures. In my case I have a root directory containing project-specific test directories. Some common magic is stationed in the 'root' conftest.py, and project-specific magic in their own ones. I can't see anything bad in storing fixtures in conftest.py, unless they are not widely used (in that case I prefer to define them directly in the test files) | 0 | 147,291 | false | 0 | 1 | In pytest, what is the use of conftest.py files? | 34,493,931
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to generate a PDF file from a LaTeX template. I've done it in the development environment (running python manage.py straight from Eclipse)... but I can't make it work on the server, which runs under cherokee and uwsgi.
We have realized that open(filename) creates a file owned by root (and the root group). This doesn't happen in the development environment... but the strangest thing about this issue is that somewhere else in our code we create another text file (the file LaTeX uses is a text file too), and that one is created with the user cherokee is supposed to use, not root!
What happened? How can we fix it?
We are running this code on Ubuntu Linux, inside a virtual environment, both in development and production.
We started by following some instructions that use Python's temporary file and folder creation functions, but we thought the problem could be related to them, so we created the files "manually" in order to try to solve this issue... but it didn't work. | 0 | python,django,permissions,uwsgi,cherokee | 2015-12-26T11:52:00.000 | 1 | 34,471,080 | As I said in my comments, this issue was related to supervisord. I solved it by assigning the right path and user in the "environment" variable of supervisord's config file. | 0 | 240 | false | 1 | 1 | Django uwsgi subprocess and permissions | 34,545,562
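As a sketch of that fix (the program name and paths here are hypothetical, not taken from the question), the supervisord program section can pin both the user and the environment, so that subprocesses spawned by the Django app inherit them instead of running as root:

```ini
[program:myapp]                      ; hypothetical program name
command=/srv/myapp/venv/bin/uwsgi --ini /srv/myapp/uwsgi.ini
user=cherokee                        ; run (and spawn children) as this user, not root
environment=PATH="/srv/myapp/venv/bin:/usr/bin:/bin",HOME="/home/cherokee",USER="cherokee"
```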
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I want to run a couple of Python scripts from PHP.
On an Ubuntu machine everything looks good right out of the box.
On FreeBSD though I get /usr/local/lib/python2.7: Permission denied
Any idea how to give Apache permission to run a Python script through shell_exec or exec?
Also, notice how I had to spell out the full path to the Python script?
Is there any way to avoid that too? | 0 | php,python,linux | 2015-12-28T10:02:00.000 | 1 | 34,491,359 | Be sure to use full paths for both python and your script.
$foo = exec('/usr/bin/python /path/script.py');
Also, make sure the directory where your script is located is accessible to the www user; you will probably need to chmod 755 /path. | 0 | 474 | true | 0 | 1 | FreeBSD PHP exec permission denied | 34,491,632
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 1 | Is it possible to access local files via remote SSH connection (local files of the connecting client of course, not other clients)?
To be specific, I'm wondering if the app I'm making (which is designed to be used over SSH, i.e. user connects to a remote SSH server and the script (written in Python) is automatically executed) can access local (client's) files. I want to implement an upload system, where user(s) (connected to SSH server, running the script) may be able to upload images, from their local computers, over to other hosting sites (not the SSH server itself, but other sites, like imgur or pomf (the API is irrelevant)). So the remote server would require access to local files to send the file to another remote hosting server and return the link. | 0 | python,linux,ssh | 2015-12-28T20:14:00.000 | 1 | 34,500,111 | You're asking if you can write a program on the server which can access files from the client when someone runs this program through SSH from the client?
If the only program running on the client is SSH, then no. If it were possible, that would be a security bug in SSH. | 0 | 866 | true | 0 | 1 | Remote SSH server accessing local files | 34,500,718
1 | 3 | 1 | 0 | 1 | 1 | 0 | 0 | If this question could be worded better/needs to be split into many questions, please alert me
I need to package Python scripts in order to ship them as single-executables (ideally), or single-executables with supporting files (non-ideally).
I have seen py2app and py2exe. They do not fit my requirements, as I am looking for a single method to do this, and in the future may need to have the packaged scripts interact with the executable that they are being run from.
What is the best way to go about this? The scripts which I would be embedding may even require multiple files, which complicates matters I'm sure.
If I wanted to use an interpreter other than CPython (ie: PyPy or Stackless) in the future, would the process be different (other than API calls in the C++ executable)?
Does Python have to be installed on the computers which would be running the package, or does embedding Python mean that it is fully embedded? I saw on the Python Wiki something about Py_SetPythonHome(), which would indicate to me that it needs Python (or at least its libraries) to be installed. Am I correct? | 0 | python,c++,python-embedding | 2016-01-04T00:10:00.000 | 0 | 34,583,134 | Are you sure that you need to embed the Python files?
I ask because you mention you want to package the Python files as single executables. Couldn't you install Python on the target machine? Since Python scripts are executable on their own, you would only need something to kick them off. A master Python script could kick off all the rest of the scripts.
Otherwise, you should look into what in C++ can run a Python script, and then have the master Python script run all the other scripts. | 0 | 1,476 | false | 0 | 1 | Embedding Python into C++ | 34,583,312
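The "master script kicks off the rest" idea from the answer can be sketched with the standard library; the child script here is generated on the fly purely to keep the example self-contained:

```python
import os
import subprocess
import sys
import tempfile

# Write a stand-in child script; in practice these would be your real scripts.
child = os.path.join(tempfile.mkdtemp(), "worker.py")
with open(child, "w") as f:
    f.write("print('worker done')\n")

# The master script launches each child with the same interpreter it is
# running under, so no separate Python needs to be found on PATH.
result = subprocess.run([sys.executable, child],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # -> worker done
```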
3 | 3 | 0 | 3 | 1 | 0 | 1.2 | 1 | I am planning to invoke AWS Lambda function by modifying objects on AWS S3 bucket. I also need to send a large amount of data to the AWS Lambda function. How can I send the data to it in an efficient way? | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 2016-01-04T07:24:00.000 | 0 | 34,586,419 | I would use another S3 bucket to first send the data and then use it from the Lambda function | 0 | 1,571 | true | 0 | 1 | Send large data to AWS Lambda function | 34,586,632 |
3 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 1 | I am planning to invoke AWS Lambda function by modifying objects on AWS S3 bucket. I also need to send a large amount of data to the AWS Lambda function. How can I send the data to it in an efficient way? | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 2016-01-04T07:24:00.000 | 0 | 34,586,419 | Your Lambda function should just read from the database your large data resides in.
Assuming your modified object on S3 contains - inside the object or as the object name - some type of foreign key to the data you need out of your database:
A) If your Lambda has access to the database directly: then you can just make your lambda function query your database directly to pull the data.
B) If your Lambda does not have direct access to the database: then consider cloning the data as needed from the database to a secure S3 bucket, for access by your Lambdas when they are triggered and need it. Clone the data to S3 as JSON or some other easy-to-read format, as logical objects for your business case (orders, customers, whatever). This method will be the fastest/most efficient for the Lambda if it's possible for your use case. | 0 | 1,571 | false | 0 | 1 | Send large data to AWS Lambda function | 34,967,438
3 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 1 | I am planning to invoke AWS Lambda function by modifying objects on AWS S3 bucket. I also need to send a large amount of data to the AWS Lambda function. How can I send the data to it in an efficient way? | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 2016-01-04T07:24:00.000 | 0 | 34,586,419 | I recently did this by gzipping the data before invoking the lambda function. This is super easy to do with most programming languages. Depending on your database content this will be a better or worse solution. The content of my database had a lot of data repetition and zipped very nicely. | 0 | 1,571 | false | 0 | 1 | Send large data to AWS Lambda function | 38,183,446 |
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I have been learning a lot of Python recently using Sublime Text on a Mac. I installed Python 3 and have mainly been using that, but as a lot of documentation is for Python 2.7 and it comes with the Mac, I decided to start using 2.7 instead. I have quite a few libraries installed (for Python 3 and for 2.7). When I load my terminal it takes a good 15 seconds to get to the prompt, and it takes the same amount of time to build Python 2.7 from Sublime Text before it starts executing the code.
I know this post is probably too vague, but if anyone has had a similar experience or could suggest anything to point me in the right direction, I would really appreciate it.
Thanks. | 0 | python,macos,terminal,sublimetext | 2016-01-04T15:13:00.000 | 1 | 34,594,184 | Run python -vvv to dump out the imports Python is doing when it starts up. If the slowdown is caused by a third-party library, this should give a hint.
Check your ~/.bashrc script for duplicate entries (see comments below). | 0 | 380 | true | 0 | 1 | Sublime Text Python builds and opening a terminal takes very long time | 34,594,257 |
2 | 3 | 0 | 0 | 2 | 1 | 0 | 0 | I want to import the vlc module in my Python script, but I get the following error:
Traceback (most recent call last):
File "test.py", line 3, in
import vlc
ImportError: No module named vlc
How can I solve this problem? | 0 | python,libvlc | 2016-01-06T05:49:00.000 | 0 | 34,626,477 | Try pip install python-vlc in the command prompt (if you are using Windows). This will fix the error, as the vlc module is not yet installed on your system.
If you are using Ubuntu or another Linux distribution, first install pip (and Python) on your system using your package manager (if necessary), then run pip install python-vlc.
2 | 3 | 0 | 1 | 2 | 1 | 0.066568 | 0 | I want to import the vlc module in my Python script, but I get the following error:
Traceback (most recent call last):
File "test.py", line 3, in
import vlc
ImportError: No module named vlc
How can I solve this problem? | 0 | python,libvlc | 2016-01-06T05:49:00.000 | 0 | 34,626,477 | For people stumbling upon this answer in 2020 and using Debian Linux, try the following command:
sudo pip3 install python-vlc | 0 | 1,605 | false | 0 | 1 | vlc import error in python script + ubuntu 14.04LTS | 63,069,430 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I am new to Hyper-V and WMI, and using WMIC I need to create a VM (virtual machine). Can anybody help me with sample code or a script to refer to? My preferred scripting language is Python and I am using CentOS 6 to run wmic. Is there any way to create a VM via wmic commands? I have gone through many scripts and code snippets, but they were all in PowerShell and I don't want to use PowerShell. | 0 | python,linux,centos6,hyper-v,wmic | 2016-01-07T07:13:00.000 | 1 | 34,649,314 | There are many C# scripts that do the same work.
If you don't want to use PowerShell scripts, you can run a PowerShell command for each operation from your Python script instead.
I did the same in C# with the Process class. | 0 | 605 | false | 0 | 1 | How to create virtual machine using WMIC on hyper-v with python script or any command? | 35,030,932
1 | 1 | 0 | 2 | 1 | 1 | 0.379949 | 0 | Ok, so I'm looking to switch to PyCharm from PyScripter for OS independent development. I also wanted to mention that I'm using Perforce for version control.
So what I currently do is double click a .py for editing in Perforce, and PyScripter opens up and I edit to my heart's desire. I can click on an imported function, and it'll open up the corresponding .py file and bring me right to the function. Awesome.
So I have yet to be able to achieve that on PyCharm. I'm using the community version which should be just fine for what I want, which is just an editor with some python checking & built in console.
When I set the default .py program to use in Perforce to PyCharm, I click on the .py and PyCharm fires up. Good so far. But my problem arises when I try to "ctrl + click" a function or method. I get the "Cannot find declaration to go to." I import the associated class & file.
(Just an example, not actual code). So in Transportation.py I have "import Cars", which is a .py. I do Cars.NumberOfDoors() and I get the above error. My folder structure is:
Scripts (folder)
Population.py (General support script)
Citybudget.py (General support script)
MassTransit (folder)
Transportation.py
Cars.py
So the question boils down to: how do I properly set up the root to be the Scripts folder when I click on a file from Perforce? How do I set it up so that it recognizes where it is in the folder structure? So if I'm in MassTransit it'll set the root as the Scripts folder, and the same if I'm accessing the general support scripts like Population.py? | 0 | python,pycharm,perforce,pyscripter | 2016-01-07T17:38:00.000 | 0 | 34,661,669 | Go to
File --> Open
in PyCharm, select your Scripts folder, and open it. Then PyCharm will treat it as a project and you will be able to ctrl + click a function. | 0 | 207 | false | 0 | 1 | Directory issues within Pycharm (free version) & Perforce | 34,661,837
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a script(RelayControlMainGH.py) that monitors temperature sensors and controls relays. It uses a while true statement with a time.sleep() and runs forever. I also created a script(GetTableTimes.py) that reads 3 database table files and when they get modified a script(CreateRelayControlConfig.py) re-creates the script(RelayControlMainGH.py). So anytime I change those 3 tables in my database this new config file needs to be made because of the path changes or temp changes or logic used on the relays.
What would be a good way to stop the script (RelayControlMainGH.py) from running, allow some time for the new script to be re-created, and then start it up again?
I tried using cron without the while loop but the script (RelayControlMainGH.py) will not run. I am sure if I put it in cron with the while loop I will have to find it in the system to start and stop it.
What would be the best way to do this?
I am using a Raspberry Pi with Raspbian | 0 | python,cron | 2016-01-10T07:28:00.000 | 0 | 34,703,125 | What I did was to create a daemon service for my script (RelayControlMainGH.py) and start it upon bootup, though it can be started anytime. Then, in my script that creates the config file, I added a stop and a start of the daemon so it can pick up the new config file and keep going. It works great!! | 0 | 122 | true | 0 | 1 | start stop python script that gets created dynamically | 34,759,109
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a script(RelayControlMainGH.py) that monitors temperature sensors and controls relays. It uses a while true statement with a time.sleep() and runs forever. I also created a script(GetTableTimes.py) that reads 3 database table files and when they get modified a script(CreateRelayControlConfig.py) re-creates the script(RelayControlMainGH.py). So anytime I change those 3 tables in my database this new config file needs to be made because of the path changes or temp changes or logic used on the relays.
What would be a good way to stop the script (RelayControlMainGH.py) from running, allow some time for the new script to be re-created, and then start it up again?
I tried using cron without the while loop but the script (RelayControlMainGH.py) will not run. I am sure if I put it in cron with the while loop I will have to find it in the system to start and stop it.
What would be the best way to do this?
I am using a Raspberry Pi with Raspbian | 0 | python,cron | 2016-01-10T07:28:00.000 | 0 | 34,703,125 | I would suggest that you put the values read by GetTableTimes.py & CreateRelayControlConfig.py in a JSON file and always read them in RelayControlMainGH.py.
This way your cron jobs will be simple.
I'm not sure you need a while True loop anyway, since cron will run the script every * minute/hour/day...
I hope this helps you structure your solution better | 0 | 122 | false | 0 | 1 | start stop python script that gets created dynamically | 34,703,928
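The suggested JSON hand-off could look like the sketch below; the file name, keys, and values are invented for illustration, and a temporary directory stands in for wherever the scripts keep their config:

```python
import json
import os
import tempfile

config_path = os.path.join(tempfile.mkdtemp(), "relay_config.json")

# GetTableTimes.py / CreateRelayControlConfig.py side: dump the table values.
settings = {"sensor_path": "/sys/bus/w1/devices", "target_temp": 21.5}
with open(config_path, "w") as f:
    json.dump(settings, f)

# RelayControlMainGH.py side: re-read the file on each loop iteration,
# so table changes are picked up without regenerating the script itself.
with open(config_path) as f:
    current = json.load(f)
print(current["target_temp"])  # -> 21.5
```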
1 | 4 | 0 | 0 | 16 | 1 | 0 | 0 | I have a (python3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once, a second time gives an error. I want to test this package (both behaviours) using py.test.
Note: the nature of the package makes the two behaviours mutually exclusive, there is no possible reason to ever want both in a singular program.
I have serveral test_xxx.py modules in my test directory. Each module will init the package in the way in needs (using fixtures). Since py.test starts the python interpreter once, running all test-modules in one py.test run fails.
Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc that might result in unexplained behaviour.
Is it possible to tell py.test to run each test module in a separate python process (thereby not being influenced by inits in another test-module)
Is there a way to reliably reload a package (including all sub-dependencies, etc)?
Is there another solution (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive)? | 0 | python-3.x,pytest | 2016-01-10T11:09:00.000 | 0 | 34,704,684 | I have the same problem, and found three solutions:
reload(some_lib)
patch SUT, as the imported method is a key and value in SUT, you can patch the
SUT. Example, if you use f2 of m2 in m1, you can patch m1.f2 instead of m2.f2
import module, and use module.function. | 0 | 8,670 | false | 0 | 1 | restart python (or reload modules) in py.test tests | 57,849,472 |
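On Python 3 the reload(some_lib) option lives in importlib. A minimal demonstration with a stdlib module (json here is only a stand-in for the package being re-initialized):

```python
import importlib
import json

json.dumps = None              # clobber something, simulating state to wipe

json = importlib.reload(json)  # re-executes the module body in place
print(callable(json.dumps))    # the clobbered name is restored

# Caveat: reload re-runs the module in its *existing* namespace, so
# attributes added at runtime survive, and already-imported submodules or
# dependants are NOT reloaded. That is why running each test module in a
# separate process (e.g. via plugins such as pytest-forked / pytest-xdist)
# is often the more reliable route for a package that cannot init() twice.
```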
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 1 | I am using a C-based OCR engine known as tesseract, with the Python interface library pytesseract to access its core features. Essentially, the library reads the local contents of the installed engine for use in a Python program. However, the library continues to look for the engine when distributed as an executable. How do I instead include the engine self-contained in the executable?
Install google tesseract-ocr from
http://code.google.com/p/tesseract-ocr/. You must be able to invoke
the tesseract command as "tesseract". If this isn't the case, for
example because tesseract isn't in your PATH, you will have to change
the "tesseract_cmd" variable at the top of 'tesseract.py'.
This means you need to have tesseract installed on your target machine, independent of whether your script is turned into an exe or not. Tesseract is a requirement for your script to work. You will need to ask your users to have tesseract installed, or use an install-wizard tool which checks whether tesseract is installed and, if not, installs it for your users. But this is not the task of pyinstaller. Pyinstaller only turns your Python script into an exe. | 0 | 347 | false | 0 | 1 | Distributed C/C++ Engine with Python | 34,722,890
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I want to develop an app to track people's Whatsapp last seen and other stuff, and found out that there are APIs out there to deal with it, but they are written in Python and are normally run on Linux, I think.
I have Java and Android knowledge but not Python, and wonder if there's a way to develop most of the app in Java and get the info I want via calls to these Python APIs, but without having to install a Python interpreter or similar on the device, so the final user just has to download and run the Android app as they would any other.
I want to know if it would be very hard for someone as inexperienced as me (this is the 2nd and final year of my development degree), as it's what I have in mind for my final project. Thanks in advance | 0 | java,android,python | 2016-01-10T19:50:00.000 | 0 | 34,710,059 | Instead of running it as one app, what about running the Python script separately from the original app? I believe it would be possible, as Android is in fact a UNIX-based OS. Any readers could give their input on this idea and whether it would work. | 0 | 49 | false | 1 | 1 | how to write an Android app in Java which needs to use a Python library? | 34,710,122
2 | 3 | 0 | 0 | 1 | 0 | 1.2 | 0 | I have a Python script on a Raspberry Pi reading the temperature and humidity from a sensor. It works fine when started in IDLE, but when I try starting it in a terminal I get the message: sudo: unable to execute .thermostaatgui.py: No such file or directory. The first line in the script is #! /usr/bin/python, the same as in other scripts that run without problems, and the script is made executable with chmod +x.
In the script, Adafruit_DHT, datetime and time are imported; other scripts that work do the same. | 0 | python,raspberry-pi,executable,sensors | 2016-01-10T23:01:00.000 | 1 | 34,711,799 | Well, I'm still a little puzzled why it happened, but anyway this solved the problem:
As a workaround, I copied the contents of "thermostaatgui.py" over the contents of a working script ("mysimpletest.py"), saved it and it runs OK. | 0 | 395 | true | 0 | 1 | python executing in IDLE, but not in termnal | 35,046,201 |
2 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 0 | I have a python script on a Raspberry Pi reading the temperature and humidity from a sensor. It works fine when started in IDLE, but when I try starting it in a terminal I get the message:sudo: unable to execute .thermostaatgui.py: No such file or directory. The first line in the script is: #! /usr/bin/python, the same as in other scripts that run without problems and the script is made executable with chmod +x.
In the script, Adafruit_DHT, datetime and time are imported; other scripts that work do the same. | 0 | python,raspberry-pi,executable,sensors | 2016-01-10T23:01:00.000 | 1 | 34,711,799 | +1 on the above solution.
To debug, try this:
Type "pwd" in your terminal. This will tell you where you are in the shell.
Then type "ls -lah" and look for your script. If you cannot find it, you need to "cd" to the directory where the script exists and then execute the script. | 0 | 395 | false | 0 | 1 | python executing in IDLE, but not in termnal | 34,711,852
1 | 3 | 0 | 2 | 3 | 0 | 0.132549 | 0 | I did something very stupid. I was copying some self written packages to the python dist-packages folder, then decided to remove one of them again by just rewriting the cp command to rm. Now the dist-packages folder is gone. What do I do now? Can I download the normal contents of this folder from somewhere, or do I need to reinstall python completely. If so - is there something I need to be careful about?
The folder I removed is /usr/local/lib/python2.7 so not the one maintained by dpkg and friends. | 0 | python,debian,uninstallation,reinstall | 2016-01-12T10:10:00.000 | 1 | 34,740,756 | The directory you removed is controlled and maintained by pip. If you have a record of which packages you have installed with pip, you can force it to reinstall them again.
If not, it's too late now to learn the value of backups; but this doesn't have to be a one-shot attempt: reinstall the ones you know are missing, then live with the fact that you'll never know whether an error means you forgot to reinstall a module or something is wrong with your code. By and by, you will discover a few more missing packages which you failed to remember the first time; just reinstall those as you discover them.
As an aside, using virtualenv sounds like a superior solution for avoiding a situation where you need to muck with your system Python installation. | 0 | 4,302 | false | 0 | 1 | Accidentally removed dist-packages folder, what to do now? | 34,743,144 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am building a complex Python application that distributes data between very different services, devices, and APIs. Obviously, there is a lot of private authentication information. I am handling it by passing it with environmental variables within a Supervisor process using the environment= keyword in the configuration file.
I also have a test that checks whether all API authentication information is set up correctly and whether the external APIs are available. Currently I am using Nosetest as the test runner.
Is there a way to run the tests in the Supervisor context without brute-force parsing the supervisor configuration file within my test runner? | 0 | python,unit-testing,supervisord | 2016-01-12T22:55:00.000 | 0 | 34,755,334 | I decided to use Python Celery, which is already installed on my machine. My API queries are wrapped as tasks and sent to Celery. Given this setup, I created my test runner as just another task that runs the API tests.
The web application tests do not need the stored credentials but run fine in the Celery context as well. | 0 | 100 | false | 0 | 1 | How could I run unit tests in Supervisor context? | 34,911,457 |
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I'd like to run text processing Python scripts after submitting searchForms of my node.js application.
I know how the scripts can be called with child_process and spawn within js, but what should I set up on the app (probably some package.json entries?) so that it will be able to run Python after deploying to Bluemix?
Thanks for any help! | 0 | python,node.js,ibm-cloud | 2016-01-13T10:01:00.000 | 0 | 34,763,600 | I finally fixed this by adding an entry to dependencies in the project's package.json, which causes npm install to be called for the linked GitHub repo. It is kinda straightforward, but I found no explanation for it in the Bluemix resources. | 0 | 358 | false | 1 | 1 | How to invoke python scripts in node.js app on Bluemix? | 34,790,983
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 | Does anyone know a tool to implement a Python SMPP server, and some tips on how to proceed?
I found the Pythomnic3k framework, but did not find the material I need to use it as an SMPP server ... | 0 | python,smpp | 2016-01-13T15:47:00.000 | 0 | 34,771,013 | Take a look at the jasmin SMS gateway; it's pythonic and has an SMPP server implementation. | 0 | 491 | false | 0 | 1 | Implementing an SMPP Server in Python | 34,810,025
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I'm building an RCP app that serves as an IDE for a custom domain. One of the things we do in that domain is write python scripts that use domain-specific commands which have been wrapped as python functions. I implemented hover text support integrated with PyDev, so that if there is any domain-specific hover text available, it calls a custom ITextHover instead of PyDev's.
I have this working, but I see that if I have a string literal argument to a function, the getTextHover() method is never called on the IHoverText instance. I traced this behavior to the partitioning implementation provided by getConfiguredDocumentPartitioning in PyEditConfiguration.
Is there a way I can use PyDev's partitioning scheme but somehow override the above behavior, so that getTextHover() is called for String literal arguments? I don't see anything in the preferences, and trying to follow the implementation in the PyDev source code was not successful.
EDIT: overriding TextSourceViewerConfiguration#getConfiguredDocumentPartitioning() to return IPythonPartitions.PY_DEFAULT solves the problem. But I'm not sure what the implications are of returning this rather than IPythonPartitions.PYTHON_PARTITION_TYPE, which is the behavior provided by PyEditCOnfigurationWithoutEditor. | 0 | python,eclipse-rcp,pydev | 2016-01-13T23:07:00.000 | 0 | 34,778,771 | You shouldn't change what you changed...
The proper way would be changing PyDev itself to support your use case.
You should provide your IPyHoverParticipant (instead of doing your own text hover) and create a pull request for PyDev so that the hover works in comments/strings (i.e.: skip the "if (!pythonCommentOrMultiline) {" in org.python.pydev.editor.hover.PyTextHover.getHoverInfo(ITextViewer, IRegion) if your hover implements IPyHoverParticipant2). | 0 | 88 | true | 0 | 1 | Pydev No Hover Text for String arguments | 34,788,905 |
2 | 2 | 0 | 6 | 10 | 0 | 1 | 1 | I'm using an AWS Lambda function (written in python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket and is using a multipart upload. Whenever I test out my code (within the Lambda code editor page) it seems to work fine and I only get a single email.
But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message ID's. I've tried different email addresses but each address receives exactly two, duplicate emails.
Can anyone guide me on where I could be going wrong? I'm using the boto3 library that is imported with the sample python code to send the email. | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 2016-01-14T09:28:00.000 | 0 | 34,785,863 | I am also facing the same issue: in my case a lambda should trigger once on every PUT event in the S3 bucket, but it triggers twice with the same aws_request_id and aws_lambda_arn.
To fix it, keep track of the aws_request_id (this id will be unique for each lambda event) somewhere and have a check in the handler: if the same aws_request_id already exists, do nothing; otherwise process as usual. | 0 | 5,712 | false | 1 | 1 | AWS Lambda function firing twice | 41,511,055 |
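As an illustration of that check (a sketch, not the asker's code): a module-level set only survives while the Lambda container stays warm, so a production version would keep the seen ids in DynamoDB or another shared store.

```python
# Hypothetical sketch: skip duplicate invocations by remembering
# aws_request_id. The module-level set only persists while the Lambda
# container is warm; use DynamoDB (or similar) for a durable check.
_seen_request_ids = set()

def lambda_handler(event, context):
    request_id = context.aws_request_id
    if request_id in _seen_request_ids:
        return 'duplicate - skipped'   # same event delivered again: do nothing
    _seen_request_ids.add(request_id)
    # ... send the notification email exactly once here ...
    return 'processed'

# Simulate the double delivery with a stand-in context object:
class _FakeContext(object):
    aws_request_id = 'req-123'

first = lambda_handler({}, _FakeContext())
second = lambda_handler({}, _FakeContext())
print(first, second)
```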
2 | 2 | 0 | 13 | 10 | 0 | 1.2 | 1 | I'm using an AWS Lambda function (written in python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket and is using a multipart upload. Whenever I test out my code (within the Lambda code editor page) it seems to work fine and I only get a single email.
But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message ID's. I've tried different email addresses but each address receives exactly two, duplicate emails.
Can anyone guide me on where I could be going wrong? I'm using the boto3 library that is imported with the sample python code to send the email. | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 2016-01-14T09:28:00.000 | 0 | 34,785,863 | Yes, we have this as well, and it's not linked to the email; it's linked to S3 firing multiple events for a single upload. Like a lot of messaging systems, Amazon does not guarantee "once only delivery" of event notifications from S3, so your Lambda function will need to handle this itself.
Not the greatest, but doable.
Some form of cache with details of the previous few requests so you can see if you've already processed the particular event message or not. | 0 | 5,712 | true | 1 | 1 | AWS Lambda function firing twice | 34,795,499 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm a new Odoo developer and I need to send an automatic email when I confirm a form request, where I can manually input the sender and receiver of the email.
Does anyone have a sample or tutorial, or can anyone help me? I don't know the steps to configure the mail server because I use localhost. Thank you | python,openerp,odoo-8 | 2016-01-15T07:16:00.000 | 0 | 34,806,022 | Go to Settings -> Technical -> Email -> Outgoing Mail Servers.
Set the SMTP server, SMTP port and other credentials,
e.g.:
SMTP Server: smtp.gmail.com
SMTP port: 587
Connection security: TLS (STARTTLS)
Once done, test whether the connection is set up properly by clicking the Test Connection button.
You can then send mail by calling send_mail() | 0 | 720 | false | 1 | 1 | How i can send automatic email when i confirm a form request on Odoo 8? | 34,806,721 |
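If you want to sanity-check those SMTP settings outside Odoo first, a minimal smtplib sketch (the addresses and password below are placeholders):

```python
import smtplib
from email.mime.text import MIMEText

def build_message(sender, receiver, subject, body):
    """Assemble a simple plain-text email message."""
    msg = MIMEText(body)
    msg['From'] = sender
    msg['To'] = receiver
    msg['Subject'] = subject
    return msg

def send_via_smtp(msg, password, host='smtp.gmail.com', port=587):
    """Mirror the outgoing-server settings above: TLS (STARTTLS) on port 587."""
    server = smtplib.SMTP(host, port)
    server.starttls()
    server.login(msg['From'], password)
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()

msg = build_message('sender@example.com', 'receiver@example.com',
                    'Request confirmed', 'Your form request was confirmed.')
print(msg['To'])
```

Calling send_via_smtp(msg, 'your-password') then performs the actual delivery.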
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | Not sure if this is possible but with libsass requiring gcc-c++ >= 4.7 and Centos 6 not having it, I was curious if libsass-python could use the system's libsass instead of compiling it if it exists. I have been able to build a libsass rpm for Centos 6 but python-libsass still tries to compile it itself.
I know that I can use devtoolset-1.1 to install python-libsass (that is how I managed to build the libsass rpm) but I am trying to do all of this with puppet. So I thought if the system had libsass then python-libsass wouldn't have to install it.
I considered adding an issue in the python-libsass git project but thought I should ask here first. | 0 | python,c++,libsass | 2016-01-15T17:54:00.000 | 1 | 34,816,964 | I did come up with a solution. I created my own packages to install gcc-4.8.2.
It was a lot of work and I am not sure if it breaks a bunch of other dependencies down the line. But it worked for the server stack that I needed at the time.
I had to create all of the following packages to get it to work.
cpp-4.8.2-8.el6.x86_64.rpm
gcc-4.8.2-8.el6.x86_64.rpm
gcc-c++-4.8.2-8.el6.x86_64.rpm
gcc-gfortran-4.8.2-8.el6.x86_64.rpm
libgcc-4.8.2-8.el6.x86_64.rpm
libgfortran-4.8.2-8.el6.x86_64.rpm
libgomp-4.8.2-8.el6.x86_64.rpm
libquadmath-4.8.2-8.el6.x86_64.rpm
libquadmath-devel-4.8.2-8.el6.x86_64.rpm
libstdc++-4.8.2-8.el6.x86_64.rpm
libstdc++-devel-4.8.2-8.el6.x86_64.rpm
So again it was a lot of work, but it did work. But after figuring this out a few months later I was able to just upgrade to Centos 7. | 0 | 322 | true | 0 | 1 | Get libsass-python to use system libsass library instead of compiling it | 39,832,334 |
3 | 3 | 0 | 3 | 1 | 0 | 0.197375 | 0 | I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more.
Thanks | 0 | python,rabbitmq,rmq | 2016-01-15T18:07:00.000 | 0 | 34,817,150 | If you are using the default exchange for direct routing (exchange = ''), then you don't have to declare any bindings. By default, all queues are bound to the default exchange. As long as the routing key exactly matches a queue name (and the queue exists), the default exchange iw | 0 | 943 | false | 1 | 1 | Do I need rabbitmq bindings for direct exchange? | 41,491,616 |
3 | 3 | 0 | 1 | 1 | 0 | 1.2 | 0 | I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more.
Thanks | 0 | python,rabbitmq,rmq | 2016-01-15T18:07:00.000 | 0 | 34,817,150 | Always. In fact, even though queues are strictly a consumer-side entity, they should be declared & bound to the direct exchange by the producer(s) at the time they create the exchange. | 0 | 943 | true | 1 | 1 | Do I need rabbitmq bindings for direct exchange? | 34,817,271 |
3 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 0 | I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more.
Thanks | 0 | python,rabbitmq,rmq | 2016-01-15T18:07:00.000 | 0 | 34,817,150 | You have to bind a queue with some binding key to an exchange, else messages will be discarded.
This is how any amqp broker works, publisher publish a message to exchange with some key, amqp broker(RabbitMq) routes this message from exchange to those queue(s) which are binded with exchange with the given key.
However it's not mandatory to declare and bind a queue in publisher.
You can do that in subscriber but make sure you run your subscriber before starting your publisher.
If you think your messages are getting routed to queue without bindings than you are missing something. | 0 | 943 | false | 1 | 1 | Do I need rabbitmq bindings for direct exchange? | 34,846,505 |
1 | 2 | 0 | 0 | 16 | 1 | 0 | 0 | I can't use normal tools and techniques to measure the performance of a coroutine, because the time it spends at an await should not be taken into consideration (or it should only consider the overhead of reading from the awaitable, not the IO latency).
So how do I measure the time a coroutine takes? How do I compare 2 implementations and find the more efficient one? What tools do I use? | 0 | python,performance-testing,trace,python-asyncio | 2016-01-16T11:41:00.000 | 0 | 34,826,533 | If you only want to measure the performance of "your" code, you could use an approach similar to unit testing - just monkey-patch (even patch + Mock) the nearest IO coroutine with a Future of the expected result.
The main drawback is that e.g. an http client is fairly simple, but take momoko (a pg client)... it could be hard to do without knowing its internals, and it won't include the library's overhead.
The pros are just like in ordinary testing:
it's easy to implement,
it measures something ;), mostly one's own implementation without the overhead of third-party libraries,
performance tests are isolated and easy to re-run,
it's easy to run with many payloads | 0 | 5,793 | false | 0 | 1 | How to measure Python's asyncio code performance? | 34,839,535 |
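A minimal sketch of that idea (the coroutine names here are invented for illustration): replace the nearest IO coroutine with one that resolves immediately, then time the rest.

```python
import asyncio
import time

async def fetch(url):                  # the IO coroutine whose latency we exclude
    await asyncio.sleep(0.5)           # stands in for real network IO
    return 'payload'

async def handler(url):                # the code we actually want to time
    data = await fetch(url)
    return data.upper()

def timed(coro):
    """Run a coroutine to completion and return (result, wall time)."""
    start = time.perf_counter()
    result = asyncio.run(coro)
    return result, time.perf_counter() - start

# Monkey-patch the nearest IO coroutine (in a real test you would use
# unittest.mock.patch) so awaiting it is essentially free:
async def instant_fetch(url):
    return 'payload'

fetch = instant_fetch
result, elapsed = timed(handler('http://example.com'))
print(result, elapsed)                 # elapsed now excludes the IO wait
```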
2 | 2 | 0 | 9 | 3 | 1 | 1.2 | 0 | My intellij version is 15.0.2. But in the run context menu, there is no option regarding profiling a piece of code.
Does anyone know what goes wrong? | 0 | python,intellij-idea | 2016-01-16T18:24:00.000 | 0 | 34,830,522 | The Python profiler does not show up in IntelliJ IDEA Ultimate if the UML plugin is not enabled. At least this worked for me. I had the same issue and asked JetBrains directly. | 0 | 730 | true | 0 | 1 | python profiler not available in Intellij 15.0.2 | 39,794,104 |
2 | 2 | 0 | 1 | 3 | 1 | 0.099668 | 0 | My intellij version is 15.0.2. But in the run context menu, there is no option regarding profiling a piece of code.
Does anyone know what goes wrong? | 0 | python,intellij-idea | 2016-01-16T18:24:00.000 | 0 | 34,830,522 | Python profiling is only available in PyCharm Professional and in the version of the Python plugin for IntelliJ IDEA Ultimate. It's not available in IntelliJ IDEA Community Edition. | 0 | 730 | false | 0 | 1 | python profiler not available in Intellij 15.0.2 | 34,836,262 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 1 | I have coded a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, it runs successfully and keeps running, because I have specified repeated tasks inside the script and I don't want to stop it either. But as it is on my local machine, the script might get stopped when my internet connection is off or at night, so I can't keep the script running on my PC all day.
So is there any way, website or method by which I could deploy my script and have it execute forever? I have heard about cron jobs in cPanel, which can help with repeated tasks, but in my case I want to keep my script running on the machine until I close it myself.
Are there any such solutions? Most Twitter bots I see run forever, meaning their script is being executed somewhere 24x7. This is what I want to know: how is that possible? | 0 | python,python-2.7,tweepy | 2016-01-17T18:11:00.000 | 0 | 34,841,822 | You can add a systemd .service file, which can have the added benefit of:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance | 0 | 1,461 | false | 0 | 1 | Is it Possible to Run a Python Code Forever? | 47,680,085 |
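A minimal unit file sketch along those lines (assuming a systemd-based distro; the script path and user name are placeholders), saved e.g. as /etc/systemd/system/twitterbot.service and started with systemctl enable --now twitterbot:

```ini
[Unit]
Description=Twitter automation bot
After=network-online.target

[Service]
# Path and user are placeholders; point ExecStart at your own script
ExecStart=/usr/bin/python /home/youruser/file.py
Restart=always
RestartSec=10
User=youruser
# Sandboxing: private /tmp and read-only home directories
PrivateTmp=yes
ProtectHome=read-only

[Install]
WantedBy=multi-user.target
```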
1 | 1 | 1 | 1 | 3 | 0 | 0.197375 | 0 | I have an .so file which I pulled from an Android APK (Not my app, so I don't have access to the source, just the library)
I want to use this shared object on my 32 bit Ubuntu machine, and call some functions from it (Preferably with Python) . Is it possible to convert an Android .so to a Linux .so?
Or is there any simple solution to accessing the functions in the .so without resorting to a hefty virtual machine or something?
Thanks | 0 | android,python,linux,shared-libraries | 2016-01-19T17:47:00.000 | 1 | 34,883,612 | Most likely not. It's very probably the Android you pull it from is running on the ARM architecture, and therefore the .so library was compiled for that architecture.
Unless your desktop machine is also on the ARM architecture (it's most likely x86 and it would have to be specific such as ARMv7) the .so binary will be incompatible on your desktop.
Depending on what the .so library actually is, you may be able to grab the source code and compile it for your x86 machine.
Disclaimer: Even if you obtain a library compiled for the same architecture as your desktop (from x86 phone), there is no guarantee it will work. It may rely on other libraries provided only by Android, and this may be the start of a very deep rabbit hole. | 0 | 1,289 | false | 0 | 1 | How to use Android shared library in Ubuntu | 34,883,727 |
1 | 1 | 0 | 3 | 1 | 1 | 0.53705 | 0 | I have Eclipse Luna Release (4.4.0), and I have been using it for years.
For the first time, only on a specific project, the PyDev search doesn't work properly. As an example, if I try to search for the name of a function that I know to be there, it doesn't find it, and there are no typos. This happens for most searches, even though some of them give the expected result.
The weirdest thing is that if I use the file search, then it works. Why is that? Do you know any way to solve it? | 0 | python,eclipse,pydev | 2016-01-20T07:45:00.000 | 0 | 34,894,261 | In Navigator view, right-click the project name and select PyDev > Set as source folder (add to PYTHONPATH).
After that I find PyDev search works. Very handy as a quicker search function. | 0 | 720 | false | 0 | 1 | PyDev Search doesn't work properly on Eclipse | 51,273,106 |
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | I have a few modules that I want to import dynamically.
These modules contain classes that in turn have their own methods.
Is there a way to list the classes' methods and, moreover, get access to their internal variables?
Thanks | python-2.7 | 2016-01-21T10:45:00.000 | 0 | 34,921,554 | Is the dir(ClassInQuestion) function the one you are looking for? You should get all methods and properties. | 0 | 29 | false | 0 | 1 | How to dynamically list all methods from a class whithin a module | 35,057,956 |
1 | 3 | 0 | 1 | 1 | 0 | 1.2 | 0 | Can pickle/dill/cpickle be used to pickle an imported module to improve import speed? The Shapely module for example takes 5 seconds on my system to find and load all of the required dependencies, which I'd really like to avoid.
Can I pickle my imports once, then reuse that pickle instead of having to do slow imports every time? | 0 | python,import,pickle,dill | 2016-01-22T05:06:00.000 | 0 | 34,939,388 | The import latency is most likely due to loading the dependent shared objects of the GEOS-library.
Optimising this could maybe be done, but it would be very hard. One way would be to build a statically compiled custom python interpreter with all DLLs and extension modules built in. But maintaining that would be a major PITA (trust me - I do it for work).
Another option is to turn your application into a service, thus only incurring the runtime-cost of starting the interpreter up once.
It depends on your actual problem if this is suitable. | 1 | 627 | true | 0 | 1 | Can Python's pickle/cpickle/dill speed up imports? | 37,116,604 |
3 | 5 | 0 | 1 | 2 | 1 | 0.039979 | 0 | Trying to figure out the best way of controlling industrial PLC's with Raspberry Pi/linux server - specifically using python and pymodbus (modbusTCP) over ethernet...
Once the PLC internal registry is mapped properly to modbus, can software written in python take the place of ladder logic programming in the PLC and control it completely?
Or will ladder logic/ native PLC code still need to be written? | 0 | python,linux,raspberry-pi,modbus,plc | 2016-01-22T22:14:00.000 | 0 | 34,956,823 | I don't know if you can do this in the specific configuration you are discussing; in fact you don't say which PLC you are using, so I doubt any respondent can tell you.
But under the assumption you can technically connect the pieces, you will probably discover the performance is not adequate to really carry out reliable mechanical control.
Normally PLCs run through their program hundreds of times per second, each time sampling inputs and computing new outputs. This is fast enough so mechanics effectively see "smooth" control. (5 Hz would likely cause mechanical chatter and jerky movements of hardware).
If you "involve" Python to compute that, somehow you have pay bus communication times to/from the PLC to the Python, the Python wakeup time, Python execution time, and Python message packing/unpacking time. I doubt you can achieve all of this at several hundred times per second reliably (what happens when the OS interrupts Python to write 10M of data onto the disk for some other background process)?
If you insist in involving Python somehow, it should act only in an advisory role. That is, the PLC does all the work (e.g., you need that "ladder logic/..." to be written) but the Python code sends occasional messages to the PLC to change its overall behavior, e.g, control mode, feed rates, etc. | 0 | 13,702 | false | 0 | 1 | can python software take place of logic ladder program in PLC through modbus? | 34,964,777 |
3 | 5 | 0 | 1 | 2 | 1 | 0.039979 | 0 | Trying to figure out the best way of controlling industrial PLC's with Raspberry Pi/linux server - specifically using python and pymodbus (modbusTCP) over ethernet...
Once the PLC internal registry is mapped properly to modbus, can software written in python take the place of ladder logic programming in the PLC and control it completely?
Or will ladder logic/ native PLC code still need to be written? | 0 | python,linux,raspberry-pi,modbus,plc | 2016-01-22T22:14:00.000 | 0 | 34,956,823 | Well let's assume that you have really efficient code. And you created some dictionaries, did some lambda. You can cycle through a logic set of 2000 IO points in 5ms.
I do this in Lua every day. PLC hardware is FPGA based, but never scans faster than 10ms. Using data slows them down, and I usually end up at a 25ms scan.
Python and Lua programmed correctly can scan at 1-2ms over 2600 lines of code.
You need a C wrapper to run the scan. Use TCP modbus devices. And never more than 32 IO per IP address. It's actually very easy.
Those who do not know PLC's or only know PLC's will steer you in the wrong direction. Do your homework. Learn Lua. And then prove them wrong.
Hope that helps. | 0 | 13,702 | false | 0 | 1 | can python software take place of logic ladder program in PLC through modbus? | 39,781,980 |
3 | 5 | 0 | 6 | 2 | 1 | 1.2 | 0 | Trying to figure out the best way of controlling industrial PLC's with Raspberry Pi/linux server - specifically using python and pymodbus (modbusTCP) over ethernet...
Once the PLC internal registry is mapped properly to modbus, can software written in python take the place of ladder logic programming in the PLC and control it completely?
Or will ladder logic/ native PLC code still need to be written? | 0 | python,linux,raspberry-pi,modbus,plc | 2016-01-22T22:14:00.000 | 0 | 34,956,823 | You should not replace PLC logic with your Linux server; you need a real-time OS for that, and even running a real-time OS and controlling the PLC with it is a bad idea. PLCs have all kinds of checks built in for controlling inputs/outputs, the program cycle, internal diagnostics and so on. They are a tool meant specifically for that task. IMHO ladder logic is easier to learn than a real-time OS.
You should use your server as an HMI (human machine interface) that sends control data to the PLC and displays it back to the user.
If your project is for the learning experience or a personal project, then of course you should do whatever you feel like. | 0 | 13,702 | true | 0 | 1 | can python software take place of logic ladder program in PLC through modbus? | 34,964,033 |
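For a feel of what the Python side of such an HMI sends over the wire, here is a hedged stdlib-only sketch of a Modbus TCP "write single register" (function 0x06) request frame; a real project should use a library such as pymodbus rather than hand-rolling frames:

```python
import struct

def write_single_register(transaction_id, unit_id, register_addr, value):
    """Build a Modbus TCP ADU for function 0x06 (write single register).

    MBAP header: transaction id, protocol id (always 0), count of bytes
    that follow, unit id; then the PDU: function code, address, value.
    """
    remaining = 6  # unit id + function code + address + value
    return struct.pack('>HHHBBHH',
                       transaction_id, 0, remaining,
                       unit_id, 0x06,
                       register_addr, value)

# Example: tell unit 0x11 to write value 3 into register 1
frame = write_single_register(1, 0x11, 0x0001, 0x0003)
print(frame.hex())
```

The resulting 12-byte frame would be sent over a plain TCP socket to the PLC's port 502.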
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I'm pretty new to Python programming. As part of my learning I decided to start coding a simple daily task which would save some time. I'm done with most of the script, but now I see a big challenge in executing it, because I need to execute it on a remote server with sudo user access. Basically what I need is:
login to the remote system.
run sudo su - user (no need for a password, as it's an SSH key based login)
run the code.
logout with the result assigned to a variable.
I need the end result of the script stored in a variable so that I can use it later for verification. | 0 | python,unix,sudo,remote-server | 2016-01-24T18:41:00.000 | 1 | 34,979,846 | The other way is to use paramiko as below.
import paramiko

un_con = paramiko.SSHClient()
un_con.set_missing_host_key_policy(paramiko.AutoAddPolicy())
un_con.connect(host, username=user, key_filename=keyfile)
stdin, stdout, stderr = un_con.exec_command("sudo -H -u sudo_user bash -c 'command'")
result = stdout.read()  # end result captured in a variable | 0 | 1,341 | false | 0 | 1 | Run python script in a remote machines as a sudo user | 36,052,105 |
1 | 2 | 0 | 1 | 3 | 1 | 0.099668 | 0 | I am running a simulation in Python. The simulation's results are summarized in a list of number matrices. Is there a nice export format I can use to write this list, so that later I can read the file in Mathematica easily, and Mathematica will recognize it as a list of matrices automatically? | 0 | python-2.7,wolfram-mathematica | 2016-01-25T12:28:00.000 | 0 | 34,992,626 | How big are the matrices?
If they are not too large, the JSON format will work well. I have used this, it is easy to work with both in Python and Mathematica.
If they are large, I would try HDF5. I have no experience with writing this from Python, but I know that it can store multiple datasets, thus it can store multiple matrices of different sizes. | 0 | 633 | false | 0 | 1 | Save list of table of numbers from Python into format easily readable by Mathematica? | 34,994,799 |
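For the JSON route, a minimal sketch (the file name is arbitrary); nested Python lists round-trip cleanly, and on the Mathematica side something like Import["matrices.json", "RawJSON"] should yield the list of matrices:

```python
import json

# A list of number matrices (different shapes are fine):
matrices = [[[1.0, 2.0], [3.0, 4.0]],
            [[5.0, 6.0, 7.0], [8.0, 9.0, 10.0]]]

with open('matrices.json', 'w') as f:
    json.dump(matrices, f)

# Round-trip check on the Python side:
with open('matrices.json') as f:
    restored = json.load(f)
print(restored == matrices)
```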
2 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 1 | I'm trying to send an email using the Gmail API in python. I think I followed the relevant documentation and youtube vids.
I'm running into this error:
googleapiclient.errors.HttpError: HttpError 403 when requesting https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Insufficient Permission"
Here is my script:
#!/usr/bin/env python
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from email.mime.text import MIMEText
import base64
import errors

SCOPES = 'https://mail.google.com/'
CLIENT_SECRET = 'client_secret.json'

store = file.Storage('storage.json')
credz = store.get()
if not credz or credz.invalid:
    flags = tools.argparser.parse_args(args=[])
    flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)
    credz = tools.run_flow(flow, store, flags)
GMAIL = build('gmail', 'v1', http=credz.authorize(Http()))

def CreateMessage(sender, to, subject, message_text):
    """Create a message for an email.

    Args:
        sender: Email address of the sender.
        to: Email address of the receiver.
        subject: The subject of the email message.
        message_text: The text of the email message.

    Returns:
        An object containing a base64url encoded email object.
    """
    message = MIMEText(message_text)
    message['to'] = to
    message['from'] = sender
    message['subject'] = subject
    return {'raw': base64.urlsafe_b64encode(message.as_string())}

def SendMessage(service, user_id, message):
    """Send an email message.

    Args:
        service: Authorized Gmail API service instance.
        user_id: User's email address. The special value "me"
            can be used to indicate the authenticated user.
        message: Message to be sent.

    Returns:
        Sent Message.
    """
    try:
        message = (service.users().messages().send(userId=user_id, body=message)
                   .execute())
        print 'Message Id: %s' % message['id']
        return message
    except errors.HttpError, error:
        print 'An error occurred: %s' % error

message = CreateMessage('[email protected]', '[email protected]', 'test_subject', 'foo')
print message
SendMessage(GMAIL, 'me', message)
I tried adding scopes, trying different emails, etc. I have authenticated by logging into my browser as well. (The [email protected] is a dummy email btw) | 0 | python,api,email,gmail,send | 2016-01-25T18:03:00.000 | 0 | 34,999,194 | Try deleting the generated storage.json file and then try again afresh.
You might have tried this script with different scopes, so "storage.json" might hold the wrong details. | 0 | 2,231 | false | 0 | 1 | 403 error sending email with gmail API (python) | 35,799,866 |
2 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 1 | I'm trying to send an email using the Gmail API in python. I think I followed the relevant documentation and youtube vids.
I tried adding scopes, trying different emails, etc. I have authenticated by logging into my browser as well. (The [email protected] is a dummy email btw) | 0 | python,api,email,gmail,send | 2016-01-25T18:03:00.000 | 0 | 34,999,194 | I had the same problem.
I solved it by running the quickstart.py that Google provides again, changing SCOPES so that Google grants all the permissions you want. After that you don't need SCOPE or CLIENT_SECRET in your new code to send a message, just the get_credentials(), CreateMessage() and SendMessage() methods. | 0 | 2,231 | false | 0 | 1 | 403 error sending email with gmail API (python) | 46,799,877 |
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I'm running Centos7 and it comes with Python2. I installed python3, however when I install modules with pip, python3 doesn't use them. I can run python3 by typing python3 at the CLI
python (2.x) is located in /usr/bin/python
python3 is located in /usr/local/bin/python3
I tried creating a link to python3 in /usr/bin/ as "python", but as expected, it didn't resolve anything. I renamed the current python to python2.bak, and it actually broke some command line functionality (tab completion), so I had to undo those changes to resolve it.
Suggestions welcome. Thanks. | 0 | python,centos | 2016-01-25T23:32:00.000 | 1 | 35,004,466 | Do you have pip for python3, too? Try pip3 rather than pip. I assume your regular pip is just installing the modules for Python 2.x. | 0 | 2,685 | false | 0 | 1 | How to properly install python3 on Centos 7 | 35,004,508 |
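One way to sidestep the which-pip ambiguity is to invoke pip through the interpreter you want to install for (the package name below is only an example):

```shell
# Show which interpreter this pip is tied to (the path and Python version
# appear in the output):
python3 -m pip --version

# Then install for that interpreter specifically (example package name):
#   python3 -m pip install --user requests
```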
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I make an astronomical visualization, which illustrates a birth of a planet from a cloud of particles (which i have over 130.000). Each particle, besides xyz-coordinate, has also a temperature value.
Is it possible to code it like Temperature minimum is green, Temperature maximum is magenta. Dear script, color my particles in scale between green and magenta?
I am working with Python (Blender).
Thank you in advance for any help! | 0 | python,colors,scale,blender,particles | 2016-01-26T22:02:00.000 | 0 | 35,024,898 | Should I say before t min (32.668837340451788) is green, t max (129.20671163699313) is magenta? So that the script knows, is the value "cold" or "warm".
First particle for example will be almost magenta.
Here are the first 4 particles:
5.28964162682e+14 5.62257206698e+13 -2.9525300544e+14 128.332184907
5.23680422449e+14 9.33982452199e+13 -2.9525300544e+14 128.336966138
5.15787732694e+14 1.3010546441e+14 -2.9525300544e+14 128.346633243
5.05325414399e+14 1.66164504722e+14 -2.9525300544e+14 128.355079501 | 0 | 912 | false | 0 | 1 | color scale to illustrate the temperature, Python script | 35,025,985 |
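The mapping itself can be a plain linear blend between green at t min and magenta at t max (pure Python sketch; in Blender you would then assign the resulting RGB to each particle's material or vertex color):

```python
def temperature_to_rgb(t, t_min, t_max):
    """Linearly blend from green (coldest) to magenta (hottest)."""
    f = (t - t_min) / (t_max - t_min)      # normalised position of t
    f = max(0.0, min(1.0, f))              # clamp to [0, 1]
    green, magenta = (0.0, 1.0, 0.0), (1.0, 0.0, 1.0)
    return tuple((1 - f) * g + f * m for g, m in zip(green, magenta))

t_min, t_max = 32.668837340451788, 129.20671163699313
cold = temperature_to_rgb(t_min, t_min, t_max)           # pure green
hot = temperature_to_rgb(t_max, t_min, t_max)            # pure magenta
first = temperature_to_rgb(128.332184907, t_min, t_max)  # almost magenta
print(cold, hot, first)
```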
1 | 1 | 0 | 2 | 2 | 0 | 0.379949 | 0 | I have a server instance (Ubuntu) running on AWS EC2. What's the best way to use GUI-based Python editor (e.g., Spyder, Sublimetext, PyCharm) with that server instance? | 0 | python-2.7,ubuntu,amazon-ec2 | 2016-01-27T02:13:00.000 | 1 | 35,027,646 | You could handle things a few ways, but I would simply mount the instance's filesystem locally, and keep a Putty (Windows) terminal open to execute commands remotely.
Trying to install a GUI on the EC2 instance is probably more trouble than it's worth, and a waste of resources.
In most cases, I build everything inside a local (small) Ubuntu Server VM while I'm working on it, until it's ready for some sort of deployment before moving to an EC2/DO Droplet/What-have-you. The principles are basically the same - having to work with a machine that you don't have immediate full command of - and it's cheaper, to boot. | 0 | 590 | false | 0 | 1 | Using Python GUI Editor on Ubuntu AWS | 35,028,687 |
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I'm wondering if there's any way to connect SFTP server with Windows' Command Prompt, by only executing batch file.
Do I need to install additional software? which software?
The purpose is to do pretty basic file operations (upload, delete, rename) on remote SFTP server by executing a batch file.
And by the way, I have heard about python's Fabric library, and I wonder whether it's better solution than the batch script for the mentioned basic file operations?
Thanks a lot! | 0 | python,windows,batch-file,sftp,fabric | 2016-01-27T09:09:00.000 | 1 | 35,032,994 | The built in FTP command doesn't have a facility for security. You can use winscp, an open source free SFTP client and FTP client for Windows. | 0 | 15,990 | false | 0 | 1 | Connecting to SFTP server via Windows' Command Prompt | 35,033,131 |
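For illustration, a WinSCP-based batch sketch (host, credentials and paths are placeholders; on first connection you would also have to supply or accept the server's host key):

```batch
@echo off
rem One-shot SFTP upload from a .bat file; host, user, password and paths
rem are placeholders. Requires WinSCP; winscp.com is its console interface.
winscp.com /ini=nul /command ^
    "open sftp://user:password@sftp.example.com/" ^
    "put C:\data\report.txt /upload/" ^
    "exit"
```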
1 | 1 | 0 | 0 | 1 | 1 | 1.2 | 0 | So I just installed JModelica and with this Python 2.7 is included. When I use the IPython-console and try to import the following (it works):
from pymodelica import compile_fmu
However when I write this in the Python Shell program it says:
Traceback (most recent call last):
File "", line 1, in
from pymodelica import compile_fmu
ImportError: No module named pymodelica****
What is the problem here? I want to use the Python Shell since you can write scripts there.
Regards,
Jasir | 0 | shell,ipython,jmodelica | 2016-01-27T13:29:00.000 | 0 | 35,038,706 | The problem is that Python needs to know the paths to where JModelica stores the Python package "pymodelica". If you use the IPython from the JModelica installation, this automatically sets the correct paths. The same goes for the regular Python shell: if you use the link from the JModelica installation it should work, while if you use the Python shell directly from your Python installation, it will not. | 0 | 462 | true | 0 | 1 | JModelica: Python Shell and IPython trouble importing package | 35,055,447
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am interested in socket programming. I would like to write IPv6 UDP server socket code to send and receive on a Raspberry Pi (connected with an Ethernet cable and opened in PuTTY). After surfing a couple of sites I am confused about the IPv6 UDP host address. Which type of host address should I use to send and receive IPv6 UDP messages?
Is it the link-local address? For example:
host ='fe80::ba27:ebff:fed4:5691';//link local address to Tx and Rx from Raspberry
or
host = 'ff02::1:ffd4:5691'
Thank you so much.
Regards,
Mahesh | 0 | python,sockets,udp,raspberry-pi,ipv6 | 2016-01-27T15:52:00.000 | 1 | 35,042,006 | You can use host ='fe80::ba27:ebff:fed4:5691', assuming you only have one link.
Link-Local addresses (Link-Local scope) are designed to be used for addressing on a single link for purposes such as automatic address configuration, neighbor discovery or when no routers are present. Routers must not forward any packets with Link-Local source or destination addresses to other links.
So if you are sending data from a server to a raspberry pi (1 link), you can use the link-local scope for you IPv6 address.
host = 'ff02::1:ffd4:5691' is in the link-local multicast scope; unless you have a reason to send multicast, there is no need to use it. | 0 | 422 | true | 0 | 1 | Ipv6 UDP host address for bind | 35,063,138
2 | 4 | 0 | 3 | 75 | 1 | 0.148885 | 0 | I installed pytest into a virtual environment (using virtualenv) and am running it from that virtual environment, but it is not using the packages that I installed in that virtual environment. Instead, it is using the main system packages. (Using python -m unittest discover, I can actually run my tests with the right python and packages, but I want to use the py.test framework.)
Is it possible that py.test is actually not running the pytest inside the virtual environment and I have to specify which pytest to run?
How do I get py.test to use only the Python and packages that are in my virtualenv?
Also, since I have several versions of Python on my system, how do I tell which Python Pytest is using? Will it automatically use the Python within my virtual environment, or do I have to specify somehow? | 0 | python,virtualenv,pytest | 2016-01-27T18:16:00.000 | 0 | 35,045,038 | In my case I was obliged to leave the venv (deactivate), remove pytest (pip uninstall pytest), enter the venv (source /my/path/to/venv), and then reinstall pytest (pip install pytest). I don't know exactly why pip refuses to install pytest in the venv (it says it is already present).
I hope this helps | 0 | 32,064 | false | 0 | 1 | How do I use pytest with virtualenv? | 39,231,653 |
2 | 4 | 0 | 95 | 75 | 1 | 1 | 0 | I installed pytest into a virtual environment (using virtualenv) and am running it from that virtual environment, but it is not using the packages that I installed in that virtual environment. Instead, it is using the main system packages. (Using python -m unittest discover, I can actually run my tests with the right python and packages, but I want to use the py.test framework.)
Is it possible that py.test is actually not running the pytest inside the virtual environment and I have to specify which pytest to run?
How do I get py.test to use only the Python and packages that are in my virtualenv?
Also, since I have several versions of Python on my system, how do I tell which Python Pytest is using? Will it automatically use the Python within my virtual environment, or do I have to specify somehow? | 0 | python,virtualenv,pytest | 2016-01-27T18:16:00.000 | 0 | 35,045,038 | There is a bit of a dance to get this to work:
activate your venv : source venv/bin/activate
install pytest : pip install pytest
re-activate your venv: deactivate && source venv/bin/activate
The reason is that the path to pytest is set by sourcing the activate file only after pytest is actually installed in the venv. You can't set the path to something before it is installed.
Re-activating is required for any console entry points installed within your virtual environment. | 0 | 32,064 | false | 0 | 1 | How do I use pytest with virtualenv? | 54,597,424
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I tried to blind some big message using pythons RSA from Crypto.PublicKey. The problem is, even if i generate big key, like 6400 bits, key.blind() method still crushes with "message too large" error. I know, that my message can't be bigger than N in key, because every computation is in modulo N, but how can big things be blind signed then? | 0 | python,cryptography,rsa,public-key,pycrypto | 2016-01-27T23:16:00.000 | 0 | 35,050,000 | Just like normal signatures: first perform a cryptographic (one-way) hash over the message and blind & sign that instead of the message. | 0 | 175 | false | 0 | 1 | Python Crypto blinding big messages | 35,056,033 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | For the moment I've created a Python web application running on uWSGI with a frontend created in EmberJS. There is also a small Python script running that controls I/O and serial ports connected to the BeagleBone Black.
The system runs on Debian; packages are managed and installed via Ansible, and the applications are also updated via some Ansible scripts. In other words, updates are currently done manually by launching the Ansible scripts over SSH.
I'm now searching for a strategy/method to update my Python applications in an easy way that can also be done by our clients (e.g. via a web interface). A good example is a router firmware update. I'm wondering how I can use a similar strategy for my Python applications.
I checked Yocto, with which I can build my own Linux image, but I don't see how to include my applications in those builds, and I don't want to build a complete image for every hotfix.
Anyone who has a similar project and that would like to share with me some useful information to handle some upgrade strategies/methods? | 0 | python,deployment,updates,beagleboneblack,yocto | 2016-02-01T17:02:00.000 | 1 | 35,136,140 | A natural strategy would be to make use of the package manager also used for the rest of the system. The various package managers of Linux distributions are not closed systems. You can create your own package repository containing just your application/scripts and add it as a package source on your target. Your "updater" would work on top of that.
This is also a route you can go when using yocto. | 0 | 141 | false | 1 | 1 | Update strategy Python application + Ember frontend on BeagleBone | 35,147,597 |
1 | 1 | 0 | 8 | 5 | 1 | 1 | 0 | I am getting the following error:
raise ImportError('PILKit was unable to import the Python Imaging Library. Please confirm it's installed and available on your current Python path.')
ImportError: PILKit was unable to import the Python Imaging Library. Please confirm it's installed and available on your current Python path. | 0 | python,python-imaging-library | 2016-02-01T20:32:00.000 | 0 | 35,139,766 | You have to install PIL or pillow, try:
pip install pillow | 0 | 4,867 | false | 0 | 1 | PILKit was unable to import the Python Imaging Library | 52,045,391 |
1 | 1 | 0 | 0 | 4 | 0 | 1.2 | 1 | I use Python to make a simple call to api.github.gist. I tried urllib2 at first, which cost me about 10 seconds! requests takes less than 1 second.
I am on a corporate network, using a proxy. Do these two libs have different default behavior behind a proxy?
And I used Fiddler to check the network. In both situations, the HTTP request finished in about 40 ms. So where does urllib2 spend the time? | 0 | python,python-requests,urllib | 2016-02-02T06:41:00.000 | 0 | 35,146,733 | It's most likely that DNS caching sped up the requests. DNS queries might take a lot of time in corporate networks; I don't know why, but I experience the same. The first time, when you sent the request with urllib2, DNS was queried (slow) and the result cached. The second time, with requests, DNS did not need to be queried; the address was just retrieved from the cache.
Clear the DNS cache and change the order, i.e. make the request with requests first, and see if there is any difference. | 0 | 241 | true | 0 | 1 | Is urllib2 slower than requests in python3 | 35,147,116
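To check the DNS-caching explanation yourself, you can time the name-resolution step in isolation. A small sketch: "localhost" is used here so it runs offline; substitute the real hostname to compare a cold first lookup against a cached second one.

```python
# Time socket.getaddrinfo on its own to see how much of a request's latency
# is name resolution rather than the HTTP round trip itself.
import socket
import time

def timed_lookup(host, port=80):
    start = time.perf_counter()
    info = socket.getaddrinfo(host, port)
    return time.perf_counter() - start, info

first, info = timed_lookup("localhost")
second, _ = timed_lookup("localhost")
print("lookup 1: %.4fs, lookup 2: %.4fs, %d results" % (first, second, len(info)))
```
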
2 | 3 | 0 | 5 | 10 | 1 | 0.321513 | 0 | I am using atom IDE for my python projects.
there are auto-complete suggestions in some cases but I'd like to know if it's possible to have a list of all possible functions that a imported module has, for instance if i import
import urllib
when I type urlib. and press (ctrl+tab) would like to see a list with the possible functions/methods to use.
Is that possible?
Thanks | 0 | python,autocomplete,ide,atom-editor | 2016-02-02T10:19:00.000 | 0 | 35,150,683 | Atom is getting various modifications. Autocomplete-python package is a handy package which helps code faster. The way to install it has changed.
In all new Atom editor go to File->Settings->install
search for autocomplete-python
and click on Install. Voilà, it's done; restarting Atom is not required, and you will see the difference the next time you edit Python code.
Deb | 0 | 23,531 | false | 0 | 1 | python - atom IDE how to enable auto-complete code to see all functions from a module | 41,311,935 |
1 | 3 | 0 | 14 | 10 | 1 | 1 | 0 | I am using the Atom IDE for my Python projects.
There are auto-complete suggestions in some cases, but I'd like to know if it's possible to get a list of all the functions that an imported module has. For instance, if I import
import urllib
then when I type urllib. and press Ctrl+Tab I would like to see a list of the possible functions/methods to use.
Is that possible?
Thanks | 0 | python,autocomplete,ide,atom-editor | 2016-02-02T10:19:00.000 | 0 | 35,150,683 | I found the solution for my own question.
Actually I had the wrong plugin installed!
So, in the IDE, go to Edit -> Preferences, and in the Packages section just type autocomplete-python and press the Install button.
After restarting Atom, it should start working :) | 0 | 23,531 | false | 0 | 1 | python - atom IDE how to enable auto-complete code to see all functions from a module | 35,151,184
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I want to collect accelerometer data on my android phone and communicate it to my laptop over wifi.
A Python script collects data on the phone with Python for SL4A, and another Python script receives the data on the laptop. Both devices are on the same Wi-Fi network.
The principle looks pretty straightforward, but I have no clue how to communicate between the two devices. Which should be the server, and which should be the client?
I'm not looking for a way to collect accelerometer data or somebody to write my script, I just can't find info on my wifi issues on the web.
Can anybody provide any help?
Thanks in advance | 0 | android,python,wifi,sl4a | 2016-02-02T20:49:00.000 | 0 | 35,163,521 | you mentioned two questions in your statement, 1 how to communicate via the same wifi network, 2 which one should be the server.
1, i have tried communicating two nodes using socket and multiproceseing manager, theyre really helpful for you to communicate that kind of over-network communication. you can communicate two nodes using manager or socket, but socket also provides helps for you to get the ip of node over the network, while the manager simplify the whole process.
2, if i were you, i would choose laptop as server as you would listen for certain port, bind port receiving data. One of the reason to choose laptop as server is that it would be more convenient if you want to add more smartphones to collect data
I do not know well about sl4a, but i did some projects communicating via network, heres just suggestion, hope it would be helpful and not too late for you. | 0 | 346 | false | 0 | 1 | sl4a python communicate with pc over wifi | 36,829,122 |
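A minimal, runnable sketch of the laptop-as-server idea from the answer (not code from SL4A itself): the laptop binds a TCP socket and each phone connects as a client to push readings. It uses 127.0.0.1 and an ephemeral port so it can run on one machine; in practice, bind to the laptop's Wi-Fi IP and a fixed port.

```python
# Laptop side: listen on TCP and collect one reading per connection.
# Phone side: connect and send the accelerometer payload.
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(1)
    received = []

    def accept_one():
        conn, _addr = srv.accept()
        with conn:
            received.append(conn.recv(1024).decode())

    worker = threading.Thread(target=accept_one)
    worker.start()
    return srv, worker, received

def send_reading(host, port, payload):
    with socket.create_connection((host, port)) as client:
        client.sendall(payload.encode())

srv, worker, received = run_server()
port = srv.getsockname()[1]
send_reading("127.0.0.1", port, "accel:0.1,9.8,0.3")
worker.join()
srv.close()
print(received[0])
```
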
3 | 5 | 0 | 1 | 0 | 0 | 0.039979 | 0 | I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running? | 0 | python,linux,python-3.x | 2016-02-04T13:23:00.000 | 1 | 35,202,184 | You can use
runit
supervisor
monit
systemd (I think)
Do not hack this with a script | 0 | 270 | false | 0 | 1 | How to check whether or not a python script is up? | 35,202,372 |
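As an illustration of the systemd option, a unit file can both start the script at boot and restart it automatically if it crashes, replacing the cron-restart approach. A hedged sketch — the paths and service name are placeholders:

```ini
# /etc/systemd/system/myscript.service -- enable with:
#   sudo systemctl enable --now myscript
[Unit]
Description=Keep my Python script running 24/7
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/myscript/main.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

`systemctl status myscript` then answers the original question of whether the script is up.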
3 | 5 | 0 | 1 | 0 | 0 | 0.039979 | 0 | I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running? | 0 | python,linux,python-3.x | 2016-02-04T13:23:00.000 | 1 | 35,202,184 | Create a script (say check_process.sh) which will
Find the process id of your Python script using the ps command.
Save it in a variable, say pid.
Create an infinite loop. Inside it, search for your process. If found, sleep for 30 or 60 seconds and check again.
If the pid is not found, exit the loop and send a mail to your address saying that the process is not running.
Now launch check_process.sh with nohup so it runs in the background continuously.
I implemented it way back and remember it worked fine. | 0 | 270 | false | 0 | 1 | How to check whether or not a python script is up? | 35,202,314 |
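The steps above can be sketched as a small shell script. This is a hedged sketch: `sleep 1` stands in for the monitored Python script, the polling interval is shortened, an iteration cap is added so the demo always terminates, and the mail command is left commented out because mail setup varies per machine.

```shell
#!/bin/sh
# check_process.sh sketch: poll a pid and report when the process is gone.
is_running() {
    kill -0 "$1" 2>/dev/null        # true while a process with this pid exists
}

sleep 1 &                            # stand-in for the monitored python script
pid=$!

i=0
while is_running "$pid" && [ "$i" -lt 50 ]; do
    sleep 0.2                        # the answer suggests 30 or 60 seconds
    i=$((i + 1))
done

echo "process $pid is not running"
# mail -s "process died" you@example.com < /dev/null   # hypothetical address
```
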
3 | 5 | 0 | 1 | 0 | 0 | 0.039979 | 0 | I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running? | 0 | python,linux,python-3.x | 2016-02-04T13:23:00.000 | 1 | 35,202,184 | Try this and enter your script name.
ps aux | grep SCRIPT_NAME | 0 | 270 | false | 0 | 1 | How to check whether or not a python script is up? | 35,202,268 |
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | My Python program runs on a Raspberry Pi and instantiates several objects (GPIO inputs and outputs, HTTP server, WebSocket, I2C interface, etc., each with threads).
When exiting my program, I try to release all the resources and delete all the instances.
For the network objects, I close listening sockets and so on.
I finish with a sys.exit() call, but the program does not exit and does not return to the Linux console on its own (I need to press Ctrl+Z).
Are some objects not being released? How can I find out, and how can I force the exit?
Best regards. | 0 | python,exit,raspberry-pi2 | 2016-02-04T14:05:00.000 | 0 | 35,203,141 | I had a similar problem programming a simple GPIO app on the Pi. I was using the GPIOZero library, and as their code examples suggest, I was waiting for button pushes using signal.pause(). This would cause the behavior you describe - even sys.exit() would not exit!
The solution was, when it was time for the code to finish, to do this:
# Send a SIGUSR1 signal; this will cause signal.pause() to finish.
os.kill(os.getpid(), signal.SIGUSR1)
You don't even have to define a signal handler if you don't mind the system printing out "User defined signal 1" on the console.
HTH | 0 | 9,479 | false | 0 | 1 | How to exit python program on raspberry | 61,990,275 |
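A self-contained, POSIX-only illustration of that trick: a timer thread sends SIGUSR1 to the process after a short delay, which makes the otherwise-blocking signal.pause() return.

```python
# signal.pause() blocks until any caught signal arrives; SIGUSR1 unblocks it.
import os
import signal
import threading

got = []
signal.signal(signal.SIGUSR1, lambda signum, frame: got.append(signum))

# Simulate "time for the code to finish" from another thread.
threading.Timer(0.2, os.kill, args=(os.getpid(), signal.SIGUSR1)).start()

signal.pause()                       # returns once SIGUSR1 is delivered
print("pause() returned after signal", int(got[0]))
```
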
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | Situation:
A Pyro4 server gives a Pyro4 client a Pyro4 proxy.
I want to detect whether the client is still indeed using this proxy, so that the server can give the proxy to other clients.
My idea at the moment is to have the server periodically ping the client. To do this, the client itself needs to host a Pyro daemon and give the server a Pyro4 proxy, so that the server can use this proxy to ping clients.
Is there a cleaner way to do this? | 0 | python,python-2.7,rpc,pyro | 2016-02-05T14:22:00.000 | 0 | 35,226,451 | I'd let the client report back to the server as soon as it no longer needs the proxy. I.e. don't overcomplicate your server with dependencies/knowledge about the clients. | 0 | 225 | false | 0 | 1 | How to check if Pyro4 client is still alive | 35,517,121 |
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | I'm writing an automated internet speed testing program and I've setup a secondary script, config.py, to make it simpler for the user to edit the configuration.
The program can send tweets when the internet speed result falls below a certain point, and I want to give users the ability to edit the tweet. However, the user will likely want to include the results in the tweet, and those are defined in the main script from which config.py is called.
How can I use the variables from the main script in config.py?
Edit: Should've mentioned the variables in the main script are also in functions. | 0 | python,twitter,config | 2016-02-06T11:14:00.000 | 0 | 35,240,337 | You can do from main_script import variable if the variables are not encapsulated into functions. | 0 | 51 | false | 0 | 1 | Using variables defined in main python script in imported script | 35,240,378 |
1 | 1 | 0 | 0 | 3 | 0 | 1.2 | 0 | All's in the title: I'd like to try using clang for compiling a C extension module for CPython on Linux (CPython comes from the distro repositories, and is built with gcc).
Do distutils/setuptools support this?
Does the fact that CPython and the extension are built with two different compilers matter?
Thanks. | 0 | clang,cpython | 2016-02-08T01:14:00.000 | 1 | 35,261,188 | There is a environment variable for that.
CC=clang python setup.py build
Both of compiled binaries are compatible with CPython | 0 | 189 | true | 0 | 1 | Is is possible to select clang for compiling CPython extensions on Linux? | 35,265,868 |
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am trying to insert millions of rows into Redis.
I went through the Redis mass insertion tutorials and tried
cat data.txt | python redis_proto.py | redis-cli -p 6321 -a "myPassword" --pipe
Here redis_proto.py is the Python script which reads data.txt and converts it to the Redis protocol.
I get errors like the ones below:
All data transferred. Waiting for the last reply...
NOAUTH Authentication required.
NOAUTH Authentication required.
Any help or suggestions would be appreciated? | 0 | python,redis,redis-py | 2016-02-08T10:23:00.000 | 0 | 35,267,280 | I guess your password contains a $. The shell expands $-sequences inside double quotes, so Redis receives a different password than the one you typed; remove the $ (or escape it, or use single quotes) and it will work. | 0 | 3,200 | true | 0 | 1 | No Auth "Authentication required" Error in redis Mass Insertion? | 35,913,988
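For reference, the redis_proto.py step usually just renders each command in the Redis wire protocol (RESP): a `*<argc>` header followed by a `$<len>` bulk string per argument. A minimal sketch — for non-ASCII data you should count bytes rather than characters, and you can emit AUTH as the first command if the server requires a password:

```python
# Build one RESP-encoded command for redis-cli --pipe mass insertion.
def gen_redis_proto(*args):
    proto = "*%d\r\n" % len(args)
    for arg in args:
        arg = str(arg)
        proto += "$%d\r\n%s\r\n" % (len(arg), arg)
    return proto

print(repr(gen_redis_proto("SET", "key:1", "value1")))
```
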
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to run a web app (built with flask-wtforms and using iGraph) on Pythonanywhere. As igraph isn't part of the already included modules, I try to install it using the bash console, as such:
pip install --user python-igraph
How ever, what I get is:
Could not download and compile the C core of igraph.
It usually means (according to other people having the same issue on Stackoverflow) that I need to first install:
sudo apt-get install -y libigraph0-dev
Except, apt-get isn't available on Pythonanywhere, as far as I know.
Is there any workaround to install the iGraph module for Python 2.7 on Pythonanywhere? | 0 | igraph,pythonanywhere | 2016-02-08T19:47:00.000 | 1 | 35,278,050 | python-igraph installed perfectly fine in my account. My guess is that you're facing a different issue than a missing library. Perhaps a network error or something like that. | 0 | 118 | false | 0 | 1 | iGraph install error with Python Anywhere | 35,338,580
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I can no longer run python on my mac. Upgraded to mac OS X 10.11.4 Beta and now if I run python it gets killed.
$python
Killed: 9
the system log shows:
taskgated[396]: killed pid 954 because its code signature is invalid (error -67030) | 0 | python,macos | 2016-02-09T01:13:00.000 | 1 | 35,282,336 | This seems to be fixed in TODAY's beta release: 15E39d | 0 | 493 | false | 0 | 1 | Why is mac OS X killing python? | 35,303,065 |
1 | 2 | 0 | 2 | 4 | 1 | 0.197375 | 0 | I need to use unittest in python to write some tests. I am testing the behavior of 2 classes, A and B, that have a lot of overlap in behavior because they are both subclasses of C, which is abstract. I would really like to be able to write 3 testing classes: ATestCase, BTestCase, and AbstractTestCase, where AbstractTestCase defines the common setup logic for ATestCase and BTestCase, but does not itself run any tests. ATestCase and BTestCase would be subclasses of AbstractTestCase and would define behavior/input data specific to A and B.
Is there a way to create an abstract class via python unittest that can take care of setup functionality by inheriting from TestCase, but not actually run any tests? | 0 | python,unit-testing | 2016-02-09T23:19:00.000 | 0 | 35,304,131 | I tried Łukasz’s answer and it works, but I don’t like OK (SKIP=<number>) messages. For my own desires and aims for having a test suite I don’t want me or someone to start trusting any particular number of skipped tests, or not trusting and digging into the test suite and asking why something was skipped, and always?, and on purpose? For me that’s a non-starter.
I happen to use nosetests exclusively, and by convention test classes starting with _ are not run, so naming my base class _TestBaseClass is sufficient.
I tried this in Pycharm with Unittests and py.test and both of those tried to run my base class and its tests resulting in errors because there’s no instance data in the abstract base class. Maybe someone with specific knowledge of either of those runners could make a suite, or something, that bypasses the base class. | 0 | 1,639 | false | 0 | 1 | python unittest inheritance - abstract test class | 50,380,006 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I am writing a script in Python that establishes more than 100 parallel SSH connections, starts a script on each machine (the output is 10-50 lines), then waits for the results of the bash script and processes it. However it is run on a web server and I don't know, whether it would be better to first store the output in a file on the remote machine (that way, I suppose, I can start more remote scripts at once) and later establish another SSH connection (1 command / 1 connection) and read from those files? Now I am just reading the output but the CPU usage is really high, and I suppose the problem is, that a lot of data comes to the server at once. | 0 | python,linux,bash,ssh,parallel-processing | 2016-02-10T05:52:00.000 | 1 | 35,307,829 | For creating lots of parallel SSH connections there is already a tool called pssh. You should use that instead.
But if we're really talking about 100 machines or more, you should really use a dedicated cluster management and automation tool such as Salt, Ansible, Puppet or Chef. | 0 | 51 | false | 0 | 1 | Script on a web server that establishes a lot of parallel SSH connections, which approach is better? | 35,316,821 |
1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | We're building off of the Tower app, which was built with dronekit-android, and flying the 3dr solo with it. We're thinking about adding some sort of collision detection with it.
Is it feasible to run some python script on the drone, basically reading some IR or ultrasound sensor via the accessory bay, and basically yell at the Android tablet when it detects something? That way, the tablet will tell the drone to fly backwards or something.
Otherwise, would we use the dronekit-python libs to do that? How would use a tablet / computer to have a Tower-like functionality with that?
Thanks a bunch. | 0 | dronekit-python,dronekit-android | 2016-02-10T06:12:00.000 | 0 | 35,308,103 | I don't really know what you are asking for but:
if the distance between the centers of two circles < the sum of their radii then they have collided. | 0 | 208 | false | 0 | 1 | Implementing collision detection python script on dronekit | 54,972,589 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a requirement to stop/start a service with sudo service start/stop from a Python script. The script will be called by server-side PHP code on a web page running on an Apache web server.
One way I know of is to give www-data sudoer permission to run that specific Python script.
Is there another way, without giving www-data that specific permission? For example, will CGI or mod_python work in this case? If yes, what is the best way to handle Python script execution on a LAMP server?
Thanks in advance. | 0 | php,python,apache | 2016-02-10T07:44:00.000 | 1 | 35,309,463 | I see no security issue with giving www-data the sudo right for a single restart command without any wildcards.
If you want to avoid using sudo at all, you can create a temporary file with php, and poll for this file from a shell script executed by root regularly.
But this may be more error-prone, and it leads to the same result. | 0 | 501 | true | 0 | 1 | How to run python script which require sudoer | 35,309,692
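The "single command, no wildcards" sudo grant from the first answer would look roughly like this (the service name is a placeholder; always edit via visudo):

```
# /etc/sudoers.d/www-data-service  (install with visudo -f)
# Grants exactly one command, no wildcards, no password prompt.
www-data ALL=(root) NOPASSWD: /usr/sbin/service myservice restart
```

PHP can then call `sudo /usr/sbin/service myservice restart` and nothing else.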
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a requirement to stop/start a service with sudo service start/stop from a Python script. The script will be called by server-side PHP code on a web page running on an Apache web server.
One way I know of is to give www-data sudoer permission to run that specific Python script.
Is there another way, without giving www-data that specific permission? For example, will CGI or mod_python work in this case? If yes, what is the best way to handle Python script execution on a LAMP server?
Thanks in advance. | 0 | php,python,apache | 2016-02-10T07:44:00.000 | 1 | 35,309,463 | You can run a python thread that listens to a stop/start request, and then this thread will stop/start the service. The thread should run as sudo, but it listens to tcp. The web server can send requests w/o any special permissions (SocketServer is a very simple out-of-the-box python tcp server).
You may want to add some security, e.g. hashing the request to this server with a secret, so only allowed services will be able to request the start/stop of the service, and apply iptables rules (requests from localhost where the web server is) | 0 | 501 | false | 0 | 1 | How to run python script which require sudoer | 35,310,045 |
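The "hash the request with a secret" idea in this answer is essentially HMAC. A hedged, framework-agnostic sketch of how the web server could sign a command and the privileged listener verify it (the secret is a placeholder):

```python
# Sign each command with HMAC-SHA256; only a sender that knows the shared
# secret can produce a signature the listener will accept.
import hashlib
import hmac

SECRET = b"change-me"   # shared between the PHP caller and the root listener

def sign(command: bytes) -> str:
    return hmac.new(SECRET, command, hashlib.sha256).hexdigest()

def verify(command: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(command), signature)

sig = sign(b"restart myservice")
print(verify(b"restart myservice", sig))   # True
print(verify(b"stop myservice", sig))      # False: signature doesn't match
```
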
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | What matrix multiplication library would you recommend for Raspberry Pi 2?
I am thinking about BLAS or NumPy; what do you think?
I'm wondering if there is an external hardware module for matrix multiplication available.
Thank you! | 0 | python,c,raspberry-pi,matrix-multiplication,raspberry-pi2 | 2016-02-11T14:09:00.000 | 0 | 35,341,566 | Mathematica is part of the standard Raspbian distribution. It should be able to multiply matrices. | 1 | 357 | true | 0 | 1 | Raspberry pi matrix multiplication | 35,343,669 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I have a Python Script that every time I run it, collects data from different sites, stores them into a file and than runs some analysis.
What I want to do next is to install Python and all the packages that I need on a server, for example, and create a task so that, let's say, every day at 3 p.m. the code I have written executes without me being around, and the data and results are stored in a table.
My question would be: is this doable? | 0 | python,web | 2016-02-12T13:30:00.000 | 0 | 35,363,880 | Yes, there is a way to do this. If you are on a server running some form of Linux you can use crontab. As for server hosting, I don't know of any free servers, but there are always servers available for small fees. | 0 | 56 | false | 0 | 1 | Run Automated Offline Tasks | 35,363,981
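On a Linux server, the "every day at 3 p.m." part is one crontab line (edited with `crontab -e`); a hedged sketch with placeholder paths:

```
# minute hour day-of-month month day-of-week  command
0 15 * * * /usr/bin/python3 /home/me/collect_and_analyze.py >> /home/me/run.log 2>&1
```

Redirecting output to a log file is optional, but it makes unattended failures much easier to diagnose.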
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am currently working on an application, ROFFLE. I may not be using all the terms correctly. What am I able to do right now?
A user goes to a website and clicks a button, and an AJAX request is made to a Python file (test.py); but when he exits, the request is aborted and the processing done so far goes to waste.
What I want to do?
As the user clicks the button, the processing starts. The script should not be killed even if the user leaves the webpage. In simple words, the JavaScript part should be limited to triggering/queueing the Python script for execution (with input provided online), which has to be deployed by a web server that supports it via CGI.
How can this be implemented?
Please note:
1. This is a web application and cannot be a desktop program | 0 | javascript,python,ajax,cgi | 2016-02-12T13:40:00.000 | 0 | 35,364,084 | From the question I read that you have already managed to run a Python script on a web server via CGI, and you already know how to make an HTTP (AJAX) request from your JavaScript to that web service.
When you now close the page in your browser (or an excavator cuts your line), the backend Python script is not terminated. In fact, how should the backend even know that you have closed the page? Your Python script is still running in the backend, but no one will be left to capture the HTTP response of the web server and display it to you.
However, when you want to start some kind of daemon, a program that is supposed to run in the backend for a very long time, then your Python script should spin off that task via a Popen in a variant that keeps the child process alive, even when the script has returned its HTTP response (and possibly even the web server has shut down).
This pattern is sometimes used to remote control little servers that mock IoT devices in test environments. Just start and stop the simulation via some fire-and-forget HTTP requests triggered from a simple interactive web page. | 0 | 68 | false | 1 | 1 | Asynchronous unblocked Execution/triggering of python script through javascript | 35,366,240 |
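A runnable sketch of that fire-and-forget Popen: start_new_session=True detaches the child into its own session, so it keeps running when the CGI script returns its response. The "long job" here is a stand-in child that just writes a marker file, and it is waited on only to keep the demo deterministic; a real CGI script would not wait.

```python
# Spawn a detached child process that outlives the parent's session.
import os
import subprocess
import sys
import tempfile

marker = os.path.join(tempfile.mkdtemp(), "done.txt")
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; open(sys.argv[1], 'w').write('finished')", marker],
    start_new_session=True,          # not killed with the parent's group
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
# A real CGI script would send its HTTP response and exit here.
child.wait()                         # demo only: make the result observable
with open(marker) as f:
    print(f.read())
```
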
1 | 6 | 0 | 11 | 23 | 0 | 1 | 1 | When I send a message to my Telegram bot, it responds with no problems.
I want to limit access such that only I can send messages to it.
How can I do that? | 0 | telegram-bot,python-telegram-bot | 2016-02-12T17:18:00.000 | 0 | 35,368,557 | Filter messages by field update.message.from.id | 0 | 27,010 | false | 0 | 1 | How To Limit Access To A Telegram Bot | 35,375,185 |
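Whatever library delivers the update, the check in the answer boils down to comparing update.message.from.id against a whitelist before handling anything. A framework-agnostic sketch (111111 is a made-up id; put your own numeric Telegram user id there):

```python
# Accept an update only if the sender's id is whitelisted.
ALLOWED_IDS = {111111}

def is_allowed(update):
    sender = update.get("message", {}).get("from", {}).get("id")
    return sender in ALLOWED_IDS

mine = {"message": {"from": {"id": 111111}, "text": "hi"}}
other = {"message": {"from": {"id": 222222}, "text": "hi"}}
print(is_allowed(mine), is_allowed(other))   # True False
```
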
1 | 2 | 0 | 2 | 3 | 0 | 1.2 | 0 | I'm using tox to run protractor tests which will test an application which uses django+angularjs, there is a glue library (django-protractor) which makes this easier, except that it makes the call to protractor inside a django management command, and relies on $PATH to show it where protractor is.
So if I set the $PATH properly before running tox, it works fine, but I'd rather not require all the devs to do that manually. | 0 | python,django,testing,protractor,tox | 2016-02-12T20:32:00.000 | 0 | 35,371,697 | I think it should work if you modify your path in the manage.py file to include the django-protractor directory, because Django management commands run through manage.py. | 0 | 1,975 | true | 1 | 1 | How can I add to $PATH with tox? | 35,371,901
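An alternative to patching manage.py is to set the path in tox itself. An untested tox.ini sketch; the node_modules location and the management command name are assumptions about this project:

```ini
[testenv]
setenv =
    PATH = {env:PATH}{:}{toxinidir}/node_modules/.bin
commands =
    python manage.py protractor
```

Here {:} expands to the OS path separator, so the entry works on both Linux and Windows.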
2 | 2 | 0 | 2 | 9 | 1 | 0.197375 | 0 | I maintain the pi3d package which is available on pypi.python.org. Prior to v2.8 the latest version was always returned by a search for 'pi3d'. Subsequently v2.7 + v2.8 then v2.7 + v2.8 + v2.9 were listed. These three are still listed even though I am now at v2.10. i.e. the latest version is NOT listed and it requires sharp eyes to spot the text on the v2.9 page saying it's not the latest version!
NB: all old versions are marked as 'hidden'. I have tried lots of different permutations of hiding and unhiding releases, updating releases, switching autohiding of old releases on and off, editing the text of each release, etc., ad infinitum.
Is there some obvious cause of this behaviour that I have missed? | 0 | python,package,pypi | 2016-02-14T17:17:00.000 | 0 | 35,394,675 | Yes, it's just a problem with the pypi search engine, like Khush said. | 0 | 269 | false | 0 | 1 | on pypi.python.org what would cause hidden old versions to be returned by explicit search | 35,591,766 |
2 | 2 | 0 | 3 | 9 | 1 | 0.291313 | 0 | I maintain the pi3d package which is available on pypi.python.org. Prior to v2.8 the latest version was always returned by a search for 'pi3d'. Subsequently v2.7 + v2.8 then v2.7 + v2.8 + v2.9 were listed. These three are still listed even though I am now at v2.10. i.e. the latest version is NOT listed and it requires sharp eyes to spot the text on the v2.9 page saying it's not the latest version!
NB all old versions are marked as 'hidden'. I have tried lots of different permutations of hiding and unhiding releases, updating releases, switching on and off autohide old releases, editing the text of each release, etc. ad infinitum.
Is there some obvious cause of this behaviour that I have missed? | 0 | python,package,pypi | 2016-02-14T17:17:00.000 | 0 | 35,394,675 | Upon searching pypi.python.org for pi3d I have found that when you go to the pi3d v2.9 page there is now a large bold warning saying that it isn't the latest version and gives a link to v2.10 which was probably put there between the time you asked this question and now. However the fact that the v2.10 was not listed for me shows that your problem is not a local one. Googling site:pypi.python.org pi3d shows pi3d v2.10 as the first result which means that something is wrong with the pypi search engine.
The answer to your question is no, there is not an obvious cause of that behaviour. The fact that when I use Google I get a result as opposed to the builtin search implies that their search backend needs to be reindexed. | 0 | 269 | false | 0 | 1 | on pypi.python.org what would cause hidden old versions to be returned by explicit search | 35,530,716 |
1 | 2 | 0 | 0 | 2 | 1 | 0 | 0 | I'm optimizing a Python program that performs some sort of calculation. It uses NumPy quite extensively. The code is sprinkled with logger.debug calls (logger is the standard Python log object).
When I run cProfile I see that NumPy's function that converts an array to string takes 50% of the execution time. This is surprising, since there is no handler that outputs messages at the DEBUG level, only INFO and above.
Why is the logger converting its arguments to string even though nobody is going to use this string? Is there a way to prevent it (other than not performing the logger calls)? | 0 | python,logging,optimization | 2016-02-15T13:56:00.000 | 0 | 35,411,265 | Use logger.debug('%s', myArray) rather than logger.debug(myArray). The first argument is expected to be a format string (as all the documentation and examples show) and is not assumed to be computationally expensive. However, as @dwanderson points out, the logging will actually only happen if the logger is enabled for the level.
Note that you're not forced to pass a format string as the first parameter - you can pass a proxy object that will return the string when str() is called on it (this is also documented). In your case, that's what'll probably do the array to string conversion.
If you use logging in the manner that you're supposed to (i.e. as documented), there shouldn't be the problem you describe here. | 1 | 2,623 | false | 0 | 1 | Python logger.debug converting arguments to string without logging | 35,420,774 |
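The difference can be demonstrated without NumPy; CostlyRepr below is a stand-in for a large array whose str() conversion is expensive, and its converted flag records whether str() ever ran.

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records are filtered out
logger = logging.getLogger("calc")

class CostlyRepr:
    """Stand-in for a large NumPy array whose str() conversion is expensive."""
    def __init__(self):
        self.converted = False
    def __str__(self):
        self.converted = True
        return "<huge array dump>"

lazy = CostlyRepr()
logger.debug("result: %s", lazy)       # deferred: str() is never called at INFO level

eager = CostlyRepr()
logger.debug("result: " + str(eager))  # eager: str() runs although nothing is logged
```

Only the concatenated form pays the conversion cost on every call, even when the record is dropped.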
I execute py.test like this: py.test -s -f, where -f is looponfail mode and -s is --capture=no mode.
But print() output is shown only when a test fails. If all tests succeed, no print() call anywhere in the code produces output.
How can I enable print() output even in looponfail mode?
Python 3.4
Py.test 2.7.2 | 0 | python,pytest | 2016-02-17T01:00:00.000 | 0 | 35,446,029 | You should just update to a newer pytest. It looks like this problem was fixed in pytest 2.9.0. | 0 | 118 | false | 0 | 1 | How could I enable print statement in pytest looponfail mode? | 44,379,571
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | I am on Linux and wish to find the process spawned by a Python command.
Example: shutil.copyfile.
How do I do so?
Generally I have just read the processes from the terminal with ps; however, this command completes nearly instantaneously, so I cannot do that here without some lucky timing.
htop doesn't show the info; strace seems to show a lot of info, but I can't seem to find the process in it. | 0 | python,linux,process | 2016-02-17T16:59:00.000 | 1 | 35,463,019 | Would running a filter in htop be quick enough?
Run htop, press F5 to enter tree mode, then F4 to filter, and type in python... it should show all the Python processes as they open/close. | 0 | 155 | false | 0 | 1 | How to find the name of a process spawned by Python? | 35,463,485
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have a Splunk query which returns several JSON results and which I want to save as an alert, sending regular emails to a list of people.
I have created a Python script which takes as input some JSONs like the ones from the Splunk logs and beautifies the results.
How can I configure the Splunk alert so that the users get by email the beautified results? Is it possible to configure Splunk to run the Python script on the query results and put the beautified output in the email body? Should I upload the script somewhere? | 0 | python,email,alert,splunk | 2016-02-18T22:12:00.000 | 0 | 35,493,485 | well stated @IvanStarostin
The script should always be located in $SPLUNK_HOME/bin/scripts, or in $SPLUNK_HOME/etc//bin/scripts in the case of an app.
When an alert triggers you can select a script to be run in the following way:
Run the desired search and then click Save as Alert. Configure how often your search should run and the conditions under which the alert should be triggered (e.g. when results is equal to 0).
Then select Run a script from the Add Actions menu. Enter the file name of the script that you want to run and you are set up!
You can test your script in the search bar too by piping it after your query:
....|script commandname | 0 | 1,055 | true | 0 | 1 | How to configure Python script to change body for Splunk email alert? | 35,514,817 |
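The beautifier itself could be a minimal script along these lines — a sketch assuming each input line is one JSON object (how Splunk actually passes results to your script may differ):

```python
import json
import sys

def beautify(lines):
    """Pretty-print one JSON object per input line."""
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        out.append(json.dumps(event, indent=2, sort_keys=True))
    return "\n".join(out)

# Wired into Splunk, the script body would be roughly:
#     sys.stdout.write(beautify(sys.stdin))
```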
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I set up a Django website via IIS Manager, which works fine; I then added a function that uses the GDAL libs, and that function also works fine.
The site also runs fine if I start it from CMD with this command:
python path\manage.py runserver 8000
But it cannot run via IIS.
The error I get is DLL load failed: The specified module could not be found., raised by from osgeo import gdal, osr.
My guess is that I need to add environment variables to the FastCGI settings of IIS.
I added these to the environment variables collection, but it does not work:
GDAL_DATA C:\Program Files (x86)\GDAL\gdal-data
GDAL_DRIVER_PATH C:\Program Files (x86)\GDAL\gdalplugins
Any help would be appreciated | 0 | python,django,iis,fastcgi,gdal | 2016-02-19T04:36:00.000 | 0 | 35,497,392 | Solved it by restarting the machine. | 0 | 134 | true | 1 | 1 | How to setup FastCGI setting of IIS with GDAL libs | 35,591,876
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am creating an API with AWS API Gateway and Lambda functions. I want to be able to make an API call with the following criteria:
In the method request of the API I have specified the Query String: itemid
I want to be able to use this itemid value within my lambda function
I am using Python in Lambda
I have tried putting the following in the Mapping template under the Method execution; however, I get an error:
-{ "itemid": "$input.params('itemid')" } | 0 | python,amazon-web-services,aws-lambda | 2016-02-19T12:12:00.000 | 0 | 35,505,089 | You also have to include the query string parameter in the section Resources/Method Request. | 0 | 1,250 | false | 1 | 1 | AWS Lambda parameter passing | 42,859,631 |
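With that mapping template and the query string declared in Method Request, the value should arrive as a top-level key of the Lambda event. A sketch of a handler under that assumption (the error shape is illustrative):

```python
def lambda_handler(event, context):
    """Read the query string value passed through by the mapping template."""
    itemid = event.get("itemid")
    if not itemid:
        return {"error": "missing itemid"}  # hypothetical error shape
    return {"itemid": itemid}
```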
3 | 6 | 0 | 0 | 6 | 0 | 0 | 0 | I've just installed Python for the first time and I'm trying to reference the win32com module; however, whenever I try to import it I get the message "no module named win32com".
Any ideas? | 0 | python | 2016-02-21T11:19:00.000 | 0 | 35,535,422 | When working with Python projects it's always a good idea to create a so-called virtual environment; this way your modules are better organized and import errors are reduced.
For example, let's assume that you have a script.py which imports multiple modules, including pypiwin32.
Here are the steps to solve your problem:
1. Depending on your operating system you need to download and install the virtualenv package; on Debian it's as simple as sudo apt install virtualenv.
2. After installing the 'virtualenv' package, go to your project/script folder and create a virtualenv folder with virtualenv venv; it creates a folder named venv in that directory.
3. Activate your virtualenv with source /path/to/venv/bin/activate; if you're already in the directory where venv exists, just issue source venv/bin/activate.
4. After activating your venv, install your project dependencies: pip install pypiwin32 or pip install pywin32.
5. Run your script; it won't throw that error again :) | 0 | 17,931 | false | 0 | 1 | No module named win32com | 59,456,007
3 | 6 | 0 | 8 | 6 | 0 | 1.2 | 0 | I've just installed Python for the first time and I'm trying to reference the win32com module; however, whenever I try to import it I get the message "no module named win32com".
Any ideas? | 0 | python | 2016-02-21T11:19:00.000 | 0 | 35,535,422 | As it is not built into Python, you will need to install it.
pip install pywin32 | 0 | 17,931 | true | 0 | 1 | No module named win32com | 35,535,450
3 | 6 | 0 | 1 | 6 | 0 | 0.033321 | 0 | I've just installed Python for the first time and I'm trying to reference the win32com module; however, whenever I try to import it I get the message "no module named win32com".
Any ideas? | 0 | python | 2016-02-21T11:19:00.000 | 0 | 35,535,422 | This will work as well
python -m pip install pywin32 | 0 | 17,931 | false | 0 | 1 | No module named win32com | 59,476,830 |
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I have used the "import existing project" option to import an existing project into the workspace. However, Eclipse actually makes copies of the original files and creates a new project.
So if I make a change to a file, it only affects the copied file in the workspace; the original file is untouched.
My question is: how do I make my modifications affect the original files? | 0 | python,eclipse,ide | 2016-02-23T06:28:00.000 | 0 | 35,570,376 | The 'Import Existing Projects into Workspace' wizard has a 'Copy projects into workspace' check box on the first page. Unchecking this option will make Eclipse work on the original files. | 0 | 98 | true | 0 | 1 | eclipse modify imported project files | 35,571,869
1 | 2 | 0 | 2 | 5 | 0 | 0.197375 | 0 | The development environment, we use, is FreeBSD. We are evaluating Python for developing some tools/utilities. I am trying to figure out if all/most python packages are available for FreeBSD.
I tried using CentOS/Ubuntu, and it was fairly easy to install Python as well as packages (using pip). On FreeBSD it was not as easy, but maybe I'm not using the correct steps or am missing something.
We've some tools/utilities on FreeBSD that run locally and I want Python to interact with them - hence, FreeBSD.
Any inputs/pointers would be really appreciated.
Regards
Sharad | 0 | python,pip,freebsd | 2016-02-23T07:59:00.000 | 1 | 35,571,862 | The assumption that powerful and high-profile existing python tools use a lot of different python packages almost always holds true. We use FreeBSD in our company for quite some time together with a lot of python based tools (web frameworks, py-supervisor, etc.) and we never ran into the issue that a certain tool would not run on freeBSD or not be available for freeBSD.
So to answer your question:
Yes, all/most python packages are available on FreeBSD
One caveat:
The freeBSD ports system is really great and will manage all compatibility and dependency issues for you. If you are using it (you probably should), then you might want to avoid pip. We had a problem in the past where the package manager for ruby did not really play well with the ports database and installed a lot of incompatible gems. This was a temporary issue with rubygems but gave us a real headache. We tend to install everything from ports since then and try to avoid 3rd party package managers like composer, pip, gems, etc. Often the ports invoke the package managers but with some additional arguments so they ensure not to break dependencies. | 0 | 982 | false | 0 | 1 | Is Python support for FreeBSD as good as for say CentOS/Ubuntu/other linux flavors? | 35,946,582 |
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | Should you generally pass errors that occur in functions and class methods back to the caller to handle? What are cases when you might not? I'm asking because I am creating a module to perform the oauth dance, and if you get a negative response from the websites you are trying to access I'm not sure if I should pass it up to the caller, or handle it there. | 0 | python | 2016-02-23T18:07:00.000 | 0 | 35,585,002 | We usually raise an error when we expect a certain value or input from the user. For example: if a program requires the user to enter a positive integer and they enter a negative integer, we raise an error and ask them to enter a positive integer.
We handle errors ourselves when the user is not at fault. For example: if the website to access requires email verification and the email entered by the user is not recognized, you raise an error and ask them to put in a valid email address; but if the website has a search bar and the string put in does not split properly for the search and we get a KeyError or ValueError, it's up to the programmer to handle it. | 0 | 51 | false | 0 | 1 | What is best practice regarding passing errors to the caller? | 35,585,229
2 | 2 | 0 | 3 | 1 | 1 | 1.2 | 0 | Should you generally pass errors that occur in functions and class methods back to the caller to handle? What are cases when you might not? I'm asking because I am creating a module to perform the oauth dance, and if you get a negative response from the websites you are trying to access I'm not sure if I should pass it up to the caller, or handle it there. | 0 | python | 2016-02-23T18:07:00.000 | 0 | 35,585,002 | It generally depends on the answer to two questions:
What layer has the information to explain the error, and present it to users or developers?
What layer can correct the error, in a way that the upper layer cannot tell it ever happened?
Examine the problem layer by layer. Find where the error can be caught, corrected and transparently handled. Failing that, find where the error can be explained in useful terms, and enriched with relevant information.
It's often the case that the function that actually encounters the error can neither explain it adequately nor correct it. It should raise an exception, delegating the decision to the upper layer, possibly attaching additional data to the error.
When the exception has climbed high enough, you'll find yourself in one of the 2 cases I described above, in a position where you can either correct the error transparently or report it in clear language, with the information needed to track down the cause.
In the case of your OAuth module, you should:
Decide whether retrying the action makes sense (e.g. a network error)
Determine the cause of the problem (e.g. wrong credentials), and raise an exception that clearly conveys that. | 0 | 51 | true | 0 | 1 | What is best practice regarding passing errors to the caller? | 35,585,134
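The layering described above can be sketched like this; OAuthError, fetch_token and login are illustrative names, not part of any real OAuth library:

```python
class OAuthError(Exception):
    """Domain-specific error carrying extra data for the upper layer."""
    def __init__(self, message, status=None):
        super().__init__(message)
        self.status = status

def fetch_token(response_status):
    # Low layer: knows *what* failed, cannot decide how to recover.
    if response_status == 401:
        raise OAuthError("wrong credentials", status=response_status)
    return "token"

def login(response_status):
    # Upper layer: the first place that can retry or explain the failure.
    try:
        return fetch_token(response_status)
    except OAuthError as exc:
        return "login failed (%s, HTTP %s)" % (exc, exc.status)
```

The low layer attaches the relevant data (here, the status) and delegates; the caller decides whether to retry or report it in clear terms.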
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I started my app with a Gmail account, and have recently upgraded to Mandrill. I am not using the API; I just changed my SMTP settings through env variables.
When I add the new Mandrill SMTP provider, my in-app mails work perfectly, but allauth's mails do not work at all. (I can see from Mandrill's data that they are not rejected or bounced; they're just not sent.)
Any help? | 0 | python,django,email,mandrill,django-allauth | 2016-02-24T07:41:00.000 | 0 | 35,596,059 | Turns out I needed to add DEFAULT_FROM_EMAIL to my settings.py file. I don't understand why it works with a gmail address and not a custom one, but this fixed it. | 0 | 86 | true | 1 | 1 | Django Allauth mails stop working when I change my smtp provider (mandrill) | 35,596,437 |
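For reference, the fix as a settings.py fragment; the address below is a placeholder (not from the original post) and must be one your SMTP provider allows as a sender. Django's mail helpers, and therefore allauth, fall back to DEFAULT_FROM_EMAIL when no explicit sender is given.

```python
# settings.py -- the address is a placeholder; it must be one your SMTP
# provider (Mandrill here) is authorized to send from.
DEFAULT_FROM_EMAIL = "noreply@example.com"
```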