Column                              Dtype          Min    Max
Available Count                     int64          1      31
AnswerCount                         int64          1      35
GUI and Desktop Applications        int64          0      1
Users Score                         int64          -17    588
Q_Score                             int64          0      6.79k
Python Basics and Environment       int64          0      1
Score                               float64        -1     1.2
Networking and APIs                 int64          0      1
Question                            stringlengths  15     7.24k
Database and SQL                    int64          0      1
Tags                                stringlengths  6      76
CreationDate                        stringlengths  23     23
System Administration and DevOps    int64          0      1
Q_Id                                int64          469    38.2M
Answer                              stringlengths  15     7k
Data Science and Machine Learning   int64          0      1
ViewCount                           int64          13     1.88M
is_accepted                         bool           2 classes
Web Development                     int64          0      1
Other                               int64          1      1
Title                               stringlengths  15     142
A_Id                                int64          518    72.2M
1
2
0
10
7
0
1.2
0
I cannot use python in GVIM. When I type :python print 1 it just closes GVIM without any message. I tried to run it with -V90logfile but I couldn't find any information about the crash. GVIM is compiled with python (:version shows +python/dyn +python3/dyn). GVIM version: 7.3.46 (32 bit with OLE). Python version: 2.7.3. Initially GVIM couldn't find python27.dll, so I edited $MYVIMRC and added: let $Path = "C:\\Program Files (x86)\\Python27;".$Path Both GVIM and Python were installed using corporate standards - not manually via installers. Asking here as IT were not able to help me and redirected me to external support. I could reproduce the error on my personal computer, where I copied both GVIM and Python without installing them. Any further suggestions?
0
python,vim,crash
2016-02-24T08:41:00.000
1
35,597,157
Finally solved the problem. It turned out that Python uses the PYTHONPATH variable to resolve the Python folder (used to load the Python libraries and so on). Here is the default value for Python 2.7: C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk The variable can be set using one of the following: 1. Windows registry: set the default value of the HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\PythonPath key. 2. Environment variable: create an environment variable named PYTHONPATH and set the value (the same way you edit the global PATH). 3. _vimrc file: this is the most portable way. Edit your _vimrc (i.e. open vim and enter the :e $MYVIMRC command) and set the variable: let $PYTHONPATH = "C:\\Python27\\Lib;C:\\Python27\\DLLs;C:\\Python27\\Lib\\lib-tk"
0
1,440
true
0
1
GVIM crashes when running python
35,620,795
1
2
0
1
0
1
0.099668
0
SimpleITK provides an easy-to-use Python interface. Can I extend the classes from there? I need to solve a registration problem, which requires me to write my own customized registration class, especially the similarity metric. How can I extend SimpleITK in Python for my use?
0
python,itk,image-registration
2016-02-24T15:44:00.000
0
35,606,542
The wrapped SimpleITK interface for Python does not provide an interface to extend or derive from. The options for the SimpleITK ImageRegistrationMethod are the options available. Deriving classes and tweaking algorithms is best done with ITK at the C++ level. You may, however, be able to put together a little registration framework with components of SimpleITK and Python. For example, you could use the ResampleImageFilter and the Transform classes from SimpleITK along with a scipy optimizer and a custom metric.
0
751
false
0
1
How to extend an ITK class in Python?
35,612,218
1
1
0
1
1
0
0.197375
0
I have a Raspberry Pi collecting data from sensors attached to it. I would like to have this data - collected every minute - accessible from an online DB (Amazon RDS | MySQL). Currently, a python script running on the Pi pushes this data to an Amazon RDS instance every 50 seconds (roughly per minute). However, I have no records for the periods when the internet is down. I will appreciate any suggestions on how to fix this. Here are my thoughts so far: store data in a local MySQL DB and run a separate script that checks for differences between the online and local DBs, updating the online one where needed - this would run every minute and write only one record to the online DB per minute if all is well; or utilize some sort of feature within MySQL itself - a replication job?
1
python,mysql,database,synchronization,raspberry-pi
2016-02-25T03:28:00.000
0
35,617,670
I went with my first thought: store the sensor data in a local DB (SQLite3 for its small footprint). Records are created every half minute. A separate script - run regularly via cron - compares the last timestamp entry in the cloud DB with the local one and updates the cloud DB. Even though the comparison would ideally mean a doubling of DB transactions (a read + a write), if the last timestamp recorded in the online DB is also stored locally for reference, the remote read becomes unnecessary, making it more efficient. A minimal sketch of the sync step is below.
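A rough sketch of that sync step, assuming a hypothetical local table readings(ts, value) and a local state file last_sync.txt caching the newest timestamp already pushed; push_to_cloud stands in for the actual RDS/MySQL insert:

```python
import sqlite3

def rows_to_sync(local_db="sensors.db", state_file="last_sync.txt"):
    """Return local readings newer than the last one pushed to the cloud."""
    try:
        with open(state_file) as f:
            last_ts = f.read().strip()
    except IOError:
        last_ts = ""  # nothing synced yet

    conn = sqlite3.connect(local_db)
    rows = conn.execute(
        "SELECT ts, value FROM readings WHERE ts > ? ORDER BY ts",
        (last_ts,),
    ).fetchall()
    conn.close()
    return rows

for ts, value in rows_to_sync():
    push_to_cloud(ts, value)  # placeholder for the RDS/MySQL insert
# after a successful push, write the newest ts back to last_sync.txt
```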
0
476
false
0
1
Syncing locally collected regular data to online DB over unreliable internet connection
38,479,349
1
1
0
5
5
0
0.761594
0
When developing a Python web app (Flask/uWSGI) and running it on my local machine, *.pyc files are generated by the interpreter. My understanding is that these compiled files can make things load faster, but not necessarily run faster. When I deploy this same app to production, it runs under a user account that has no write permissions on the local file system. There are no *.pyc files committed to source control, and no effort is made to generate them during the deploy. Even if Python wanted to write a .pyc file at runtime, it would not be able to. Recently I started wondering if this has any tangible effect on the performance of the app, either in terms of the very first pageview after the process starts, or consistently throughout its entire lifetime. Should I throw a python -m compileall in as part of my deploy scripts?
0
python,deployment,pyc
2016-02-25T04:10:00.000
0
35,618,159
Sure, you can go ahead and precompile to .pyc, as it won't hurt anything. Will it affect the first or nth pageview? Assuming Flask/WSGI runs as a persistent process, not at all. By the time the first page has been requested, all of the Python modules will already have been loaded into memory (as bytecode). Thus, server startup time will be the only thing affected by not having the files pre-compiled. However, if for some reason a new Python process were invoked for each page request, then yes, there would (probably) be a noticeable difference in performance and it would be better to pre-compile. As Klaus said in the comments above, the only other time a pageload might be affected is if a function happens to import a module that hasn't already been imported. That requires the module to be parsed and converted to bytecode, then loaded into memory, before execution can continue. A pre-compile step is sketched below.
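A deploy-time pre-compile step could look like this; compileall is in the standard library, and the directory path is a placeholder:

```python
import compileall

# Compile every module under the deploy directory ahead of time, so the
# runtime user (which has no write permission) never needs to write .pyc files.
compileall.compile_dir("/srv/myapp", quiet=1)
```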
0
2,979
false
1
1
Should I generate *.pyc files when deploying?
35,619,259
1
1
0
2
2
1
1.2
0
I've created a pydev project in eclipse. At the top level of my workspace I can see these two files: .project and .pydevproject. I can also see these in each of the subfolders that contain my actual projects. At the top of my workspace there is also a .metadata folder. What should I commit to source control? I.e., what can I delete and still be able to open the project with minimal effort (hopefully entirely automated regeneration of files)? If this were a Visual Studio C++ project, the answer would be to keep just the ".sln", ".vcxproj" and ".vcxproj.filters" files, because the ".vs" folder and ".suo" files autogenerate on opening. I've tried deleting the ".metadata" folder, but after that nothing appears to load in my workspace. Also, I am working with someone not using an IDE. Which eclipse files do we need to update to keep in sync?
0
python,eclipse,pydev
2016-02-26T10:12:00.000
0
35,648,909
Disclaimer: I am not familiar with PyDev, just with Eclipse in general. You definitely should not check in the .metadata folder. That one is for your Eclipse workspace as a whole and contains your personal configuration. (That's why your workspace appeared empty after you deleted that folder.) In fact, you should not check in your workspace folder at all, but just the several project folders within it. Whether to check in the .project files is sort of a matter of taste. Those contain project-specific information and settings, and with them it's easier to import the project into Eclipse, but you can import the project without them too; it's just a bit more work. If other developers are not using Eclipse, those files are useless for them. In the worst case, your co-developers will delete those files from source control, and when you update your project later they are deleted on your end too, messing up your project. About deleting the files: note that there is a difference between not checking files into version control and deleting them locally. So in short: do not commit those files to version control, but don't delete them locally either. Depending on what sort of version control you are using, you can set it to ignore those files.
0
339
true
0
1
What to commit to source control in Eclipse Pydev
35,649,823
1
1
0
1
7
0
1.2
0
I have 2 code bases, one in python, one in c++. I want to share real-time data between them. I am trying to evaluate which option will work best for my specific use case: many small data updates from the C++ program to the python program; both run on the same machine; reliability is important; low latency is nice to have. I can see a few options: One process writes to a flat file, the other process reads it - this is non-scalable, slow and I/O error prone. One process writes to a database, the other process reads it - this is more scalable and slightly less error prone, but still very slow. Embed my python program into the C++ one or the other way round - I rejected that solution because both code bases are reasonably complex, and I preferred to keep them separated for maintainability reasons. Use sockets in both programs and send messages directly - this seems a reasonable approach, but does not leverage the fact that they are on the same machine (it will be optimized slightly by using localhost as the destination, but still feels cumbersome). Use shared memory - so far I think this is the most satisfying solution I have found, but it has the drawback of being slightly more complex to implement. Are there other solutions I should consider?
0
python,c++,ipc
2016-02-26T11:55:00.000
0
35,651,059
First of all, this question is highly opinion-based! The cleanest way would be to run them in the same process and have them communicate directly. The only complexity is implementing a proper API and the C++ -> Python calls. The drawbacks are maintainability, as you noted, potentially lower robustness (both crash together - not a problem in most cases) and lower flexibility (are you sure you'll never need to run them on different machines?). Extensibility is the best here, as it's very simple to add more communication or to change existing communication. You could also reconsider the maintainability point: can your python app be used without its C++ counterpart? If not, I wouldn't worry about maintainability so much. Shared memory is the next choice, with better maintainability but the same other drawbacks. Extensibility is a little worse but still not bad. It can be complicated; I don't know Python's support for shared memory operations, but for C++ you can have a look at Boost.Interprocess. The main thing I'd check first is synchronisation between the processes. Then, network communication: there are lots of choices here, from the simplest possible binary protocol implemented at the socket level to the higher-level options mentioned in the comments. It depends how complex your C++ <-> Python communication is and may become in the future. This approach can be more complicated to implement and may require 3rd-party libraries, but once done it's extensible and flexible. Usually such 3rd-party libraries are based on code generation (Thrift, Protobuf), which doesn't simplify your build process. I wouldn't seriously consider the file system or a database for this case.
0
1,529
true
0
1
Sharing information between a python code and c++ code (IPC)
35,651,676
1
2
0
0
5
0
0
0
In order to investigate some issues with unreleased system resources I'd like to force immediate garbage collection on an already running python script. Is this somehow possible, e.g. by sending some kind of signal that Python would understand as an order to run gc; or any other similar way? Thanks. I'm running Python 2.7 on a Linux server.
0
python,garbage-collection
2016-02-26T15:16:00.000
0
35,655,346
You can attach to a running python script with a debugger and issue any command within it, just like in an interactive console. I used PyCharm's debugger, but there is a variety of them.
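If you can modify the script before starting it, an alternative sketch is to install a signal handler that runs the collector on demand; the choice of SIGUSR1 here is arbitrary:

```python
import gc
import os
import signal

def _force_gc(signum, frame):
    # Run a full collection and report how many objects were unreachable.
    print("gc.collect() collected %d objects" % gc.collect())

signal.signal(signal.SIGUSR1, _force_gc)
print("trigger a collection with: kill -USR1 %d" % os.getpid())
```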
0
2,451
false
0
1
How to force Python garbage collection on a running script
38,498,181
1
1
0
0
1
0
0
0
I have a .sh which starts a python file. This python file generates a .txt when started via the command line with sudo, but doesn't when started via the .sh. Why doesn't the python file give me a .txt when started through cron and the .sh? When I use su -c "python /var/www/html/readdht.py > /var/www/html/dhtdata.txt" 2>&1 >/dev/null, the .sh gives me output, but omits the newlines, so I get one big string. The python file creates the .txt correctly when started from the command line with sudo python readdht.py. If the python file is started from the .sh with su -c "python /var/www/html/readdht.py, no .txt is created. What's going on?
0
python,shell,cron
2016-02-28T21:37:00.000
1
35,688,599
Difficult to answer without more colour on your environment. Here's how to debug this though: do not redirect your output to /dev/null; then read in your cron log what happened. It seems very likely that your script fails and therefore does not write anything to standard out, so no file is created. I highly suspect it is because you are using a python module, python version or python path that is loaded in your bashrc. Cron does not execute your bashrc - it's an independent environment - so you cannot assume that a script that runs correctly when you launch it manually will work in your cron. Try sourcing your bashrc in your cron task; that's very likely to solve your problem.
0
43
false
0
1
.sh started by cron does not create file via python
35,688,902
1
1
0
0
0
0
0
1
I'm trying to build a Twitter crawler that would crawl all the tweets of a specified user and save them in json format. While trying to convert the Status object into json format using the _json attribute of Status, I'm getting the following error: AttributeError: 'Status' object has no attribute '_json'. Can anyone please help me with this?
0
python,json,tweepy
2016-03-01T04:51:00.000
0
35,714,894
The _json attribute started working once I upgraded my tweepy version to 3.5.0.
0
1,607
false
0
1
Tweepy : AttributeError : Status object has no attribute _json
35,715,292
1
1
0
1
5
0
1.2
0
Is it possible to run a python script inside Java in an android app? The main app will be Java, but some cryptography should be done in python. Is it possible to do this?
0
java,android,python
2016-03-01T06:28:00.000
0
35,716,086
Running a python script inside an android app is not practical at the moment, but what you can do is create an HTTP web service that interprets the python and sends the results back to the android application. Then it's just an Android app communicating with an HTTP web service, which is simpler than packing an interpreter. This way it makes the app lighter too.
0
3,858
true
1
1
Run python script inside java on android
59,486,412
1
1
0
1
0
0
0.197375
1
I'm researching WebSockets at the moment and just found Autobahn with Autobahn|Python. I'm not sure that I understand the function of that toolset correctly. My intention is to use a WebSocket server for communication between a C program and an HTML client. The idea is to let the C program connect via WebSocket to the server and send the calculation progress of the C program to every HTML client that is connected to that WebSocket server. Am I able to write a WebSocket server with Autobahn|Python and then connect to it with an HTML5 client and a C program client?
0
python,c,html,websocket,autobahn
2016-03-02T09:08:00.000
0
35,742,708
Autobahn|Python provides both a WebSocket implementation for Python and an implementation of the WAMP protocol on top of that. You can use the WebSocket part on its own to implement your WebSocket server.
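A minimal server sketch along those lines, using Autobahn's asyncio flavour; the port and the simple echo behaviour are assumptions - a real server would keep a list of connected HTML clients and forward the C program's progress messages to them:

```python
import asyncio
from autobahn.asyncio.websocket import WebSocketServerProtocol, \
    WebSocketServerFactory

class ProgressProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # Echo back for demonstration; a real server would broadcast
        # progress updates from the C client to all HTML clients.
        self.sendMessage(payload, isBinary)

factory = WebSocketServerFactory()
factory.protocol = ProgressProtocol

loop = asyncio.get_event_loop()
server = loop.run_until_complete(loop.create_server(factory, "127.0.0.1", 9000))
loop.run_forever()
```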
0
59
false
1
1
Is the following principle the right for autobahn python?
35,795,470
1
5
0
3
9
0
0.119427
0
I am trying to interact with an API that uses a timestamp that starts at a different time than the UNIX epoch. It appears to start counting on 2000-01-01, but I'm not sure exactly how to do the conversion or what the name of this datetime format is. When I send a message at 1456979510 I get a response back saying it was received at 510294713. The difference between the two is 946684796 (sometimes 946684797) seconds, which is approximately 30 years. Can anyone let me know the proper way to convert between the two? Or whether I can generate them outright in Python? Thanks. Edit: An additional detail I should have mentioned is that this is an API to a Zigbee device. I found the following datatype entry in their documentation: 1.3.2.7 Absolute time - This is an unsigned 32-bit integer representation for absolute time. Absolute time is measured in seconds from midnight, 1st January 2000. I'm still not sure of the easiest way to convert between the two.
0
python,datetime,unix,timestamp,epoch
2016-03-03T04:42:00.000
1
35,763,357
Well, there are 946684800 seconds between 2000-01-01T00:00:00Z and 1970-01-01T00:00:00Z. So you can just define a constant of 946684800 and add it to or subtract it from your Unix timestamps. The variation you are seeing in your numbers has to do with the delay in sending and receiving the data, and could also be due to clock synchronization, or lack thereof. Since these are whole seconds and your numbers are 3 to 4 seconds off, I would guess that the clocks of your computer and your device are also 3 to 4 seconds out of sync.
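A sketch of the conversion in Python; the constant follows directly from the 30-year offset described above:

```python
# Seconds between the Unix epoch (1970-01-01T00:00:00Z) and the
# Zigbee "absolute time" epoch (2000-01-01T00:00:00Z).
EPOCH_2000_OFFSET = 946684800

def unix_to_zigbee(unix_ts):
    return unix_ts - EPOCH_2000_OFFSET

def zigbee_to_unix(zigbee_ts):
    return zigbee_ts + EPOCH_2000_OFFSET

# 1456979510 - 946684800 = 510294710, within a few seconds of the
# 510294713 the device reported (transmission delay / clock skew).
```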
0
13,703
false
0
1
Conversion from UNIX time to timestamp starting in January 1, 2000
35,763,677
1
1
0
0
0
0
0
0
I'm working on a Python image recognition test for Android devices. It works locally, but when I try to build it for AWS I always get the following error: copying M2Crypto\SSL__init__.py -> build\lib.win32-2.7\M2Crypto\SSL running build_ext building 'M2Crypto.__m2crypto' extension swigging SWIG/_m2crypto.i to SWIG/_m2crypto_wrap.c swig.exe -python -Ic:\python27\include -Ic:\python27\PC -Ic:\pkg\include -includeall -modern -builtin -outdir build\lib.win32-2.7\M2Crypto -o SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i error: command 'swig.exe' failed: No such file or directory I've already tried almost every solution I found on the Internet, but nothing changed. I'm using Windows 8.1 and Python 2.7. What should I do? How should I fix this problem? Thank you in advance.
0
python,amazon-web-services,m2crypto
2016-03-03T14:39:00.000
0
35,775,144
Device Farm requires that your python test be able to execute on the Linux_X64 platform. You could create and package your test bundle on a linux_x64 platform, then try to run it on Device Farm.
0
96
false
1
1
Python Mobile Test on AWS Device Farm (M2Crypto Issue)
35,782,735
1
1
0
0
0
1
1.2
0
I'm working on a package that uses data from an external git repository. When it does its job, it first clones the git repository and then copies files from it to some other location. Where should I save this repository (and other non-python files) in my filesystem? Is there any standard place for that? Sure, I could just use the site-packages/ directory for my files. But the problem is, the git repository could contain python packages too, and I don't want them to be importable. Is there, maybe, some way to specifically exclude a folder from site-packages/? I think *.dist-info folders are excluded; should I create a fake one for my package? Thank you very much.
0
python
2016-03-04T14:44:00.000
0
35,798,844
Install them into a subdirectory of your package directory in site-packages. If the subdirectory doesn't have an __init__.py file, or if its name has a dash (-) or another character that isn't valid in a Python identifier, it can't be imported using the import statement, nor can any Python file located under it. So, for example, if your package name is mypackage, you could use site-packages/mypackage/data-files as the location to store your data.
0
76
true
0
1
Standard directory for installed Python package's data
35,801,829
1
3
0
1
12
1
0.066568
0
I typically put the high-level documentation for a Python package into the docstring of its __init__.py file. This makes sense to me, given that the __init__.py file represents the package's interface with the outside world. (And, really, where else would you put it?) So, I was really quite surprised when I fired up Sphinx for the first time and saw this content buried near the very end of the package documentation, after the content for all of the submodules. This seems backward to me. The very first thing the user will see when he visits the page for a package is the documentation of the submodule that just happens to come first alphabetically, and the thing he should see first is right near the bottom. I wonder if there is a way to fix this, to make the stuff inside of __init__.py come out first, before all of the stuff in the submodules. And if I am just going about this in the wrong way, I want to know that. Thanks!
0
python-sphinx
2016-03-05T04:57:00.000
0
35,810,213
It is also possible to add this option in the conf.py file. Search conf.py for the line containing the sphinx-apidoc command (located in a try section) and add the --module-first option. The new line will look like this: cmd_line_template = "sphinx-apidoc --module-first -f -o {outputdir} {moduledir}"
0
3,613
false
0
1
Can Sphinx emit the 'module contents' first and the 'submodules' last?
66,365,397
1
1
0
1
2
0
1.2
0
I have a python program that is an infinite loop and sends some data to my database. I want this python script to run when I power on my Intel Galileo. I tried to make a sh script (python myprogram.py) and made it run on startup from /etc/init.d. When I restarted my Galileo, nothing happened: Linux didn't load, the Arduino sketch didn't load, and even my computer didn't recognize it. I guess this happened because the python program is an infinite loop. Is there a way that I can run my system without problems and still run my python script on startup?
0
python,linux,startup,intel-galileo
2016-03-05T18:26:00.000
1
35,818,003
I made myprogram.py run in the background with python myprogram.py & and it worked. The & is used to run whatever process you want in the background.
0
259
true
0
1
Run python program on startup in background on Intel Galileo
35,972,324
2
4
0
5
4
0
0.244919
0
I have python code running on a raspberry pi B+ that uses the sounddevice library, which lets you play and record sounds with python. I have successfully installed the modules. I can confirm this through the python command line: entering import sounddevice as sd works without errors. I have also confirmed it by typing help('modules') in the python command line, and the sounddevice module appears. Only when I run this code in an independent python program does the ImportError: No module named sounddevice appear. Hope someone can help. Here is the included code: import sounddevice as sd. The error: ImportError: No module named sounddevice
0
python,audio
2016-03-07T01:39:00.000
0
35,834,903
After a lot of trial and error I finally solved it. The culprit was the pip install sounddevice --user command: you need to remove the --user part so that the command is pip install sounddevice. This installs it system-wide, and it works.
0
14,295
false
0
1
Python import sounddevice as sd (ImportError: No module named sounddevice)
36,206,031
2
4
0
0
4
0
0
0
I have python code running on a raspberry pi B+ that uses the sounddevice library, which lets you play and record sounds with python. I have successfully installed the modules. I can confirm this through the python command line: entering import sounddevice as sd works without errors. I have also confirmed it by typing help('modules') in the python command line, and the sounddevice module appears. Only when I run this code in an independent python program does the ImportError: No module named sounddevice appear. Hope someone can help. Here is the included code: import sounddevice as sd. The error: ImportError: No module named sounddevice
0
python,audio
2016-03-07T01:39:00.000
0
35,834,903
I had this same problem on Windows 10, even after eliminating the --user part of the pip install command. For some reason, installing pyaudio first resolved the problem with sounddevice. Sounddevice continues to work even after uninstalling pyaudio. They're both based on PortAudio, so perhaps there's something shared in there, but I am not sure.
0
14,295
false
0
1
Python import sounddevice as sd (ImportError: No module named sounddevice)
62,490,236
2
2
0
2
8
0
0.197375
0
Can I use Python as a backend for my ionic app? I am new to ionic as well as backend development. If not python, please suggest some good language for backend development. I am working on a hybrid app.
0
python,ionic-framework,backend,hybrid-mobile-app
2016-03-07T10:38:00.000
0
35,841,555
Yes, you can use python with the django rest framework as a backend for your ionic app.
0
12,424
false
0
1
Can I Use Python in Ionic for Backend work
35,842,202
2
2
0
6
8
0
1.2
0
Can I use Python as a backend for my ionic app? I am new to ionic as well as backend development. If not python, please suggest some good language for backend development. I am working on a hybrid app.
0
python,ionic-framework,backend,hybrid-mobile-app
2016-03-07T10:38:00.000
0
35,841,555
You can certainly work with Python. There is an awesome framework called Django which will ease your development. However, if you are new to backend development and are already developing the ionic app, I strongly recommend using NodeJS. It is Javascript running on the server machine. The reason is that you will be developing in the same language on both sides, simplifying the learning curve. NodeJS works a little differently from other environments, since it runs in a single process using an event loop to handle incoming requests. It is worth taking a look; you will be building serious functionality in very little time. Take a look at Sequelize to work with SQL databases in an abstracted ORM way (I don't know if you are familiar with databases, but it brings classes and objects to talk to the DB, so you can forget about sql commands like select, join...). In NodeJS there are a lot of modules that you can just import, like libraries in Java or C, and call complex functionality through simple javascript code. Take a look at the Express framework for Node to build the server as a REST API. Your question was a little broad, so I don't know what else you would like to know; if you have any further questions I can certainly help you.
0
12,424
true
0
1
Can I Use Python in Ionic for Backend work
35,842,136
2
2
0
0
0
0
1.2
0
I am trying to run simple python code in atom using the atom-runner package, but I am getting the following error: Unable to find command: python. Are you sure PATH is configured correctly? How can I configure PATH? (The path to my python is C:\Python34.)
0
python,windows,python-3.x,atom-editor
2016-03-09T19:16:00.000
1
35,900,628
Right click the start menu and select System. Then hit "Advanced system settings" > "Environment Variables". Click on Path and hit Edit. Select "New" and add the folder that your python executable is in. That should fix the problem. Your other option is to reinstall python and select "Add Python to PATH", as Carpetsmoker suggested.
0
4,265
true
0
1
Run python3 in atom with atom-runner
35,900,933
2
2
0
0
0
0
0
0
I am trying to run simple python code in atom using the atom-runner package, but I am getting the following error: Unable to find command: python. Are you sure PATH is configured correctly? How can I configure PATH? (The path to my python is C:\Python34.)
0
python,windows,python-3.x,atom-editor
2016-03-09T19:16:00.000
1
35,900,628
If this does not work, uninstall Python and Atom. While reinstalling Python, make sure you click on "Add Python to PATH" so you will not have any problems with setting the paths at all!
0
4,265
false
0
1
Run python3 in atom with atom-runner
37,861,176
1
2
0
3
2
0
0.291313
1
I'm using py.test for REST API automation with the python requests library. How do I get coverage using the pytest-cov tool? I'm running the automation on a build server, while the code executes on an application server.
0
python,pytest,coverage.py
2016-03-10T07:51:00.000
0
35,910,573
The usual coverage tools are built for the much more common case of the measured code running inside the same process as the test runner. You are not only running in a different process, but on a different machine. You can use coverage.py directly on the remote machine when you start the process running the code under test. How you would do that depends on how you start that process today. The simple rule of thumb is that wherever you had been saying "python my_prog.py", you can say "coverage run my_prog.py".
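If editing the launch command isn't possible, coverage.py can also be started from inside the remote process; a sketch, where run_application is a placeholder for the real entry point:

```python
import coverage

cov = coverage.Coverage(data_file="/tmp/.coverage.remote")
cov.start()

run_application()  # placeholder for the code under test

cov.stop()
cov.save()  # copy the data file back to the build server for reporting
```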
0
1,073
false
0
1
pytest-cov get automation coverage from remote server
35,939,160
1
1
0
0
0
0
1.2
0
I am working with the z3 python api. When I solve constraints using the z3 python api, the solver runs indefinitely and no errors are thrown. But when the same constraints are dumped in smtlib2 format and then solved via the z3 executable, it almost instantaneously gives sat or unsat. The smtlib2 dump is very large (around 1000 lines), although for a small number of constraints the z3 api works fine. Is there a bug in the z3 python api when handling a large number of constraints?
0
python,z3,smt,z3py
2016-03-10T09:07:00.000
0
35,911,900
This can happen, e.g., when the configuration between the two methods differs (even slightly), or when the problems aren't exactly identical (e.g. a different order of constraints). Some tactics are also non-deterministic (e.g. they use timers in the preprocessing) and the executable happens to be a bit faster/slower. To diagnose what exactly causes the difference we would need to see some of your problems, or at the very least some diagnostic output; for instance, add -v:10 on the command line and set the global "verbosity" option to 10.
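From z3py, the equivalent of -v:10 on the command line should be setting the global option programmatically:

```python
import z3

# Same effect as passing -v:10 to the z3 executable.
z3.set_option(verbose=10)
```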
0
159
true
0
1
Difference in output when smtlib2 solver is invoked through z3 python api and directly from executable?
35,916,711
2
2
1
0
0
0
1.2
0
I run python in my Android Terminal and want to run a .py file with: python /sdcard/myScript.py. The problem is that python is called in my Android environment indirectly via a shell in my /system/bin/ path (to make it directly accessible via the Terminal emulator). My exact question, as the title says: how do I pass parameters through multiple shell scripts to Python? My directly called file "python" in /system/bin/ contains only a redirection like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh and so on to call the python binary. Edit: I simply add the $1 parameter after every shell script, so Python is called through something like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1. So it is possible to call python /sdcard/myScript.py arg1 and, in myScript.py, fetch it as usual with sys.argv. Thanks
0
android,python,shell,qpython
2016-03-10T21:58:00.000
1
35,928,155
I don't have experience in Android programming, so I can only give a general recommendation: Of course the naive solution would be to explicitly pass the arguments from script to script, but I guess you can't or don't want to modify the scripts in between, otherwise you would not have asked. Another approach, which I sometimes use, is to define an environment variable in the outermost script, stuff all my parameters into it, and parse it from Python. Finally, you could write a "configuration file" from the outermost script and read it from your Python program. If you create this file in Python syntax, you even spare yourself the parsing code.
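A sketch of the environment-variable approach; the variable name MYSCRIPT_ARGS is an arbitrary convention:

```python
import os

# The outermost shell script would do:  export MYSCRIPT_ARGS="arg1 arg2"
# Intermediate scripts inherit the environment, so nothing else changes.
args = os.environ.get("MYSCRIPT_ARGS", "").split()
print(args)
```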
0
373
true
0
1
Pass parameter through shell to python
35,935,344
2
2
1
0
0
0
0
0
I run python in my Android Terminal and want to run a .py file with: python /sdcard/myScript.py. The problem is that python is called in my Android environment indirectly via a shell in my /system/bin/ path (to make it directly accessible via the Terminal emulator). My exact question, as the title says: how do I pass parameters through multiple shell scripts to Python? My directly called file "python" in /system/bin/ contains only a redirection like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh and so on to call the python binary. Edit: I simply add the $1 parameter after every shell script, so Python is called through something like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1. So it is possible to call python /sdcard/myScript.py arg1 and, in myScript.py, fetch it as usual with sys.argv. Thanks
0
android,python,shell,qpython
2016-03-10T21:58:00.000
1
35,928,155
I have a similar problem. Running my script from the Python console with /storage/emulator/0/Download/.last_tmp.py -s && exit, I am getting "Permission denied". No matter if I am calling last_tmp or the edited script itself. Is there perhaps any way to pass the params in the editor?
0
373
false
0
1
Pass parameter through shell to python
36,178,959
1
1
0
1
0
0
0.197375
1
Another way I could ask this question is: how do I set pages served by Apache to have higher privileges? This would be similar to setting an Application Pool in IIS to use different credentials. I have multiple Perl and Python scripts I am publishing through a web front end. The front end is intended to run any script I have in a database. With most of the scripts I have no issues, but anything that utilizes the network returns nothing - no error messages or failures are reported. Running from the CLI as root works; running the same command from the web GUI as www-data fails. I am lumping Python and Perl together in this question because the issue is the same, leading me to believe it isn't a code issue but a permissions issue. That is also why I am not including code, initially. These are running on linux using Apache and PHP5, with Python 2.7 and Perl 5 I believe. Here are examples of apps I have that are failing: Python - connecting out to the VirusTotal API; Perl - connecting to domains and creating a graph with GraphViz; Perl - performing a Wake On LAN function on a local network segment.
0
php,python,apache,perl,privileges
2016-03-12T18:07:00.000
0
35,961,414
So after I posted this I looked into handlers, like I use for IIS. That led me down the path of suEXEC, and through everything I tried I couldn't get Apache to load it, even after making sure that I set the bits for SETUID and SETGID. While researching that I ran across .htaccess files and how they can enable CGI scripts. I didn't want to put in .htaccess files, so I just made sure apache.conf was configured to allow CGI. That also did not help. Finally, while studying .htaccess, I came across ScriptAlias. I believe this is what solved my issue. I modified the ScriptAlias section in an Apache configuration file to point to the directory containing my script. After some fussing with absolute directories and permissions for the script to read/write a file, I got everything to work, except that it isn't going through the proxy set by the http_proxy environment variable. That is a separate issue though, so I think I am good to go on this one. I will attempt the same solution on my perl LAMP.
0
79
false
0
1
Perl/Python Scripts Fail to Access Internet/Network through Web GUI
35,962,761
1
1
0
1
0
1
1.2
0
I am currently writing a time-consuming python program and decided to rewrite part of the program in fortran. However, the performance is still not good. For profiling purposes, I want to know how much time is spent in the f2py wrappers and how much time is actually spent in the fortran subroutines. Is there a convenient way to achieve this?
0
python,fortran,profiling,f2py
2016-03-13T09:35:00.000
0
35,968,682
At last I found out that the -DF2PY_REPORT_ATEXIT option can report wrapper performance.
0
82
true
0
1
How to obtain how much time is spent in f2py wrappers
35,970,843
1
1
0
0
0
0
1.2
0
We have two machines and want to split our testing across them to make testing faster. I would like to know of a way to tell behave to run half of the tests. I am aware of the --tags argument, but this is too cumbersome: as the test suite grows, so must our --tags argument if we wish to keep it at the halfway point. I would also need to know which half of the tests were not run, so I can run those on the other machine. TL;DR: Is there a simple way to get behave to run, dynamically, half of the tests (one that doesn't involve specifying tests via --tags)? And is there a way of finding the other half of the tests that were not run? Thanks
0
python,bdd,python-behave
2016-03-14T03:25:00.000
0
35,979,063
No there is not; you would have to write your own runner to do that. And that would be complex, as piecing together the content of two separate test runs which are each half of the suite would be rather tricky if any errors show up. A better and faster solution would be to write a simple bash/python script that traverses a given directory for .feature files and then fires an individual behave process against each one. With properly configured outputs it should be collision-free, and if you separate your cases it will give you a much better boost than just running half. And of course delegate the task to the other machine by some means, be it a bare SSH command or queues. A minimal splitter is sketched below.
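A minimal splitter along those lines, assuming the standard features/ layout; MACHINE_INDEX would be 0 on one machine and 1 on the other, which also answers how to find "the other half":

```python
import glob
import subprocess

MACHINE_INDEX = 0  # set to 1 on the second machine

features = sorted(glob.glob("features/*.feature"))
mine = features[MACHINE_INDEX::2]  # every other file, offset per machine

for feature in mine:
    subprocess.call(["behave", feature])
```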
0
224
true
0
1
How to run only half of python-behave tests
35,988,255
1
1
0
2
1
0
1.2
0
Is there a way to configure RabbitMq to not close connections after missed heartbeats at all?
0
python-3.x,rabbitmq,pika
2016-03-15T13:16:00.000
0
36,012,450
No, but you can disable heartbeats. eandersson is right: no, you can't do that. But disabling heartbeats is probably the wrong idea, too. The point of a heartbeat is to tell you when your connection to the server drops, so you can take action as soon as possible. Common actions include (but are not limited to): crash the app and restart, recreating the needed connection(s); or re-create the connection(s) without restarting. How you handle the missed heartbeat / dropped connection is up to you, but ultimately the missed heartbeat is a sign that your connection has already dropped, not a cause of dropped connections.
0
117
true
0
1
Is there a way to configure RabbitMq to not close connections after missed heartbeats?
36,022,628
1
2
0
4
4
0
0.379949
0
I am looking for a way to programmatically kill long-running AWS EC2 instances. I did some googling, but I can't seem to find a way to determine how long an instance has been running, so that I can then write a script to delete the instances that have been running longer than a certain time period. Has anybody dealt with this before?
0
python,amazon-web-services,amazon-ec2,cron,aws-cli
2016-03-15T18:21:00.000
1
36,019,161
The EC2 service stores a LaunchTime value for each instance, which you can find by doing a DescribeInstances call. However, if you stop the instance and then restart it, this value is updated with the new launch time, so it's not really a reliable way to determine how long the instance has been running since its original launch. The only way I can think of to determine the original launch time would be to use CloudTrail (assuming you have it enabled for your account). You could search CloudTrail for the original launch event, which has an EventTime associated with it.
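A sketch of the LaunchTime approach using boto3; the 24-hour cutoff is an arbitrary example, and note the caveat above about stop/start resetting LaunchTime:

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if (instance["State"]["Name"] == "running"
                and instance["LaunchTime"] < cutoff):
            print("terminating", instance["InstanceId"])
            ec2.terminate_instances(InstanceIds=[instance["InstanceId"]])
```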
0
3,416
false
1
1
Is there a way to determine how long has an Amazon AWS EC2 Instance been running for?
36,037,353
1
2
0
1
1
0
0.099668
0
Firstly, I apologise if my question sounds silly or irrelevant - I do not have a high level in any kind of coding language. I am currently playing around with python, and I was wondering if I can automate a process that I deal with every day and currently do manually: I have an excel spreadsheet that I sort every day based on the same criteria, and I send it afterwards via email. Any help, ideas, tips or tricks are more than welcome.
0
python,excel
2016-03-18T15:27:00.000
0
36,088,267
Check out Openpyxl for the excel part and SMTP (Python's smtplib) for the emails; you can use schedule to run it every day. It shouldn't be hard - a rough sketch is below.
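A rough Python 3 sketch of the whole pipeline; the file names, sort column, addresses and SMTP host are all placeholders, and iter_rows(values_only=True) needs a recent openpyxl:

```python
import smtplib
from email.message import EmailMessage
import openpyxl

wb = openpyxl.load_workbook("report.xlsx")
ws = wb.active

# Sort the data rows (row 1 assumed to be the header) by the first column.
rows = sorted(ws.iter_rows(min_row=2, values_only=True), key=lambda r: r[0])
for i, row in enumerate(rows, start=2):
    for j, value in enumerate(row, start=1):
        ws.cell(row=i, column=j, value=value)
wb.save("report_sorted.xlsx")

msg = EmailMessage()
msg["Subject"], msg["From"], msg["To"] = "Daily report", "me@example.com", "team@example.com"
with open("report_sorted.xlsx", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="octet-stream", filename="report_sorted.xlsx")

with smtplib.SMTP("smtp.example.com") as s:
    s.send_message(msg)
```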
0
412
false
0
1
Sorting an excel file and sending a daily email using python
36,088,981
1
2
0
0
0
0
0
0
I have written a home automation script for controlling lights, music in kodi, and my TV. I have everything working except turning off the TV. I have mapped my keyboard.xml file within kodi to recognize "CECToggleState", and this works fine, but I would like to trigger this from inside a python script. In the past, one could import the xbmc module and then use xbmc.executebuiltin() to run built-in kodi functions like "CECToggleState". The xbmc module has been replaced by the kodi module. I have seen examples suggesting that kodi.executebuiltin() can be used, but the kodi module does not appear to actually support executebuiltin. When I try running this command in python, I get an error that reads: "AttributeError: 'module' object has no attribute 'executebuiltin'". Can anyone confirm that this is true, or tell me what has replaced this command? Or does anyone know of a good alternative to get the same effect - i.e., send a "TV off" command programmatically through HDMI CEC?
0
python,kodi
2016-03-18T22:02:00.000
0
36,094,957
You have to write xbmc.executebuiltin("XBMC.CECToggleState()"), not kodi.executebuiltin("XBMC.CECToggleState()"). And please upgrade your kodi from Isengard to a stable version (e.g. Krypton).
0
1,149
false
0
1
kodi.executebuiltin() not working in Kodi: Isengard
45,115,405
2
2
0
-1
1
1
1.2
0
I'm writing my first test for a class where I import an external package (installed via pip in the venv). I'm using PyCharm as an IDE, and the package in question is listed there under the project interpreter (the venv) as well as when I type pip freeze in the console. Now I want to run a run-tests.sh file, and when my test is reached pytest returns an ERROR: E ImportError: No module named 'magic'. The line that fails the test is obviously the import statement in the class I want to test. Any ideas? Edit, for clarification: NOT the terminal itself is throwing the error - PYTEST is!
0
python,python-3.x,import,virtualenv,pytest
2016-03-19T14:37:00.000
0
36,103,027
Make sure that you have installed the packages through pycharm. If it doesn't list your package, that means you installed the package somewhere else. Go to File > Settings > Project:[NAME] > Interpreter to check it; there you can use "+" to install the package.
0
1,225
true
0
1
virtualenv doesn't find installed module when running tests [Pytest]
36,103,485
2
2
0
0
1
1
0
0
I'm writing my first test for a class where I import an external package (installed via pip in the venv). I'm using PyCharm as an IDE, and the package in question is listed there under the project interpreter (the venv) as well as when I type pip freeze in the console. Now I want to run a run-tests.sh file, and when my test is reached pytest returns an ERROR: E ImportError: No module named 'magic'. The line that fails the test is obviously the import statement in the class I want to test. Any ideas? Edit, for clarification: NOT the terminal itself is throwing the error - PYTEST is!
0
python,python-3.x,import,virtualenv,pytest
2016-03-19T14:37:00.000
0
36,103,027
Fixed it myself. For some dubious reason pytest and my venv had a problem. I reinstalled pytest within my virtual env via pip install pytest.
0
1,225
false
0
1
virtualenv doesn't find installed module when running tests [Pytest]
36,103,970
1
1
0
0
0
0
0
0
Does Z3 have the ability to do power-modulo arithmetic? For instance, if I'm writing expressions of the sort x ** y % z, is there a way to tell Z3 that it is this type of expression, similar to how python has the function pow(x,y,z)? My assumption is that this would open up solving options (such as modular inverse).
0
python,python-3.x,z3,z3py
2016-03-19T20:48:00.000
0
36,106,958
Interesting point. There isn't any particular support for this in Z3. What are the known techniques in this area?
0
1,168
false
0
1
Z3 Power Modulo Statements
36,107,304
1
1
0
1
6
0
0.197375
1
My 10-year-old and I are implementing a project which calls for audio to be played by a Chromecast Audio after a physical button is pressed. She is using python and pychromecast to connect to a chromecast audio. The audio files are 50k mp3 files hosted over wifi, using nginx, on the same raspberry pi running the button tools. The delay from firing the play_media function in pychromecast to audio coming out of the chromecast is at times in excess of 3 seconds, and never less than 1.5 seconds. This seems, anecdotally, to be much slower than casting from spotify or pandora. And it's definitely too slow to make pushing the button 'fun'. File access times can matter on the pi, but reading the entire file with something like md5sum takes less than .02 seconds, so we are not dealing with filesystem lag. The average download time for the mp3 files from the pi is 80-100ms over wifi, so this is not the source of the latency either. Can anyone tell me: what the expected delay is for the chromecast audio to play a short file; whether pychromecast is particularly inefficient here, and if so, any suggestions for go, python or lisp-based libraries that could be used; and any other tips for minimizing latency? We have already downconverted from wav files, thinking raw http speed could be an issue. Thanks in advance!
0
python,audio,raspberry-pi,chromecast
2016-03-21T03:51:00.000
0
36,122,859
I've been testing notifications with pychromecast and got a delay of 7 seconds. Since you can't play a local file, only a file hosted on a webserver, I guess the chromecast picks up the file externally. Routing goes via google's servers, which is what google does with all its products.
0
657
false
1
1
Expected Chromecast Audio Delay?
41,686,041
1
2
0
1
0
0
0.099668
0
I'm looking over a number of images with missing aspects, namely missing either the red, green or blue channel (removed accidentally by an automated process before the images were given to me). I need to find the valid images. Is there a quick way of checking whether an image has all three (R, G & B) channels? Alpha channels (if included) are ignored. I've been using PIL for image processing in Python up until this point (though I realise it might not be the way forward). I've not tried anything yet as I'm not sure of the best approach. My first guess, and this may be long-winded, would be to loop over every pixel and work out whether all the red, green or blue data is zero (presumed missing). However, I've a feeling there's a faster method.
0
python-2.7,image-processing,python-imaging-library,rgb
2016-03-21T16:31:00.000
0
36,136,637
Pretty much any image processing library provides means for reading pixel values. The simplest and most efficient way is indeed iterating over all pixels and checking whether any channel is 0 for every pixel. Of course many libraries also provide convenient tools for extracting color planes and calculating aggregate pixel values, but internally they do nothing but iterate over pixels. How else would any algorithm know whether all values are zero, if not by checking every value? So your feeling is wrong, unless the pixel-reading function is poorly implemented and there is a more efficient alternative, which is quite unlikely. Either way, you're doing nothing wrong.
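With PIL you can still let the library do the per-pixel loop internally rather than in Python; a sketch using getextrema, which returns a (min, max) pair per band:

```python
from PIL import Image

def missing_channels(path):
    """Names of any R/G/B bands that are zero everywhere."""
    img = Image.open(path).convert("RGB")  # drops any alpha channel
    extrema = img.getextrema()  # ((minR,maxR), (minG,maxG), (minB,maxB))
    return [band for band, (lo, hi) in zip("RGB", extrema) if hi == 0]

# missing_channels("photo.png") -> e.g. ['B'] if the blue data was stripped
```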
0
212
false
0
1
Validate existance of RGB channels
36,137,059
1
1
0
0
0
0
0
0
I'm trying to analyse an asymmetric FIR filter with complex coefficients. I know I can use the scipy function freqz to analyse the frequency response of an FIR or IIR filter with real coefficients. At the moment I'm just taking a regular FFT of the FIR filter taps, using fftshift to put the negative frequencies in front of 0, calling fftfreq to calculate the frequency bins, and finally adding the carrier frequency to all the frequencies in the array returned by fftfreq. Anyway, I'm pretty sure that that's the wrong way.
0
python,numpy,scipy
2016-03-22T12:59:00.000
0
36,155,061
Never mind - it is absolutely possible to pass a set of complex coefficients to freqz. I got confused because I tried to plot the response without specifying that I wanted the absolute value of h, which produced the warning: ComplexWarning: Casting complex values to real discards the imaginary part. A trap for young players like myself!
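A sketch with hypothetical complex taps; whole=True evaluates the full unit circle so the positive/negative-frequency asymmetry is visible, and np.abs avoids the ComplexWarning mentioned above:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz

b = np.array([0.5 + 0.5j, 1.0 + 0j, 0.5 - 0.5j])  # hypothetical complex FIR taps

w, h = freqz(b, worN=1024, whole=True)  # response around the whole unit circle

plt.plot(w, np.abs(h))  # plot the magnitude, not the raw complex h
plt.xlabel("frequency (rad/sample)")
plt.show()
```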
1
631
false
0
1
How to use freqz to get response of complex FIR filter
36,157,336
1
4
0
1
6
1
0.049958
0
I'd like to log some information to a file/database every time assert is invoked. Is there a way to override assert or register some sort of callback function to do this every time assert is invoked? Regards, Sharad
0
python,pytest
2016-03-22T17:01:00.000
0
36,160,713
I don't think that is possible. assert is a statement (not a function) in Python and has predefined behavior. It's a language element and cannot simply be modified. Changing the language cannot be the solution to a problem; the problem has to be solved using what the language provides. There is one thing you can do, though. assert raises an AssertionError exception on failure. This can be exploited to get the job done: place the assert statement in a try-except block and do your callbacks inside that block. It isn't as clean a solution as you are looking for - you have to do this with every assert - but modifying a statement's behavior is something one shouldn't do.
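A sketch of that pattern wrapped into a helper, so the try-except isn't repeated at every call site:

```python
import logging

def checked(condition, message=""):
    """assert-like helper that logs the failure before re-raising."""
    try:
        assert condition, message
    except AssertionError:
        logging.error("assertion failed: %s", message)
        raise

checked(2 + 2 == 5, "arithmetic is broken")  # logs, then raises AssertionError
```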
0
2,627
false
0
1
Is there a way to override default assert in pytest (python)?
36,162,206
1
4
0
1
3
0
0.049958
0
I work with a Raspberry Pi 2 model B v1.1 and have been searching for an RTC to keep time even in the case of a power outage or a loss of internet connection. I found that I must buy an RTC chip with a battery, but recently I heard that my Raspberry Pi already contains an RTC. Is that right? If so, where is it located? I don't see anything labelled RTC on my board.
0
python,raspberry-pi,raspbian
2016-03-22T17:26:00.000
0
36,161,221
None of the Raspberry Pi models have a built-in real time clock.
0
547
false
0
1
Does the Raspberry Pi 2 model B v1.1 have an internal RTC?
36,163,083
2
3
0
1
5
1
0.066568
0
What is a good practice for modelling Python properties in a UML class diagram? Properties themselves are class objects; their getters and setters are class functions. From outside the class they look like instance attributes. So, how would you suggest presenting that in my class diagram?
0
python,properties,uml
2016-03-23T11:58:00.000
0
36,177,561
Good practice is what works on your project. In order to model Python properties you can add stereotyped getter and setter operations which indicate their use. The link between attribute and operation is usually made via a naming convention. Some tools offer internal linkage to make attributes properties with getters and setters. If you are not using code generation, you can also stereotype the attribute to indicate its use as a property (thus telling the coder to use @property) and leave out the operations. If you are using your own code generator this would work analogously. Tool-embedded code generators might need the additional operations as described above.
0
3,265
false
0
1
How to model python properties in UML diagram
36,178,271
2
3
0
0
5
1
0
0
What is a good practice for modelling Python properties in a UML class diagram? Properties themselves are class objects; their getters and setters are class functions. From outside the class they look like instance attributes. So, how would you suggest presenting that in my class diagram?
0
python,properties,uml
2016-03-23T11:58:00.000
0
36,177,561
It really depends what kind of template your UML tool uses. In some tools there is a Properties box alongside the common Attributes and Methods boxes. The UML notation states that attributes are written in lower camelcase; you could write properties in upper camelcase. They would also differ visually because of the public access modifier (+). Do you need to specify different access modifiers for the getter and setter? I'm not sure how I would go about that. Keep in mind the level of abstraction necessary. Remember that UML is mainly a set of defined standards; if the standard needs a slight tweak to fit your needs, don't hesitate. The important thing is that your team and stakeholders understand the syntax.
0
3,265
false
0
1
How to model python properties in UML diagram
36,177,829
1
2
0
1
0
0
0.099668
1
I recently came to know about the protractor framework, which provides end-to-end testing for angular applications. I would like to know which test framework suits the following web stack better - selenium or protractor: Angular, Python and MongoDB. I am going to use the mozilla browser only. Can anyone please provide your valuable suggestions?
0
python,angularjs,selenium,testing,protractor
2016-03-23T12:26:00.000
0
36,178,187
Protractor is based on Selenium webdrivers. If you have an Angular app for your entire front-end, I would go with Protractor. If you are going to have a mixed front-end environment, you may want to go with Selenium only.
0
548
false
1
1
For E2E Testing: which is better selenium or protractor for following web stack (Angular, Python and MongoDB)?
36,178,367
1
1
0
1
1
0
0.197375
0
I have performed an FFT of a wav file, plotted a graph of it, and used peakutils to get the peaks, printing them out one after the other. How do I go from here to getting the song's BPM (beats per minute)? Do I need to perform an IFFT, as I am assuming I need to get back to a time context? Or is there another way to get back to a time context? I'm not after any code; I just want a push in the right direction for the next step.
0
python,signal-processing,fft
2016-03-23T13:48:00.000
0
36,180,049
Measure the peak-to-peak distance in seconds and divide 60 by that number. For instance, 0.5 seconds peak to peak = 60/0.5 = 120 bpm. This will work to some extent on regular dance music, but on other types not so well. Sorry, I completely misunderstood your question: the above is how you would do it on a waveform like you'd see in audacity. An FFT is in the frequency domain, so look for peaks in the lower frequency range. If the music is in the range of 60 to 180 bpm then this corresponds to frequencies of 1 to 3 Hz, so look for peaks in that frequency range. To convert the frequency of the peak to bpm, multiply by 60. So 2 Hz * 60 = 120 bpm.
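A sketch of that last step, assuming spectrum is the output of np.fft.rfft on a signal sampled at sample_rate with n_fft points:

```python
import numpy as np

def peak_to_bpm(spectrum, sample_rate, n_fft):
    """Pick the strongest bin between 1 and 3 Hz and convert it to BPM."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    band = (freqs >= 1.0) & (freqs <= 3.0)  # 60-180 BPM window
    peak = freqs[band][np.argmax(np.abs(spectrum[band]))]
    return peak * 60.0
```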
0
1,519
false
0
1
How to get the BPM once FFT and peak detection has been performed
36,180,355
1
2
0
0
0
0
0
0
I installed the latest Cuckoo version on my physical machine (Ubuntu 15.10) and configured cuckoo following the official guide. I have a problem with the web gui: TemplateDoesNotExist at /. It tries to find the dashboard template in usr/lib/python2.7/dist-packages/django/contrib/auth/templates/dashboard/index.html (with a "File does not exist" error) instead of looking for it in ~/cuckoo/web/templates/dashboard/. I tried to find a solution through cuckoo's official support channels, but they seem to be deserted.
0
django,python-2.7,sandbox,malware-detection
2016-03-23T17:49:00.000
0
36,185,274
Do you have TEMPLATE_DIR and TEMPLATE_LOADERS in your settings.py file? I faced the same issue too. Once you add them, it will work.
0
587
false
1
1
Cuckoo Error: TemplateDoesNotExist at /
36,680,726
1
1
0
3
4
0
1.2
0
I have a project with a very simple configuration matrix, described in tox: py{27,35}-django{18,19}. I'm using TeamCity as the CI server and run the tests with py.test with teamcity-messages installed. I've tried running every configuration, like tox -e py27-django18, in a separate step, but TeamCity didn't summarize the tests and didn't accumulate coverage for files: it only counted coverage for the last run, and "Tests passed: ..." shows tests from only one build. How can testing with multiple Python configurations be integrated into TeamCity? upd. Found out that coverage counts correctly; I had just forgotten to add the --cov-append option to py.test.
0
python,teamcity,pytest,tox
2016-03-25T19:19:00.000
0
36,226,500
TeamCity counts the tests based on their names. My guess is that since your tests in the tox matrix have the same names, they are counted as one test. This should be visible on the test page of your build, where you can see the invocation count of each test. For TeamCity to report the number of tests correctly, test names must differ between configurations. Perhaps you could include the configuration details in the reported test names.
0
864
true
1
1
Testing python project with Tox and Teamcity
36,237,069
1
2
0
1
0
1
0.099668
0
I wrote a simple factorial function in python to compute the factorial of n: given n, it shows the factorial of n. The problem is that I use the raw_input function to get the value of n, but the factorial function can't work with that value directly. What do I need to do with the value returned by raw_input?
0
python,python-2.7,raw-input
2016-03-26T05:53:00.000
0
36,232,088
Your factorial function probably takes an integer as input. However, raw_input returns a string, so you need to convert the returned string to an integer using int(). Alternatively, you can use input() directly, which in Python 2 evaluates the typed expression and returns an integer for numeric input.
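A minimal sketch of the conversion (the factorial implementation here is just an illustration):

    def factorial(n):
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    n = int(raw_input("Enter n: "))   # raw_input returns a string; int() converts it
    print factorial(n)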
0
188
false
0
1
Input from keyboard in python
36,232,129
1
1
0
1
0
0
1.2
1
I am using NameCheap to host my domain, and I use their privateemail.com service to host my email. I'm looking to create a Python program to retrieve specific (or all) emails from my inbox and read the HTML from them (HTML instead of the plain body, because there is a button with a hyperlink that I need, and it is only accessible via the HTML). I have a couple of questions. Would the best way to do this be via imaplib? If so, how do I find out the IMAP server for privateemail.com? I could do this via Selenium, but it would be heavy, and I would prefer a lighter-weight and faster solution. Any ideas on other possible technologies to use? Thanks!
0
python,email
2016-03-26T11:24:00.000
0
36,234,690
Well, just a little bit of testing with telnet will give you the answer to the question 'how do I find the IMAP server for privateemail.com': mail.privateemail.com is their IMAP server.
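A minimal imaplib sketch using that server (credentials are placeholders; IMAP over SSL typically runs on port 993):

    import imaplib
    import email

    conn = imaplib.IMAP4_SSL('mail.privateemail.com', 993)
    conn.login('you@yourdomain.com', 'password')        # placeholder credentials
    conn.select('INBOX')

    status, data = conn.search(None, 'ALL')
    for num in data[0].split():
        status, msg_data = conn.fetch(num, '(RFC822)')
        msg = email.message_from_string(msg_data[0][1])
        for part in msg.walk():
            if part.get_content_type() == 'text/html':
                html = part.get_payload(decode=True)    # the HTML body with your button
    conn.logout()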
0
411
true
0
1
Retrieving emails from NameCheap Private Email
36,242,008
1
2
0
2
5
0
0.197375
1
Has anyone had trouble verifying/submitting code to the google foobar challenges? I have been stuck unable to progress in the challenges, not because they are difficult but because I literally cannot send anything. After I type "verify solution.py" it responds "Verifying solution..." then after a delay: "There was a problem evaluating your code." I had the same problem with challenge 1. I waited an hour then tried verifying again and it worked. Challenge 2 I had no problems. But now with challenge 3 I am back to the same cryptic error. To ensure it wasn't my code, I ran the challenge with no code other than "return 3" which should be the correct response to test 1. So I would have expected to see a "pass" for test 1 and then "fail" for all the rest of the tests. However it still said "There was a problem evaluating your code." I tried deleting cookies and running in a different browser. Neither changed anything. I waited overnight, still nothing. I am slowly running out of time to complete the challenge. Is there anything I can do? Edit: I've gotten negative votes already. Where else would I put a question about the google foobar python challenges? Also, I'd prefer not to include the actual challenge or my code since it's supposedly secret, but if necessary I will do so.
0
python,python-2.7,google-chrome
2016-03-28T15:44:00.000
0
36,265,728
Re-indenting the file seemed to help, but that might have just been coincidental.
0
9,217
false
1
1
Error with the google foobar challenges
36,270,465
1
2
0
0
0
0
0
0
I have created a Python script, mail.py, which sends mail when the switch on GPIO 4 is pressed. My GPIO 4 switch is pulled up. The problem is that when I run the script directly it works (it sends the mail), but when I press the switch nothing happens: the script falls out of the loop before the press is detected, so no email is sent. I have also added a delay. I think the problem is that when I press the switch once, its state needs to be stored so that it can still be read 10 seconds later, but I can't store the state of the switch. Any suggestions would be appreciated. Thanks in advance.
0
python,windows,raspberry-pi,putty
2016-03-29T09:24:00.000
0
36,280,254
You can write the listener inside a loop and put the mail functionality in a function, like this:

    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    try:
        while True:
            if GPIO.input(23) == 1:
                # button pressed: call your mail() function here;
                # afterwards you can reset any state you need
                mail()
            if GPIO.input(24) == 0:
                print("Button 2 pressed")
    finally:
        GPIO.cleanup()

Try the example.
0
1,325
false
0
1
Raspberry pi:send email when PULLED UP switch is pressed
36,300,075
1
2
0
0
1
0
0
0
I have a sensor attached to a drill. The sensor outputs orientation as heading, roll and pitch. From what I can tell these are intrinsic rotations, in that order. The Y axis of the sensor is parallel to the longitudinal axis of the drill bit. I want to take a set of outputs from the sensor and find the maximum change in orientation from the final orientation. Since the drill bit will be spinning about the pitch axis, I believe pitch can be neglected. My first thought would be to convert heading and roll to unit vectors, assuming pitch is 0. Once I have the vectors, call them v and vf, the angle between them would be Θ = arccos(v · vf). It should then be fairly straightforward to have Python calculate Θ for a given set of orientations and pull out the largest. My question is: is there a simpler way to do this using Python, and if not, what is the most efficient way to convert these intrinsic rotations to unit vectors?
0
python,math,trigonometry,angle,euler-angles
2016-03-29T18:25:00.000
0
36,292,230
Suppose u(1), u(2), ..., u(m), v are all unit vectors. You want to determine i such that the angle between u(i) and v is maximized. This is equivalent to finding the i such that np.dot(u(i), v) is minimized. So if you have a matrix U where the rows are the u(i), you can simply do i = np.argmin(np.dot(U, v)) to find the i that has the angle between u(i) and v maximized.
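A small sketch of that computation (U and v are made-up example data; the rows of U and v should be unit vectors):

    import numpy as np

    U = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])                          # example unit vectors u(1)..u(3)
    v = np.array([1.0, 0.0, 0.0])                            # example final orientation

    i = np.argmin(np.dot(U, v))                              # index of the largest angle
    theta = np.arccos(np.clip(np.dot(U[i], v), -1.0, 1.0))   # the angle itself, in radians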
1
914
false
0
1
I need to find the angle between two sets of Roll and Yaw angles
36,292,344
1
1
0
0
0
1
1.2
0
I have a Twitter bot which reads a text file and tweets. Now, a free Heroku dyno sleeps for 6 hours after every 18 hours, after which it restarts with the same command, so the text file is read again and the tweets are repeated. To avoid this, every time a line was read out of the list of lines from the file, I removed the line from the list (after tweeting) and wrote the remaining list into a new file, which was then renamed to the original file. I thought this would work, but when the dyno restarted, it started from the beginning. Am I missing something here? It would be great if someone could help me with this.
0
python,heroku
2016-03-30T07:47:00.000
0
36,302,677
When the dyno restarts, it's a new one. The filesystem on Heroku is ephemeral and is not persisted across dynos; so your file is lost. You need to store it somewhere more permanent - either somewhere like S3, or one of the database add-ons. Redis might be suitable for this.
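A minimal sketch of persisting progress in Redis instead of the file (this assumes a Heroku Redis add-on exposing REDIS_URL, and uses the redis-py package; tweet() is a hypothetical function that posts one tweet):

    import os
    import redis

    r = redis.from_url(os.environ['REDIS_URL'])

    index = int(r.get('tweet_index') or 0)    # where we left off, survives restarts
    lines = open('tweets.txt').read().splitlines()
    if index < len(lines):
        tweet(lines[index])                   # hypothetical posting function
        r.set('tweet_index', index + 1)       # persist progress outside the dyno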
0
103
true
1
1
Twitter Bot is restarting after Heroku dyno recharges
36,302,969
2
2
0
0
2
1
0
0
I am in the process of creating a text-based adventure game in Python. The game will not (and has no reason to) access the internet. When creating a save file, I want to securely encrypt and store it. I would take all the data to be saved, run it through some encryption function, and create a text file storing the output of that function. The file would be stored on the player's local computer storage (hard disk, user documents, user desktop, whatever). My question is this: is there any way that I can create an encryption function inside the program that is virtually uncrackable (whilst still being decryptable)? Can any local encryption (even outside of Python) ever be "truly" secure? Just thinking about it, based on my knowledge of cryptography, I would say that it's not possible, but I've been wrong before. I know that I can create pseudo-cryptography that's essentially obfuscation, but that, very obviously, is quite easy to crack. Also, I understand that most people wouldn't bother to, or wouldn't have the knowledge to, edit the save file, but it doesn't take someone with too awfully much knowledge to do what I've stated in the following paragraph (if the individual is motivated enough, which most cheaters are). The reason I want to encrypt the save file is that without encryption it is very easy to cheat the game by simply editing the (unencrypted) save file. Even encrypting the file leaves the encryption algorithm in plain sight in the Python code; Python does not need to be compiled into an executable, so the raw Python code is out in the open for anyone who simply looks at the contents of the game. I highly doubt that secure encryption such as this is even possible, but if it is, please explain how and, if possible, provide some example Python code for incorporating it (if it's possible for Python).
0
python,python-3.x,encryption,cryptography,save
2016-03-30T16:40:00.000
0
36,314,815
For good encryption to be secure, the key (and only the key) must be secure. But if an application cannot retrieve the key from someplace else, it must store the key in its own data or generate it every time it's needed, and that is not secure. It's like putting your house key under the doormat outside. So obfuscation is the best you can hope for, and how well that works depends on how sophisticated your public is. Compressing a save file with one of the compression modules from the standard library and then cutting off the header before writing the data to disk will defeat casual lookers, because it looks like random data without the identifying header. But anyone who can read and understand the source code of the program can easily make the text readable again. Another approach would be to do nothing to the save file itself, but create a checksum (say SHA1) of the save file contents and store that somewhere else, refusing to open a save file whose contents don't match the checksum. Again, anyone who is capable and willing to go through the program's source code can "break" this protection.
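A sketch combining the two ideas above, compression plus a detached SHA1 checksum (paths and the serialization are up to you; the header-stripping trick is omitted here):

    import hashlib
    import zlib

    def save(path, data):
        blob = zlib.compress(data)
        open(path, 'wb').write(blob)
        open(path + '.sha1', 'w').write(hashlib.sha1(blob).hexdigest())

    def load(path):
        blob = open(path, 'rb').read()
        expected = open(path + '.sha1').read().strip()
        if hashlib.sha1(blob).hexdigest() != expected:
            raise ValueError('save file was tampered with')
        return zlib.decompress(blob)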
0
172
false
0
1
Can encryption/cryptography be secure inside a local, internet-nonaccessing program?
36,345,400
2
2
0
0
2
1
0
0
I am in the process of creating a text-based adventure game in Python. The game will not (and has no reason to) access the internet. When creating a save file, I want to securely encrypt and store it. I would take all the data to be saved, run it through some encryption function, and create a text file storing the output of that function. The file would be stored on the player's local computer storage (hard disk, user documents, user desktop, whatever). My question is this: is there any way that I can create an encryption function inside the program that is virtually uncrackable (whilst still being decryptable)? Can any local encryption (even outside of Python) ever be "truly" secure? Just thinking about it, based on my knowledge of cryptography, I would say that it's not possible, but I've been wrong before. I know that I can create pseudo-cryptography that's essentially obfuscation, but that, very obviously, is quite easy to crack. Also, I understand that most people wouldn't bother to, or wouldn't have the knowledge to, edit the save file, but it doesn't take someone with too awfully much knowledge to do what I've stated in the following paragraph (if the individual is motivated enough, which most cheaters are). The reason I want to encrypt the save file is that without encryption it is very easy to cheat the game by simply editing the (unencrypted) save file. Even encrypting the file leaves the encryption algorithm in plain sight in the Python code; Python does not need to be compiled into an executable, so the raw Python code is out in the open for anyone who simply looks at the contents of the game. I highly doubt that secure encryption such as this is even possible, but if it is, please explain how and, if possible, provide some example Python code for incorporating it (if it's possible for Python).
0
python,python-3.x,encryption,cryptography,save
2016-03-30T16:40:00.000
0
36,314,815
There is no secure (as in cryptographically secure) way of achieving this. As the comments already say: your program must be able to decrypt the file again, so the knowledge of how to decrypt it must be there. However, you can raise the bar that cheaters need to cross. The simplest way is to just encode the file instead of encrypting it. This can be base64 or even something as simple as gzip. Your users would then need an extra tool instead of just a text editor. Everything beyond that is up to your creativity. You might use obfuscated code to obtain the encryption key, like the hash of a Python source file, or whatever. A cheater would then need to use a Python debugger, which raises the required knowledge. But the question remains: if you don't play online, whom are your users going to cheat?
0
172
false
0
1
Can encryption/cryptography be secure inside a local, internet-nonaccessing program?
36,344,091
1
1
0
0
0
0
0
0
I'm attempting to add a test to my unit tests that is significantly more complicated and takes longer to perform. The idea would be to run this longer test infrequently. However, the test itself takes longer than the 10 minute timeout that codeship currently has, and since it doesn't fail/pass within 10 minutes my codeship will show as failing. Is there any way to get py.test to print out a heartbeat or something every x minutes to keep codeship happy? Obviously any of my output and logging gets gobbled up by py.test itself, so that isn't helpful. Thanks!
0
python,python-3.x,pytest,codeship
2016-04-01T17:13:00.000
0
36,362,122
Not sure if I understood your question correctly, but if your concern was py.test gobbling up the output, then run pytest with the -s option, which disables output capturing.
0
158
false
1
1
py.test timeout/keepalive/heartbeat?
38,299,299
1
1
0
0
0
1
0
0
After installing the "win32" module and then importing pythoncom, I got the error listed above. Any idea why this is happening? I got this message after installation: close failed in file object destructor: sys.excepthook is missing lost sys.stderr The installation directory: **C:\Python27\Lib\site-packages**
0
python,python-2.7,module,importerror,python-module
2016-04-02T17:09:00.000
0
36,376,317
It means that the win32 extensions (pywin32) are not installed properly; try installing the package again.
0
3,211
false
0
1
"ImportError: No module named pywintypes"
36,377,248
1
1
0
1
0
0
0.197375
0
I'm trying to extract the text from PDFs by subject. In order to do so, I'm trying to identify the labels/headlines in the PDF. So far I have converted the PDF into an XML file, in order to get at the text data more easily, and then used the font/size of each line to decide whether it is a label or not. The main problem with this approach is that each PDF can have its own layout, and what works for one PDF will not necessarily work for another. I would be glad if someone has an idea how to overcome this problem, so that it becomes possible to extract the labels (text by subject) without depending on the particular PDF (most of the PDFs I work with are articles/books). Different ways to extract text by subject are also welcome. (As the tag indicates, I'm trying to do this in Python.) Edit: At the moment I'm doing 2 things: checking the font of each line, and checking each line's text size. I concluded that regular text will have the most lines in its font (there are more than 10x as many lines with this font as with all other fonts), and that if you look at the median text size, it will be the size of the regular text. From the first, I can remove all regular text, and from the second, I can take all texts that are bigger, and all the labels will be in this list. The problem now is to extract only the labels from this list, since there is usually text that is bigger than the regular text yet isn't a label. I tried to use the number of times each font appears in the text to identify the label fonts, but without much success; for each PDF the amount can vary. I'm looking for ideas on how to solve this problem, or a tool that can do it more easily.
0
python,pdf
2016-04-03T11:54:00.000
0
36,385,070
I would suggest studying many PDFs and writing down every PDF's label text size. Then you can average the top 5 highest font sizes and the top 5 lowest, make a range between them, and check whether a piece of text falls within that text-size range. This method will not always work, but it will cover the majority of PDFs. (The more PDFs you study, the better.)
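A sketch of the median-based heuristic from the question's edit, assuming you have already parsed the XML into (text, font_size) pairs (the sample data here is made up):

    from statistics import median   # Python 3.4+; use numpy.median on Python 2

    lines = [('Chapter 1', 18.0),
             ('Lorem ipsum dolor sit amet...', 10.0),
             ('More body text here', 10.0)]

    body_size = median(size for _, size in lines)              # size of the regular text
    candidates = [text for text, size in lines if size > body_size]  # possible labels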
0
106
false
0
1
Extracting PDF text by subjects
36,385,200
1
1
0
0
0
0
1.2
0
I am coding an MPU-6050 accelerometer/gyro to provide me with accelerations and rotational velocities. The code works insofar as it provides all accelerations and angular velocities, but the results are marginally strange. If I orient the accelerometer so that the positive z-axis points up, I get marginally lower than expected readings for my altitude (it should be around 9.7 but I get around 8.9). If I turn the accelerometer so that the positive z-axis points along gravity, I get larger than expected readings (over 10.1). The same holds for the other axes when I point them along gravity. The low readings didn't alarm me at first, because I thought the accelerometer might not be placed perfectly level, but the higher than expected readings are definitely alarming. It means the accelerometer's neutral point is somehow wrong (it under-reads on one side and over-reads on the other). Do I need to calibrate the accelerometer? That seems nearly impossible, since one will never get the accelerometer perfectly level. Please advise. Do you want to see my code?
0
python-2.7,accelerometer,raspberry-pi2
2016-04-03T20:27:00.000
0
36,390,762
After many, many hours of research and fiddling, I found out that nearly all electronic sensors have a bias (an offset), and even accelerometers apparently have serious offsets. So what I ended up doing was building a small test stand levelled by four screws. By running an active while loop outputting live data from the accelerometer, I was able to nearly zero the two axes that were not pointing along gravity, all the while gathering data. Once level, I ran the program for several minutes and averaged the results, thus finding the bias. I did this 4-5 times per axis and found that the remaining error was due to noise and not recoverable. I obviously had to do this for all 3 axes. Additionally, I found that even after I zeroed the bias, the readings were too high. I am not sure whether what I did next was correct, but it seemed the logical way to progress: once all 3 axes were calibrated, they still gave me slightly different gravity readings, so I added a correction factor to each to get the gravity reading I expect at my altitude. These correction factors were very small (e.g. 0.966...) but in my opinion still significant. Hope this helps anyone who was as lost as I was.
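A sketch of the averaging step described above (read_z() is a hypothetical stand-in for the actual MPU-6050 register read; the sensor must be held level while this runs):

    N = 2000
    expected = 9.81                 # local gravity; adjust for your altitude

    def read_z():
        # stand-in: replace with the real z-axis read from the MPU-6050
        return 8.9

    total = 0.0
    for _ in range(N):
        total += read_z()
    mean = total / N

    bias = mean - expected          # subtract this from all future z readings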
0
197
true
0
1
MPU-6050 impossible readings Raspberry
36,732,250
1
2
0
4
2
0
1.2
0
I'm working with IoT on the Intel Galileo with a Yocto image. I have a Python script that executes 'aplay audio.wav', but I also want it to get the PID of that aplay process in case the program has to stop it. Sorry for being brief.
0
python,linux,alsa
2016-04-03T21:50:00.000
1
36,391,651
The pid attribute of the subprocess.Popen object contains its PID, but if you need to terminate the subprocess then you should just use the terminate() method. You should consider using pyao or pygst/gst-python instead though, if you need finer control over audio.
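A minimal sketch of that approach:

    import subprocess

    player = subprocess.Popen(['aplay', 'audio.wav'])
    print(player.pid)        # the PID, if you really need it

    # ... later, to stop playback:
    player.terminate()       # sends SIGTERM; use player.kill() for SIGKILL
    player.wait()            # reap the finished process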
0
451
true
0
1
Start a process with python and get the PID (Linux)
36,391,683
1
1
0
1
0
1
1.2
0
Coming from programming in python, I am familiar with modules. What is the equivalent in c++?
0
python,c++,module
2016-04-04T03:31:00.000
0
36,394,150
The concept in C++ is more complicated than in Python. From what I remember of Python, a module will work without you having to care about the architecture it was built for. In C++ (as in C) you have the build process (compile, link), which is important to understand when developing in these languages. In C/C++ you have libraries and header files. To put it simply, the header exposes the interface of the library (which contains the actual compiled code). The catch is that since libraries are compiled, you need a different version depending on the architecture and the compiler you are using: a MinGW-built library won't be compatible with the MSVC compiler. Namespaces can be thought of as modules, but not in the same sense as Python modules. In C++, namespaces just let you attach a prefix to the names inside them to avoid name collisions (a rough description; the real mechanism isn't just concatenation) and to organize the code logically. You cannot simply include a namespace the way you import a module in Python. I advise you to look at a tutorial on how the C/C++ build process works; it will explain in detail what headers and libraries are and how to use them ;)
0
534
true
0
1
Python has modules, what does c++ have?
36,400,365
6
11
0
0
73
0
0
0
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. aws lambda create-function --region us-west-2 --function-name HelloPython --zip-file fileb://hello_python.zip --role arn:aws:iam::my-acc-account-id:role/default --handler hello_python.my_handler --runtime python2.7 --timeout 15 --memory-size 512
0
python,amazon-web-services,boto,aws-sdk,aws-lambda
2016-04-05T07:09:00.000
0
36,419,442
Most people end up with this error because they give the wrong Role ARN in CloudFormation while creating the Lambda function. Make sure the role is created first by using "DependsOn", and reference it with the intrinsic function { "Fn::GetAtt": [ "your-role-logical-name", "Arn" ] }
0
78,602
false
0
1
The role defined for the function cannot be assumed by Lambda
55,502,906
6
11
0
2
73
0
0.036348
0
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. aws lambda create-function --region us-west-2 --function-name HelloPython --zip-file fileb://hello_python.zip --role arn:aws:iam::my-acc-account-id:role/default --handler hello_python.my_handler --runtime python2.7 --timeout 15 --memory-size 512
0
python,amazon-web-services,boto,aws-sdk,aws-lambda
2016-04-05T07:09:00.000
0
36,419,442
For me, the issue was that I had set the wrong default region environment key.
0
78,602
false
0
1
The role defined for the function cannot be assumed by Lambda
44,550,586
6
11
0
45
73
0
1
0
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. aws lambda create-function --region us-west-2 --function-name HelloPython --zip-file fileb://hello_python.zip --role arn:aws:iam::my-acc-account-id:role/default --handler hello_python.my_handler --runtime python2.7 --timeout 15 --memory-size 512
0
python,amazon-web-services,boto,aws-sdk,aws-lambda
2016-04-05T07:09:00.000
0
36,419,442
I'm also encountering this error. Have not got a definitive answer (yet) but figured I'd pass along a couple of hints that may help you and/or anyone else hitting this problem. A) If you build the Role ARN by putting together your account ID and role name, I think the account ID needs to be without any dashes B) If you just created the role, and possibly added policies to it, there seems to be a (small) window of time in which the role will trigger this error. Sleeping 5 or 6 seconds between the last operation on the role and the create-function call allowed me to bypass the issue (but of course, the timing may be variable so this is at best a work-around).
0
78,602
false
0
1
The role defined for the function cannot be assumed by Lambda
37,438,525
6
11
0
5
73
0
0.090659
0
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. aws lambda create-function --region us-west-2 --function-name HelloPython --zip-file fileb://hello_python.zip --role arn:aws:iam::my-acc-account-id:role/default --handler hello_python.my_handler --runtime python2.7 --timeout 15 --memory-size 512
0
python,amazon-web-services,boto,aws-sdk,aws-lambda
2016-04-05T07:09:00.000
0
36,419,442
I got this problem while testing a Lambda function. What worked for me was fixing the formatting of the JSON.
0
78,602
false
0
1
The role defined for the function cannot be assumed by Lambda
62,650,143
6
11
0
0
73
0
0
0
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. aws lambda create-function --region us-west-2 --function-name HelloPython --zip-file fileb://hello_python.zip --role arn:aws:iam::my-acc-account-id:role/default --handler hello_python.my_handler --runtime python2.7 --timeout 15 --memory-size 512
0
python,amazon-web-services,boto,aws-sdk,aws-lambda
2016-04-05T07:09:00.000
0
36,419,442
It could be that the Lambda is missing an execution role, or that this role has been deleted. In the console you can see the status at Lambda > Functions > YourFunction > Permissions. Even an empty IAM role with no policies is enough to make it work.
0
78,602
false
0
1
The role defined for the function cannot be assumed by Lambda
65,418,455
6
11
0
2
73
0
0.036348
0
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. aws lambda create-function --region us-west-2 --function-name HelloPython --zip-file fileb://hello_python.zip --role arn:aws:iam::my-acc-account-id:role/default --handler hello_python.my_handler --runtime python2.7 --timeout 15 --memory-size 512
0
python,amazon-web-services,boto,aws-sdk,aws-lambda
2016-04-05T07:09:00.000
0
36,419,442
I had this error simply because I had a typo in the role ARN. I really wish the error was more explicit and said something along the lines of "this role doesn't exist", but alas.
0
78,602
false
0
1
The role defined for the function cannot be assumed by Lambda
65,979,727
1
1
0
1
0
0
1.2
1
I'm working on a graph search problem that can be distilled to the following simpler example: Updated to clarify based on response below. The Easter Bunny is hopping around the forest collecting eggs. He knows how many eggs to expect from every bush, but every bush has a unique number of eggs. It takes the Easter Bunny 30 minutes to collect from any given bush. The Easter Bunny searches for eggs 5 days a week, up to 8 hours per day. He typically starts and ends in his burrow, but on Tuesday he plans to end his day at his friend Peter Rabbit's burrow. Mrs. Bunny gave him a list of a few specific bushes to visit at specific days/times; these are intermediate stops that must be hit, but the list does not include all stops (maybe 1-2 per day). Help the Easter Bunny design a route that gives him the most eggs at the end of the week. Given parameters: an undirected graph (g) whose edge weights are travel times, 8 hours of time per day, 5 working days, a list of (node, time, day) tuples (r), a list of (startNode, endNode, day) tuples (s). Question: design a route that maximizes the value collected over the 5 days without going over the allotted time on any given day. Constraints: visit every node in r at the prescribed time/day; for each day in s, start and end at the corresponding nodes, whose collection value is 0; nodes cannot be visited more than once per week. Approach: since there won't be very many stops, given the time at each stop and the travel times (maybe 10-12 on a large day), my first thought was to brute-force all routes that start/stop at the correct points, run this 5 times, and remove all visited nodes as I go. From there, separately compute the collected value of each allowable route. However, this doesn't account for the fact that my "best" route on day one may ruin a route that would be best on day 5, given the required stops on that day. To solve that problem I considered running one long search by concatenating all the days, starting from t = 0 (beginning of week) to t = 40 (end of week), with the start/end points for each day as intermediate stops. This gets too long to brute-force. I'm struggling a little with how to approach the problem. It's not a TSP problem, since I'm only going to visit a fraction of all nodes (maybe 50 of 200). It's also not a Dijkstra pathing problem; the shortest path would typically be to go nowhere. I need to maximize the total collected value in the allotted time while making the required intermediate stops. Any thoughts on how to proceed would be greatly appreciated! Right now I've been approaching this using networkx in Python. Edit following response: In response to your edit, I'm looking for an approach to solve the problem; I can figure out the code later. I'm leaning towards A* over MDFS, because I don't need to find just one path (that would be relatively quick), I need to find an approximation of the best path. I'm struggling to create a heuristic that captures the time constraint (stay under the time required to reach the next mandatory stop) while also maximizing eggs. I don't really want the shortest path; I want the "longest" path with the most eggs. In evaluating where to go next, I can easily compute eggs/min and move to the bush with the best rate, but I need to figure out how to encourage the search to slowly move towards the target. There will always be a solution: I could hop to the first bush, sit there all day and then go to the target (their placement/times are such that the problem is always solvable).
0
python,graph-theory,networkx
2016-04-05T22:47:00.000
0
36,438,428
The way the problem is posed doesn't make full sense. It is indeed a graph search problem to maximise a sum of numbers (subject to other constraints), and it can possibly be solved via brute force, as the number of nodes that end up being traversed is not necessarily going to climb to the hundreds (for a single trip). Each path is probably only a few nodes long because of the 30 min constraint at each stop. With 8 hours in a day and negligible distances between the bushes, that would amount to a maximum of 16 stops. Since the edge costs are not negligible, each trip should have far fewer than 16 stops. What we are after is the maximum sum of 5 days' harvest (the max of five numbers). Each day's harvest is the sum of eggs collected over a "successful" path. A successful path is defined as one satisfying all the constraints, which are: The path begins and ends on the same node; it is therefore a cycle, EXCEPT for Tuesday, whose harvest is a path. The cycle of a given day contains the nodes specified in Mrs Bunny's list for that day. The sum of travel times is less than 8 hrs, including the 30 min harvesting time. Therefore, you can use a modified Depth First Search (DFS) algorithm. DFS on its own can produce an exhaustive list of paths for the network, but this DFS will not have to traverse all of them, because of the constraints. In addition to the nodes visited so far, this DFS keeps track of the "travel time" and "eggs" collected so far, and at each "hop" it checks that all constraints are satisfied. If they are not, it backtracks or abandons the traversed path. This backtracking action "self-limits" the enumerated paths. If the reasoning is so far in line with the problem (?), here is why it doesn't seem to make full sense. If we were to repeat the weekly harvest process M times to determine the best daily visiting strategy, we would be left with the problem of determining an M sufficiently large to have covered the majority of paths. Instead, we could run the DFS once and determine the route of maximum harvest ONCE, which would then lead to the trivial solution of 4*CycleDailyHarvest + TuePathHarvest. The other option would be to relax the 8hr constraint and say that Mr Bunny can harvest UP TO 8hr a day, not exactly 8hr. In other words, if all parameters are static, there is no reason to run this process multiple times. For example, if each bush were to give "up to k eggs" following a specific distribution, maybe we could discover an average daily/weekly visiting strategy with the largest yield. (Or my perception of the problem so far is wrong, in which case, please clarify.) Tuesday's task is easier: it is as if looking for "the path between source and target whose time sum is approximately 8hrs and whose sum of collected eggs is max". This is another sign of why the problem doesn't make full sense: if everything is static (graph structure, eggs/bush, daily harvest interval), there is only one such path and no need to examine alternatives. Hope this helps. EDIT (following question update): The update doesn't radically change the core of the previous response, which is "use a modified DFS (for the potential of exhaustively enumerating all paths/cycles) and encode the constraints as conditions on metrics (travel time, eggs harvested) that are updated on each hop". It only modifies the way the constraints are represented. The most significant alteration is the "visit each bush once per week".
This would mean that the memory of the DFS (the set of visited nodes) is not reset at the end of a cycle or the end of a day, but at the end of a week. Or, in other words, the DFS can now start with a pre-populated visited set. This is significant because it reduces the number of "viable" path lengths even more. In fact, depending on the structure of the graph and the eggs/bush, the problem might even end up being unsolvable (i.e. zero paths/cycles satisfying the conditions). EDIT2: There are a few "problems" with that approach which I would like to list here, with what I think are valid points not yet covered by your viewpoint, but not in an argumentative way: "I don't need to just find one path (that will be relatively quick), I need to find an approximation of the best path" and "I want the 'longest' path with the most eggs" are slightly contradicting statements, but on average they point to just one path. The reason I am saying this is that it suggests the problem is either too difficult or not completely understood (?). A heuristic will only help in creating a landscape. We still have to traverse the landscape (e.g. steepest descent/ascent), and there will be plenty of opportunity for oscillations, as the algorithm might get trapped between "too-low" and "too-high" alternatives, or discover local minima/maxima without an obvious way of moving out of them. A*'s main objective is still to return ONE path, and it would have to be modified to find alternatives. When operating over a graph, it is impossible to "encourage" the traversal to move towards a specific target, because the "traversing agent" doesn't know where the target is or how to get there in the sense of a linear combination of weights (e.g. "if you get too far, lower some Xp, which will force the agent to start turning left, heading back towards where it came from"). When Mr Bunny is at his burrow he has all K alternatives; after the first possible choice he has K-M1 alternatives (M1 being the options no longer available), and so on. The MDFS will help in tracking the different ways these sums are allowed to be created according to the choices specified by the graph. (After all, this is a graph-search problem.) Having said this, there are possibly alternative, sub-optimal (in terms of computational complexity) solutions that could be adopted here. The obvious (but dummy) one is, again, to establish two competing processes that impose self-control. One is trying to get Mr Bunny AWAY from his burrow and one is trying to get Mr Bunny BACK to his burrow. Both processes are based on the above MDFS, track the cost of MOVEAWAY+GOBACK, and the path they produce is the union of the nodes. It might look a bit like A*, but this one is reset at every traversal. It operates like this: AWAY STEP: Start an MDFS outwards from Mr Bunny's burrow, keep track of distance/egg sum, and move to the lowestCost/highestReward target node. GO BACK STEP: Now pre-populate the visited set of the GO BACK MDFS and try to get back home via a route NOT TAKEN SO FAR, keeping track of cost/reward. Once you reach home again, you have a possible collection path. Repeat the above while the generated paths are within the time specification. This will result in a palette of paths which you can mix and match over a week (4 repetitions + TuesdayPath) for the lowestCost/highestReward options. It's not optimal, because you might get repeating paths (the AWAY of one trip being the BACK of another), and because this quickly eliminates visited nodes it might run out of solutions quickly.
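A compressed sketch of such a constraint-checking DFS (the graph, egg values, and the single time budget here are illustrative stand-ins; the return-to-burrow and mandatory-stop constraints would be extra checks in the same place):

    HARVEST = 30   # minutes spent collecting at each bush

    def best_harvest(graph, value, node, time_left, visited):
        # Returns the max eggs collectable from `node` within `time_left`,
        # never revisiting a bush. Backtracks as soon as a hop is infeasible.
        best = 0
        for nbr, travel in graph[node].items():
            cost = travel + HARVEST
            if nbr not in visited and cost <= time_left:
                sub = best_harvest(graph, value, nbr,
                                   time_left - cost, visited | {nbr})
                best = max(best, value[nbr] + sub)
        return best

    graph = {'burrow': {'a': 15, 'b': 20},
             'a': {'burrow': 15, 'b': 10},
             'b': {'burrow': 20, 'a': 10}}        # travel times in minutes
    value = {'burrow': 0, 'a': 7, 'b': 12}        # eggs per bush

    print(best_harvest(graph, value, 'burrow', 8 * 60, {'burrow'}))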
0
662
true
0
1
Graph search - find most productive route
36,454,065
1
2
0
2
0
0
1.2
0
I am working with byte arrays in Java 1.7. I am using java.util.zip's Inflater and Deflater classes to compress the data. I have to interface with data generated by Python code. Does Python have the capability to compress data that can be uncompressed by Java's Inflater class, and the capability to decompress data that has been compressed by Java's Deflater class?
0
java,python
2016-04-06T13:24:00.000
0
36,452,520
If you mean whether there is something in Python to handle the ZIP format, there is: the zipfile module. Python comes with all batteries included. Note that Java's Inflater and Deflater operate on zlib/DEFLATE streams rather than ZIP archives; Python's zlib module speaks that same format.
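For raw Deflater/Inflater streams, a minimal sketch of the Python side (both Java's Deflater and Python's zlib default to the zlib wrapper around DEFLATE, so the defaults interoperate):

    import zlib

    compressed = zlib.compress(b'data for a Java Inflater to inflate')
    original = zlib.decompress(compressed)   # also accepts bytes produced by a Java Deflater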
0
594
true
1
1
Does Python have a zip class that is compatible with Java's java.util.zip Inflater and Deflater classes?
36,452,775
1
1
0
0
0
0
0
0
I created a Python program, using struct, that saves data in files. The data consists of a header (300 chars) and data (36000 int/float pairs). On Ubuntu this works and I can unpack the data in my PHP setup. I unpack the data in PHP by loading the content into a string and using unpack. I quickly found that one int/float pair consumed the same space as 8 chars in the PHP string. When I then moved this to Windows, the data didn't take as much space, and when I try to unpack it in PHP, it quickly becomes misaligned with the binary string. Is there any way to get struct to produce the same output on Windows as on Ubuntu, regardless of architecture? I have tried the alignment options of struct (<, >, !, =). My Ubuntu dev setup is 64-bit and the server is also 64-bit. I have tried using both 32-bit and 64-bit Python on the Windows server.
0
php,python,struct
2016-04-06T21:43:00.000
1
36,462,908
It ended up being Python's gzip, which shifted all the bytes, destroying the data.
0
50
false
0
1
Python struct on windows
36,464,750
1
5
0
-2
12
0
-0.07983
0
I'm juggling code branches that were partly done a few months ago, with intertwined dependencies. So the easiest way to move forward is to mark failing tests on a particular branch as pending (the rspec way) or to be skipped, and deal with them after everything has been merged in. In its final report, behave reports the number of tests that passed, the number failed, the number skipped, and the number untested (which are non-zero when I press Ctrl-C to abort a run). So behave has a concept of skipped tests. How do I access that?
0
bdd,python-behave
2016-04-07T16:45:00.000
0
36,482,419
You can use the predefined "@skip" tag on the scenarios or features that you would like to skip, and behave will automatically skip testing the scenario or the whole feature.
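If your behave version does not handle the tag automatically, a common pattern is to wire it up yourself in environment.py (this sketch follows behave's hook and model method names):

    # environment.py
    def before_feature(context, feature):
        if 'skip' in feature.tags:
            feature.skip('Marked with @skip')

    def before_scenario(context, scenario):
        if 'skip' in scenario.effective_tags:
            scenario.skip('Marked with @skip')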
0
15,075
false
0
1
How do I skip a test in the behave python BDD framework?
37,288,247
1
3
0
2
5
0
0.132549
1
I am starting to write a Python program to get user locations. I've never worked with the Twitter API, and I've looked at the documentation but I don't understand much. I'm using tweepy; can anyone tell me how I can do this? I've got the basics down; I found a project on GitHub showing how to download a user's tweets, and I understand most of it.
0
python,python-2.7,twitter,tweepy
2016-04-08T01:40:00.000
0
36,490,085
Once you have a tweet, the tweet includes a user, which belongs to the user model. To get the location, just do the following: tweet.user.location
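A minimal sketch (the keys and screen name are placeholders; exact call signatures vary slightly between tweepy versions):

    import tweepy

    auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
    auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
    api = tweepy.API(auth)

    for tweet in api.user_timeline(screen_name='some_user', count=10):
        print(tweet.user.location)   # the profile location string; often empty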
0
13,479
false
0
1
How to get twitter user's location with tweepy?
42,499,529
1
2
0
0
0
0
0
0
I have a web application written in Python (Pyramid), and on the Apache server, inside one of the Python files, we launch an SH file, which is a service for sending SMS. The problem is that the permission is always denied. We tried running the SH file while logged in as root and it works. We changed the owner of both files (the Python one and the SH one) to 'root', but it doesn't work! Any ideas?
0
python,bash,apache,server,pyramid
2016-04-08T08:05:00.000
1
36,494,553
Well you changed the owner of the files to root, and then you ran as root, and it worked, so that makes sense. The problem is that root isn't necessarily the user executing the script in your webapp. You need to find which user is trying to execute the script, and then change the files' ownership to that user (depending on how the scripts are invoked, you may need to chmod them as well to make sure they are executable)
0
39
false
0
1
Permission denied or Host key problems
36,502,436
1
1
0
0
1
0
0
0
I am trying Appium using the Python language. I have written a simple login script in Python; it executes perfectly on one Android device/emulator using Appium, but I have no idea how to run it on multiple devices/emulators. I read some forums but did not find any solutions (I am very new to automation and Appium). Please help me with detailed steps or a procedure. Thank you.
0
android,python,python-appium
2016-04-11T05:06:00.000
0
36,540,220
You need to create desired capabilities with each device's UDID and modify them accordingly in the code. In order to launch on multiple devices, you need to create multiple instances of the Appium server, each opened on a different port (e.g. 4000, 4001, etc.).
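A rough sketch of driving two devices from one script (UDIDs, app path and ports are placeholders; each Appium server instance must already be running on its port):

    from appium import webdriver

    caps_a = {'platformName': 'Android', 'deviceName': 'device_a',
              'udid': 'UDID_A', 'app': '/path/to/app.apk'}
    caps_b = {'platformName': 'Android', 'deviceName': 'device_b',
              'udid': 'UDID_B', 'app': '/path/to/app.apk'}

    driver_a = webdriver.Remote('http://localhost:4723/wd/hub', caps_a)
    driver_b = webdriver.Remote('http://localhost:4724/wd/hub', caps_b)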
0
447
false
1
1
How to run Appium script in multiple Android device/emulators?
59,561,249
1
1
0
0
0
0
0
1
I am using a third party to send and receive SMS, which include text plus the URL of an image. Is there any way that the latest smartphones can show the picture instead of the link, like downloadable content?
0
c#,php,python,sms,sms-gateway
2016-04-11T05:27:00.000
0
36,540,431
If you want to show the image content from the URL, all you can do is write a notification application which reads the SMS you are sending (identified by the third-party number from which you are sending it) and notifies the user with the image (by reading the content of the SMS and downloading the image from the URL). But then all your users will have to download your app, and it will need read permissions for SMS.
0
107
false
0
1
SMS with picture link
36,540,940
1
1
0
-2
3
0
-0.379949
1
If I retweet a tweet, is there a way of finding out how many times "my retweet" has been retweeted / favorited? Are there provisions in the Twitter API to retrieve that? There is no key in retweeted_status which gives that information. What am I missing here?
0
python,twitter,tweepy
2016-04-11T07:50:00.000
0
36,542,813
Yes, you can track it. Get the stats (favorite and retweet_count) of your retweet at the time you retweet it, and save these stats somewhere as a checkpoint. The next time someone retweets it, you will get updated stats for your previous retweet, which you can compare with the existing checkpoint.
0
1,191
false
0
1
Find the number of retweets of a retweet using Tweepy?
36,543,167
1
2
1
0
0
1
0
0
I have a Python script that I need to execute from my C# program. After searching a bit, I learned that there are mainly two ways of executing a Python script from C#: using the 'Process' class, or using IronPython. My question might seem dumb: is there any other way to execute a Python script? To be more specific, can I create a class in C#, let's say 'Python', with a member function 'execute_script' that doesn't use any API like IronPython and doesn't create a process for executing the script, so that if I call 'execute_script(mypythonprogram.py)', my script gets executed? Sorry if this seems dumb. If this is possible, please do help me. Thanks in advance.
0
c#,python
2016-04-12T03:15:00.000
0
36,563,002
Can you create a C# class that calls a Python script without using Iron Python and without using any external API? No. That is not possible. You have a few other choices: Integrate the Python runtime into your program. Smead already described one way to do this. It will work, and it does avoid creating another process, but it will be a lot of work to get it running, and it is still technically using an API. I do not recommend this for a single Python script where you don't need to pass data back and forth, but it's good to know that option exists if your other options don't pan out. Use the Process module. This is probably what I would do. Process has security concerns when a malicious user can cause you to execute bogus shell commands, or if the malicious user can replace the contents of the Python script. It is quite safe when you can lock down those two things. The speed is unlikely to be a concern. It will literally only take a few minutes to set up a C# program with a process call, so if your mentor is concerned about speed, just write it and measure the speed to see if it's actually a problem. Consider rewriting the script in C# C# is a very expressive language with a very strong standard library, so assuming your script is not thousands of lines long, and does not use any obscure Python libraries, this might actually not be much work. If you really must not use Process, this would be the next solution I would consider.
0
74
false
0
1
Creating a class that executes python in c#
36,647,773
1
1
0
0
0
0
0
1
I am currently developing a Python program for a Raspberry Pi. This Raspberry Pi is meant to control a solar panel. In fact, there will be many Raspberry Pis controlling solar panels, connected to each other by RJ wires. The idea is that every Raspberry Pi has the same status: there is no "server" Raspberry Pi and no "client" Raspberry Pi. The program will receive GPS data, i.e. position, time, and so on. Apart from the GPS data, the Raspberry Pis will not have direct internet access; however, it will be possible to plug in a 3G key in order to gain access to the internet. The problem is the following: I want to update my Python program remotely, over the internet provided by my 3G key (the solar panels are in a field, and I'm at home, for instance, so I do not want to drive a hundred miles to get my Raspberry Pis back and update them manually). How is it possible to make the update remotely, considering that I do not have a real "server" in my network of Raspberry Pis?
0
python,linux,gps,updates,working-remotely
2016-04-12T13:48:00.000
0
36,575,268
I think you will need a server after all (or it can be just a file-sharing service). If I understand correctly, you need to control (or just update) Raspberry Pis that are connected to the internet via 3G. These are the options I see: connect them into a VPN; write a script that regularly checks for an update of your app on an HTTP/FTP file-sharing server; use a reverse shell, though whether that works depends on the NAT specifics of your 3G provider.
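A sketch of the second option, polling a file share for a newer version (the URL and version-file layout are made up for illustration):

    import urllib2

    LOCAL_VERSION_FILE = 'version.txt'
    BASE = 'http://example.com/solar'          # hypothetical server

    remote = urllib2.urlopen(BASE + '/version.txt').read().strip()
    local = open(LOCAL_VERSION_FILE).read().strip()
    if remote != local:
        code = urllib2.urlopen(BASE + '/program.py').read()
        open('program.py', 'w').write(code)
        open(LOCAL_VERSION_FILE, 'w').write(remote)
        # then restart the program, e.g. via os.execv or a supervisor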
0
223
false
0
1
How can I update a python program remotely on linux?
36,577,750
1
1
0
0
1
0
1.2
1
I'm writing a script that uses paramiko to ssh onto several remote hosts and run a few checks. Some hosts are setup as fail-overs for others and I can't determine which is in use until I try to connect. Upon connecting to one of these 'inactive' hosts the host will inform me that you need to connect to another 'active' IP and then close the connection after n seconds. This appears to be written to the stdout of the SSH connection/session (i.e. it is not an SSH banner). I've used paramiko quite a bit, but I'm at a loss as to how to get this output from the connection, exec_command will obviously give me stdout and stderr, but the host is outputting this immediately upon connection, and it doesn't accept any other incoming requests/messages. It just closes after n seconds. I don't want to have to wait until the timeout to move onto the next host and I'd also like to verify that that's the reason for not being able to connect and run the checks, otherwise my script works as intended. Any suggestions as to how I can capture this output, with or without paramiko, is greatly appreciated.
0
python,ssh,paramiko
2016-04-12T14:23:00.000
0
36,576,158
I figured out a way to get the data; it was pretty straightforward, to be honest, albeit a little hackish. This might not work in other cases, especially if there is latency, but I could also be misunderstanding what's happening. When the connection opens, the server spits out two messages: one saying it can't chdir to a particular directory, then a few milliseconds later another stating that you need to connect to the other IP. If I send a command immediately after connecting (it doesn't matter what command), exec_command will interpret this second message as the response. So for now I have a solution to my problem, as I can check this string for a known message and change the flow of execution. However, if what I describe is accurate, this may not work in situations where there is too much latency and the 'test' command isn't sent before the server response has been received. As far as I can tell (and I may be very wrong), there is currently no proper way to get the stdout stream immediately after opening the connection with paramiko. If someone knows a way, please let me know.
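A sketch of that trick (host, credentials and the matched message are placeholders):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('10.0.0.1', username='user', password='secret')

    # Any throwaway command; on a failover host the reply is the
    # "connect to the other IP" message rather than real command output.
    stdin, stdout, stderr = client.exec_command('true')
    reply = stdout.read()
    if 'connect to' in reply:        # placeholder for the known message
        client.close()               # move on to the active host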
0
391
true
0
1
Paramiko get stdout from connection object (not exec_command)
36,601,638
1
1
0
1
0
0
0.197375
0
I am writing a basic Python script and I am trying to use the GitHub API. Because I am new to the development scene, I am unsure of what I can share with other developers. Do I generate a new personal access token (which I assume can be revoked), or do I give them the Client ID and Client Secret? Can someone explain how OAuth (Client ID and Client Secret) is different from personal access tokens? Does this logic apply across all APIs (not just GitHub's)?
0
python,api,github,oauth-2.0,github-api
2016-04-12T23:36:00.000
0
36,585,941
The Short, Simple Answer You should probably give them none of those things. They are equivalent to handing over your username and password to someone. The Longer Answer It depends... Personal Access Tokens Your personal access token is a unique token that authorises and represents you during API calls, the same way that logging via the web interface authorises you to perform actions there. So when you call an API function with a personal access token, you are performing that API action as if you yourself had logged in and performed the same action. Therefore, if you were to give someone else your token, they would have the same access to the site as they would have if you gave them you username and password combination. Personal access tokens have attached scopes. Scopes control exactly how much access to GitHub a particular token has. For example, one token my have access to all private repositories, but another token only to public ones. Client IDs A client ID represents your application, rather than you. So when you create an application, GitHub gives you an ID that you use to identify your application to GitHub. Chiefly this allows someone logging into your application using OAuth to see on the GitHub web interface that it's your particular application requesting access to their account. Client Secrets A client secret is a random, unguessable string that is used to provide an extra layer of authentication between your application and GitHub. If you think of the client ID as the username of your application, you can think of the client secret as the password. Should I Share Them? Whether you wish to share any of these things depends largely on how much you trust the other developers. If you are all working on the same application, it's likely that you will all know the client ID and client secret. But if you want to develop an open-source application that people will install on their own machines, they should generate their own client ID and secrets for their own instances of the app. It's unlikely that you should ever share a personal access token, but if you have a bot account used by the whole team, then sharing the tokens could also be okay.
0
430
false
0
1
Personal Access Tokens, User Tokens
39,495,778
1
1
0
1
1
0
1.2
0
So I am trying to perform a frequency shift on a set of real valued points. In order to achieve a frequency shift, one has to multiply the data by a complex exponential, making the resulting data complex. If I multiply by just a cosine I get results at both the sum and difference frequencies. I want just the sum or the difference. What I have done is multiply the data by a complex exponential, use fft.fft() to compute the fft, then used fft.irfft() on only the positive frequencies to obtain a real valued dataset that has only a sum or difference shift in frequency. This seems to work great, but I want to know if there are any cons to doing this, or maybe a more appropriate way of accomplishing the same goal. Thanks in advance for any help you can provide!
0
python,numpy,fft,ifft
2016-04-13T18:15:00.000
0
36,606,390
What you are doing is perfectly fine. You are generating the analytic signal to accommodate the negative frequencies in the same way a discrete Hilbert transform would. You will have some scaling issues - you need to double all the non-DC and non-Nyquist signals in the real frequency portion of the FFT results. Some practical concerns are that this method imparts a delay of the window size, so if you are trying to do this in real-time you should probably examine using a FIR Hilbert transformer and the appropriate sums. The delay will be the group delay of the Hilbert transformer in that case. Another item of concern is that you need to remember that the DC component of your signal will also shift along with all the other frequencies. As such I would recommend that you demean the data (save the value) before shifting, zero out the DC bin after you FFT the data (to remove whatever frequency component ended up in the DC bin), then add the mean back to preserve the signal levels at the end.
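A sketch of the whole pipeline for reference (the test signal and rates are made up; amplitude scaling, including the doubling mentioned above, depends on your conventions, so check it against your data):

    import numpy as np

    fs = 1000.0
    t = np.arange(1024) / fs
    x = np.cos(2 * np.pi * 50.0 * t)             # real input tone at 50 Hz

    shift = 20.0                                 # desired shift in Hz
    y = x * np.exp(2j * np.pi * shift * t)       # complex mix; spectrum moves up by 20 Hz

    Y = np.fft.fft(y)
    half = Y[:len(y) // 2 + 1]                   # keep DC .. Nyquist only
    x_shifted = np.fft.irfft(half, n=len(y))     # real output, tone now at 70 Hz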
1
581
true
0
1
In python, If I perform an fft on complex data, then irfft only the positive frequencies, how does that affect the data?
36,609,298
2
2
0
2
3
0
1.2
0
I can't seem to find any information on what TastyPie throttles based on. Is it by the IP of the request, or by the actual Django user object?
0
python,django,tastypie,throttling
2016-04-15T21:23:00.000
0
36,657,049
The throttle key is based on the authentication class's get_identifier function. The default implementation of this function returns a combination of IP address and hostname. Edit: other implementations (i.e. BasicAuthentication, ApiKeyAuthentication) return the username of the currently logged-in user, or the string nouser.
0
203
true
1
1
TastyPie throttling - by user or by IP?
36,657,503
2
2
0
2
3
0
0.197375
0
I can't seem to find any information on what TastyPie throttles based on. Is it by the IP of the request, or by the actual Django user object?
0
python,django,tastypie,throttling
2016-04-15T21:23:00.000
0
36,657,049
Tomasz is mostly right, but some of the authentication classes have a get_identifier method that returns the username of the currently logged in user, otherwise 'nouser'. I plan on standardizing this soon.
0
203
false
1
1
TastyPie throttling - by user or by IP?
36,659,688
1
1
0
0
0
0
0
1
I'm using Python 2.7 and paramiko 1.16. While attempting to SSH to El Capitan, paramiko throws the exception "no acceptable kex algorithm". I tried setting kex and ciphers in sshd_config, but sshd can't be restarted for some reason. I tried some client-side fixes, but upgrading paramiko did not fix the problem.
0
python,ssh,paramiko
2016-04-15T22:53:00.000
0
36,658,093
A workaround from another Stack Overflow issue is to put the following cipher/MAC/kex settings into sshd_config:

    Ciphers [email protected],[email protected],aes256-ctr,aes128-ctr
    MACs [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
    KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1
0
677
false
0
1
paramiko no acceptable kex algorithm while ssh to el capitan
36,700,655
1
3
0
6
7
1
1.2
0
If you want to run your Python script, let's say, every day at 6 pm, is it better to go with a crontab entry or with an Advanced Python Scheduler solution, with regard to power, memory and CPU consumption? In my eyes a cron job is better, because I do not see the advantage of a permanently running Advanced Python Scheduler.
0
python,cron
2016-04-16T09:24:00.000
0
36,662,393
You should probably use cron if two conditions are met: it is available on all platforms your code needs to run on, and starting a script at a set time is sufficient for your needs. Mirroring these are two reasons to build your own solution: your program needs to be portable across many operating systems, including those that don't have cron available (like MS Windows), or you need to schedule things in a way other than at a set start time, e.g. on a set interval, or when some other condition is met.
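For the 6 pm example in the question, the crontab entry would look something like this (the script path is a placeholder):

    0 18 * * * /usr/bin/python /path/to/script.py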
0
8,160
true
0
1
Cron job vs Advanced Python Scheduler
36,662,797
1
2
0
0
0
0
0
0
I am new to LIRC programming. Right now I am using GPIO 18 for the LIRC implementation, but I want to use multiple IR emitters on different GPIOs, each working as a separate remote. This is because I have two TVs of the same brand in different rooms to control.
0
python,raspberry-pi,raspbian,lirc
2016-04-18T04:27:00.000
0
36,685,482
Instead of running two instances on my Pi I opted to make what is essentially a transistor switchboard (on a breadboard). I call each send command from a script which first runs another script that turns on one of three GPIOs, activating one of three transistors and thus exposing one of three IR transmitters to the signal from the single LIRC GPIO. This actually works very well, and I was able to put it together in less time than it takes to read the tutorials on multiple instances and drivers. I needed this ability because I have multiple components of the same make, which therefore receive some of the same codes, such as power. If each device didn't have its own transmitter I wouldn't be able to control one device without the other, unintended device also responding to the command.
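A minimal sketch of that select-then-send pattern in Python (the pin numbers, room names, and remote name are assumptions; irsend is the standard LIRC command-line client):

    import subprocess
    import time
    import RPi.GPIO as GPIO

    SELECT_PINS = {'livingroom': 23, 'bedroom': 24}  # hypothetical transistor-base pins

    GPIO.setmode(GPIO.BCM)
    for pin in SELECT_PINS.values():
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    def send(room, key, remote='samsung_tv'):
        """Enable one transmitter, fire the IR code, then disable it again."""
        pin = SELECT_PINS[room]
        GPIO.output(pin, GPIO.HIGH)  # route the LIRC signal to this emitter
        subprocess.call(['irsend', 'SEND_ONCE', remote, key])
        time.sleep(0.1)              # let the burst finish
        GPIO.output(pin, GPIO.LOW)

    send('livingroom', 'KEY_POWER')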
0
800
false
0
1
How to get multiple instances of LIRC working and each using different GPIO with raspberry pi?
49,105,962
1
2
0
0
0
0
0
0
I was writing a huge file output.txt (around 10 GB) on a server through a Python script using the f.write(row) command, but because the process was taking too long I decided to interrupt the program using kill -9 pid. The problem is that the space is still shown as used on the server when I check with the command df -h. How can I free the disk space occupied by the file the script was trying to write? The file output.txt was empty (0 bytes) when I killed the script, but I deleted it anyway using rm output.txt; the space on the disk still doesn't become free, I still have 10 GB wasted.
0
python,bash,unix,kill-process,diskspace
2016-04-18T13:00:00.000
1
36,694,745
If you delete a file which is still open in some processes, it is only marked as deleted; the content remains on disk so that all those processes can still read it. Once all processes close the corresponding descriptors (or simply exit), the space will be reclaimed.
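For reference, a typical sequence for finding and releasing the held-open space (angle-bracket values are placeholders; lsof +L1 lists open files whose link count is zero, i.e. deleted but still open):

    lsof +L1                     # list deleted-but-still-open files, with PID and FD
    kill <pid>                   # let the holding process exit, or:
    : > /proc/<pid>/fd/<fd>      # truncate the deleted file in place without killing it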
0
62
false
0
1
Delete an unfinished file
36,695,061
1
1
0
0
0
0
1.2
0
I'm trying to connect a Burlap Java server with a Python client, but I can't find any details whatsoever on how to use Burlap with Python, or whether it is even implemented for Python. Any ideas? Can I build Burlap Python clients? Any resources? Would a Hessian Python client work with a Java Burlap server?
0
java,python,server,client,hessian
2016-04-18T13:10:00.000
0
36,694,973
Burlap and Hessian are two different (but related) RPC protocols, with Burlap being XML-based and Hessian being binary. They're both also pretty ancient, so if you have an opportunity to use something else, I'd highly recommend it. If not, then you're going to have to find a Burlap lib for Python. Since it seems that a Burlap lib for Python simply doesn't exist (at least any more), your best choice is probably to make a small Java proxy that speaks a more recent protocol to the Python side and Burlap to the Java server.
0
205
true
1
1
Burlap java server to work with python client
36,695,144
1
2
0
0
0
0
0
0
I'm using pytest with the xdist plugin to run a large suite of tests. These tests can take a few hours to run, so I'd like to see certain information while they are running. The items I'd like to see are the errors when tests fail, how many tests are still left, and more. To do this, I'd like to have a setup where detailed errors go to one file while basic info like how many tests are left will go to another file. Is there a pytest plugin that would allow this or a way to hook up the internal pytest logger to do this? Thanks for your time.
0
python,logging,pytest,xdist,pytest-xdist
2016-04-18T18:02:00.000
0
36,701,182
pytest-sugar does it, for example. At the sprint in June we hope to enhance the API further.
0
567
false
0
1
Can pytest xdist tests log to the same configuration?
36,989,662
3
6
1
3
18
0
0.099668
0
I installed Theano, but when I try to use it I get this error: WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. I installed g++ and put the correct path in the environment variables, so it seems Theano does not detect it. Does anyone know how to solve the problem, or what the cause may be?
0
python,g++,theano
2016-04-19T15:31:00.000
0
36,722,975
This is the error that I experienced on my Mac running a Jupyter notebook with a Python 3.5 kernel; hope this helps someone (I am sure rggir is well sorted at this stage :)). Error: Using Theano backend. WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string. Cause: an update of Xcode (the g++ compiler) without accepting the terms and conditions; this was pointed out above, thanks Emiel. Resolution:
1. Type g++ --version in the Mac terminal; "Agreeing to the Xcode/iOS license requires admin privileges, please re-run as root via sudo." is output as an error.
2. Launch Xcode and accept the terms and conditions.
3. Run g++ --version in the terminal again. Something similar to the following will be returned, showing that Xcode has been fully installed and g++ is now available to Keras: Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 8.0.0 (clang-800.0.42.1) Target: x86_64-apple-darwin15.6.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
4. Restart your machine (I am sure there are some more complicated steps that someone smarter than me can add here to make this faster).
5. Run the model.fit function of the Keras application, which should run faster now ... win!
0
31,119
false
0
1
theano g++ not detected
40,705,647
3
6
1
7
18
0
1
0
I installed Theano, but when I try to use it I get this error: WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. I installed g++ and put the correct path in the environment variables, so it seems Theano does not detect it. Does anyone know how to solve the problem, or what the cause may be?
0
python,g++,theano
2016-04-19T15:31:00.000
0
36,722,975
I had this occur on OS X after I updated Xcode (through the App Store). Everything worked before the update, but after the update I had to start Xcode and accept the license agreement. Then everything worked again.
0
31,119
false
0
1
theano g++ not detected
39,568,992
3
6
1
6
18
0
1
0
I installed Theano, but when I try to use it I get this error: WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. I installed g++ and put the correct path in the environment variables, so it seems Theano does not detect it. Does anyone know how to solve the problem, or what the cause may be?
0
python,g++,theano
2016-04-19T15:31:00.000
0
36,722,975
On Windows, you need to install MinGW to get g++. It is usually advisable to use the Anaconda distribution to install Python. Theano works with Python 3.4 or older versions. You can use the conda install command to install MinGW.
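For reference, the commonly recommended command (libpython is usually suggested alongside MinGW for Theano on Windows):

    conda install mingw libpython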
0
31,119
false
0
1
theano g++ not detected
37,846,308
1
3
0
0
2
1
0
0
I wrote some code in Maya using Maya Python to render over 2,000 pictures. Since there is a lot of work for Maya to finish, Maya may crash during the long rendering process. So I have to build a module to monitor Maya: if Maya gets stuck, the module has to keep Maya going and correct the mistakes. I want to know what tools I can use to achieve this. What language should I use to write this module?
0
python,mfc,rendering,maya,monitor
2016-04-21T03:27:00.000
0
36,759,068
Use a render-farm manager such as Deadline, or something similar.
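If a full render-farm manager is overkill, a minimal DIY watchdog sketch along these lines is a common alternative (the Render executable ships with Maya; the scene path, frame range, and retry count are assumptions, and flags can vary per renderer):

    import subprocess

    SCENE = '/path/to/scene.mb'  # placeholder scene file

    def render_frame(frame, retries=3):
        """Render one frame with Maya's command-line renderer, retrying on crash."""
        for attempt in range(retries):
            # -s/-e set the start/end frame; a nonzero exit code means failure
            rc = subprocess.call(['Render', '-s', str(frame), '-e', str(frame), SCENE])
            if rc == 0:
                return True
        return False

    for frame in range(1, 2001):
        if not render_frame(frame):
            print('frame %d failed after retries' % frame)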
0
330
false
0
1
How to monitor a maya program running in my computer?
36,766,982
3
3
0
1
1
0
0.066568
0
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also means that I'm usually the only one coding, and that -- while I certainly try to write halfway efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat. Now I am considering making some of my stuff more "serious", i.e., bringing it into a productive environment, making it more maintainable, and maybe more efficient. So I wonder if it's worth rewriting my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
0
java,python,performance,optimization
2016-04-23T00:32:00.000
0
36,805,233
The crucial question is this one: "Java's static typing including seems to make it less prone to errors on a larger scale". The crucial word here is "seems." Sure, Java will help you catch this one particular type of error. But how important is that, and what do you have to pay for it? The overhead imposed by Java's type system means that you have to write more lines of code, which means reduced productivity. I've used both and I have no doubt that I'm more productive in Python. I have found that type-related bugs in Python are generally easy to find and fix. Keep in mind that in a professional environment you're not going to ship code without testing it pretty carefully. The bottom line for a programming environment is productivity - usable functionality per unit of effort, not the number of bugs you found and fixed during development. My advice: if you have a working project written in Python, don't rewrite it unless you're certain there's a benefit.
0
648
false
1
1
Rewrite Python project to Java - worth it?
36,806,181
3
3
0
0
1
0
0
0
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also means that I'm usually the only one coding, and that -- while I certainly try to write halfway efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat. Now I am considering making some of my stuff more "serious", i.e., bringing it into a productive environment, making it more maintainable, and maybe more efficient. So I wonder if it's worth rewriting my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
0
java,python,performance,optimization
2016-04-23T00:32:00.000
0
36,805,233
Java is inherently object oriented; alternatively, Python is procedural. As far as the ability of the language to handle large projects goes, you can make do with either. As far as producing more usable products, I would recommend JavaScript as opposed to Java because of its viability in the browser. By embedding your JS in a publicly hosted website you allow people with no coding knowledge to run your project seamlessly in the browser. Furthermore, all the GUI design features of HTML are at your disposal. That said, any language has its ups and downs, and anything I've said here is simply my perception.
0
648
false
1
1
Rewrite Python project to Java - worth it?
36,805,273
3
3
0
2
1
0
1.2
0
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also means that I'm usually the only one coding, and that -- while I certainly try to write halfway efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat. Now I am considering making some of my stuff more "serious", i.e., bringing it into a productive environment, making it more maintainable, and maybe more efficient. So I wonder if it's worth rewriting my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
0
java,python,performance,optimization
2016-04-23T00:32:00.000
0
36,805,233
It's only worth it if it solves a real problem. Note that the problem could be: I want to learn something better; I need it to go faster to reduce power requirements in my colo; I need to hire more people and the talent pool for [insert language here] is too small; insert innumerable real problems here. Python and Java are both suitable for production. Write it in whatever makes it easiest to solve the problems you and/or your team are facing, and if you want to preempt some problems make sure you've done your homework. Plenty of projects have died because they chose C/C++ believing performance was going to be a major factor, without thinking about the extra effort involved in using these languages well. You mentioned maintainability. You're likely to require more code to rewrite it in Java, and there's a direct correlation between bugs and LOC. It's up for debate which one is easier to maintain; I'm sure both camps believe theirs is. Of the two, which one do you enjoy coding with the most?
0
648
true
1
1
Rewrite Python project to Java - worth it?
36,805,510
1
2
0
0
4
1
0
0
I have a Python package where all my unittest test classes are stored in modules in a subpackage mypkg.tests. In the tests/__init__.py file I have a function called suite. I normally run these tests by calling python setup.py test, which has test_suite='satpy.tests.suite'. Is it possible to run this test suite from PyCharm? The reason I have the suite function is that it only contains tests that are ready to be run by my continuous integration, while other failing tests exist in the directory (from older versions of the package). I could also see this being useful for selecting quick unit tests versus long-running tests. I've tried run configurations treating it as a script, and as a function under the nosetests and unittest configurations. I've tried adding if __name__ == "__main__": and other kinds of command-line running methods, with no success. Is there a way to run only some tests from a PyCharm run configuration?
0
python,unit-testing,pycharm,nose,python-unittest
2016-04-25T17:35:00.000
0
36,847,349
One thing I found is that in my particular case the test class derived from a subclass of unittest.TestCase that is defined in a local module. There is a known PyCharm bug, around for years, where it sometimes does not fully see a local module that is in your virtualenv, in some cases marking the imports as unknown. There is a workaround for that, which is to add either the egg for that local project or its source path as a source root in the project using it. When I applied that workaround for the other bug, this problem went away. So it seems the PyCharm machinery did not recognize my test class as a unittest.TestCase due to the other issue.
0
1,707
false
0
1
PyCharm run select unittests
59,145,768
1
1
0
0
2
0
0
0
Scenario: I have an OTP generation API. As of now, if I POST with a contact number in the body, it will generate an OTP code irrespective of how many times it is invoked from the same IP. There is no security at the code level or the Nginx level. Suggestions are welcome on whether blocking an IP should be done at the code level or in Nginx. I want to restrict access to the API to 5 times a day from the same IP.
0
python,node.js,django,nginx
2016-04-26T09:51:00.000
0
36,861,358
You really should move away from using the IP as the restriction. The IP can be changed, allowing an intermediary to replay the OTP. A combination of the visiting IP and additional unique vectors would serve as a better way of identifying the visitor and associating the OTP with their access. Because of this, the throttling you wish to implement would be better served at the code/application level than at your web server. You should be doing that anyway in order to better protect the OTP and follow the best practices associated with OTPs: expiring them, only using them once, etc.
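A minimal sketch of such an application-level counter using Django's cache framework (the identifier argument and the 5-per-day limit are assumptions taken from the question; the key format is my own):

    from django.core.cache import cache

    DAY = 60 * 60 * 24

    def allow_otp_request(identifier, limit=5):
        """Return True if this identifier may request another OTP today."""
        key = 'otp-count:%s' % identifier
        # add() only sets the key if it doesn't exist yet; the TTL starts then
        cache.add(key, 0, timeout=DAY)
        count = cache.incr(key)  # atomic with backends like memcached or Redis
        return count <= limit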
0
219
false
0
1
Securing OTP API at code level or Nginx level?
36,862,095
1
1
0
0
0
0
1.2
0
During a pytest fixture, what is the best way to robustly get the location of a text file for users that may specify different working directories at runtime? E.g. I want a person using the command line in the test fixture directory to find the file, as well as an integration server which may run from the project's root. Can I somehow include the text file in a module? What are best practices for including and getting access to non-.py files? I am aware of BASE_DIR = os.path.dirname(os.path.dirname(__file__)), but I am not sure this will always refer to the same directory for every way of running the test suite.
0
python-2.7,unit-testing,pytest
2016-04-27T10:48:00.000
0
36,887,637
os.path.dirname(os.path.abspath(__file__)) (which is what I think you meant above) has worked fine for me so far - it should work as long as Python can figure out the path of the file, and with pytest I can't imagine a scenario where that wouldn't be true.
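A minimal sketch of the pattern inside a fixture (the data directory and file name are placeholders):

    import os
    import pytest

    HERE = os.path.dirname(os.path.abspath(__file__))

    @pytest.fixture
    def sample_text():
        # resolved relative to this test module, not the current working directory
        path = os.path.join(HERE, 'data', 'sample.txt')
        with open(path) as f:
            return f.read()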
0
1,067
true
0
1
pytest: robust file path of a txt file used in tests
36,905,061
1
1
0
1
0
0
0.197375
0
I am trying to write an application that uses ZeroMQ to receive messages from clients. I receive the message from the client in the main loop, and need to send an update to a second socket (the general idea is to establish a 'change feed' on objects in the database the application is built on). Receiving the message works fine, and both sockets are connected without issue. However, sending the request on the outbound port simply hangs, and the test server meant to receive the message does not receive anything. Is it possible to use both a REQ and a REP socket within the same application? For reference, the main application is C++ and the test server and test client communicating with it are written in Python. They are all running on Ubuntu 14.04. Thanks! Alex
0
python,c++,sockets,zeromq
2016-04-28T01:30:00.000
1
36,903,698
And this is what happens when you forget to call connect() on the socket...
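For reference, the shape of the fix on the Python test-client side (a minimal pyzmq sketch; the endpoint and message are placeholders):

    import zmq

    ctx = zmq.Context()

    req = ctx.socket(zmq.REQ)
    req.connect('tcp://localhost:5556')  # the missing connect(): with no peer, send() just blocks
    req.send(b'update')
    print(req.recv())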
0
97
false
0
1
C++ ZeroMQ Single Application with both REQ and REP sockets
36,903,756