Dataset columns (dtype; observed min to max, or string length range):

Q_Id: int64, 2.93k to 49.7M
CreationDate: string, lengths 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, lengths 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, lengths 25 to 6.53k
Title: string, lengths 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
48,088,137
2018-01-04T03:10:00.000
0
0
0
0
0
python,excel,pivot-table,openpyxl
0
52,813,212
0
3
0
false
0
0
Worksheets("SheetName").PivotTables("PivotTableName").PivotCache().Refresh()
1
0
0
0
I have a workbook that has several tabs with pivot tables. I can put data on the tab that holds the data for each pivot. My problem is that I don't know how to refresh the pivot tables. I would assume that I would need to cycle through each sheet, check to see if there is a pivot table, and refresh it. I just can't find how to do that. All of the examples I find use win32 options, but I'm using a Mac and Linux. I would like to achieve this with openpyxl if possible.
Refresh Excel Pivot Tables
0
0
1
1
0
3,004
48,092,110
2018-01-04T09:20:00.000
1
0
1
0
0
python,python-3.x,exe,executable
0
48,123,001
0
1
0
false
0
1
They should all work. Py2exe and Py2app are the ones that don't. If they don't work then you haven't used them properly, particularly cx_Freeze, which requires you to "tune things manually". Here are some debug steps that will help you resolve your error: When freezing for the first time, don't hide the console. Hiding it will hide any errors that occur, and you need to see those. When building, look for any errors that appear at the end; these may give you a clue as to how to solve the problem. If you have errors, the terminal will appear briefly and then close. Run the executable through the terminal instead and the terminal will stay open, allowing you to read the messages. This can be done in the following way: C:\Location>cd \Of\App then C:\Location\Of\App>NameOfExecutable, where cd is a command that stands for "change directory" and your .exe is assumed to be called NameOfExecutable. Under PowerShell you would do the same but run ./NameOfExecutable instead. See what errors appear. If you get an error that says a package is missing, the includes option often does the trick (remember to include the top-level package as well as the exact one that is missing). If you use external files or images, remember to use include_files to add them along as well; note that you can add runtimes (or DLLs) in this way too. Attempt a build folder before going for an msi: get the build folder working first, then go for the msi.
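For illustration, a minimal cx_Freeze setup.py sketch along the lines of the advice above; the script name, package list and asset folders are hypothetical:

```python
from cx_Freeze import setup, Executable

build_options = {
    "includes": ["pygame"],                    # add any packages reported as missing
    "include_files": ["music/", "sprites/"],   # external files, images, DLLs
}

setup(
    name="Tetris",
    version="1.0",
    options={"build_exe": build_options},
    executables=[Executable("tetris.py")],     # keep the console visible while debugging
)
```

Run `python setup.py build` and confirm the build folder works before attempting `python setup.py bdist_msi`.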
1
1
0
0
I am kind of stuck. I have tried to turn my Tetris game, which has music, into a .exe, but I really don't know how to do it. Can someone give me some tips on how to convert a .py file to a .exe? I have tried PyInstaller and cx_Freeze and none of them work.
Converting Python 3.6.1 Tetris game with music, to exe
0
0.197375
1
0
0
88
48,092,543
2018-01-04T09:45:00.000
1
0
0
0
0
python,html,python-2.7,gtk,webkitgtk
0
48,123,502
0
1
0
true
0
1
Done. I've used Flask, Socket.io and gtk to make an app, showing a html file in full screen, with python variables in it.
1
0
0
0
For a project I have to make a GUI for Python. It should show some variables (temp etc.), but I don't know how I can pass variables through GTK to the window. Any answers appreciated :) Some info: I am using a RPi3, but that's probably not important, or is it? I have a 7" display attached, on which the program should be shown in full screen. In the end, it should show something like temp, humidity, water etc. I don't know exactly which GTK I use, but it's in Python, so I think it's PyGTK. Thanks for reading, Fabian
Give python variables to GTK+ with an html file
0
1.2
1
0
0
104
48,093,726
2018-01-04T10:55:00.000
1
0
1
1
1
python,cygwin
1
48,096,083
0
1
0
true
0
0
Just make sure you are in admin mode, i.e. right-click on Cygwin and select Run as administrator. Then install your package specifically using pip3, for Python 3, i.e. pip3 install your_package. To upgrade to the latest version, do pip3 install --upgrade your_package
1
1
0
0
I'm working on Windows 7 and using Cygwin for unix-like functionality. I can write and run Python scripts fine from the Cygwin console, and the installation of Python packages using pip install is successful and the installed package appears under pip list. However, if I try to run a script that imports these packages, for example the 'aloe' package, I get the error "no such module named 'aloe'". I have discovered that the packages are being installed to c:\python27\lib\site-packages, i.e. the computer's general list of python packages, and not to /usr/lib/python3.6/site-packages, i.e. the list of python packages available within Cygwin. I don't know how to rectify this though. If I try to specify the install location using easy_install-3.6 aloe I get the error [Errno 13] Permission denied: '/usr/lib/python3.6/site-packages/test-easy-install-7592.write-test'. In desperation I also tried directly copying the 'aloe' directory to the Cygwin Python packages directory using cmd with cp -r \python27\lib\site-packages\aloe \cygwin\lib\python3.6\site-packages and the move was successful, but the problem persists and when I check in the Cygwin console using ls /usr/lib/python3.6/site-packages I can't see 'aloe'. I have admin rights to the computer in general (sudo is not available in Cygwin anyway) so I really can't figure out what the problem is. Any help would be greatly appreciated. Thanks.
permission denied when installing python packages through cygwin
0
1.2
1
0
0
3,417
48,122,283
2018-01-05T22:38:00.000
1
0
0
0
0
python,apache,flask,virtualhost
0
48,142,441
0
2
0
false
1
0
It seems that having SERVER_NAME set in the os environment was causing this problem in conjunction with subdomains in blueprint registration. I removed SERVER_NAME from /etc/apache2/envvars and the subdomain logic and it worked.
1
2
0
0
I have a Flask app and want it to work for www.domain-a.net and www.domain-b.net behind Apache + WSGI. I can get it to work for one or the other, but can't find a way to get it to work for both. It seems that the domain which registers first is the only one that works. Preferably this would work by having two Apache VirtualHosts set up to use the same WSGI config. I can get that part to work. But Flask just returns 404 for everything sent from the second VirtualHost.
how do I get Flask blueprints to work with multiple domains?
0
0.099668
1
0
0
392
48,126,838
2018-01-06T11:25:00.000
-2
0
0
0
0
python,computational-geometry,intersection,plane
0
48,129,094
0
3
0
false
0
0
This is solved by elementary vector computation and is not substantial/useful enough to deserve a library implementation. Work out the math with Numpy. The line direction is given by the cross product of the two normal vectors (A, B, C), and it suffices to find a single point, say the intersection of the two given planes and the plane orthogonal to the line direction and through the origin (by solving a 3x3 system). Computation will of course fail for parallel planes, and be numerically unstable for nearly parallel ones, but I don't think there's anything you can do.
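A minimal NumPy sketch of the construction described above (cross product for the line direction, a 3x3 solve for one point); the function and variable names are my own:

```python
import numpy as np

def plane_intersection(p1, p2):
    """p1, p2 are (A, B, C, D) with A*x + B*y + C*z + D = 0; returns two points on the line."""
    n1, n2 = np.array(p1[:3], float), np.array(p2[:3], float)
    direction = np.cross(n1, n2)
    if np.allclose(direction, 0):
        raise ValueError("planes are parallel (or coincident)")
    # one point on the line: intersect both planes with the plane through
    # the origin that is orthogonal to the line direction
    A = np.vstack([n1, n2, direction])
    b = -np.array([p1[3], p2[3], 0.0])
    point = np.linalg.solve(A, b)
    return point, point + direction

print(plane_intersection((1, 0, 0, -1), (0, 1, 0, -2)))  # the line x=1, y=2
```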
1
8
1
0
I need to calculate the intersection of two planes, each in the form AX+BY+CZ+D=0, and get the resulting line as two (x,y,z) points. I know how to do the math, but I want to avoid reinventing the wheel and use something effective and tested. Is there any library which already implements this? I tried searching opencv and google, but with no success.
Plane-plane intersection in python
0
-0.132549
1
0
0
8,472
48,140,731
2018-01-07T19:23:00.000
0
0
1
0
0
python,conda,gurobi
0
48,141,155
1
1
0
false
0
0
I suggest the following: conda list to see what's in the environment, and how it was installed. If gurobi was installed as a conda package then use conda uninstall gurobi, if using pip then use pip uninstall gurobi.
1
0
0
0
I am using gurobi with anaconda and python, and recently downloaded an updated version (7.5.2) to update the already installed 7.0.2 version that is on my computer. I can find the conda command line prompts to remove the conda installed package, but cannot find any code anywhere to remove the 7.0.2 version from my computer so that it doesn't keep referencing 7.0.2 when I try to install new version via conda again. If anyone can offer any advice it would be much appreciated! Interesting that there is nothing in the gurobi docs that states how to do this.
How to remove old version of gurobi in Windows
0
0
1
0
0
1,726
48,162,075
2018-01-09T05:44:00.000
1
0
0
0
0
python,amazon-rds,apache-nifi
0
48,170,299
0
2
0
true
0
0
Question might not have been clear based on the feedback, but here is the answer to get a NiFi (running on an AWS EC2 instance) communicating with an Amazon RDS instance: On the EC2 instance, download the latest JDBC driver (wget "https://driver.jar") (If needed) Move the JDBC driver into a safe folder. Create the DBCPConnectionPool, referencing the fully-resolved file path to the driver.jar (helpful: use readlink -f driver.jar to get the path). Don't forget -- under your AWS Security Groups, add an inbound rule that allows your EC2 instance to access RDS (under Source, you should put the security group of your EC2 instance).
1
0
0
0
I have a FlowFile and I want to insert the attributes into RDS. If this was a local machine, I'd create a DBCPConnectionPool, reference a JDBC driver, etc. With RDS, what am I supposed to do? Something similar (how would I do this on AWS)? Or am I stuck using ExecuteScript? If it's the latter, is there a Python example for how to do this?
How best to interact with AWS RDS Postgres via NiFi
0
1.2
1
1
0
1,019
48,189,504
2018-01-10T14:22:00.000
0
0
0
0
0
python,turtle-graphics
0
48,210,844
0
1
0
false
0
1
OK, I have solved the situation by using: speed(0); turtle.tracer(False); turtle.bye(). The graphics window is initialized but immediately closed.
1
0
0
0
everyone! I want to implement the Turtle into my application just for the purpose of coordinate generation. The problem is that I need to get the coordinates of turtle move (etc.) without any "pop-up window" containing the graphics or even worse animation. Is it possible somehow to disable initialization of turtle graphics? Thanks a lot!
How to disable initializing of graphics window of Python Turtle
1
0
1
0
0
60
48,206,010
2018-01-11T11:29:00.000
1
0
0
1
0
python,logging,flask,uwsgi
0
48,211,175
0
1
0
false
1
0
There is no "stop" event in WSGI, so there is no way to detect when the application stops, only when the server / worker stops.
1
0
0
0
I have a Flask app that I run with uWSGI. I have configured logging to file in the Python/Flask application, so on service start it logs that the application has been started. I want to be able to do this when the service stops as well, but I don't know how to implement it. For example, if I run the uwsgi app in console, and then interrupt it with Ctrl-C, I get only uwsgi logs ("Goodbye to uwsgi" etc) in console, but no logs from the stopped python application. Not sure how to do this. I would be glad if someone advised on possible solutions. Edit: I've tried to use Python's atexit module, but the function that I registered to run on exit is executed not one time, but 4 times (which is the number of uWSGI workers).
Logging uWSGI application stop in Python
0
0.197375
1
0
0
183
48,209,706
2018-01-11T14:43:00.000
0
0
1
0
0
python,regex
0
48,210,007
0
3
0
false
0
0
Ok, thanks. I found another solution: lista = re.findall(r"PROGRAM S\d\d\S+", contents) to match any further non-space characters after the digits.
1
0
0
0
My code is as follows: list = re.findall(("PROGRAM S\d\d"), contents). If I print the list I just get S51, but I want to capture everything, e.g. the whole "PROGRAM S51_Mix_Station". I know how to put the digits in the pattern to find them, but I don't know how to match everything until the next space, because usually after the last character there is a space. Thanks in advance.
Regular expression help to find space after a long string
0
0
1
0
0
50
48,211,634
2018-01-11T16:21:00.000
0
0
1
1
0
python-3.x
0
51,157,311
0
1
0
false
0
0
Most of the external packages still do not support 3.6. Try cx_Freeze with 3.6, else go with PyInstaller, but then the Python version should be 3.5 (this works fine for me).
1
1
0
0
I need to convert my .py files into .exe files for Python 3.6.4. I have tried almost everything on Google and YouTube and none of it seems to work for me. It seems as though a lot of the explanations either gloss over the most technical aspects of installing any modules that convert .py files into .exe files or they are outdated. Can someone give me a step by step example of how to convert my .py files into .exe files for Python 3.6.4.? I was able to convert the .py files easily for Python 3.4 but not 3.6.4. My file path is: This Pc > C: > Users > XXXX > AppData > Local > Programs > Python > Python36-32
How to convert a .py file into a .exe file for Python 3.6.4
0
0
1
0
0
133
48,219,121
2018-01-12T03:24:00.000
8
0
0
0
0
python,tensorflow,neural-network,conv-neural-network
0
48,223,162
0
1
0
true
0
0
tf.layers.conv1d is used when you slide your convolution kernels along 1 dimensions (i.e. you reuse the same weights, sliding them along 1 dimensions), whereas tf.layers.conv2d is used when you slide your convolution kernels along 2 dimensions (i.e. you reuse the same weights, sliding them along 2 dimensions). So the typical use case for tf.layers.conv2d is if you have a 2D image. And possible use-cases for tf.layers.conv1d are, for example: Convolutions in Time Convolutions on Piano notes
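A small sketch of the difference in input shapes, using the TF 1.x tf.layers API that the answer refers to (this API is deprecated in TF 2; the shapes are illustrative):

```python
import tensorflow as tf  # TF 1.x style

# conv1d: kernels slide along one axis, e.g. time; input is (batch, steps, channels)
signal = tf.placeholder(tf.float32, [None, 100, 16])
out1d = tf.layers.conv1d(signal, filters=32, kernel_size=5, padding="same")

# conv2d: kernels slide along two axes, e.g. image height and width;
# input is (batch, height, width, channels)
image = tf.placeholder(tf.float32, [None, 64, 64, 3])
out2d = tf.layers.conv2d(image, filters=32, kernel_size=(3, 3), padding="same")

print(out1d.shape, out2d.shape)  # (?, 100, 32) and (?, 64, 64, 32)
```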
1
2
1
0
What is the difference in the functionalities of tf.layers.conv1d and tf.layers.conv2d in tensorflow and how to decide which one to choose?
Difference between tf.layers.conv1d vs tf.layers.conv2d
0
1.2
1
0
0
4,923
48,237,510
2018-01-13T06:37:00.000
0
0
0
0
0
java,android,python
0
48,238,627
0
1
0
false
1
0
If you have access to Python from the Java Android app, you can write the Python output to a file, then read that file in the Java code. Or, if the Python output is available on the web, you will need a web service that provides JSON/XML output, and then in the Java code you should call that web service.
1
0
0
0
I have to develop an Android application which makes use of machine learning algorithms at the back end. Now, for developing the Android app, I use Java and for implementing the machine learning algorithms I use Python. My question is how to link the Python code to an Android app written in Java. That is supposed my Python code generates an output, now how to send this data to an Android application?
How to link python code to android application developed in java?
0
0
1
0
0
347
48,265,821
2018-01-15T15:18:00.000
0
1
0
0
0
python,c,popen
0
48,266,333
0
1
0
false
0
0
Probably not a full answer, but I expect it gives some hints, and it is far too long for a comment. You should think twice about your requirements, because it will probably not be that easy depending on your proficiency in C and what OS you are using. If I have understood correctly, you have a sensor that sends data (which is already odd unless the sensor is an intelligent one). You want to write a C program that will read that data and either buffer it or retain only the latest value (you did not say...), and at the same time wait for requests from a Python script to give back what it has received (and kept) from the sensor. That probably means a dual-thread program with quite a bit of synchronization. You will also need to specify the communication channel between C and Python. You can certainly use the subprocess module, but do not forget to use unbuffered output in C. But you could also imagine an independent program that uses a FIFO or a named pipe with a well-defined protocol for external requests, in order to completely separate both problems. So my opinion is that this is currently too broad for a single SO question...
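As a rough sketch of the subprocess route mentioned above: a hypothetical C program ./sensor_reader prints one reading per request on stdout (unbuffered on the C side), and Python polls it when needed. The request/response protocol here is invented for illustration:

```python
import subprocess

proc = subprocess.Popen(
    ["./sensor_reader"],        # hypothetical C binary
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
    bufsize=1,                  # line-buffered on the Python side
)

def read_flow():
    proc.stdin.write("READ\n")  # ask the C program for the latest value
    proc.stdin.flush()
    return float(proc.stdout.readline())

print(read_flow())
```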
1
0
0
0
I have a flow sensor that I have to read with c because python isn't fast enough but the rest of my code is python. What I want to do is have the c code running in the background and just have the python request a value from it every now and then. I know that popen is probably the easiest way to do this but I don't fully understand how to use it. I don't want completed code I just want a way to send text/numbers back and forth between a python and a c code. I am running raspbian on a raspberry pi zero w. Any help would be appreciated.
How to read a sensor in c but then use that input in python
0
0
1
0
0
215
48,272,093
2018-01-15T23:26:00.000
1
0
0
0
0
python,amazon-redshift,etl,emr,amazon-emr
0
48,276,498
0
1
0
false
0
0
I suppose your additional columns are measures, not dimensions. So you can keep the dimensions in the individual columns and include them into sort key, and store measures in JSON, accessing them whenever you need. Also if you can distinguish between frequently used measures vs. occasional you can store the frequently used ones in columns and the occasional ones in JSON. Redshift has native support for extracting the value given the key, and you also have the ability to set up Python UDFs for more complex processing.
1
0
0
0
Scenario: I have a Source which maintains the transactions data. They have around 900 columns and based on the requirements of the new business, they add additional columns. We are a BI team and we only extract around 200 columns which are required for our reporting. But when new business is launched / new analysis is required, sometimes users approach us and request us to pull extra columns from the source. Current Design: We have created a table with extra columns for future columns as well. We are maintaining a 400 column table with the future column names like str_01, str_02...., numer_01, numer_02... date_01, date_02... etc. We have a mapping table which maps the columns in our table and columns in Source table. Using this mapping table, we extract the data from source. Problem: Recently, we have reached the 400 column limit of our table and we won't be able to onboard any new columns. One approach that we can implement is to modify the table to increase the columns to 500 (or 600) but I am looking for other solutions on how to implement ETL / design the table structure for these scenarios.
ETL for a frequently changing Table structure
1
0.197375
1
1
0
131
48,273,001
2018-01-16T01:45:00.000
0
0
0
1
0
python,parallel-processing,fabric
0
48,341,150
0
1
0
true
0
0
task1 did not run at all because running a command with & in Fabric does not work. This is because, in Linux, when you log out of a session all the processes associated with it are terminated. So if you want to make sure a command keeps running even after you log out of the session, you need to run it like this: run('nohup sh command &')
1
0
0
0
For my automation purposes, I'm using Fabric. But I could not run 2 tasks at the same time. For example, I want to run task 1 to collect data in the tmp folder, and I want to run task 2 which will generate data and put it in tmp. Task 1 will be running a bit before task 2. Here is my pseudo code: output1 = run("./task1_data_logger &") output2 = run("./task2_main_program") RESULT: task2_main_program is running fine but I didn't see task1_data_logger running at all. I thought I put the & so that task 1 could run in the background. I've read the Parallel execution document but it is more about running in parallel on multiple hosts, which is not my case. Does anyone know how to run 2 tasks simultaneously instead of serially? Thank you.
Running 2 tasks at the same time using Fabric
0
1.2
1
0
0
114
48,273,710
2018-01-16T03:33:00.000
0
0
1
0
0
python,compare,difflib
0
48,273,885
0
2
0
false
0
0
You may use set(str1).intersection(set(str2)), which will give you the elements common to the two lists.
1
0
0
0
I'm relatively new to python and I am using difflib to compare two files and I want to find all the lines that don't match. The first file is just one line so it is essentially comparing against all the lines of the second file. When using difflib, the results show the '-' sign in front of the lines that don't match and it doesn't show anything in front of the line that does match. (I thought it would show a '+'). For the lines that have a '-' in front, how can I just write those lines to a brand new file (without the '-' in front) ? Below is the code snippet I am using for the difflib. Any help is greatly appreciated. f=open('fg_new.txt','r') f1=open('out.txt','r') str1=f.read() str2=f1.read() str1=str1.split() str2=str2.split() d=difflib.Differ() diff=list(d.compare(str2,str1)) print ('\n'.join(diff))
difflib and removing lines even without + in front of them python
0
0
1
0
0
252
48,302,876
2018-01-17T13:57:00.000
1
0
0
0
1
python,pandas
0
48,303,455
0
1
0
false
0
0
I think the problem is related to the fact that I was trying to assign a None to a bool Series, then it just tries to convert to a different type (why not object?) Fixed changing the dtype to object first: dataframe.foo = dataframe.foo.astype(object). Works like a charm now.
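A short sketch of the fix described above; the upcast-to-float64 behaviour matches the pandas version in the question (0.20.x), and newer versions may warn or behave differently:

```python
import pandas as pd

df = pd.DataFrame({"foo": [False, False, False]})
df.at[0, "foo"] = None
print(df["foo"].dtype)     # float64 on pandas 0.20.x: the bool column was upcast

df = pd.DataFrame({"foo": [False, False, False]})
df["foo"] = df["foo"].astype(object)   # switch to object dtype first
df.at[0, "foo"] = None
print(df["foo"].dtype)     # object, and the None is kept as-is
```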
1
0
1
0
I'm facing a weird issue on Pandas now, not sure if a pandas pitfall or just something I'm missing... My pd.Series is just foo False False False > a.foo.dtype dtype('bool') When I use a dataframe.set_value(index, col, None), my whole Series is converted to dtype('float64') (same thing applies to a.at[index, col] = None). Now my Series is foo NaN NaN NaN Do you have any idea on how this happens and how to fix it? Thanks in advance. :) Edit: Using 0.20.1.
dtype changes after set_value/at
0
0.197375
1
0
0
37
48,309,776
2018-01-17T20:50:00.000
0
1
0
0
0
python,selenium,automated-tests,modular
0
48,311,473
0
1
0
false
0
0
You don't really want your tests to be sequential. That breaks one of the core rules of unit tests where they should be able to be run in any order. You haven't posted any code so it's hard to know what to suggest but if you aren't using the page object model, I would suggest that you start. There are a lot of resources on the web for this but the basics are that you create a single class per page or widget. That class would hold all the code and locators that pertains to that page. This will help with the modular aspect of what you are seeking because in your script you just instantiate the page object and then consume the API. The details of interacting with the page, the logic, etc. all lives in the page object is exposed via the API it provides. Changes/updates are easy. If the login page changes, you edit the page object for the login page and you're done. If the page objects are properly implemented and the changes to the page aren't severe, many times you won't need to change the scripts at all. A simple example would be the login page. In the login class for that page, you would have a login() method that takes username and password. The login() method would handle entering the username and password into the appropriate fields and clicking the sign in button, etc.
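For illustration, a bare-bones page object sketch in the spirit of the answer; the URL, locators, page names and the Selenium 3 find_element_by_* calls are assumptions:

```python
from selenium import webdriver

class LoginPage:
    """All locators and logic for the login page live in this class."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element_by_id("username").send_keys(username)
        self.driver.find_element_by_id("password").send_keys(password)
        self.driver.find_element_by_id("sign-in").click()
        return HomePage(self.driver)          # next page object in the flow

class HomePage:
    def __init__(self, driver):
        self.driver = driver

    def is_loaded(self):
        return "Dashboard" in self.driver.title

# a test script only consumes the page-object API
driver = webdriver.Chrome()
driver.get("https://example.test/login")      # hypothetical URL
home = LoginPage(driver).login("user", "secret")
assert home.is_loaded()
driver.quit()
```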
1
0
0
0
Edited Question: I guess I worded my previous question improperly, I actually want to get away from "unit tests" and create automated, modular system tests that build off of each other to test the application as whole. Many parts are dependent upon the previous pages and subsequent pages cannot be reached without first performing the necessary steps on the previous pages. For example (and I am sorry I cannot give the actual code), I want to sign into an app, then insert some data, then show that the data was sent successfully. It is more involved than that, however, I would like to make the web driver portion, 'Module 1.x'. Then the sign in portion, 'Module 2.x'. The data portion, 'Module 3.x'. Finally, success portion, 'Module 4.x'. I was hoping to achieve this so that I could eventually say, "ok, for this test, I need it to be a bit more complicated so let's do, IE (ie. Module 1.4), sign in (ie. Module 2.1), add a name (ie Module 3.1), add an address (ie. Module 3.2), add a phone number (ie Module 3.3), then check for success (ie Module 4.1). So, I need all of these strung together. (This is extremely simplified and just an example of what I need to occur. Even in the case of the unit tests, I am unable to simply skip to a page to check that the elements are present without completing the required prerequisite information.) The issue that I am running into with the lengthy tests that I have created is that each one requires multiple edits when something is changed and then multiplied by the number of drivers, in this case Chrome, IE, Edge and Firefox (a factor of 4). Maybe my approach is totally wrong but this is new ground for me, so any advice is much appreciated. Thank you again for your help! Previous Question: I have found many answers for creating unit tests, however, I am unable to find any advice on how to make said tests sequential. I really want to make modular tests that can be reused when the same action is being performed repeatedly. I have tried various ways to achieve this but I have been unsuccessful. Currently I have several lengthy tests that reuse much of the same code in each test, but I have to adjust each one individually with any new changes. So, I really would like to have .py files that only contain a few lines of code for the specific task that I am trying to complete, while re-using the same browser instance that is already open and on the page where the previous portion of the test left off. Hoping to achieve this by 'calling' the smaller/modular test files. Any help and/or examples are greatly appreciated. Thank you for your time and assistance with this issue. Respectfully, Billiamaire
Is it possible to make sequential tests in Python-Selenium tests?
1
0
1
0
1
150
48,323,434
2018-01-18T14:20:00.000
2
0
1
0
0
javascript,python,regex,regex-negation,regex-group
0
48,323,646
0
2
0
false
0
0
(^|\s)(\w*(\.))+ - this may satisfy the sample text you've posted. You can find all '.' occurrences in the third group. UPDATE: if your text has words starting with any other symbol, for instance #asd.qwe.zxc, you can improve your regex: (^|\s)[^@]?(\w*(\.))+
1
1
0
0
assuming the following sentence: this is @sys.any and. here @names hello. and good.bye how would I find all the '.' besides the ones appearing in words that start with @? disclaimer, been playing at regex101 for over 2 hours now after reading a few answers on SO and other forums.
Regex find char '.' except for words starting with @
0
0.197375
1
0
0
98
48,330,515
2018-01-18T21:21:00.000
1
0
0
0
1
python,flask
0
48,330,614
0
3
0
false
1
0
To make an application work with a public host you have to make sure port forwarding is enabled in your modem/router; you can then establish a connection with the nginx server.
2
0
0
0
So I've currently got a flask app that I'm using to run a testing app, (this works on local host) but I cant work out how to launch it so I can test the connectivity from other devices (public). can someone explain how I can go about launching it, or at least point me in the right direction to some documentation about how to make it public. I don't think I'm either port forwarding it correctly or i need a web server like xampp to run it. thanks
Launching a flask app/website so other networks can connect
0
0.066568
1
0
0
243
48,330,515
2018-01-18T21:21:00.000
0
0
0
0
1
python,flask
0
48,330,687
0
3
0
false
1
0
If you change the host address of the Flask server from the default 127.0.0.1 to 0.0.0.0 (or to the IP address of your computer, e.g. 192.168.1.2), the other clients on your local network can connect. If you want to expose your app to the whole of the internet you should get a host (e.g. try heroku.com) that has a fixed IP assigned and is reachable from the internet.
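A minimal sketch of the host change the answer describes; the port and route are arbitrary:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "reachable from other devices on the network"

if __name__ == "__main__":
    # 0.0.0.0 binds to all interfaces; the default (127.0.0.1) is local-only
    app.run(host="0.0.0.0", port=5000)
```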
2
0
0
0
So I've currently got a flask app that I'm using to run a testing app, (this works on local host) but I cant work out how to launch it so I can test the connectivity from other devices (public). can someone explain how I can go about launching it, or at least point me in the right direction to some documentation about how to make it public. I don't think I'm either port forwarding it correctly or i need a web server like xampp to run it. thanks
Launching a flask app/website so other networks can connect
0
0
1
0
0
243
48,344,081
2018-01-19T15:09:00.000
0
0
0
0
1
python,minimization,simulated-annealing
0
48,346,637
0
1
0
false
0
0
Gonna answer my own question here. I climbed into the actual .cpp code and found the answers. In Corana's method, you select how many total iterations N of annealing you want. Then the minimization is a nested series of loops where you vary the step sizes, number of step-size adjustments, and temperature values at user-defined intervals. In PAGMO, they changed this so you explicitly specify how many times you will do these. Those are the n_* parameters and bin_size. I don't think bin_size is a good name here, because it isn't actually a size. It is the number of steps taken through a bin range, such that N=n_T_adj * n_range_adj * bin_range. I think just calling it n_bins or n_bins_adj makes more sense. Every bin_size function evaluations, the stepsize is modified (see below for limits). In Corana's method you specify the multiplicative factor to decrease the temperature each time it is needed; it could be that you reach the minimum temp before running out of iterations, or vice versa. In PAGMO, the algorithm automatically computes the temperature-change factor so that you reach Tf at the end of the iteration sequence: r_t=(Tf/Ts)**(1/n_T_adj). The start_range is, I think, a bad name for this variable. The stepsize in the alorithm is a fraction between 0 and start_range which defines the width of the search bins between the upper and lower bounds for each variable. So if stepsize=0.5, width=0.5*(upper_bound-lower_bound). At each iteration, the step size is adjusted based on how many function calls were accepted. If the step size grows larger than start_range, it is reset to that value. I think I would call it step_limit instead. But there you go.
1
0
1
0
I'm using the PYGMO package to solve some nasty non-linear minimization problems, and am very interested in using their simulated_annealing algorithm, however it has a lot of hyper-parameters for which I don't really have any good intuition. These include: Ts (float) – starting temperature Tf (float) – final temperature n_T_adj (int) – number of temperature adjustments in the annealing schedule n_range_adj (int) – number of adjustments of the search range performed at a constant temperature bin_size (int) – number of mutations that are used to compute the acceptance rate start_range (float) – starting range for mutating the decision vector Let's say I have a 4 dimensional geometric registration (homography) problem with variables and search ranges: x1: [-10,10] (a shift in x) x2: [10,30] (a shift in y) x3: [-45,0] (rotation angle) x4: [0.5,2] (scaling/magnification factor) And the cost function for a random (bad) choice of values is 50. A good value is around zero. I understand that Ts and Tf are for the Metropolis acceptance criterion of new solutions. That means Ts should be about the expected size of the initial changes in the cost function, and Tf small enough that no more changes are expected. In Corana's paper, there are many hyperparameters listed that make sense: N_s is the number of evaluation cycles before changing step sizes, N_T are the number of step-size changes before changing the temperature, and r_T is the factor by which the temp is reduced each time. However, I can't figure out how these correlate to pygmo's parameters of n_T_adj, n_range_adj, bin_size, and start_range. I'm really curious if anyone can explain how pygmo's hyperparameters are used, and how they relate to the original paper by Corana et al?
PAGMO/PYGMO: Anyone understand the options for Corana’s Simulated Annealing?
0
0
1
0
0
175
48,349,091
2018-01-19T20:32:00.000
0
0
1
0
0
python,c,gcc
0
48,350,932
0
1
0
false
0
0
Well, assuming you want to handle all types of projects and their dependencies (which is not easy), the best way is to have a module that generates a Makefile for the project and use it to compile and resolve all dependencies.
1
0
0
0
I'm trying to build a simple IDE that is web based in Python. For now, this IDE will support C only. I know it is possible to call the gcc with Python to compile and run a single C file. But what if I would like to compile and run multiple C files from a single project (i.e. linking .h files and .c files), is this possible? If yes, can you please tell me how?
Calling gcc to compile multiple files with Python
1
0
1
0
0
330
48,353,544
2018-01-20T07:00:00.000
0
0
0
0
0
python,python-3.x,amazon-redshift,aws-glue
0
54,622,059
0
3
0
false
0
0
AWS Glue should be able to process all the files in a folder irrespective of the name in a single job. If you don’t want the old file to be processed again move it using boto3 api for s3 to another location after each run.
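A small boto3 sketch of the "move it after each run" idea; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-glue-bucket"
key = "incoming/filename01142018.csv"

# archive the processed file so the next Glue run does not pick it up again
s3.copy_object(
    Bucket=bucket,
    Key="processed/" + key.split("/")[-1],
    CopySource={"Bucket": bucket, "Key": key},
)
s3.delete_object(Bucket=bucket, Key=key)
```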
1
1
1
0
Within AWS Glue how do I deal with files from S3 that will change every week. Example: Week 1: “filename01072018.csv” Week 2: “filename01142018.csv” These files are setup in the same format but I need Glue to be able to change per week to load this data into Redshift from S3. The code for Glue uses native Python as the backend.
Aws Glue - S3 - Native Python
0
0
1
0
0
292
48,370,499
2018-01-21T18:54:00.000
0
1
0
0
0
python,django,django-apps
0
48,372,180
0
2
1
false
1
0
As a simple idea, the insertion of a new message into the database should be done with a condition that limits their number (only insert if the count of the user's previous messages isn't > max). Another method: only show the message input when a query like (select * from table where userid=session and count(usermsg) < max) returns a result.
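A hedged sketch of that counting idea for django-postman; the Message model field names (sender, sent_at) are assumptions about that app's schema and should be checked against its actual models:

```python
from datetime import timedelta
from django.utils import timezone
from postman.models import Message   # django-postman (field names assumed)

MAX_PER_DAY = 20                      # arbitrary quota

def can_send(user):
    """True if the user has sent fewer than MAX_PER_DAY messages in the last 24h."""
    since = timezone.now() - timedelta(days=1)
    sent = Message.objects.filter(sender=user, sent_at__gte=since).count()
    return sent < MAX_PER_DAY
```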
1
3
0
0
I have been using Django Postman for a few weeks now, and in order to limit the number of messages sent by each user, I have been wondering what would be the best way to limit the number of messages a user can send a day, a week... using Django-postman? I have been browsing dedicated documentation for weeks too in order to find an answer for the how, but I think this is not a usecase for now, and I do not really know how to manage that. Of course I am not looking for a well cooked answer, but I would like to avoid writing labyrinthine code, so maybe just a few ideas about it could help me to see clear through that problematic. Many thanks for your help on that topic!
Is it possible to limit the number of messages a user can send a day with Django postman?
1
0
1
0
0
427
48,386,293
2018-01-22T16:37:00.000
3
0
0
0
0
python,arrays,algorithm
0
48,389,275
0
1
0
false
0
0
Here is my approach that I managed to come up with. First of all we know that the resulting array will contain N+M elements, meaning that the left part will contain (N+M)/2 elements, and the right part will contain (N+M)/2 elements as well. Let's denote the resulting array as Ans, and denote the size of one of its parts as PartSize. Perform a binary search operation on array A. The range of such binary search will be [0, N]. This binary search operation will help you determine the number of elements from array A that will form the left part of the resulting array. Now, suppose we are testing the value i. If i elements from array A are supposed to be included in the left part of the resulting array, this means that j = PartSize - i elements must be included from array B in the first part as well. We have the following possibilities: j > M this is an invalid state. In this case it means we still need to choose more elements from array A, so our new binary search range becomes [i + 1, N]. j <= M & A[i+1] < B[j] This is a tricky case. Think about it. If the next element in array A is smaller than the element j in array B, this means that element A[i+1] is supposed to be in the left part rather than element B[j]. In this case our new binary search range becomes [i+1, N]. j <= M & A[i] > B[j+1] This is close to the previous case. If the next element in array B is smaller than the element i in array A, the means that element B[j+1] is supposed to be in the left part rather than element A[i]. In this case our new binary search range becomes [0, i-1]. j <= M & A[i+1] >= B[j] & A[i] <= B[j+1] this is the optimal case, and you have finally found your answer. After the binary search operation is finished, and you managed to calculate both i and j, you can now easily find the value of the median. You need to handle a few cases here depending on whether N+M is odd or even. Hope it helps!
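A compact sketch of the partition search just described; it binary-searches the shorter array and uses sentinels instead of spelling out every boundary case:

```python
def find_median_sorted_arrays(A, B):
    """O(log(min(N, M))) median of two sorted lists via the partition idea above."""
    if len(A) > len(B):
        A, B = B, A                   # binary-search the shorter array
    n, m = len(A), len(B)
    half = (n + m + 1) // 2
    lo, hi = 0, n
    while lo <= hi:
        i = (lo + hi) // 2            # elements taken from A for the left part
        j = half - i                  # elements taken from B for the left part
        a_left = A[i - 1] if i > 0 else float("-inf")
        a_right = A[i] if i < n else float("inf")
        b_left = B[j - 1] if j > 0 else float("-inf")
        b_right = B[j] if j < m else float("inf")
        if a_left <= b_right and b_left <= a_right:   # valid partition found
            if (n + m) % 2:
                return max(a_left, b_left)
            return (max(a_left, b_left) + min(a_right, b_right)) / 2.0
        if a_left > b_right:
            hi = i - 1                # took too many from A
        else:
            lo = i + 1                # took too few from A

print(find_median_sorted_arrays([1, 3, 8], [2, 4, 6, 10]))  # 4
```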
1
1
1
0
I'm working on a competitive programming problem where we're trying to find the median of two sorted arrays. The optimal algorithm is to perform a binary search and identify splitting points, i and j, between the two arrays. I'm having trouble deriving the solution myself. I don't understand the initial logic. I will follow how I think of the problem so far. The concept of the median is to partition the given array into two sets. Consider a hypothetical left array and a hypothetical right array after merging the two given arrays. Both these arrays are of the same length. We know that the median given both those hypothetical arrays works out to be [max(left) + min(right)]/2. This makes sense so far. But the issue here is now knowing how to construct the left and right arrays. We can choose a splitting point on ArrayA as i and a splitting point on ArrayB as j. Note that len(ArrayB[:j] + ArrayB[:i]) == len(ArrayB[j:] +ArrayB[i:]). Now we just need to find the cutting points. We could try all splitting points i, j such that they satisfy the median condition. However this works out to be O(m*n) where M is size of ArrayB and where N is size of ArrayA. I'm not sure how to get where I am to the binary search solution using my train of thought. If someone could give me pointers - that would be awesome.
How to find the median between two sorted arrays?
0
0.53705
1
0
0
711
48,408,263
2018-01-23T18:13:00.000
2
0
1
0
0
python,python-3.x,list,loops
1
48,408,389
0
1
0
false
0
0
You only need to remember the sum and the number of inputs in two variables that are updated when the user writes a number. When the user enters 'done', compute the mean (sum / number_of_inputs).
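A minimal sketch of that approach for the exercise, keeping only a running total and a count:

```python
total = 0.0
count = 0
while True:
    entry = input("Enter a number (or 'done'): ")
    if entry == "done":
        break
    try:
        total += float(entry)
        count += 1
    except ValueError:
        print("Invalid input")

if count:
    print("Total:", total, "Count:", count, "Average:", total / count)
```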
1
0
0
0
I'm a beginner and my textbook just covered iterations and loops in Python. Lists have only been given cursory coverage at this point. The exercise I'm struggling with is this: Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered print out the total, count and average of all the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number. All of this I can manage, except for how to get the program to store multiple user inputs. No matter what I write I only end manipulating the last number entered. Considering we haven't formally covered lists yet I find it hard to believe I should be using append, and therefore must be overthinking this problem to death. Any and all advice is much appreciated.
Possible to write a loop in Python that stores user input for future manipulation without using append?
0
0.379949
1
0
0
89
48,435,165
2018-01-25T03:29:00.000
3
0
1
0
0
python,generator,yield
0
48,435,262
0
2
0
true
0
0
Simply put, yield delays the execution but remembers where it left off. However, more specifically, when yield is called, the variables in the state of the generator function are saved in a "frozen" state. When yield is called again, the built in next function sends back the data in line to be transmitted. If there is no more data to be yielded (hence a StopIteration is raised), the generator data stored in its "frozen" state is discarded.
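A tiny example of that behaviour: the generator below never holds 1..100 in memory, only the current value of n in its suspended frame:

```python
def numbers(limit):
    n = 1
    while n <= limit:
        yield n      # execution pauses here; only n and limit are kept alive
        n += 1

gen = numbers(100)
print(next(gen))          # 1
print(next(gen))          # 2 (resumes from the saved state; nothing was pre-stored)
print(sum(numbers(100)))  # 5050, still produced one value at a time
```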
1
3
0
0
I understand generator generates value once at a time, which could save a lot memory and not like list which stores all value in memory. I want to know in python, how yield knows which value should be returned during the iteration without storing all data at once in memory? In my understanding, if i want to print 1 to 100 using yield, it is necessary that yield needs to know or store 1 to 100 first and then move point one by one to return value ? If not, then how yield return value once at a time, but without storing all value in memory?
where does the yield store value in python
1
1.2
1
0
0
1,899
48,453,620
2018-01-25T23:32:00.000
1
0
0
0
0
java,python,arrays,jython
0
48,453,704
0
3
0
false
1
0
If you want a simple solution then I suggest that you write and read the integers to a file. Perhaps not the most elegant way but it would only take a couple of minutes to implement.
3
0
0
1
I have a Java program and I need it to get some data calculated by a Python script. I've already got Java to send an integer to Python via Jython's PythonInterpreter and displayed it, but I can't recover it to perform other operations. Also, it would be great to send a full integer array rather than a single integer, but I can't wrap my mind around PyObjects and how to use them. Is there any useful tutorial that covers arrays? I've been searching for a while but I just find integer- and float-related tutorials.
How can I send a data array back and forth between java and python?
0
0.066568
1
0
0
298
48,453,620
2018-01-25T23:32:00.000
0
0
0
0
0
java,python,arrays,jython
0
48,453,799
0
3
0
false
1
0
If the solution of writing/reading the numbers to a file somehow is not sufficient, you can try the following: Instead of using Jython, you can use Pyro4 (and the Pyrolite client library for your java code) to call a running Python program from your java code. This allows you to run your python code in a 'normal' python 3.6 interpreter for instance, rather than being limited to what version Jython is stuck on. You'll have to launch the Python interpreter in a separate process though (but this could very well even be on a different machine)
3
0
0
1
I have a Java program and I need it to get some data calculated by a Python script. I've already got Java to send an integer to Python via Jython's PythonInterpreter and displayed it, but I can't recover it to perform other operations. Also, it would be great to send a full integer array rather than a single integer, but I can't wrap my mind around PyObjects and how to use them. Is there any useful tutorial that covers arrays? I've been searching for a while but I just find integer- and float-related tutorials.
How can I send a data array back and forth between java and python?
0
0
1
0
0
298
48,453,620
2018-01-25T23:32:00.000
1
0
0
0
0
java,python,arrays,jython
0
48,454,007
0
3
0
false
1
0
I've worked on a similar project. Here's a brief outline of what Java and Python were doing respectively. Java: We used Java as the main server for receiving requests from clients and sending back responses after some data manipulation. Python: Python was in charge of the data manipulation or calculation. Data was sent from Java via a socket connection. We first defined the data we needed in string format, then converted it into bytes in order to have it sent over the socket. Since there were limitations with the socket approach, though, I changed it to a REST API using Python Flask. That way we could easily communicate with, not only but in this case mainly, Java using key-value JSON format. With this, I was able to receive any data type that could be passed through the API, including the array object you mentioned.
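For illustration, a minimal Flask endpoint of the kind described, which a Java client could call over HTTP; the route and payload shape are invented:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/calculate", methods=["POST"])
def calculate():
    values = request.get_json()["values"]      # e.g. {"values": [1, 2, 3]} posted by Java
    return jsonify({"result": sum(values)})    # arrays and nested JSON work too

if __name__ == "__main__":
    app.run(port=5000)
```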
3
0
0
1
I have a Java program and I need it to get some data calculated by a Python script. I've already got Java to send an integer to Python via Jython's PythonInterpreter and displayed it, but I can't recover it to perform other operations. Also, it would be great to send a full integer array rather than a single integer, but I can't wrap my mind around PyObjects and how to use them. Is there any useful tutorial that covers arrays? I've been searching for a while but I just find integer- and float-related tutorials.
How can I send a data array back and forth between java and python?
0
0.066568
1
0
0
298
48,466,626
2018-01-26T17:34:00.000
0
0
0
1
1
python,linux,server,nohup,foreground
0
48,466,757
0
1
0
false
0
0
If you nohup a process, when you log out the parent of the process switches to being init (1) and you can't get control of it again. The best approach is to have the program open a socket and then use that for ipc. You probably want to split your code in to 2 pieces - a daemon that runs in the background and keeps a socket open, and a client which connects to the socket to control the daemon.
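A bare-bones sketch of the daemon/client split suggested above; the port and the command protocol are made up, and the two parts would live in separate scripts:

```python
import socket

# daemon.py -- started once under nohup, keeps listening after logout
def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 5555))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        command = conn.recv(1024).decode()
        conn.sendall(("handled: " + command).encode())  # admin commands handled here
        conn.close()

# admin.py -- run after logging back in, talks to the running daemon
def send(command):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 5555))
    cli.sendall(command.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply
```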
1
1
0
0
I am working on a chat program with Python. I would like to use nohup so that users can still access the server while I am logged out. I can run it with nohup just fine. But I am an admin, and I can write messages and see online users using Python; after I start it with nohup and log out, when I log back in I can't access the Python process any more. I want to bring it to the foreground again. I can see it in the background with ps -aux, and I see its PID and STAT, but I don't know how to access it. jobs doesn't see it and fg doesn't work. How can I do this?
Access a process running in the background with nohup (Linux) and bring it to the foreground
0
0
1
0
0
122
48,478,792
2018-01-27T17:57:00.000
1
0
0
0
1
python-3.x,video,pyqt5
0
49,198,231
0
1
0
false
0
1
Okay, so I couldn't find anything on "MP4 and green line" so I looked at how to modify the PyQt5 interface as a way of hiding the issue. The option I chose was QGroupBox and changing the padding in the stylesheet to -9 (in my particular case - you may find another value works better but it depends on the UI). I did attempt to use QFrame, as my other option, but this didn't personally work for me.
1
0
0
0
I've made a desktop app using Python 3 and PyQt 5 and it works except for the playback of the MP4 video files (compiled by pyrcc5). They are visible and play on the video widget but there is a green line down the right side. I tried to put a green frame (using a Style Sheet) around the QVideoWidget but with no success. Does anyone have any advice on how to resolve this issue? Thanks
Python 3, PyQt 5 - MP4 as resource file issue
0
0.197375
1
0
0
273
48,480,183
2018-01-27T20:24:00.000
0
0
0
0
0
python,python-3.x,postgresql,psycopg2,tor
0
48,602,297
0
1
0
false
0
0
This would be easy enough if I simply opened the database VPS to accept connections from anywhere Here lies your issue. Just simply lock down your VPS using fail2ban and ufw. Create a ufw role to only allow connection to your Postgres port from the IP address you want to give access from to that VPS ip address. This way, you don't open your Postgres port to anyone (from *) but only to a specific other server or servers that you control. This is how you do it. Don't run an onion service to connect Postgres content because that will only complicate things and slow down the reads to your Postgres database that I am assuming an API will be consuming eventually to get to the "useful data" you will be scraping. I hope that at least points you in the right direction. Your question was pretty general, so I am keeping my answer along the same vein.
1
0
0
0
I'm creating a Python 3 spider that scrapes Tor hidden services for useful data. I'm storing this data in a PostgreSQL database using the psycopg2 library. Currently, the spider script and the database are hosted on the same network, so they have no trouble communicating. However, I plan to migrate the database to a remote server on a VPS so that I can have a team of users running the spider script from a number of remote locations, all contributing to the same database. For example, I could be running the script at my house, my friend could run it from his VPS, and my professor could run the script from a few different systems in the lab at the university, and all of these individual systems could synchronize with the PostgreSQL server runnning on my remote VPS. This would be easy enough if I simply opened the database VPS to accept connections from anywhere, making the database public. However, I do not want to do this, for security reasons. I know I could tunnel the connection through SSH, but that would require giving each person a username and password that would grant them access to the server itself. I don't wish to do this. I'd prefer simply giving them access to the database without granting access to a shell account. I'd prefer to limit connections to the local system 127.0.0.1 and create a Tor hidden service .onion address for the database, so that my remote spider clients can connect to the database .onion through Tor. The problem is, I don't know how to connect to a remote database through a proxy using psycopg2. I can connect to remote databases, but I don't see any option for connecting through a proxy. Does anyone know how this might be done?
Connect to remote PostgreSQL server over Tor? [python] [Tor]
0
0
1
1
0
367
48,481,203
2018-01-27T22:28:00.000
0
1
1
1
0
python,atom-editor
0
48,512,683
0
1
0
true
0
0
Only way I can change Atom python is to run it from a directory that has a different default python version. if I type python from a terminal window, whichever version of python that opens is the version Atom uses. I use virtual environments so I can run python 2.7.13 or python 3.6. If I want Atom to run python 3, I activate my python 3 environment and then run atom. There may be a way to do this from within Atom but I haven't found it yet.
1
0
0
0
I have downloaded Anaconda on my computer however Anaconda is installed for all users on my mac therefore when I try and access python2.7 by typing in the path: /anaconda3/envs/py27/bin:/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin Even if I open from terminal the path above is not in the current directory since: machintoshHD/anaconda3/.... machintoshHD/Users/adam/desktop.... how can i redirect the configure script feature in the atom package script so that i can run python 2?
Atom script configure script run python 2.7
0
1.2
1
0
0
423
48,481,801
2018-01-27T23:59:00.000
1
0
1
0
0
python,encoding
0
48,481,828
0
2
0
false
0
0
Found the solution. repr() will do.
1
1
0
0
I am encoding Chinese characters using gb18030 in Python. I want to access part of the encoded string. For example, the string for 李 is: '\xc0\xee'. I want to extract 'c0' and 'ee' out of this. However, Python is not treating '\xc0\xee' as an 8 character string, but as a 2 character string. How do I turn it into an 8 character string so that I can access the individual roman letters in it?
how to access part of an encoded (gb18030) string in python
0
0.099668
1
0
0
16
48,494,296
2018-01-29T04:13:00.000
-1
0
1
0
0
python,loops,binary
0
48,494,319
0
4
0
false
0
0
A binary string has been defined as a string that only contains "0" or "1". So, how about checking each 'character' in the string, and if it's not a "0" or "1" you will know that the string is not a binary string.
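A short sketch of that check; the exact wording of the length message is a guess at what the assignment expects:

```python
def AmIBinary(aString):
    # every character must be '0' or '1'
    if all(ch in "01" for ch in aString):
        return aString + " is a binary string."
    return "aString is of length " + str(len(aString))

print(AmIBinary("10110"))   # 10110 is a binary string.
print(AmIBinary("10210"))   # aString is of length 5
```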
1
0
0
0
The question on my assignment is as follows: Write a function that takes, as an argument, a string, identified by the variable aString. If the string only contains digits 0 and 1, return the string formed by concatenating the argument with the string "is a binary string." Otherwise, return a string indicating the length of the argument, as specified in the examples that follow. Name this function AmIBinary(aString). I am having trouble figuring out how to form a loop which searches through a string and determines whether or not the string is a binary string. I understand how to get the length of a string, I just don't understand how to figure out if it is a binary string.
Python Binary String Loop
0
-0.049958
1
0
0
2,689
48,501,645
2018-01-29T12:47:00.000
0
0
0
0
0
windows,python-3.x,powershell,winrm,wsman
0
64,877,092
0
1
0
false
0
0
Actually, I had a quick look at the code of pywinrm (as of 20201117) and the "Session" is not an actual session in the traditional sense, but only an object holding the credentials to authenticate. Each time run_cmd or run_ps is invoked, a session is opened on the target and closed on completion of the task. So there's nothing to close, really.
1
3
0
0
hello I'm using PyWinRM to poll a remote windows server. s = winrm.Session('10.10.10.10', auth=('administrator', 'password')) As there is no s.close() function available, I am worried about leaking file descriptors. I've checked by using lsof -p <myprocess> | wc -l and my fd count is stable but my google searches show that ansible had fd leaks previously; ansible relies on pywinrm to manage remote window hosts as well kindly advice, thanks!
how do you close a Pywinrm session?
0
0
1
0
0
987
48,503,540
2018-01-29T14:27:00.000
0
0
0
0
0
python,graph,networkx
0
48,504,499
0
2
0
false
0
0
I guess you can use a directed graph and store the direction as an attribute if you don't need to represent that directed graph.
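One possible encoding of the answer's suggestion: keep a DiGraph and record on each edge whether it is really directed and which arrow style it uses (the attribute names are arbitrary):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("a", "b", directed=True, arrow=">")
G.add_edge("b", "c", directed=False, arrow="*")   # stored as b->c but treated as undirected

for u, v, data in G.edges(data=True):
    print(u, v, data)
```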
2
0
0
0
I'm looking for a way to implement a partially undirected graph, that is, a graph where edges can be directed (or not) and carry different types of arrow (>, *, #, etc.). My problem is that when I try to use an undirected graph from Networkx and store the arrow type as an attribute, I don't find an efficient way to tell networkx whether that attribute (arrow type) goes from a to b or from b to a. Does anyone know how to handle this?
Partially undirect graphs in Networkx
0
0
1
0
1
215
48,503,540
2018-01-29T14:27:00.000
0
0
0
0
0
python,graph,networkx
0
48,583,218
0
2
0
true
0
0
After searching a lot of different sources, the only way I've found to do a partially undirected graph is through adjacency matrices. Networkx has good tools to move between a graph and an adjacency matrix (in pandas and numpy array format). The disadvantage is that if you need networkx functions you have to program them yourself, or convert the adjacency matrix to networkx format and then convert it back to your previous adjacency matrix.
2
0
0
0
I'm looking for a way to implement a partially undirected graph, that is, a graph where edges can be directed (or not) and carry different types of arrow (>, *, #, etc.). My problem is that when I try to use an undirected graph from Networkx and store the arrow type as an attribute, I don't find an efficient way to tell networkx whether that attribute (arrow type) goes from a to b or from b to a. Does anyone know how to handle this?
Partially undirect graphs in Networkx
0
1.2
1
0
1
215
48,525,733
2018-01-30T16:03:00.000
0
0
0
0
0
python,tensorflow,amazon-s3,amazon-sagemaker
0
48,531,968
0
1
0
false
1
0
Create an object in S3 and enable versioning to the bucket. Everytime you change the model and save it to S3, it will be automatically versioned and stored in the bucket. Hope it helps.
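A minimal boto3 sketch of turning on bucket versioning and uploading a checkpoint; the bucket and file names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-model-bucket"

# enable versioning once; every later upload of the same key keeps old versions
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
s3.upload_file("model.ckpt", bucket, "checkpoints/model.ckpt")
```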
1
0
1
0
I am executing a Python-Tensorflow script on Amazon Sagemaker. I need to checkpoint my model to the S3 instance I am using, but I can't find out how to do this without using the Sagemake Tensorflow version. How does one checkpoint to an S3 instance without using the Sagemaker TF version?
TensorFlow Checkpoints to S3
0
0
1
0
0
895
48,528,477
2018-01-30T18:46:00.000
0
0
1
0
0
python,anaconda,packages,python-idle
0
48,535,641
0
2
0
false
0
0
The IDLE that comes with python 3.5.2 can only be run by python 3.5.2. Code you submit to python 3.5.2 through that IDLE can normally only access packages installed for 3.5.2, plus your own code. I believe Anaconda 3.6.3 comes with Python 3.6.3 and the 3.6.3 standard library, including the 3.6.3 version of idlelib. In order for your code to use the packages installed with anaconda, you must run your code with the anaconda binary. To run your code from IDLE with the anaconda binary, you must run IDLE with that binary instead of some other binary (like the 3.5.2 binary. When running Python 3.6.3 interactively, you can start IDLE 3.6.3 at a >>> prompt with import idlelib.idle. If you can start python 3.6.3 in a terminal (Command Prompt on Windows), then adding the arguments -m idlelib will start IDLE. On Windows, I have no idea whether or not Anaconda adds 'Edit with IDLE 3.6.3' to the right-click context menu for .py and .pyw files, the way the python.org installer does. On any system, you should be able to create a file or icon that will start 3.6.3 with IDLE, but details depend heavily on OS and version.
1
1
0
0
The title says it all: I want to be able to use the packages that are installed with Anaconda from IDLE, so is there any way of making this work? When I try to import packages in IDLE that I installed using Anaconda, it says the package is not found. I need some help please, and thank you in advance.
how can i get IDLE (Python) to use packages installed by anaconda (windows 7 32bit)?
1
0
1
0
0
3,178
48,532,069
2018-01-30T23:20:00.000
1
0
0
0
1
python-3.x,machine-learning,deep-learning,computer-vision,imblearn
0
48,550,016
0
2
0
false
0
0
Thanks for the clarification. In general, you don't oversample in Python; rather, you pre-process your data set, duplicating the under-represented classes. In the case you cite, you might duplicate everything in class B, and make 5 copies of everything in class C. This gives you a new balance of 1000:600:500, likely more palatable to your training routines. Instead of the original 1400 images, you now shuffle 2100. Does that solve your problem?
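As a rough sketch of that duplication step, assuming the images are referenced by file paths (the file names below are invented), sklearn.utils.resample can draw the extra copies with replacement:

```python
from sklearn.utils import resample

# Hypothetical lists of image file paths (or loaded arrays), one list per class.
class_a = ["a_%d.png" % i for i in range(1000)]
class_b = ["b_%d.png" % i for i in range(300)]
class_c = ["c_%d.png" % i for i in range(100)]

# Draw extra copies of the minority classes with replacement.
class_b_up = resample(class_b, replace=True, n_samples=600, random_state=42)
class_c_up = resample(class_c, replace=True, n_samples=500, random_state=42)

# 1000:600:500, matching the balance described above; shuffle before training.
dataset = class_a + class_b_up + class_c_up
print(len(dataset))
```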
1
3
1
0
I am working on a multiclass classification problem with an unbalanced dataset of images(different class). I tried imblearn library, but it is not working on the image dataset. I have a dataset of images belonging to 3 class namely A,B,C. A has 1000 data, B has 300 and C has 100. I want to oversample class B and C, so that I can avoid data imbalance. Please let me know how to oversample the image dataset using python.
How to oversample image dataset using Python?
0
0.099668
1
0
0
2,828
48,545,255
2018-01-31T15:03:00.000
3
0
1
0
0
python,visual-studio-2015,static-libraries,libcmtd
1
49,984,841
0
1
0
true
0
0
This is what I needed to do to build and use python statically embedded in another application. To build the static python library (e.g., python36_d.lib, python36.lib) Convert ALL projects in the python solution (pcbuild.sln) to static. This is about 40 projects, so it may take awhile. This includes setting library products to be build as 'static lib', and setting all /MD and /MDd build options to /MT and /MTd. For at least the pythoncore project alter the Preprocess define to be Py_NO_ENABLE_SHARED. This tells the project it will be looking for calls from static libraries. By hook or crook, find yourself a pyconfig.h file and put it in the Include area of your Python build. It is unclear how this file is built from Windows tools, but one seems to be able to snag one from other sources and it works ok. One could probably grab the pyconfig.h from the Pre-compiled version of the code you are building. [By the way, the Python I built was 3.6.5 and was built with Windows 2015, update 3.] Hopefully, this should enable you to build both python36.lib and python36_d.lib. Now you need to make changes to your application project(s) to enable it to link with the python library. You need to do this: Add the Python Include directory to the General->Include Directories list. Add the Python Library directories to the General->Library Directories lists. This will be ..\PCBuild\win32 and ..\PCBuild\amd64. Add the define Py_NO_ENABLE_SHARED to the C/C++ -> Preprocessor area. For Linker->input add (for releases) python36.lib;shlwapi.lib;version.lib and (for debugs) python36_d.lib;shlwapi.lib;version.lib. And that should be it. It should run and work. But one more thing. In order to be able to function, the executable needs to access the Lib directory of the python build. So a copy of that needs to be moved to wherever the executable (containing the embedded python) resides. Or you can add the Lib area to the execution PATH for windows. That should work as well. That's about all of it.
1
2
0
0
I am working with the 3.6.4 source release of Python. I have no trouble building it with Visual Studio as a dynamic library (/MDd) I can link the Python .dll to my own code and verify its operation. But when I build it (and my code) with (/MTd) it soon runs off the rails when I try to open a file with a Python program. A Debug assertion fails in read.cpp ("Expression: _osfile(fh) & FOPEN"). What I believe is happening is the Python .dll is linking with improper system libraries. What I can't figure out is how to get it to link with the correct ones (static libraries).
I cannot build python.dll as a static library (/MTd) using Visual Studio
0
1.2
1
0
0
1,688
48,558,151
2018-02-01T08:18:00.000
7
0
0
0
0
user-interface,automation,sl4a,pyautogui,qpython3
0
48,928,935
0
2
0
false
0
1
Something like PyAutoGUI for Android is AutoInput. It's a part of the main app Tasker. If you've heard about Tasker, then you know what I'm talking about. If not, then Tasker is an app built to automate your Android device. If you need specific taps and scrolls, then you can install the AutoInput plug-in for Tasker. It lets you tap a precise point on your screen. Or, if you have a rooted device, then you can directly run shell commands from Tasker without the need for any plug-in. Now, to get the precise location of an x,y coordinate easily: go to Settings, About Phone, and tap on Build Number until it says, "You've unlocked developer options." Now, in the Settings app, you will have a new Developer Options entry. Click on that, then scroll down till you find "Show Pointer Location" and turn that on. Now wherever you tap and hold the screen, the top part of your screen will give you the x,y coordinate of that spot. I hope this helps. Please comment if you have any queries.
2
3
0
0
I'm currently using Qpython and Sl4A to run python scripts on my Droid device. Does anybody know of a way to use something like PyAutoGUI on mobile to automate tapping sequences (which would be mouse clicks on a desktop or laptop)? I feel like it wouldn't be too hard, but I'm not quite sure how to get the coordinates for positions on the mobile device.
Way to use something like PyAutoGUI on Mobile?
1
1
1
0
0
8,687
48,558,151
2018-02-01T08:18:00.000
2
0
0
0
0
user-interface,automation,sl4a,pyautogui,qpython3
0
66,658,332
0
2
0
false
0
1
Unfortunately no. PyAutoGUI only runs on Windows, macOS, and Linux
2
3
0
0
I'm currently using Qpython and Sl4A to run python scripts on my Droid device. Does anybody know of a way to use something like PyAutoGUI on mobile to automate tapping sequences (which would be mouse clicks on a desktop or laptop)? I feel like it wouldn't be too hard, but I'm not quite sure how to get the coordinates for positions on the mobile device.
Way to use something like PyAutoGUI on Mobile?
1
0.197375
1
0
0
8,687
48,567,012
2018-02-01T16:05:00.000
5
0
0
0
0
python,machine-learning,scikit-learn,keras
0
48,573,176
0
2
0
true
0
0
Metrics in Keras and in Sklearn mean different things. In Keras, metrics are almost the same as losses. They get called during training at the end of each batch and each epoch for reporting and logging purposes. An example use is having the loss 'mse' while you would still like to see 'mae'; in this case you can add 'mae' as a metric to the model. In Sklearn, metric functions are applied to predictions, as per the definition "The metrics module implements functions assessing prediction error for specific purposes". While there is an overlap, the statistical functions of Sklearn don't fit the definition of metrics in Keras: Sklearn metrics can return a float, an array, or a 2D array with both dimensions greater than 1, and the Keras predict method returns no such object. Answer to your question: it depends where you want to trigger the computation. (1) End of each batch or each epoch: you can write a custom callback that is fired at the end of a batch. (2) After prediction: this seems to be easier. Let Keras predict on the entire dataset, capture the result, and then feed the y_true and y_pred arrays to the respective Sklearn metric.
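A minimal, self-contained sketch of option (2); the toy model and random data exist only to make the example runnable:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score, roc_auc_score
from keras.models import Sequential
from keras.layers import Dense

# Toy binary-classification data, only to make the sketch self-contained.
x = np.random.rand(200, 10)
y = (x[:, 0] > 0.5).astype(int)

model = Sequential()
model.add(Dense(8, activation="relu", input_shape=(10,)))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)

# Let Keras predict, then hand y_true / y_pred to the sklearn metrics.
y_prob = model.predict(x).ravel()
y_pred = (y_prob > 0.5).astype(int)
print("F1:   ", f1_score(y, y_pred))
print("Kappa:", cohen_kappa_score(y, y_pred))
print("AUC:  ", roc_auc_score(y, y_prob))
```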
1
2
1
0
Tried googling up, but could not find how to implement Sklearn metrics like cohen kappa, roc, f1score in keras as a metric for imbalanced data. How to implement Sklearn Metric in Keras as Metric?
How to implement Sklearn Metric in Keras as Metric?
0
1.2
1
0
0
2,808
48,572,258
2018-02-01T21:46:00.000
0
0
1
0
0
python,mongodb,ssl,eve
0
48,727,022
0
1
0
true
0
0
Resolved by passing the required parameters (ssl, ssl_ca_certs, etc.) to MongoClient via the MONGO_OPTIONS setting.
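For reference, a sketch of what that could look like in the Eve settings file; the host, database name and certificate path are made up, and the exact option names depend on the installed PyMongo version:

```python
# settings.py for the Eve service (a sketch; host, db name and cert path are made up).
# Note: newer PyMongo releases spell these options tls / tlsCAFile instead of
# ssl / ssl_ca_certs, so match them to the driver version you have installed.
MONGO_HOST = "mongo.example.com"
MONGO_PORT = 27017
MONGO_DBNAME = "mydb"

MONGO_OPTIONS = {
    "ssl": True,
    "ssl_ca_certs": "/path/to/self-signed-ca.pem",  # CA that signed the server cert
}
```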
1
0
0
1
MongoDB uses a self-signed certificate. I want to set up a service on Eve to work with it. I searched the documentation and SO but found only information on how to use a self-signed cert to access Eve itself. What should I do to connect to MongoDB from Eve with a self-signed certificate?
Connect to MongoDB with self-signed certificate from EVE
0
1.2
1
1
0
228
48,573,443
2018-02-01T23:30:00.000
-1
0
0
1
0
python,bash,pycharm,sh
0
48,573,531
0
1
0
false
0
0
You should be able to go to Run->Edit Configurations and then add configurations, including an Interpreter path where you can set the path to your virtualenv bash executable
1
0
0
0
So I'm not sure how to word this correctly but I have a .sh file that I use to turn on my python app and it runs fine when I run this command ./start_dev.sh in my terminal. But I am having trouble trying to have it run in pycharm because my app is run in Ubuntu and I don't know how to direct the interpreter path to my Ubuntu virtualenv bash command. Is this even possible?
Unable to get pycharm to run my .sh file
0
-0.197375
1
0
0
1,297
48,573,989
2018-02-02T00:32:00.000
1
0
1
0
1
python,python-2.7,jupyter-notebook,exe
0
48,626,144
0
1
0
false
0
0
Try exporting your .ipynb file to a normal .py file and running pyinstaller on the .py file instead. You can export as .py by going to File > Download As > Python (.py) in the Jupyter Notebook interface
1
0
0
0
I am using jupyter notebook for coding, hence my file format is ipynb. I would like to turn this piece of code into an executable .exe file for later use. So far I have managed to get the exe file by going to the Anaconda prompt and executing the following command ---> pyinstaller --name ‘name of the exe’ python_code.ipynb. This gives me two folders, build and dist, both containing an .exe file. However, none of them worked, and I would like to know why and how to fix it. When I double-click on the exe, a black cmd window pops up and then goes away; nothing else happens.
ipynb python file to executable exe file
1
0.197375
1
0
0
7,843
48,584,712
2018-02-02T14:23:00.000
0
0
0
1
0
python,multithreading,asynchronous
0
48,588,092
0
2
0
false
1
0
Celery will be your best bet - it's exactly what it's for. If you have a need to introduce dependencies, it's not a bad thing to have dependencies. Just as long as you don't have unneeded dependencies. Depending on your architecture, though, more advanced and locked-in solutions might be available. You could, if you're using AWS, launch an AWS Lambda function by firing off an AWS SNS notification, and have that handle what it needs to do. The sky is the limit.
1
0
0
0
I have seen a few variants of my question but not quite exactly what I am looking for, hence opening a new question. I have a Flask/Gunicorn app that for each request inserts some data in a store and, consequently, kicks off an indexing job. The indexing is 2-4 times longer than the main data write and I would like to do that asynchronously to reduce the response latency. The overall request lifespan is 100-150ms for a large request body. I have thought about a few ways to do this, that is as resource-efficient as possible: Use Celery. This seems the most obvious way to do it, but I don't want to introduce a large library and most of all, a dependency on Redis or other system packages. Use subprocess.Popen. This may be a good route but my bottleneck is I/O, so threads could be more efficient. Using threads? I am not sure how and if that can be done. All I know is how to launch multiple processes concurrently with ThreadPoolExecutor, but I only need to spawn one additional task, and return immediately without waiting for the results. asyncio? This too I am not sure how to apply to my situation. asyncio has always a blocking call. Launching data write and indexing concurrently: not doable. I have to wait for a response from the data write to launch indexing. Any suggestions are welcome! Thanks.
Flask: spawning a single async sub-task within a request
1
0
1
0
0
500
48,595,683
2018-02-03T09:01:00.000
0
0
1
0
0
python,sql,sql-server
0
48,687,137
0
2
1
false
0
0
First install the MySQL connector for Python from the MySQL website, then import the mysql.connector module, initialize a variable to a mysql.connector.connect object, and use cursors to modify and query data. Look at the documentation for more help. If you can live without networking capabilities and with less concurrency, use sqlite; it is better than MySQL in those cases.
1
1
0
0
I have some experience with SQL and Python. In one of my SQL stored procedures I want to use a Python code block and some functions of Python numpy. What is the best way to do it? The SQL Server version is 2014.
how to use python inside SQL server 2014
0
0
1
1
0
662
48,600,583
2018-02-03T18:27:00.000
0
0
1
0
1
python,node.js,rest,api,web-services
0
48,600,936
0
1
0
false
1
0
In terms of thoughts: 1) You can build a REST interface to your python code using Flask. Make REST calls from your nodejs. 2) You have to decide if your client will wait synchronously for the result. If it takes a relatively long time you can use a web hook as a callback for the result.
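A minimal sketch of point 1, assuming a hypothetical /compute endpoint that the Node app would call with a JSON body; the averaging is just a stand-in for the real numpy/python computation:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/compute", methods=["POST"])
def compute():
    data = request.get_json(force=True)          # JSON body sent from the Node app
    numbers = data.get("numbers", [])
    mean = sum(numbers) / len(numbers) if numbers else 0   # stand-in for the real math
    return jsonify({"mean": mean})

if __name__ == "__main__":
    app.run(port=5000)
```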
1
0
0
0
I have the bulk of my web application in React (front-end) and Node (server), and am trying to use Python for certain computations. My intent is to send data from my Node application to a Python web service in JSON format, do the calculations in my Python web service, and send the data back to my Node application. Flask looks like a good option, but I do not intend to have any front-end usage for my Python web service. Would appreciate any thoughts on how to do this.
Python web service with React/Node Application
0
0
1
0
1
415
48,611,425
2018-02-04T18:10:00.000
5
0
0
0
0
python,socket.io,webserver,flask-socketio,eventlet
0
48,616,158
0
2
0
true
1
0
The eventlet web server supports concurrency through greenlets, same as gevent. No need for you to do anything, concurrency is always enabled.
1
2
0
0
I’ve started working a lot with Flask-SocketIO in Python with Eventlet and am looking for a solution to handle concurrent requests/threading. I’ve seen that it is possible with gevent, but how can I do it if I use eventlet?
Handle concurrent requests or threading Flask SocketIO with eventlet
1
1.2
1
0
1
2,111
48,649,927
2018-02-06T18:49:00.000
0
0
0
1
0
eclipse,python-3.x,pandas,pydev
0
71,702,890
0
2
0
false
0
0
I faced the same problem when working with an Eclipse PyDev project. Here are the steps that helped me resolve the issue: in Eclipse open the workspace, click on Window -> Preferences -> PyDev -> Interpreters -> Python Interpreter -> click on Manage with pip. In "Command to execute", enter install pandas. Problem solved.
1
1
0
0
I have installed Python 3.6.4 on my macOS machine and have Eclipse Neon 2.0 running. I have added the PyDev plugin to work on Python projects. I need to import the pandas library, but there is no such option as Windows -> Preferences -> Libraries. Can someone help me with any other way to install Python libraries in Neon 2? And also, how do I run Python commands in a terminal window in PyDev? Thanks!
installing pandas in pydev eclipse neon 2
0
0
1
0
0
1,889
48,656,958
2018-02-07T06:00:00.000
0
1
0
0
0
python,probability,blockchain
0
48,712,283
0
1
0
false
0
0
I figured this out, the solution is below: int(hashlib.sha256(block_header + userID).hexdigest(), 16) / float(2**256) You convert the hash into an integer, then divide that integer by 2**256. This gives you a decimal from 0 to 1 that can be used in place of random.random() to assign the prize.
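Putting that together as a small, self-contained sketch; the block hash and user id below are made up, and string concatenation plus encode() stands in for however the values are actually combined:

```python
import hashlib

def prize_for(block_hash, user_id):
    """Map block hash + user id to a reproducible number in [0, 1) and pick a prize."""
    digest = hashlib.sha256((block_hash + user_id).encode()).hexdigest()
    r = int(digest, 16) / float(2 ** 256)
    if r < 0.05:       # 5% of the range -> jackpot
        return "jackpot"
    if r < 0.30:       # next 25% -> lucky
        return "lucky"
    return "basic"     # remaining 70%

# Made-up values, just to show the call:
print(prize_for("000000000019d6689c085ae165831e93", "user42"))
```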
1
0
0
0
I'm building a project that does a giveaway every Monday. There are three prizes, the basic prize, the "lucky" prize, and the jackpot prize. The basic prize is given out 70% of the time, the lucky prize gets given out 25% of the time and the jackpot prize gets rewarded the final 5% of the time. There are multiple people that get prizes each Monday. Each person in the giveaway gets at least the basic prize. Right now I'm just generating a random number and then assigning the prizes to each participant on my local computer. This works, but the participants have to trust that I'm not rigging the giveaway. I want to improve this giveaway by using the blockchain to be the random number generator. The problem is that I don't know how to do this technically. Here is my start to figuring out how to do it: When the giveaway is created, a block height is defined as being the source of randomness. Each participant has a userID. When the specific block number is found, the block hash is catenated to the userID and then hashed. The resulting hash is then "ranged" across the odds defined in the giveaway. The part I can't figure out is how to do the "ranging". I think it might involve the modulo operator, but I'm not sure. If the resulting hash falls into the 5% range, then that user gets the jackpot prize. If the hash falls into the 70% range, then he gets the basic prize, etc.
Assigning giveaways based on cryptocurrency block header
0
0
1
0
0
27
48,675,264
2018-02-07T23:51:00.000
0
0
0
0
0
python,ftp,ftplib
0
48,761,226
0
1
0
false
0
0
No. Using FTP.retrlines('LIST') is the only solution with FTP protocol. If you need a faster approach, you would have to use another interface. Like shell, some web API, etc.
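If you do accept the slowness, a rough sketch of the recursive walk is still possible with nlst() and cwd(): trying to cwd into an entry tells you whether it is a directory. The server, credentials and target file name below are hypothetical, and some servers list "." and ".." which need filtering.

```python
from ftplib import FTP, error_perm

def find_file(ftp, target, path=""):
    """Recursively look for `target` from the current directory (slow on large trees)."""
    for name in ftp.nlst():
        if name in (".", ".."):          # some servers include these entries
            continue
        try:
            ftp.cwd(name)                # succeeds only if `name` is a directory
            found = find_file(ftp, target, path + "/" + name)
            ftp.cwd("..")
            if found:
                return found
        except error_perm:
            if name == target:           # a plain file
                return path + "/" + name
    return None

# Hypothetical server, credentials and file name:
ftp = FTP("ftp.example.com")
ftp.login("user", "password")
print(find_file(ftp, "report.csv"))
```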
1
0
0
0
I was wondering if there was an effective way to recursively search for a given filename using ftplib. I know that I could use FTP.cwd() and FTP.retrlines('LIST), but I think that it would be rather repetitive and inefficient. Is there something that lets me find files on an FTP server in Python, such as how os can do os.path.walk()?
Finding a file in an FTP server using ftplib
0
0
1
0
0
198
48,677,643
2018-02-08T04:38:00.000
0
0
0
0
0
python,django
0
48,680,841
0
1
0
false
1
0
Load balancing is not anything to do with Django. It is something you implement at a much higher layer, via servers that sit in front of the machines that Django is running on. However, if you've just started creating your site, it is much too early to start thinking about this.
1
2
0
0
I have a Django server which responds to a call like this 127.0.0.1:8000/ao/. Before adding further applications to the server, I would like to experiment the load balancing which are supported by Django. Can anyone please explain how to implement load balancing. I spent sometime in understanding the architecture but was unable to find a solution. I work on Windows OS and I am new to servers.
load balancing in django
0
0
1
0
0
2,681
48,698,110
2018-02-09T03:04:00.000
2
0
0
0
0
python,rest,http,react-native,server
0
48,718,794
0
2
0
false
0
0
You have to create a flask proxy, generate JSON endpoints then use fetch or axios to display this data in your react native app. You also have to be more specific next time.
1
2
0
0
I'm learning about basic back-end and server mechanics and how to connect it with the front end of an app. More specifically, I want to create a React Native app and connect it to a database using Python(simply because Python is easy to write and fast). From my research I've determined I'll need to make an API that communicates via HTTP with the server, then use the API with React Native. I'm still confused as to how the API works and how I can integrate it into my React Native front-end, or any front-end that's not Python-based for that matter.
How to create a Python API and use it with React Native?
0
0.197375
1
0
1
6,331
48,699,357
2018-02-09T05:38:00.000
0
0
0
0
0
python,lmfit
0
48,707,014
0
1
0
true
0
0
More detail about what you are actually doing would be helpful. That is, vague questions can really only get vague answers. Assuming you are doing curve fitting with lmfit's Model class, then once you have your Model and a set of Parameters (say, after a fit has refined them to best match some data), then you can use those to evaluate the Model (with the model.eval() method) for any values of the independent variable (typically called x). That allows on a finer grid or extending past the range of the data you actually used in the fit. Of course, predicting past the end of the data range assumes that the model is valid outside the range of the data. It's hard to know when that assumption is correct, especially when you have no data ;). "It's tough to make predictions, especially about the future."
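A small self-contained sketch of that idea, fitting a toy Gaussian with lmfit and then calling result.eval() on a wider, finer x grid to draw the extended trendline (the data and ranges are invented):

```python
import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel

# Toy data, only so the sketch runs on its own.
x = np.linspace(-2, 2, 50)
y = np.exp(-x**2 / 0.5) + 0.05 * np.random.randn(x.size)

model = GaussianModel()
params = model.guess(y, x=x)
result = model.fit(y, params, x=x)

# Evaluate the fitted model on a wider, finer grid to extend the trendline.
x_ext = np.linspace(-5, 5, 500)
y_ext = result.eval(x=x_ext)

plt.plot(x, y, "o", label="data")
plt.plot(x_ext, y_ext, "-", label="extended fit")
plt.legend()
plt.show()
```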
1
0
1
0
I have fitted a curve using lmfit but the trendline/curve is short. Please, how do I extend the trendline/curve in both directions? Right now it stops short of the range I want. Sample code is warmly welcome, my senior programmers. Thanks.
Extending a trendline in a lmfit plot
0
1.2
1
0
0
213
48,700,554
2018-02-09T07:15:00.000
14
0
0
0
0
python,rasa-nlu,rasa-core
0
49,782,296
0
1
1
true
0
0
RASA NLU is the natural language understanding piece, which is used for taking examples of natural language and translating them into "intents." For example: "yes", "yeah", "yep" and "for sure" would all be translated into the "yes" intent. RASA CORE on the other hand is the engine that processes the flow of conversation after the intent of the user has already been determined. RASA CORE can use other natural language translators as well, so while it pairs very nicely with RASA NLU they don't both have to be used together. As an example if you were using both: User says "hey there" to RASA core bot Rasa core bot calls RASA NLU to understand what "hey there" means RASA NLU translates "hey there" into intent = hello (with 85% confidence) Rasa core receives "hello" intent Rasa core runs through it's training examples to guess what it should do when it receives the "hello" intent Rasa core predicts (with 92% confidence) that it should respond with the "utter_hello" template Rasa core responds to user "Hi, I'm your friendly Rasa bot" Hope this helps.
1
2
1
0
I am new to chatbot applications and to RASA as well. Can anyone please help me understand how I should use RASA NLU with RASA CORE?
How to use RASA NLU with RASA CORE
0
1.2
1
0
0
1,115
48,702,629
2018-02-09T09:28:00.000
0
0
1
0
0
python,queue,priority-queue
0
48,710,792
0
1
0
false
0
0
Since all items are arranged in a prioritised manner, just extract them in the way they come out...as it is a queue. Or you could apply more conditions on how you would like to arrange the items with same priorities among themselves.
1
0
0
0
Basically, no counter is allowed. No iterables (arrays, dictionaries, etc.) are allowed either to store the insertion order. There are two linked lists: one storing odd insertions and one storing even insertions. Each node has the priority and the object. Is there any pattern you can find? Or is this impossible? Edit: sorry for not mentioning, we need to extract the first one that was inserted.
Priority Queue: If two objects have same priority, how do I determine which to extract?
0
0
1
0
0
240
48,715,867
2018-02-10T00:08:00.000
0
0
0
0
0
python,scikit-learn,time-series,svm
0
48,715,905
0
2
0
false
0
0
Given multi-variable regression, y = f(X), regression is a multi-dimensional separation which can be hard to visualize in one's head since it is not 3D. The better question might be: which inputs are consequential to the output value y? Since you have the code for loadavg in the kernel source, you can use its input parameters.
1
0
1
0
I have a dataset of peak load for a year. It's a simple two-column dataset with the date and load (kWh). I want to train on the first 9 months and then let it predict the next three months. I can't get my head around how to implement SVR. I understand my 'y' would be the predicted value in kWh, but what about my X values? Can anyone help?
support vector regression time series forecasting - python
0
0
1
0
0
2,181
48,730,694
2018-02-11T10:49:00.000
0
0
0
0
1
python,jquery,django,django-rest-framework
0
48,730,768
0
2
0
false
1
0
In my experience, you are heading in the right direction. My recommendation: for building the backend REST API, Django and Django REST Framework are the best option. However, for consuming those APIs you can look at Angular or React; both work very well in terms of consuming an API.
1
0
0
0
I am currently developing my first more complex Web Application and want to ask for directions from more experienced Developers. First I want to explain the most important requirements. I want to develop a Web App (no mobile apps or desktop apps) and want to use as much django as possible. Because I am comfortable with the ecosystem right now and don't have that much time to learn something new that is too complex. I am inexperienced in the Javascript World, but I am able to do a little bit of jQuery. The idea is to have one database and many different Frontends that are branded differently and have different users and administrators. So my current approach is to develop a Backend with Django and use Django Rest Framework to give the specific data to the Frontends via REST. Because I have not that much time to learn a Frontend-Framework I wanted to use another Django instance to use as a Frontend, as I really like the Django Template language. This would mean one Django instance one Frontend, where there would be mainly TemplateViews. The Frontends will be served on different subdomains, while the backend exposes the API Endpoints on the top level domain. It is not necessary to have a Single Page App. A Normal Website with mainly the normal request/response-cycle is fine. Do you think this is a possible approach to do things? I am currently thinking about how to use the data in the frontend sites in the best way. As I am familiar with the Django template language I thought about writing a middleware that asks about the user details in every request cycle from the backend. The thought is to use a request.user as normally as possible while getting the data from the backend. Or is ist better to ask these details via jQuery and Ajax Calls and don't use the django template language very much? Maybe there is also a way to make different Frontends for the same database without using REST? Or what would you think about using a database with each frontend, which changes everytime I make a change in the main database in the backend? Although I don't really like this approach due to the possibility of differences in data if I make a mistake. Hopefully this is not to confusing for you. If there are questions I will answer them happily. Maybe I am also totally on the wrong track. Please don't hesitate to point that out, too. I thank you very much in advance for your guiding and wish you a nice day.
Design Decision Django Rest Framework - Django as Frontend
1
0
1
0
0
548
48,738,338
2018-02-12T01:22:00.000
0
0
0
0
0
python,artificial-intelligence
0
51,712,117
0
1
0
false
0
0
You can just measure the loudness of the spoken voice command in dB. You can then process these numbers to change the UI; for example, the higher the loudness, the longer the bar. Let me know which speech-to-text engine you are using so that I can provide further help.
1
0
0
0
So basically I have created an A.I assistant and was wondering if anyone had suggestions on how to make visuals that react with the sound?
Making my Artificial Intelligence visuals that react with sound change
0
0
1
0
0
41
48,738,650
2018-02-12T02:10:00.000
2
1
1
0
0
python,algorithm,pow,modular-arithmetic,cryptanalysis
0
48,738,710
1
2
0
false
0
0
It sounds like you are trying to evaluate pow(a, b) % c. You should be using the 3-argument form, pow(a, b, c), which takes advantage of the fact that a * b mod c == a mod c * b mod c, which means you can reduce subproducts as they are computed while computing a ^ b, rather than having to do all the multiplications first.
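A tiny illustration with deliberately small numbers (the modulus is an arbitrary prime picked for the example); the same pow(a, b, c) call is what scales to very large inputs:

```python
# Small numbers for illustration; the same pow(a, b, c) call handles very large ones.
a, b, c = 7, 10**6, 1000000007

slow = (a ** b) % c     # builds the full a**b first, which explodes for huge exponents
fast = pow(a, b, c)     # reduces modulo c after every multiplication

assert slow == fast
print(fast)
```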
1
1
0
0
I am trying to calculate something like this: a^b mod c, where all three numbers are large. Things I've tried: Python's pow() function is taking hours and has yet to produce a result. (if someone could tell me how it's implemented that would be very helpful!) A right-to-left binary method that I implemented, with O(log e) time, would take about 30~40 hours (don't wanna wait that long). Various recursion methods are producing segmentation faults (after I changed the recursion limits) Any optimizations I could make?
How to implement modular exponentiation?
0
0.197375
1
0
0
1,213
48,753,128
2018-02-12T18:28:00.000
0
0
0
0
0
python,rstudio,r-markdown,python-import
0
49,017,753
0
1
0
false
0
0
First I had to make a setup.py file for my project. Then I activated the virtual environment corresponding to my project (source activate), and ran python setup.py develop. Now I can import my own python library from R, as it is installed in my environment.
1
0
1
0
I have developed a few modules in Python and I want to import them into an RStudio RMarkdown file. However, I am not sure how I can do it. For example, I can't do from code.extract_feat.cluster_blast import fill_df_by_blast as fill_df the way I am used to doing it in PyCharm. Any hint? Thanks.
Import my python module to rstudio
0
0
1
0
0
378
48,757,970
2018-02-13T00:58:00.000
0
0
1
0
1
tensorflow,ipython
1
48,777,636
0
1
0
false
0
0
I think I figured out the problem. pip was pointing to /Library/Frameworks/Python.framework/Versions/3.4/bin/pip My ipython was pointing to /opt/local/bin/ipython I re-installed tensorflow within my virtual environment by calling /opt/local/bin/pip-2.7 install --upgrade tensorflow Now I can use tensorflow within ipython.
1
0
1
0
tensorflow works using python in a virtualenv I created, but tensorflow doesn't work in the same virtualenv with ipython. This is the error I get: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found. I have tried installing ipython within the virtual environment. This is the message I get: Requirement already satisfied: ipython in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages If I try to uninstall ipython within the virtual environment. I get this message: Not uninstalling ipython at /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages Any ideas on how to get this to work? I don't know how to force the install of ipython to be inside the virtual environment. I've tried deleting the virtual environment and making a new one from scratch, but I get the same error.
Running tensorflow in ipython
0
0
1
0
0
213
48,759,535
2018-02-13T04:35:00.000
0
0
0
0
0
python,tensorflow,computer-vision
0
48,761,331
0
1
0
true
0
0
The scores argument decides the sorting order. The method tf.image.non_max_suppression goes through the input bounding boxes greedily (so all input entries are covered) in the order decided by this scores argument, and selects only those bounding boxes which do not overlap (by more than iou_threshold) with boxes already selected. "NMS first looks at the bottom-right coordinate, sorts according to it and calculates IoU" - this is not correct; can you cite any resource which made you think this way?
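A small sketch of the role of scores, written against the TF 1.x style API that was current for this question (the box and score values are made up): the highest-scoring box is kept and the heavily overlapping, lower-scoring box is suppressed.

```python
import tensorflow as tf

# Boxes as [y1, x1, y2, x2]; the first two overlap heavily, the others do not.
boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.0, 0.9, 0.9],
                     [0.0, 2.0, 1.0, 3.0],
                     [0.0, 4.0, 1.0, 5.0]])
scores = tf.constant([0.9, 0.8, 0.6, 0.3])   # per-box confidence drives the greedy order

keep = tf.image.non_max_suppression(boxes, scores,
                                    max_output_size=3,
                                    iou_threshold=0.5)

with tf.Session() as sess:
    # Box 1 is dropped because it overlaps the higher-scoring box 0, e.g. [0 2 3].
    print(sess.run(keep))
```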
1
1
1
0
I read the documentation for the function and I understood how NMS works. What I'm not clear on is the scores argument to this function. I thought NMS first looks at the bottom-right coordinate, sorts according to it, calculates IoU, and then discards boxes whose IoU is greater than the threshold you set. In that theory the scores argument does absolutely nothing, and the documentation doesn't say much about it. I want to know how the argument affects the function. Thank you.
What does tensorflow nonmaximum suppression function's argument "score" do to this function?
0
1.2
1
0
0
537
48,772,583
2018-02-13T17:39:00.000
1
0
1
0
0
python,google-cloud-platform,google-cloud-functions
0
50,966,006
0
4
0
false
0
0
You can use AWS lambda as well if you want to work around and still use Python as your main language. Some modules/packages will need to be imported via zip file with AWS Lambda but it has a broader range of usable languages than GCF
1
10
0
0
Can Google Cloud Functions handle python with packages like sklearn, pandas, etc? If so, can someone point me in the direction of resources on how to do so. I've been searching a while and it seems like this is impossible, all I've found are resources to deploy the base python language to google cloud.
Python in Google Cloud Functions
0
0.049958
1
0
0
12,755
48,779,478
2018-02-14T04:04:00.000
0
0
0
0
0
python-3.x,wxwidgets
0
48,779,658
0
2
0
false
0
1
Try this: self.YourCheckboxObject.SetToolTip(wx.ToolTip("Paste your tooltip text here"))
1
0
0
0
And if so, how would one add a tooltip to a checkbox object? It appears that the control inherits from wxWindow which has tooltips, so can it be added to a wxCheckBox? Thanks!
Python - can you add a tooltip on a wx.CheckBox object?
0
0
1
0
0
438
48,780,634
2018-02-14T06:04:00.000
5
0
0
0
1
python-3.x
1
52,187,177
0
1
0
false
0
0
Use below code, this worked for me: pip3 install --upgrade oauth2client
1
2
0
0
I got this error in Python 3.6: ModuleNotFoundError: No module named 'oauth2client.client'. I tried pip3.6 install --upgrade google-api-python-client, but I don't know how to fix it. Please tell me how to fix it, thanks.
ModuleNotFoundError: No module named 'oauth2client.client'
0
0.761594
1
0
1
7,084
48,806,894
2018-02-15T12:06:00.000
0
0
0
0
0
python
0
49,682,887
0
2
0
false
1
0
I tried robot.step() and it works, thank you. I use small increments of time so that the code is not continuously blocking and there is time for my sensors to do their readings.
1
0
0
0
I am using Webots for my project at university. I want my robot to do a specific action for a certain amount of time, but I cannot find a way to do it without blocking the code and the sensors and consequently the whole simulation. I tried both the commands robot.step() and time.sleep(), but they both block the code, and by the time the action is finished the robot does not do anything else even when it is normally supposed to. Specifically, I want the robot to go backwards for a certain amount of time if the sensors at the front and the sides read below a specific distance. Any ideas on how to do it without blocking the code? Because if, for example, I use one of the above commands and there is an object behind the robot, the back sensor will not work because it is blocked and the robot will hit the object. Thank you.
Webots programming with Python - blocking code
0
0
1
0
0
529
48,832,344
2018-02-16T17:51:00.000
0
0
1
0
0
python,uninstallation
0
50,609,180
0
1
0
true
0
0
This depends on the OS and how Python was installed. For windows, look under %USERPROFILE%\AppData\Local\Programs\Python - or just run the installer again, it should have an option to fix or remove the current install.
1
0
0
0
I have accidentally removed several parts of python and now am trying to start again... The installer says that 57 files are still on my PC and I cannot find them. Does anyone know how to get a copy of the uninstaller? As it should find the remaining files.
How to remove the remains of python 3.7?
0
1.2
1
0
0
3,423
48,876,711
2018-02-20T01:44:00.000
0
0
0
0
0
python,flask,virtualenv
0
48,877,242
0
4
0
false
1
0
You may want to look at using a requirements.txt file in Python. Using $ pip freeze > requirements.txt can build that file with what pip has installed in your virtualenv.
1
0
0
0
I developed a flask app running on virtualenv, how do I deploy it into production? I have a Red Hat Enterprise Linux Server release 5.6, cannot use docker. The server has cgi and wsgi setup. Python 2.7. I know using the pip install -r requirements.txt, but how do I get the virtualenv to persist on production once my session is terminated? I am using source x../venv/bin/activate export FLASK_APP=myapp.py flask run --host=0.0.0.0 --port=8082 and this will allow me to access myurl:8082 How do I present a way for other users once I terminate session?
How to deploy flask virtualenv into production
0
0
1
0
0
2,680
48,879,495
2018-02-20T06:56:00.000
2
0
1
0
0
python,recursion,data-structures,intel,google-colaboratory
0
48,922,199
0
4
0
false
0
0
There is no way to request more CPU/RAM from Google Colaboratory at this point, sorry.
1
1
0
0
I use Google Colab to test data structures like chain-hashmap, probe-hashmap, AVL-tree, red-black-tree, splay-tree (written in Python), and I store a very large dataset (key-value pairs) with these data structures to test some operation running times; its scale is just like a small Wikipedia, so running these Python scripts uses very much memory (RAM). Google Colab offers approximately 12G of RAM, but that is not enough for me; these Python scripts will use about 20-30G of RAM, so when I run a Python program in Google Colab it will often raise an exception that "your program ran over the 12G upper bound", and often restarts. On the other hand, I have some Python scripts that run recursive algorithms; as everyone knows, recursive algorithms use the CPU very much (as well as RAM), and when I run these algorithms with 20000+ recursion depth, Google Colab often fails to run and restarts. I know that Google Colab uses two cores of an Intel Xeon CPU, but how do I get more CPU cores from Google?
How to apply GoogleColab stronger CPU and more RAM?
0
0.099668
1
0
0
13,142
48,880,273
2018-02-20T07:57:00.000
2
0
0
0
1
python,tensorflow,neural-network,keras,multilabel-classification
0
49,065,611
0
2
0
false
0
0
You're on the right track. Usually, you would either balance your data set before training, i.e. reducing the over-represented class or generate artificial (augmented) data for the under-represented class to boost its occurrence. Reduce over-represented class This one is simpler, you would just randomly pick as many samples as there are in the under-represented class, discard the rest and train with the new subset. The disadvantage of course is that you're losing some learning potential, depending on how complex (how many features) your task has. Augment data Depending on the kind of data you're working with, you can "augment" data. That just means that you take existing samples from your data and slightly modify them and use them as additional samples. This works very well with image data, sound data. You could flip/rotate, scale, add-noise, in-/decrease brightness, scale, crop etc. The important thing here is that you stay within bounds of what could happen in the real world. If for example you want to recognize a "70mph speed limit" sign, well, flipping it doesn't make sense, you will never encounter an actual flipped 70mph sign. If you want to recognize a flower, flipping or rotating it is permissible. Same for sound, changing volume / frequency slighty won't matter much. But reversing the audio track changes its "meaning" and you won't have to recognize backwards spoken words in the real world. Now if you have to augment tabular data like sales data, metadata, etc... that's much trickier as you have to be careful not to implicitly feed your own assumptions into the model.
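For the image case specifically, a sketch of such augmentation with Keras' ImageDataGenerator; the parameter values are arbitrary and x_pos / y_pos are placeholders for the under-represented class' images and labels:

```python
from keras.preprocessing.image import ImageDataGenerator

# Only transformations that could plausibly occur in the real data are enabled here.
datagen = ImageDataGenerator(
    rotation_range=15,       # small rotations
    width_shift_range=0.1,   # slight translations
    height_shift_range=0.1,
    zoom_range=0.1,          # mild scaling
    horizontal_flip=True,    # fine for flowers, wrong for speed-limit signs
)

# x_pos / y_pos would hold the images and labels of the under-represented class:
# augmented = datagen.flow(x_pos, y_pos, batch_size=32)
# model.fit_generator(augmented, steps_per_epoch=len(x_pos) // 32, epochs=10)
```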
1
8
1
0
I'm trying to build a multilabel-classifier to predict the probabilities of some input data being either 0 or 1. I'm using a neural network and Tensorflow + Keras (maybe a CNN later). The problem is the following: The data is highly skewed. There are a lot more negative examples than positive maybe 90:10. So my neural network nearly always outputs very low probabilities for positive examples. Using binary numbers it would predict 0 in most of the cases. The performance is > 95% for nearly all classes, but this is due to the fact that it nearly always predicts zero... Therefore the number of false negatives is very high. Some suggestions how to fix this? Here are the ideas I considered so far: Punishing false negatives more with a customized loss function (my first attempt failed). Similar to class weighting positive examples inside a class more than negative ones. This is similar to class weights but within a class. How would you implement this in Keras? Oversampling positive examples by cloning them and then overfitting the neural network such that positive and negative examples are balanced. Thanks in advance!
Classification: skewed data within a class
0
0.197375
1
0
0
1,027
48,899,234
2018-02-21T06:13:00.000
1
0
0
0
0
python,tensorflow,recurrent-neural-network,sequence-to-sequence,encoder-decoder
0
48,899,306
0
1
0
true
0
0
If for example, you are using Tensorflow's attention_decoder method, pass a parameter "loop_function" to your decoder. Google search for "extract_argmax_and_embed", that is your loop function.
1
1
1
0
I know how to build an encoder using dynamic rnn in Tensorflow, but my question is how can we use it for decoder? Because in decoder at each time step we should feed the prediction of previous time step. Thanks in advance!
How to build a decoder using dynamic rnn in Tensorflow?
1
1.2
1
0
0
342
48,904,701
2018-02-21T11:26:00.000
1
0
1
0
0
python,python-2.7,pycharm
1
59,266,936
0
1
0
true
0
0
On the command line, go to the location where you had installed your setup and use this command to install the missing package: pip install pcap
1
0
0
0
I tried to install the pcap package in PyCharm tools but it did not install and shows the below error: Collecting pcap Could not find a version that satisfies the requirement pcap (from versions: ) No matching distribution found for pcap How can I fix installing the package?
Can't install pcap in pycharm?
0
1.2
1
0
0
717
48,911,436
2018-02-21T16:58:00.000
0
0
0
0
0
python,pygame,blender,pyopengl
0
48,952,393
0
1
0
true
0
1
OK, I think I have found what you should do. Just for the people that have trouble with this like I did, this is the way you should do it to rotate around a cube with the camera in OpenGL: add your x mouse value to the z rotation of your scene, add the cosine of your y mouse value to the x rotation, and then subtract the sine of your y mouse value from the y rotation. That should do it.
1
0
1
0
I am trying to create a simple 3D scene (in Python) where you have a cube in front of you, and you are able to rotate it around with the mouse. I understand that you should rotate the complete scene to mimic camera movement, but I can't figure out how you should do this. Just to clarify, I want the camera (or scene) to move a bit like Blender (the program). Thanks in advance.
PyOpenGL how to rotate a scene with the mouse
0
1.2
1
0
0
553
48,914,074
2018-02-21T19:35:00.000
0
0
0
0
0
postgresql,python-3.6
1
53,563,271
0
1
0
false
0
0
You can try using f-strings and separating out the statement from the execution: statement = f"INSERT INTO name VALUES({VARIABLE_NAME},'string',int,'string')" cur.execute(statement) You might also want to try with '' around {VARIABLE_NAME}: '{VARIABLE_NAME}' In f-strings, the expressions in {} get evaluated and their values inserted into the string. By separating out the statement you can print it and see if the string is what you were expecting. Note, the f-string can be used within the cur.execute function, however I find it more readable to separate it out. In Python 3.6+ this is a better way of formatting strings than with %s. If this does not solve the problem, more information will help debug: what is the name table's schema? What variable / value are you trying to insert? What is the exact error you are given?
1
0
0
0
I got something like this: cur.execute("INSERT INTO name VALUES(HERE_IS_VARIABLE,'string',int,'string')") Stuff with %s (like in Python 2.*) is not working. I get errors which tell me that I'm trying to use a "column name" in the place where I put my variable.
Python3.6 + Postgresql how to put VARIABLES to SQL query?
1
0
1
1
0
376
48,921,619
2018-02-22T07:16:00.000
0
0
0
0
0
python,django,django-sessions
0
48,923,389
0
1
0
false
1
0
You can run a JavaScript setTimeout in the background which will check if the user is logged in, and after three minutes the browser window will refresh. OR (better) You can run this timer server-side: when the client tries to change something, first look at the timer (or the stored value that says until when the client is logged in) and then, based on that time, perform the action or not. So after your three-minute interval the user would still be able to see the content, but when they tried to change something the backend would reject the request and require them to log in again. It is the much better solution because, when it comes to authentication and similar things, it is always better to do them server-side rather than in the client browser so that they cannot be exploited. BUT Both solutions can be applied simultaneously (so that the client's browser would reload the window and redirect the client to the login page, and the server would reject the request so that data would not be modified in any way).
1
0
0
0
I have created a login page for my application and set the session timeout to 3 minutes, and it is working fine. The problem is that when the session times out, the user is still able to do many activities on the current page, i.e. the logout page does not show unless the user refreshes the page or redirects to another page. So, how is it possible to log the user out once the session has timed out and the user performs any activity on the current page?
Django: detect the mouse click if session out
0
0
1
0
0
231
48,924,787
2018-02-22T10:14:00.000
1
0
1
0
1
python,windows,pycharm,anaconda
0
64,291,969
0
13
1
false
0
0
Found a solution. Problem is we have been creating conda environments from within Pycharm while starting a new project. This is created at the location /Users/<username>/.conda/envs/<env-name>. e.g. /Users/taponidhi/.conda/envs/py38. Instead create environments from terminal using conda create --name py38. This will create the environment at /opt/anaconda3/envs/. After this, when starting a new project, select this environment from existing environments. Everything works fine.
3
30
0
0
I have a conda environment at the default location for windows, which is C:\ProgramData\Anaconda2\envs\myenv. Also, as recommended, the conda scripts and executables are not in the %PATH% environment variable. I opened a project in pycharm and pointed the python interpreter to C:\ProgramData\Anaconda2\envs\myenv\python.exe and pycharm seems to work well with the environment in the python console, in the run environment, and in debug mode. However, when opening the terminal the environment is not activated (I made sure that the checkbox for activating the environment is checked). To be clear - when I do the same thing with a virtualenv the terminal does activate the environment without a problem. Here are a few things I tried and did not work: Copied the activate script from the anaconda folder to the environment folder Copied the activate script from the anaconda folder to the Scripts folder under the environment Copied an activate script from the virtualenv (an identical one for which the environment is activated) Added the anaconda folders to the path None of these worked. I can manually activate the environment without a problem once the terminal is open, but how do I do it automatically?
PyCharm terminal doesn't activate conda environment
0
0.015383
1
0
0
19,602
48,924,787
2018-02-22T10:14:00.000
4
0
1
0
1
python,windows,pycharm,anaconda
0
69,735,670
0
13
1
false
0
0
Solution for Windows Go to Settings -> Tools -> Terminal set Shell path to: For powershell (I recommend this): powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\tools\miniconda3\shell\condabin\conda-hook.ps1' For cmd.exe: cmd.exe "C:\tools\miniconda3\Scripts\activate.bat" PyCharm will change environment automatically in the terminal PS: I'm using my paths to miniconda, so replace it with yours
3
30
0
0
I have a conda environment at the default location for windows, which is C:\ProgramData\Anaconda2\envs\myenv. Also, as recommended, the conda scripts and executables are not in the %PATH% environment variable. I opened a project in pycharm and pointed the python interpreter to C:\ProgramData\Anaconda2\envs\myenv\python.exe and pycharm seems to work well with the environment in the python console, in the run environment, and in debug mode. However, when opening the terminal the environment is not activated (I made sure that the checkbox for activating the environment is checked). To be clear - when I do the same thing with a virtualenv the terminal does activate the environment without a problem. Here are a few things I tried and did not work: Copied the activate script from the anaconda folder to the environment folder Copied the activate script from the anaconda folder to the Scripts folder under the environment Copied an activate script from the virtualenv (an identical one for which the environment is activated) Added the anaconda folders to the path None of these worked. I can manually activate the environment without a problem once the terminal is open, but how do I do it automatically?
PyCharm terminal doesn't activate conda environment
0
0.061461
1
0
0
19,602
48,924,787
2018-02-22T10:14:00.000
0
0
1
0
1
python,windows,pycharm,anaconda
0
64,384,425
0
13
1
false
0
0
I am using OSX and zshell has become the default shell in 2020. I faced the same problem: my conda environment was not working inside pycharm's terminal. File -> Settings -> Tools -> Terminal. the default shell path was configured as /bin/zsh --login I tested on a separate OSX terminal that /bin/zsh --login somehow messes up $PATH variable. conda activate keep adding conda env path at the end instead of at the beginning. So the default python (2.7) always took precedence because of messed up PATH string. This issue had nothing to do with pycharm (just how zshell behaved with --login), I removed --login part from the script path; just /bin/zsh works (I had to restart pycharm after this change!)
3
30
0
0
I have a conda environment at the default location for windows, which is C:\ProgramData\Anaconda2\envs\myenv. Also, as recommended, the conda scripts and executables are not in the %PATH% environment variable. I opened a project in pycharm and pointed the python interpreter to C:\ProgramData\Anaconda2\envs\myenv\python.exe and pycharm seems to work well with the environment in the python console, in the run environment, and in debug mode. However, when opening the terminal the environment is not activated (I made sure that the checkbox for activating the environment is checked). To be clear - when I do the same thing with a virtualenv the terminal does activate the environment without a problem. Here are a few things I tried and did not work: Copied the activate script from the anaconda folder to the environment folder Copied the activate script from the anaconda folder to the Scripts folder under the environment Copied an activate script from the virtualenv (an identical one for which the environment is activated) Added the anaconda folders to the path None of these worked. I can manually activate the environment without a problem once the terminal is open, but how do I do it automatically?
PyCharm terminal doesn't activate conda environment
0
0
1
0
0
19,602
48,925,086
2018-02-22T10:29:00.000
0
0
0
0
0
python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling
0
60,953,415
0
4
0
false
0
0
Find the maximum extent of all points. Split into 7x7x7 voxels. For all points in a voxel find the point closest to its centre. Return these 7x7x7 points. Some voxels may contain no points, hopefully not too many.
2
13
1
0
Imagine you are given set S of n points in 3 dimensions. Distance between any 2 points is simple Euclidean distance. You want to chose subset Q of k points from this set such that they are farthest from each other. In other words there is no other subset Q’ of k points exists such that min of all pair wise distances in Q is less than that in Q’. If n is approximately 16 million and k is about 300, how do we efficiently do this? My guess is that this NP-hard so may be we just want to focus on approximation. One idea I can think of is using Multidimensional scaling to sort these points in a line and then use version of binary search to get points that are furthest apart on this line.
Choosing subset of farthest points in given set of points
0
0
1
0
0
3,420
48,925,086
2018-02-22T10:29:00.000
1
0
0
0
0
python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling
0
48,925,457
0
4
0
false
0
0
If you can afford to do ~ k*n distance calculations then you could Find the center of the distribution of points. Select the point furthest from the center. (and remove it from the set of un-selected points). Find the point furthest from all the currently selected points and select it. Repeat 3. until you end with k points.
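A compact numpy sketch of that greedy procedure (toy random points stand in for the real 16M-point set):

```python
import numpy as np

def farthest_points(points, k):
    """Greedy max-min selection, roughly k*n distance computations as outlined above."""
    center = points.mean(axis=0)
    first = np.argmax(np.linalg.norm(points - center, axis=1))  # furthest from center
    selected = [first]
    # distance of every point to its nearest selected point so far
    min_dist = np.linalg.norm(points - points[first], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))                 # furthest from all selected
        selected.append(nxt)
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(points - points[nxt], axis=1))
    return points[selected]

pts = np.random.rand(10000, 3)        # toy stand-in for the 16M points
print(farthest_points(pts, 300).shape)
```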
2
13
1
0
Imagine you are given set S of n points in 3 dimensions. Distance between any 2 points is simple Euclidean distance. You want to chose subset Q of k points from this set such that they are farthest from each other. In other words there is no other subset Q’ of k points exists such that min of all pair wise distances in Q is less than that in Q’. If n is approximately 16 million and k is about 300, how do we efficiently do this? My guess is that this NP-hard so may be we just want to focus on approximation. One idea I can think of is using Multidimensional scaling to sort these points in a line and then use version of binary search to get points that are furthest apart on this line.
Choosing subset of farthest points in given set of points
0
0.049958
1
0
0
3,420
48,936,542
2018-02-22T20:22:00.000
1
0
0
0
0
python,scikit-learn
0
48,936,596
0
3
0
false
0
0
Use the feature_importances_ property. Very easy.
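A tiny illustration on a toy dataset; note this gives one global importance per feature, not a per-observation contribution:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

data = load_iris()
clf = GradientBoostingClassifier().fit(data.data, data.target)

# One global importance value per input feature (they sum to 1).
for name, importance in zip(data.feature_names, clf.feature_importances_):
    print(name, round(importance, 3))
```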
1
1
1
0
Is there a way in Python by which I can get the contribution of each feature to the probability predicted by my gradient boosting classification model for each test observation? Can anyone give the actual mathematics behind probability prediction in a gradient boosting classification model, and explain how it can be implemented in Python?
gradient boosting- features contribution
0
0.066568
1
0
0
1,153
48,940,807
2018-02-23T03:48:00.000
0
0
0
0
0
python,post,cookies,request
0
48,940,818
0
1
0
true
0
0
First create a session, then make a GET request and use session.cookies.get_dict(); it will return a dict and it should have the appropriate values you need.
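A rough sketch of that flow with the requests library; the URL, cookie name and form field names are hypothetical and would have to be taken from what the HTTP debugger shows:

```python
import requests

session = requests.Session()

# 1. GET the login page first so the server sets its cookies on the session.
session.get("https://example.com/login")             # hypothetical URL
cookies = session.cookies.get_dict()
print(cookies)

# 2. The CSRF token is usually in the cookies or hidden in the page HTML.
csrf = cookies.get("csrftoken", "")                  # hypothetical cookie name

# 3. POST the credentials plus the token; the same session re-sends the cookies.
payload = {"user": "someone", "password": "secret", "csrf_token": csrf}
resp = session.post("https://example.com/login", data=payload)
print(resp.status_code)
```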
1
0
0
0
I am basically working on my personal project, but I'm stuck at some point. I am trying to make a login request to hulu.com using Python's requests module, but the problem is hulu needs a cookie and a CSRF token. When I inspected the request with an HTTP debugger it showed me the action URL and some request headers, and the cookie and the CSRF token were already there. But how can I do that with the requests module? I mean getting the cookies and the CSRF token before the POST request? Any ideas? Thanks
How to get cookies before making request in Python
0
1.2
1
0
1
882
48,969,107
2018-02-25T00:38:00.000
0
0
0
0
0
python,grpc
0
49,018,750
0
1
0
false
0
0
Short answer: you can't gRPC is a request-response framework based on HTTP2. Just as you cannot make a website that initiates a connection to a browser, you cannot make a gRPC service initiating a connection to the client. How would the service even know who to talk to? A solution could be to open a gRPC server on the client. This way both the client and the server can accept connections from one another.
1
0
0
0
Hi, I am new to gRPC and I want to send one message from the server to the client first. I understood how to implement the client sending a message and getting a response from the server, but I want to try having the server initiate a message to connected clients. How could I do that?
How to let server send the message first in GRPC using python
0
0
1
0
1
325
48,970,752
2018-02-25T06:10:00.000
0
0
1
0
0
python,dictionary
0
48,970,792
0
4
0
false
0
0
Use the enumerate function, which gives you a running counter while looping, e.g. for index, value in enumerate(dic):
1
2
0
0
A simple program about storing rivers and their respective locations in a dictionary. I was wondering how I would go about looping through the dictionary keys and checking whether a key (or value) contains a certain word, and if the word is present in the key, removing it. EX: rivers_dict = {'mississippi river': 'mississippi'} How would I remove the word 'river' in the dictionary key 'mississippi river'? I know I can assign something such as: rivers_dict['mississippi'] = rivers_dict.pop('mississippi river'). Is there a way to do this in a more modular manner? Thanks in advance.
If a dictionary key contains a certain word, how would I remove it?
0
0
1
0
0
899
48,977,688
2018-02-25T19:43:00.000
1
1
0
0
0
python,server,putty
0
48,977,787
0
1
0
true
0
0
There are many ways you can run a python program after you disconnect from an SSH session. 1) Tmux or Screen Tmux is a "terminal multiplexer" which enables a number of terminals to be accessed by a single one. You start by sshing in as usual and run it by typing tmux. Once you are done you can disconnect from PuTTY, and when you log back in you can reattach to the tmux session you left. Screen also does that; you just type screen instead of tmux. 2) nohup "nohup is a POSIX command to ignore the HUP signal. The HUP signal is, by convention, the way a terminal warns dependent processes of logout." You can run it by typing nohup <pythonprogram> &
1
0
0
0
I've created a script for my school project that works with data. I'm quite new to working remotely on a server, so this might seem like a dumb question, but how do I execute my script named stats.py so that it continues executing even after I log off PuTTY? The script file is located on the server. It has to work with a lot of data, so I don't want to just try something and then find out a few days later that it exited right after I logged off. Thank you for any help!
How to run a Python script on a remote server so that it doesn't quit after I log off?
0
1.2
1
0
0
1,277
48,985,145
2018-02-26T09:27:00.000
0
0
0
0
1
python,django,email,user-registration
0
48,985,464
0
1
0
false
1
0
I would change it to lowercase and then save it, because that looks like the smallest number of operations and should also give the shortest code. If you instead decide to check whether it's unique in the DB in lowercase and then save it, you may end up hitting the DB twice (once for the check, once when saving) if you implement it the wrong way.
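A minimal sketch of that idea in a registration form, assuming a standard ModelForm over the user model; the address is lowercased once in clean_email(), before the uniqueness check and before anything is saved:

    from django import forms
    from django.contrib.auth import get_user_model

    class RegistrationForm(forms.ModelForm):
        class Meta:
            model = get_user_model()
            fields = ["username", "email"]

        def clean_email(self):
            email = self.cleaned_data["email"].lower()   # normalize first
            if get_user_model().objects.filter(email__iexact=email).exists():
                raise forms.ValidationError("This e-mail address is already registered.")
            return email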
1
0
0
0
As far as I know, the standard new-user registration process (Django 2.x) only validates that the email field exists and matches an e-mail schema. But users may write an e-mail address like this: [email protected] (via Caps Lock) and save it to the DB. That would be dangerous, because another user could register an account for that e-mail, but in lowercase: [email protected] or similar, yet it is still the same e-mail address! So the question now is how to (smartly) clean up the e-mail address when the user is registering. My ideas: set the email to lowercase before saving to the DB; check if it already exists / is unique in the DB (in its lowercase form, of course). I'm searching for the best practice to solve this question, btw.
Clean email field when new user is registering in Django?
0
0
1
0
0
122
49,011,180
2018-02-27T14:36:00.000
1
0
0
0
0
android,python,ios,node.js,lyft-api
0
52,992,307
0
1
0
false
0
0
The Mystro app does not have any affiliation with either Uber or Lyft, nor does it use their APIs to interact with a driver (neither Uber nor Lyft has a publicly accessible driver API like this). It uses an Android Accessibility "feature" that lets the phone look into and interact with other apps you have running. So basically Mystro uses this accessibility feature (Google has since condemned using accessibility this way) to interact with the Uber and Lyft apps on the driver's behalf.
1
0
0
0
I want to use the Lyft driver API like the Mystro Android app does, however I've searched everywhere and all I could find is the Lyft API. To elaborate on what I'm trying to achieve: I want an API that will allow me to integrate with the Lyft driver app, not the Lyft rider app. I want to be able to, for example, view nearby ride requests as a driver. The Mystro Android app has this feature; how is it done?
How do I use the Lyft driver API like the Mystro Android app?
0
0.197375
1
0
1
249
49,027,447
2018-02-28T10:35:00.000
1
0
0
1
0
python,bash,concurrency,background-process
0
49,088,871
0
1
0
false
1
0
Here's how it might look (hosting-agnostic): A user uploads a file to the web server. The file is saved in storage that can be accessed later by the background jobs. Some metadata (location in the storage, user's email, etc.) about the file is saved in a DB / message broker. Background jobs watching the DB / message broker pick up the metadata, start handling the file (which is why it needs to be accessible to them in step 2) and notify the user. More specifically, in the case of Python/Django + AWS you might use the following stack: let's assume you're using Python + Django; you can save the uploaded files in a private AWS S3 bucket; some metadata might be saved in the DB, or you can use Celery + AWS SQS, AWS SQS directly, or bring up something like RabbitMQ or Redis (+ pub/sub); have Python code handling the job - the shape of it depends on what you opt for in the previous point, and the only requirement is that it can pull data from your S3 bucket; after the job is done, notify the user via AWS SES. The simplest single-server setup that doesn't require any intermediate components: your Python script simply saves the file in a folder and gives it a name like [email protected]; a cron job looks for any files in this folder, handles the ones it finds and notifies the user. Notice that if you need multiple background jobs running in parallel, you'll need to complicate the scheme slightly to avoid race conditions (i.e. rename the file being processed so that only a single job handles it). In a prod app you'll likely need something in between, depending on your needs.
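A rough sketch of that "simplest single-server setup": a worker script run from cron that claims files from the upload folder, processes them and mails the result. The paths, the file-name convention encoding the user's address, and the process() body are placeholders, and it assumes a local MTA for smtplib:

    import os, shutil, smtplib
    from email.message import EmailMessage

    UPLOAD_DIR = "/srv/uploads"      # where the web app drops uploaded files
    WORK_DIR = "/srv/processing"     # where claimed files are moved to

    def process(path):
        # Placeholder for the long-running work on the uploaded file.
        pass

    def notify(address, text):
        msg = EmailMessage()
        msg["Subject"] = "Your file has been processed"
        msg["From"] = "noreply@example.com"
        msg["To"] = address
        msg.set_content(text)
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    def main():
        for name in os.listdir(UPLOAD_DIR):
            src = os.path.join(UPLOAD_DIR, name)
            dst = os.path.join(WORK_DIR, name)
            shutil.move(src, dst)                 # moving = "claiming", avoids double handling
            user_email = name.split("__")[0]      # e.g. "user@example.com__data.csv"
            process(dst)
            notify(user_email, "Processing finished for %s" % name)

    if __name__ == "__main__":
        main()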
1
2
0
0
I want to create a minimal webpage where concurrent users can upload a file, and I can process the file (which is expected to take some hours) and email the results back to the user later on. Since I am hosting this on AWS, I was thinking of invoking some background process once I receive the file, so that even if the user closes the browser window the processing keeps taking place and I am able to send the results after a few hours, all through some pre-written scripts. Can you please help me with the logistics of how I should do this?
Concurrent file upload/download and running background processes
0
0.197375
1
0
0
912
49,031,954
2018-02-28T14:31:00.000
6
0
0
0
1
python,django,django-rest-framework,django-registration,django-oauth
0
49,129,766
0
4
0
false
1
0
You have to create the user using the normal Django mechanism (for example, you can add new users from the admin or from the Django shell). However, to get an access token the OAuth consumer should send a request to the OAuth server, where the user will authorize it; once the server validates the authorization, it will return the access token.
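A sketch of the second half (getting the token), assuming the provider's URLs are mounted under /o/ - the common django-oauth-toolkit setup - and that the application was created with the resource-owner password grant type; the host and credentials are placeholders:

    import requests

    token_resp = requests.post(
        "https://example.com/o/token/",
        data={
            "grant_type": "password",
            "username": "newuser",            # the user created via the normal Django flow
            "password": "newuser-password",
            "client_id": "YOUR_CLIENT_ID",
            "client_secret": "YOUR_CLIENT_SECRET",
        },
    )
    print(token_resp.json())   # contains access_token / refresh_token on success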
2
11
0
0
I've gone through the docs of the Provider and Resource parts of Django OAuth Toolkit, but all I'm able to find is how to 'authenticate' a user, not how to register one. I'm able to set up everything on my machine, but I'm not sure how to register a user using a username & password. I know I'm missing something very subtle. How exactly do I register a user and get an access token in return, to talk to my resource servers? OR do I have to first register the user using the normal Django mechanism and then get a token for that user?
Django OAuth Toolkit - Register a user
0
1
1
0
0
6,515
49,031,954
2018-02-28T14:31:00.000
1
0
0
0
1
python,django,django-rest-framework,django-registration,django-oauth
0
59,511,833
0
4
0
false
1
0
I'm registering users with the regular Django mechanism combined with the django-oauth-toolkit application's client details (client id and client secret key). I have a separate UserRegisterApiView which is not restricted by token authentication, but it checks for a valid client id and client secret key in the POST request made to register a new user. In this way we restrict access to the register URL to registered OAuth clients only. Here is the registration workflow: a user registration request comes from the React/Angular/Vue app with client_id and client_secret; Django checks whether client_id and client_secret are valid, and if not responds 401 Unauthorized; if they are valid and the registration data is valid, the user is registered; on a successful response, the user is redirected to the login page.
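A rough sketch of what such a register view might look like with Django REST Framework; the field names are illustrative, and the direct client_secret comparison assumes it is stored in plain text (newer django-oauth-toolkit versions hash it, in which case the check would need to verify the hash instead):

    from django.contrib.auth import get_user_model
    from oauth2_provider.models import Application
    from rest_framework import status
    from rest_framework.permissions import AllowAny
    from rest_framework.response import Response
    from rest_framework.views import APIView

    class UserRegisterApiView(APIView):
        permission_classes = [AllowAny]   # no token needed to register...

        def post(self, request):
            client_id = request.data.get("client_id")
            client_secret = request.data.get("client_secret")
            # ...but only known OAuth clients may call this endpoint.
            if not Application.objects.filter(client_id=client_id,
                                              client_secret=client_secret).exists():
                return Response(status=status.HTTP_401_UNAUTHORIZED)

            user = get_user_model().objects.create_user(
                username=request.data["username"],
                password=request.data["password"],
            )
            return Response({"id": user.id}, status=status.HTTP_201_CREATED)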
2
11
0
0
I've gone through the docs of the Provider and Resource parts of Django OAuth Toolkit, but all I'm able to find is how to 'authenticate' a user, not how to register one. I'm able to set up everything on my machine, but I'm not sure how to register a user using a username & password. I know I'm missing something very subtle. How exactly do I register a user and get an access token in return, to talk to my resource servers? OR do I have to first register the user using the normal Django mechanism and then get a token for that user?
Django OAuth Toolkit - Register a user
0
0.049958
1
0
0
6,515
49,046,224
2018-03-01T09:10:00.000
0
0
0
0
0
python-2.7,odoo-8,odoo
0
49,046,718
0
1
0
false
1
0
I solved it myself. I just added _order = 'finished asc' to the class. finished is a Boolean field that tells me whether the task is finished or not.
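A minimal sketch of the idea on an Odoo 8 style model, assuming a task model with a Boolean finished field; with _order = 'finished asc', records with finished = False sort first, so finished tasks drop to the bottom of the kanban column:

    from openerp import models, fields   # Odoo 8 namespace

    class Task(models.Model):
        _name = 'my_module.task'          # hypothetical model name
        _order = 'finished asc'           # unfinished (False) before finished (True)

        name = fields.Char()
        finished = fields.Boolean(default=False)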
1
0
0
0
At the moment I am working on an Odoo project and I have a kanban view. My question is: how do I put a kanban element at the bottom via XML or Python? Is there an index for the elements or something like that?
Is there a way to put a kanban element to the bottom in odoo
0
0
1
0
1
63