Title: stringlengths, 11 to 150
A_Id: int64, 518 to 72.5M
Users Score: int64, -42 to 283
Q_Score: int64, 0 to 1.39k
ViewCount: int64, 17 to 1.71M
Database and SQL: int64, 0 to 1
Tags: stringlengths, 6 to 105
Answer: stringlengths, 14 to 4.78k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
AnswerCount: int64, 1 to 55
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 469 to 42.4M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 1 to 1
Available Count: int64, 1 to 15
Question: stringlengths, 17 to 21k
Incorrect output due to regular expression
41,868,352
0
0
43
0
python,regex
Your + is at the wrong position; your regexp, as it stands, would demand /John /Adam /Will /Newman, with a trailing space. r'((/)((\w)+(\s))+)' is a little better; it will accept /John Adam Will, with a trailing space; won't take Newman, because there is nothing to match \s. r'((/)(\w+(\s\w+)*))' matches what you posted. Note that it is necessary to repeat one of the sequences that match a name, because we want N-1 spaces if there are N words. (As Ondřej Grover says in comments, you likely have too many unneeded capturing brackets, but I left that alone as it hurts nothing but performance.)
0
0
0
0
2017-01-26T07:01:00.000
2
0
false
41,868,290
1
0
1
1
I had a pdf in which names are written after a '/', e.g.: /John Adam Will Newman. I want to extract the names starting with '/'. The code which I wrote is: names=re.compile(r'((/)((\w)+(\s)))+') However, it produces just the first name of the string, "JOHN", and that twice, not the rest of the name.
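The answer above can be checked with a short snippet; the sample string comes from the question, and the pattern is the answer's final suggestion:

```python
import re

# Pattern suggested in the answer: one leading '/', then words
# separated by single spaces (N-1 separators for N words).
pattern = re.compile(r'((/)(\w+(\s\w+)*))')

match = pattern.search('/John Adam Will Newman')
print(match.group(1))  # → /John Adam Will Newman
```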
Performance impact of reverse relationships in Django
41,899,169
4
0
200
0
python,django,orm,model
This is unnecessarily complex. There is no performance overhead to having a many-to-many relationship. This is represented by an intermediary table in the database; there's no actual field in the humans table. If an item doesn't have any m2m members, then no data is stored.
0
0
0
0
2017-01-27T16:52:00.000
1
1.2
true
41,899,083
0
0
1
1
I'm setting up my Models and I'm trying to avoid using ManyToMany Relationships. I have this setup: Model: Human Some Humans (a small percentage) need to have M2M relationships with other Humans. Let's call this relationship "knows" (reverse relationship called "is_known_by"). To avoid setting a ManyToManyField in Humans, I made a Model FamousHumans. FamousHumans are a special class of Human and have a OneToOneField(Human) They also have a ManyToManyField(Humans) to represent the "knows" relationship Here is my question: Since Django creates reverse relationships, I assume that all Humans will have a reverse "is_known_by" relationship to FamousHumans, so there is still a M2M relationship. Is there any performance benefit to my setup? The dataset will be rather large and only a few Humans will need the M2M relationship. My main concern is performance.
Flask "Error: The file/path provided does not appear to exist" although the file does exist
42,589,534
1
21
25,530
0
python,flask,python-import,file-not-found
Please follow these steps: make sure you have already run pip install --editable ., where '.' is the directory where your app is installed, e.g. (flask_app). Run python; it will open the command-line Python interpreter. Try to import the flask app; if there is an error, you will get the detailed error message. Try to fix that error. I ran into the same problem, followed the steps above, and found that there was an error in the code: the interpreter was showing a compile error.
0
0
0
0
2017-01-28T18:34:00.000
9
0.022219
false
41,913,345
0
0
1
4
I use export FLASK_APP=flask_app and then do flask run but I get the error: Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either.
Flask "Error: The file/path provided does not appear to exist" although the file does exist
47,385,908
0
21
25,530
0
python,flask,python-import,file-not-found
The werkzeug version is not compatible with your Flask version. To address this problem, upgrade werkzeug: pip install werkzeug --upgrade
0
0
0
0
2017-01-28T18:34:00.000
9
0
false
41,913,345
0
0
1
4
I use export FLASK_APP=flask_app and then do flask run but I get the error: Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either.
Flask "Error: The file/path provided does not appear to exist" although the file does exist
54,899,607
5
21
25,530
0
python,flask,python-import,file-not-found
This could have many causes: a python2 vs python3 issue, a pip2 install Flask vs pip3 install Flask issue, or a virtual environment (venv) vs local environment issue. In my case, I had to do the following to solve the problem: python3 -m venv venv, then . venv/bin/activate, then pip3 install Flask, then export FLASK_APP=flask_app, then flask run
0
0
0
0
2017-01-28T18:34:00.000
9
0.110656
false
41,913,345
0
0
1
4
I use export FLASK_APP=flask_app and then do flask run but I get the error: Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either.
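The virtualenv recipe from the answer above, as a runnable sketch; the package install and app launch lines are commented out because they need network access and an actual flask_app module:

```shell
# Create an isolated environment so pip and python agree on versions.
python3 -m venv venv
# Activate it so python/pip resolve inside the venv.
. venv/bin/activate
# pip3 install Flask
# export FLASK_APP=flask_app
# flask run
```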
Flask "Error: The file/path provided does not appear to exist" although the file does exist
51,108,431
7
21
25,530
0
python,flask,python-import,file-not-found
This message will occur if you issue flask run on the command line. Instead use python -m flask run after setting export FLASK_APP and export FLASK_ENV variables. I ran into this issue while following the Flask Tutorial when creating The Application Factory. The instruction does not specify to preface flask run with python -m.
0
0
0
0
2017-01-28T18:34:00.000
9
1
false
41,913,345
0
0
1
4
I use export FLASK_APP=flask_app and then do flask run but I get the error: Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either.
Selenium using too much memory
44,546,508
1
3
7,993
0
java,python-3.x,selenium,memory-management
Don't forget driver.quit() (or at least driver.close()) in your code; if you don't shut down your driver, you will end up with many Chrome instances left running.
0
0
1
0
2017-01-29T08:02:00.000
3
0.066568
false
41,918,828
0
0
1
1
I'm using selenium on python 3.5 with the chrome webdriver on an ubuntu vps, and when I run a very basic script (navigate to site, enter login fields, click), memory usage goes up by ~400mb and cpu usage goes up to 100%. Are there any things I can do to lower this, or if not, are there any alternatives? I'm testing out selenium in python but I plan to do a project with it in java, where memory usage is a critical factor for me, so the same question applies for java as well.
Book Structure in Django
41,928,825
-1
1
296
0
python,django,sqlite
Each Django model is a class which you import in your app to be able to work with it. To connect models together you can use foreign keys to define relationships, i.e. your Page class will have a foreign key to Book. To store lists in a field, one of the ways of doing it is to convert the list to a string using the json module and define the field as a text field: json.dumps converts the list to a string, json.loads converts the string back to a list. Or, if you are talking about other "lists" in your question, then maybe all you need is Django's basic queryset that you get with Model.objects.filter() (note that .get() returns a single object, not a queryset). A queryset is a list-like collection of rows from a table.
0
0
0
0
2017-01-30T01:45:00.000
2
-0.099668
false
41,927,996
0
0
1
1
I am currently trying to implement a book structure in Django in the model. The structure is as follows Book Class: title pages (this is an array of page objects) bibliography (a dictionary of titles and links) Page Class: title sections (an array of section objects) images (array of image urls) Section Class: title: text: images (array of image urls) videos (array of video urls) I am pretty new to Django and SQL structuring. What my question specifically is, what would be the best method in order to make a db with books where each entry has the components listed above? I understand that the best method would be to have a table of books where each entry has a one to many relationship to pages which in turn has a one to many relationship with sections. But I am unclear on connecting Django models together and how I can enable lists of objects (importantly these lists have to be dynamic).
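The json round-trip mentioned in the answer above, for stashing a dynamic list (e.g. image URLs) in a text field, looks like this; the variable names are illustrative, not from the question:

```python
import json

# Serialize the list on save: this string is what goes into the TextField.
image_urls = ['http://example.com/a.png', 'http://example.com/b.png']
stored = json.dumps(image_urls)

# Deserialize on load: the original list comes back intact.
restored = json.loads(stored)
assert restored == image_urls
```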
Django REST Framework and HTML pages
41,934,786
0
0
214
0
python,html,json,django,rest
I don't see the point of using Django HTML Templates in your API endpoint since the whole point of using a REST API is to have the server side and the client side completely independent from one another. So yes, the FAQ items should be delivered as JSON and displayed as you want on the client side.
0
0
0
0
2017-01-30T11:05:00.000
1
0
false
41,934,355
0
0
1
1
I'm working on a project that uses Django on the server side and I have a REST(ish) API going. One thing I'm wondering about. Is it considered ok practice to deliver Django HTML templates via the API endpoints? For example, by going to www.rooturl.com, an API endpoint is called and the HTML delivered. Then, when user clicks on, say FAQ, a GET request is made to www.rooturl.com/faq and an HTML template delivered again? Or should the FAQ items be delivered as JSON? Or maybe give both alternatives through content negotiation? At which point is all the HTML content usually delivered? I couldn't find a satisfying answer with my google-fu.
Google Authenticator passwords duplicated somewhere else?
41,942,437
0
0
75
0
javascript,java,python,google-authenticator,authenticator
So I dug a little deeper. This, however, requires disabling and removing the current 2FA from my account: disable/remove the current 2FA; enable it again, but remember to grab the secret (it's listed somewhere in the request or on the page) and save it somewhere; find any secret-to-one-time-password "generator". Now I have the secrets synced on my PC and on my phone. Pretty neat. It requires a lot of work, as I need to disable all my authenticators, but it does work.
0
0
1
0
2017-01-30T17:08:00.000
1
0
false
41,941,537
0
0
1
1
I'm trying to know if this is possible at all. So far it doesn't look that great. Let's imagine I wanted to list all my current Google Authenticator passwords somewhere. That list would update once there's a new set. Is this possible at all? I remember back when Blizzard made their authenticator. You would basically have to enter the recovery key/password from their app into a program, which could then show your authenticator on the screen and on your phone or physical device (yeah they sold those). I imagine they used TOTP just like Google Authenticator does. So my real question is: I have my x amount of Google Authenticator passwords, which refreshes every 30 seconds. Can I pull these out and show them in another program? Java? Python? Anything? I assume "reverse engineering the algorithm" and brute forcing the keys (like grab 100 keys and work out the next key) would be impossible, as these are server-client based.. right?
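The scheme the question above asks about is a published standard (RFC 6238 TOTP over RFC 4226 HOTP), so given the secret, the codes can be reproduced anywhere; here is a stdlib-only sketch, checked against the RFC test vectors:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 TOTP from a base32 secret (the value an
    authenticator app stores when you scan the QR code)."""
    key = base64.b32decode(secret_b32.upper().replace(' ', ''))
    t = time.time() if for_time is None else for_time
    counter = struct.pack('>Q', int(t // period))           # 8-byte big-endian step
    digest = hmac.new(key, counter, hashlib.sha1).digest()  # HOTP inner HMAC
    offset = digest[-1] & 0x0F                              # dynamic truncation
    number = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test secret: ASCII "12345678901234567890", base32-encoded.
secret = base64.b32encode(b'12345678901234567890').decode()
print(totp(secret, for_time=59))  # → 287082 per the RFC test vectors
```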
Include auto generated primary key in Django ModelForm fields
41,951,490
3
3
356
0
python,django,django-models,django-forms
Are only the model fields explicitly declared visible in the ModelForm? Yes. Generally you don't want to mess with this field; if the user inputs a value for the id field it's very likely to be duplicated, so this is something you want Django to take care of for you.
0
0
0
0
2017-01-31T06:59:00.000
1
1.2
true
41,951,447
0
0
1
1
I have a model where I didn't specify a primary key and Django generated one for me. Now I create a ModelForm for the model and I have specified id in the fields section of ModelForm. However, in my ModelForm object, the id field is not present. Are only the model fields explicitly declared visible in the ModelForm?
Google Stackdriver does not show trace
45,255,813
0
2
264
0
python,google-app-engine,google-cloud-logging
Google has in the mean time update the cloud console and debugger, which now does contain full stack traces for Python.
0
1
0
0
2017-01-31T21:24:00.000
1
1.2
true
41,967,742
0
0
1
1
Previously when an error occurred in my application I could find a trace of the entire code to where it happened ( file, line number ). In the Google Cloud console. Right now I only receive a request ID and a timestamp, with no indication of a trace or line number in the code when in the 'logging' window in the Google Cloud Console. Selecting a 'log event' only shows some sort of JSON structure of a request, but not anything about the code or any helpful information what went wrong with the application. What option should be selected in the google cloud console to show a stack trace for Python App Engine applications?
Nginx non-responsive while celery is running
42,014,392
0
0
233
0
python,django,nginx,redis,wsgi
Figured this out after a few days. We were using a django app called django-health-check. It has a component called health_check_celery3 that was in the installed apps. This was having trouble loading while celery was running, and thus causing the whole app to stall. After removing it, celery runs as it should.
0
1
0
0
2017-01-31T23:47:00.000
2
0
false
41,969,597
0
0
1
1
I have a django app configured to run behind nginx using uWSGI. On a separate machine I am running celery, and pushing long running tasks from the webserver to the task machine. The majority of the task I/O is outbound http requests, which go on for an hour or more. The task broker is redis. When the tasks run for more than a minute or two, the webserver becomes unresponsive (503 errors). There are no errors raised anywhere within the python app. The tasks complete normally, after which the webserver continues handling requests. Has anyone experienced this before, and if so, how did you deal with it? Thanks
How to perform AWS DynamoDB backup and restore operations by utilizing minimal read/write units?
42,009,940
1
3
1,135
1
python-2.7,amazon-web-services,amazon-dynamodb,amazon-dynamodb-streams
Option #1 and #2 are almost the same- both do a Scan operation on the DynamoDB table, thereby consuming maximum no. of RCUs. Option #3 will save RCUs, but restoring becomes a challenge. If a record is updated more than once, you'll have multiple copies of it in the S3 backup because the record update will appear twice in the DynamoDB stream. So, while restoring you need to pick the latest record. You also need to handle deleted records correctly. You should choose option #3 if the frequency of restoring is less, in which case you can run an EMR job over the incremental backups when needed. Otherwise, you should choose #1 or #2.
0
0
0
1
2017-02-01T07:14:00.000
2
0.099668
false
41,973,955
0
0
1
1
We are looking for a solution which uses minimum read/write units of DynamoDB table for performing full backup, incremental backup and restore operations. Backup should store in AWS S3 (open to other alternatives). We have thought of few options such as: 1) Using python multiprocessing and boto modules we were able to perform Full backup and Restore operations, it is performing well, but is taking more DynamoDB read/write Units. 2) Using AWS Data Pipeline service, we were able to perform Full backup and Restore operations. 3) Using Dynamo Streams and kinesis Adapter/ Dynamo Streams and Lambda function, we were able to perform Incremental backup. Are there other alternatives for Full backup, Incremental backup and Restore operations. The main limitation/need is to have a scalable solution by utilizing minimal read/write units of DynamoDb table.
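The restore caveat in the answer above (keep only the latest copy of each record, honor deletes) can be sketched in plain Python; the event shape here is made up for illustration and is not the actual DynamoDB Streams record format:

```python
def collapse_incremental_backup(events):
    """Reduce a list of change events to the final state per key,
    dropping keys whose last event is a delete."""
    state = {}
    for event in sorted(events, key=lambda e: e['ts']):  # replay in time order
        if event['type'] == 'REMOVE':
            state.pop(event['key'], None)                # honor deletions
        else:                                            # INSERT or MODIFY
            state[event['key']] = event['item']          # later copy wins
    return state

events = [
    {'ts': 1, 'type': 'INSERT', 'key': 'a', 'item': {'v': 1}},
    {'ts': 2, 'type': 'MODIFY', 'key': 'a', 'item': {'v': 2}},
    {'ts': 3, 'type': 'INSERT', 'key': 'b', 'item': {'v': 9}},
    {'ts': 4, 'type': 'REMOVE', 'key': 'b', 'item': None},
]
print(collapse_incremental_backup(events))  # → {'a': {'v': 2}}
```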
Is it normal that the Django site I recently deployed on Apache is always on?
41,976,000
1
0
48
0
python,django,apache,ssh
Your question is confusing. If you deployed it with Apache, it's running through Apache and not through runserver. You might have additionally started runserver, but that is not what is serving your site.
0
0
0
0
2017-02-01T08:21:00.000
1
1.2
true
41,974,959
0
0
1
1
I recently deployed a Django site on a DigitalOcean droplet through Apache. I did python manage.py runserver through ssh and now the Django site is running. However, it stayed on even after the ssh session expired (understandable because it's still running on the remote server) but how do I shut it down if I need to? Also, due to this, I don't get error messages on the terminal if something goes wrong like I do when I develop locally. What would be a fix for this?
Python WSGI missing request header 'If-None-Matches'
42,309,756
0
0
103
0
python,http,header,wsgi
Yeah, so the problem is, the header is called "If-None-Match", which is not plural.
0
0
0
0
2017-02-01T10:20:00.000
1
1.2
true
41,977,176
0
0
1
1
I'm using Python & WSGI to create a web application. Currently I'm loading the server with wsgiref.simple_server.make_server . However, I'm running into the problem that not all request headers are given to my application. Specifically the header "If-None_matches". The browser is sending it, but I don't get an environment variable like "HTTP_IF_NONE_MATCHES" for some reason. Anyone knows what is going on? Thanks you guys.
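The accepted fix is simply the header's real name; the CGI-style mapping WSGI applies to request headers can be sketched as:

```python
def wsgi_environ_key(header_name):
    """How a WSGI server exposes an HTTP request header in environ:
    uppercased, dashes replaced by underscores, HTTP_ prefixed."""
    return 'HTTP_' + header_name.upper().replace('-', '_')

print(wsgi_environ_key('If-None-Match'))    # → HTTP_IF_NONE_MATCH
# The misspelled plural the question looked for simply never arrives:
print(wsgi_environ_key('If-None-Matches'))  # → HTTP_IF_NONE_MATCHES
```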
Do I ever have to decrypt S3-encrypted files?
41,987,427
7
1
3,317
1
python,amazon-web-services,encryption,amazon-s3
The "server-side" encryption you have enabled turns on encryption at rest. Which means the file is encrypted while it's sitting on S3. But S3 will decrypt the file before it sends you the data when you download the file. So there is no change to how you handle the file when downloading it if the file is encrypted or not. This type of encryption does not protect the file if the file is downloaded via valid means, such as when using the API. It only protects the file from reading if someone were to circumvent the S3 data center or something like that. If you need to protect the file, such that it must be decrypted when downloaded, then you need to encrypt it client-side, before uploading it to S3. You can use any client-side encryption scheme you deem worthy: AES256, etc. But S3 won't do it for you.
0
0
0
0
2017-02-01T18:34:00.000
1
1.2
true
41,987,133
0
0
1
1
I'm using S3 instead of KMS to store essentially a credentials file, and Python to read the file's contents. I manually set the file encrypted by clicking on it in S3, going to Properties - Details - Server Side Encryption:AES-256 And in my Python script, I read the key without making changes from when I read the file when it was unencrypted. And I was also able to download the file and open it without having to do anything like decrypting it. I was expecting to have to decrypt it, so I'm a little confused. I'm just unable to understand what server-side encryption protects against. Would anyone already with access to S3 or the S3 bucket with the key/file be able to read the file? Who wouldn't be able to open the file?
How to run two different python version in single Django project?
41,987,877
1
1
82
0
python,django,python-2.7,python-3.x
In a word, no. An app built for Django 1.4 will almost certainly not work on Django 1.9. Django does usually offer backwards compatibility, but only on revision numbers of the minor version. That is, you might expect 1.4.22 to run code written for any 1.4.x without any change necessary, but a 1.5 release would introduce backwards-incompatible changes.
0
0
0
0
2017-02-01T19:03:00.000
1
1.2
true
41,987,619
1
0
1
1
Currently I am working on a Django project with Python 3.5 and Django 1.9.2. I want to integrate one app (module), which was built with Python 2.7 and Django 1.4, from a different Django project into my latest project. Can I run two different apps with different Python and Django versions in a single Django project?
Does Python requests session keep page active?
42,003,480
0
1
194
0
python,python-2.7,session,request,phantomjs
You don't need to keep re-sending requests; as long as you keep the Python application (and its session) running, you should be good.
0
0
1
0
2017-02-02T13:27:00.000
1
0
false
42,003,456
0
0
1
1
So I am currently writing a script that will allow me to wait on a website that has a queue page before I can access the contents. Essentially, the queue page is where they let people in randomly. In order to increase my chance of getting in faster, I am writing a multi-threaded script and having each thread wait in line. The first thing that came to my mind is: would session.get() work in this case? If I send a session GET request every 10 seconds, would I hold my position in the queue, or would I end up at the end? Some info about the website: they randomly let people in. I am not sure if refreshing the page resets your chance or not, but the best thing would be to leave the page open and let it do its thing. I could use phantomjs, but I would rather not have over 100 headless browsers open, slowing down my program and computer.
Bigcommerce Python API, how do I create a product with an image?
48,835,159
-2
3
438
0
python,e-commerce,bigcommerce
This will create the product on the BigCommerce website. You create the image after creating the product, by entering the following line. The image_file tag should be a fully qualified URL pointing to an image that is accessible to the BigCommerce website, being found either on another website or on your own webserver. api.ProductImages.create(parentid=custom.id, image_file='http://www.evenmore.co.uk/images/emgrab_s.jpg', description='My image description')
0
0
1
0
2017-02-02T13:27:00.000
1
-0.379949
false
42,003,461
0
0
1
1
how do I upload an image (from the web) using Bigcommerce's Python API? I've got this so far: custom = api.Products.create(name='Test', type='physical', price=8.33, categories=[85], availability='available', weight=0) Thank you! I've tried almost everything!
Django All-Auth Role Based Signup
42,011,936
1
0
376
0
python,django,django-allauth
@pennersr was kind enough to answer this on the allauth github page: This truly all depends on how you model things, there is nothing in allauth that blocks you from implementing the above. One way of looking at things is that the signup form is not different at all. It merely contains an additional switch that indicates the type of user account that is to be created. Then, it is merely a matter of visualizing things properly, if you select type=employer, then show a different set of fields compared to signing up using type=developer. If you don't want such a switch in your form, then you can store the type of account being created somewhere in the session, and refer to that when populating the account.
0
0
0
0
2017-02-02T15:34:00.000
1
0.197375
false
42,006,246
0
0
1
1
Having read many stack overflow questions, tutorials etc on all-auth I keep getting the impression that it only supports the registration of one type of user per project. I have two usecases A business user authenticates and registers his business in one step. A developer user authenticates and just fills in the name of his employer (software company). I do not want the developer to see the business fields when he signs up. i.e his signup form is different. If, in fact signup should be common and the user specific details should be left to a redirect, how to accomplish this from social auth depending on user type?
Downgrade Sphinx from version 1.5.2. to version 1.4.9
48,073,039
2
0
1,461
0
python,python-sphinx
In order to install a specific version of Sphinx, type in the terminal: pip install sphinx==1.4.9
0
0
0
0
2017-02-02T15:47:00.000
1
0.379949
false
42,006,500
0
0
1
1
The search functionality in my Sphinx doc has stopped working after upgrading to version 1.5.1. I'm using the sphinx_rtd_theme. How do I downgrade Sphinx to 1.4.9?
Website error after scraping
42,006,894
4
2
337
0
python,web-scraping
Apparently your IP was banned by the website for suspicious activity. There are a couple of ways around that: talk to the website owners (this is the most straightforward and nicest way), or change your IP, e.g. by connecting through a pool of public proxies or Tor. The latter is a little bit dirty and not so robust; e.g. you can still be banned by your user-agent or some other properties of your scraper.
0
0
1
0
2017-02-02T15:57:00.000
1
0.664037
false
42,006,758
0
0
1
1
I made a simple scraper that accesses an album, and scrapes lyrics for each song from azlyrics.com. After about an hour of working, the website crashed, with an error: Chrome: www.azlyrics.com didn’t send any data. ERR_EMPTY_RESPONSE Tor, firefox, waterfox: The connection was reset The connection to the server was reset while the page was loading. It's the same for all devices on my home network. If I use mobile data to access it via my phone it works fine. I tried fixing it with ipconfig /release /renew, but it didn't work. I'm at a loss for what else I could do or why it even happened. Any help is greatly appreciated.
Leave Debug on in Django for internal application
42,007,400
1
0
162
0
python,django,debugging
"Internal" is a relative term. People or machines on the internal network can still be considered attackers.
0
0
0
0
2017-02-02T15:58:00.000
3
0.066568
false
42,006,791
0
0
1
3
I'm working on a Django project that I took over for someone else that is only used internally. It's not deployed to a website and can only be accessed on a local network. The previous developer had left DEBUG = True in settings.py. Django docs really emphasize that leaving DEBUG=True when the site is in production is bad. The site is inaccessible by anyone not on the local network, and is only even looked at by ~5 people regularly. Aside from security reasons, is there any other downside to operating permanently in DEBUG mode?
Leave Debug on in Django for internal application
42,006,881
3
0
162
0
python,django,debugging
Debug mode might leak a bit of memory. Additionally, it is much better for production systems, however small, to email their administrator with the full error message and stack trace (which Django does by default when DEBUG=False) than to show it on the browser. This way the administrator knows exactly what happened instead of trying to reproduce it with vague information from the users ("I clicked here and then I think I clicked there and then there was this message"). You need to set the ADMINS and EMAIL_* settings correctly though.
0
0
0
0
2017-02-02T15:58:00.000
3
0.197375
false
42,006,791
0
0
1
3
I'm working on a Django project that I took over for someone else that is only used internally. It's not deployed to a website and can only be accessed on a local network. The previous developer had left DEBUG = True in settings.py. Django docs really emphasize that leaving DEBUG=True when the site is in production is bad. The site is inaccessible by anyone not on the local network, and is only even looked at by ~5 people regularly. Aside from security reasons, is there any other downside to operating permanently in DEBUG mode?
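The ADMINS and EMAIL_* settings the answer above mentions look roughly like this in settings.py; the hosts and addresses are placeholders, not values from the question:

```python
# settings.py fragment (sketch): with DEBUG off, Django emails
# unhandled-exception reports to ADMINS instead of rendering a traceback.
DEBUG = False
ALLOWED_HOSTS = ['intranet.example.local']   # placeholder internal host

ADMINS = [('Ops', 'ops@example.local')]      # placeholder recipient
SERVER_EMAIL = 'django@example.local'        # From: address on error mails
EMAIL_HOST = 'smtp.example.local'            # placeholder mail relay
EMAIL_PORT = 25
```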
Leave Debug on in Django for internal application
42,006,896
1
0
162
0
python,django,debugging
The Django docs warn to never deploy with debug on: Never deploy a site into production with DEBUG turned on. Did you catch that? NEVER deploy a site into production with DEBUG turned on. Later, they give a reason that isn't related to security: It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server.
0
0
0
0
2017-02-02T15:58:00.000
3
0.066568
false
42,006,791
0
0
1
3
I'm working on a Django project that I took over for someone else that is only used internally. It's not deployed to a website and can only be accessed on a local network. The previous developer had left DEBUG = True in settings.py. Django docs really emphasize that leaving DEBUG=True when the site is in production is bad. The site is inaccessible by anyone not on the local network, and is only even looked at by ~5 people regularly. Aside from security reasons, is there any other downside to operating permanently in DEBUG mode?
Any solutions for unsupported fonts to be able to print via POSBOX for odoo POS?
44,111,743
0
0
542
0
openerp,python-unicode,point-of-sale
Try this; maybe it could help you solve the problem: make changes in the default addons of your Odoo, because there is some configuration there related to font printing. There is a module named hw_escpos; in that module you can find the function where the font configuration lives, so just add the font you want and try to print.
0
0
0
0
2017-02-03T05:09:00.000
1
0
false
42,017,180
0
0
1
1
Any solutions for unsupported fonts to be able to print via POSBOX for odoo POS? POS BOX does not support Myanmar font. We need to print via POS BOX because we need multiple printings (to kitchen 1, to kitchen 2, to drink counter, etc ...). Any solutions for this issue, please?
SQLAlchemy ORM Load Cols Only not working
42,028,145
1
3
1,278
1
python,sqlalchemy
If you're using a database session, you can simply specify the columns directly. session.query(User.email, User.name).filter(and_(User.id == id, User.status == 1)).first()
0
0
0
0
2017-02-03T08:27:00.000
1
0.197375
false
42,019,810
0
0
1
1
I'm using query like this: user = User.query.options(load_only("email", "name")).filter(and_(User.id == id, User.status == 1)).first() I want to get only email and name column as an User object. But it returns all columns. I can't find any solutions. Can anybody help? Thanks
Encrypting credentials and reusing them securely
42,026,136
0
0
207
0
python,security,encryption,cryptography
Do the encryption and decryption on the second server (encryption server). Pass the password to the encryption server along with an id for encryption and it returns the encrypted password to store in the DB. When the password is needed pass the encrypted password to the encryption server for decryption. Have the encryption server monitor request activity, if an unusual number of requests are received sound an alarm and in extreme cases stop processing requests. Make the second server very secure. No Internet access, minimal access accounts, 2-factor authentication. The encryption server becomes a poor-man's HSM (Hardware Encryption Module).
0
0
0
1
2017-02-03T12:07:00.000
2
0
false
42,023,939
0
0
1
1
I'm currently building a website where the users would enter their credentials for another web service that I'm going to scrape to get their data. I want to make sure that when I save their credentials in my database, I'm using the best encryption possible and the best architecture to ensure the highest level of security. The first idea that I had in mind was to encrypt the data using an RSA pub key (PBKDF2, PKCS1_OAEP, AES 256bit... ???) and then allowing my scrapping script to use the private key to decrypt the credentials and use them. But if my server is hacked, the hacker would have access to both the database and the private key, since it will be kept on my server that runs the scrapping script and hosts the DB. Is there an architecture pattern that solves this ? I've read that that there should be a mix of hashing and encryption to enable maximum security but hashing is uni directional and it doesn't fit my use case since I will have to reuse the credentials. If you can advise me with the best encryption cypher/pattern you know it could be awesome. I'm coding in python and I believe PyCrypto is the go-to library for encryption. (Sorry I have very little knowledge about cryptography so I might be confusing technologies)
Does Python garbage collect when Heroku warns about memory quota vastly exceeded (R15)?
42,077,350
0
2
609
0
python,django,heroku,garbage-collection,celery
Looks like the problem is that I'm not using .iterator() to iterate over the main queryset. Even though I'm freeing the data structures I'm creating after each iteration, the actual query results are all cached. Unfortunately, I can't use .iterator(), because I use prefetch_related extensively. I need some kind of hybrid method. I think it will involve processing the top-level queryset in batches. It won't completely have the advantage of a finite number of queries that prefetch_related has, but it will be better than one query per model object.
0
0
0
0
2017-02-03T17:59:00.000
1
0
false
42,030,293
0
0
1
1
I have a background task running under Celery on Heroku, that is getting "Error R14 (Memory quota exceeded)" frequently and "Error R15 (Memory quota vastly exceeded)" occasionally. I am loading a lot of stuff from the database (via Django on Postgres), but it should be loading up a big object, processing it, then disposing of the reference and loading up the next big object. My question is, does the garbage collector know to run before hitting Heroku's memory limit? Should I manually run the gc? Another thing is that my task sometimes fails, and then Celery automatically retries it, and it succeeds. It should be deterministic. I wonder if something is hanging around in memory after the task is done, and still takes up space when the next task starts. Restarting the worker process clears the memory and lets it succeed. Maybe Django or the DB has some caches that are not cleared? I'm using standard-2x size. I could go to performance-m or performance-l, but trying to avoid that as it would cost more money.
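The hybrid the answer sketches, processing the top-level queryset in batches so that only one batch's objects are resident at a time, can be illustrated with a hypothetical plain-Python helper (not a Django API; in Django each batch would be one prefetch_related-style query):

```python
def batched(iterable, size):
    """Yield lists of at most `size` items so only one batch lives in memory."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

chunks = list(batched(range(10), 4))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each yielded batch can be processed and dropped before the next is materialized, which keeps peak memory bounded by the batch size rather than the full result set.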
Fast data access for AWS Lambda function
42,035,255
1
2
623
0
python,amazon-web-services,aws-lambda,amazon-kinesis-firehose
Preload the data into a Redis server. This is exactly what Redis is good at.
0
0
0
1
2017-02-04T00:21:00.000
1
1.2
true
42,035,082
0
0
1
1
I have a python-based lambda function which triggers on s3 put operations based on a kinesis firehose stream which sends data at the rate of around 10k records per minute. Right now the lambda function just performs some small fixups of the data and delivers it to a logstash instance in batches of 100. The lambda execution time is 5-12 secs, which is fine as it runs every minute. We're looking at enriching the streamed data with some more info before sending it to logstash. Each message coming in has an "id" field, and we'd like to look up that id against a db of some sort, grab some extra info from the db and inject that into the object before passing it on. Problem is, I cannot make it go fast enough. I tried loading all the data (600k records) into DynamoDB, and performing lookups on each record in the loop in the lambda function. This slows down the execution way too much. Then I figured we don't have to look up the same id twice, so I'm using a list obj to hold already "looked-up" data - this brought the execution time down somewhat, but still not nearly close to what we'd like. Then I thought about preloading the entire DB dataset. I tested this - simply dumping all 600k records from dynamodb into a "cache list" object before starting to loop through each record from the s3 object. The data dumps in about one minute, but the cache list is now so large that each lookup against it takes 5 secs (way slower than hitting the db). I'm at a loss on what to do here - I totally realize that lambda might not be the right platform for this and we'll probably move to some other product if we can't make it work, but first I thought I'd see if the community had some pointers as to how to speed up this thing.
Build a Server to Receive and Send User's Private Information
42,036,454
1
1
30
0
python,node.js,web-applications,server
You don't need to create your own encrypted communication protocol. Just serve all traffic over https. If you also wish to encrypt the data before storing it on a database you can encrypt it on arrival to the server. Check out Express.js for the server, Passport.js for authentication and search for 256-bit encryption on npm. There are quite a few implementations.
0
0
1
1
2017-02-04T03:58:00.000
1
1.2
true
42,036,307
0
0
1
1
I'm getting started out creating a website where users can store and get (on user request) private information they store on the server. Since the information is private, I would also like to provide 256 bit encryption. So, how should I go about it? Should I code the back end server stuff in node.js or Python, since I'm comfortable with both languages? How do I go about providing a secure server to the user? And if in the future, I would like to expand my service to mobile apps for Android and iOS, what would be the process? Please try explaining in detail since that would be a great help :)
Is there a REPL like iPython for Nodejs?
57,401,854
9
31
7,289
0
node.js,ipython,read-eval-print-loop,ijavascript
I've been looking for "ipython for node" for years and here's how I would answer your question: No.
0
0
1
0
2017-02-04T11:33:00.000
4
1
false
42,039,868
1
0
1
1
Is there any kind of "repl + extra features" (like showing docs, module autoreload etc.), like iPython, but for Nodejs? And I mean something that runs locally & offline. This is a must. And preferably something that works both in terminal mode and has an optional nicer GUI on top (like iPython + iPythonQT/Jupyter-qtconsole). The standard Nodejs repl is usable, but it has horrible usability (clicking the up-arrow cycles through the repl history by line instead of by multi-line command, as you would expect any sane repl to work for interactively experimenting with things like class statements), and is very bare-bones. Every time I switch from iPython to it it's painful. A browser's repl like Chrome's, which you can run for node too by starting a node-inspector debug session, is more usable... but also too cumbersome.
Apache2 uses Python CGI script, it is Security hole?
42,043,536
0
0
102
0
android,python,apache2,cgi
Hooray for worrying about security! Yes. There are always security holes. Use HTTPS rather than HTTP... Everywhere... (get free certificate from letsencrypt.com) Submitting data should generally use POST, not GET. (POST and HTTPS means the data is encrypted during transport. GET requests data via the URL which itself isn't encrypted. Mobile vs. Desktop isn't an issue Json vs. whatever isn't an issue Django vs. Python CGI isn't really an issue: if not properly configured either can have security issues.
0
0
0
0
2017-02-04T17:37:00.000
1
1.2
true
42,043,418
0
0
1
1
1. Hi, I have a Python CGI script on an Apache2 server. 2. I want to send data from Apache2; the format is JSON. 3. The data is sent to a mobile application. 4. The mobile application requests data using the HTTP GET method. 5. The application uses HttpURLConnection. But people say this is a security hole. Is it really a security hole? Could the solution be Django on Apache2, or SSL?
How to run a local python script from django project hosted on aws?
42,053,909
0
0
59
0
python,django
Well, you simply need to find a way for the two of them to communicate without opening huge security holes. My suggestion would be a message queue (RabbitMQ, Amazon SQS). The AWS application writes jobs to the message queue, and a local script runs the worker, which waits for messages to be written to the queue for it to pick up.
0
1
0
0
2017-02-05T15:35:00.000
1
1.2
true
42,053,855
0
0
1
1
I have a requirement to run a local Python script, which takes arguments and runs on a local Windows computer, from Python code hosted on AWS.
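A minimal in-process sketch of the message-queue pattern the answer recommends. Here queue.Queue stands in for SQS/RabbitMQ, and the appended string stands in for actually launching the local script (all names are hypothetical, for illustration only):

```python
import queue
import threading

jobs = queue.Queue()   # stand-in for SQS / RabbitMQ
results = []

def worker():
    """Local worker: waits for job messages and 'runs' the script for each."""
    while True:
        msg = jobs.get()
        if msg is None:                 # sentinel tells the worker to stop
            break
        # real version would do e.g. subprocess.run(["python", "script.py", msg])
        results.append(f"ran script with arg {msg}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
jobs.put("42")      # the AWS side would publish this message to the queue
jobs.put(None)      # shut the worker down for this demo
t.join()
print(results)  # ['ran script with arg 42']
```

The real AWS application never talks to the local machine directly; it only writes messages, which is what keeps the local network closed to inbound traffic.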
Django Migrate Change of App Name (active project)
42,257,514
5
1
1,272
0
python,django
I've worked on this since I posted it, and the real answer is what I've synthesized from multiple sources (including other stack exchange posts). So... Everything changed in Django before I started using it. After 1.7, the 'migrations' bit was internalized and posts including the word "South" are about how the world was before 1.7. Further, the complication in my case dealt with the issue of migrations in that the project was already active and had real data in production. There were some posts including a GITHub chunk of code that talked about migrating tables from one App to another App. This is inherently part of the process, but several posts noted that to do this as a "migration" you needed the Migration.py to be in another App. Maybe even an App created for the purpose. In-the-end, I decided to approach the problem by changing the label in the Application class of apps.py in the application in question. In my case, I am changing "pages" to "phpages" but the directory name of my app is still pages. This works for me because the mezzanine app's "pages" sub-App is back in the python library and not a conflict in the filesystem. If this is not your situation, you can solve it with another use of label. So... Step-by-step, my procedure to rename pages to phpages. Create apps.py in the pages sub-directory. In it put: class PagesConfig(AppConfig): name = "pages" label = "phpages" verbose_name = "Purple Hat Pages" Key among these is label which is going to change things. In __init__.py in the pages sub-directory, put default_app_config = "pages.apps.PagesConfig" In your settings.py change the INSTALLED_APPS entry for your app to 'pages.apps.PagesConfig', ... All of your migrations need to be edited in this step. In the dependencies list, you'll need to change 'pages' to 'phpages'. In the ForeignKeys you'll need to also change 'pages.Something' to 'phpages.Something' for every something in every migration file. 
Find these under pages/migrations/nnnn_*.py If you refer to foreign keys in other modules by from pages.models import Something and then use ForeignKey(Something), you're good for this step. If you use ForeignKey('pages.Something') then you need to change those references to ForeignKey('phpages.Something'). I would assume other like-references are the same. For the next 4 steps (7, 8, 9 and 10), I built pagestophpages.sql and added it to the pages sub-directory. It's not a standard django thing, but each test copy and each production copy of the database was going to need the same set of steps. UPDATE django_content_type SET app_label='phpages' WHERE app_label='pages'; UPDATE django_migrations SET app='phpages' WHERE app='pages'; Now... in your database (mine is PostgreSQL) there will be a bunch of tables that start with "pages". You need to list all of these. In PostgreSQL, in addition to tables, there will be sequences for each AutoField. For each table construct ALTER TABLE pages_something RENAME TO phpages_something; For each sequence ALTER SEQUENCE pages_something_id_seq RENAME TO phpages_something_id_seq; You should probably back up the database. You may need to try this a few times. Run your SQL script through your database shell. Note that all other changes can be propagated by source code control (git, svn, etc). This last step must be run on each and every database. Obviously, you need to change pages and phpages to your stuff. You may have more than one table with one auto field and it may not be named something. Another thing of note, in terms of process, is that this is probably a hard point in your development where everything needs to be in sync. Given that we're playing with editing migrations and changing names, you need a hard stop in development so that everything that's going to be changed (dev box, test box, staging box, production box ... and all of their databases) is at the same revision and schema. YMMV.
This is also solving the problem by using the label field of class Application. I chose this method in deference to changing the directory name because it involved fewer changes. I chose not to change the name field because that did not work for me. YMMV. I must say that I'm a little disappointed that myapp/pages conflicts with mezzanine.pages. It looks like some of the reasons are due to the pages slug being used in the database table name (and off the top of my head, I don't see a good solution there). What I don't see that would make sense is the equivalent to "from mezzanine import pages as mpages" or somesuch. The ability to alias imported apps (not talking about apps in my own file tree). I think this might be possible if I pulled the app into my own file tree --- but this doesn't seem to be a sanctioned act, either.
0
0
0
0
2017-02-06T01:53:00.000
1
1.2
true
42,059,381
0
0
1
1
So... I've done a lot of research on this... there are answers, but not complete or appropriate answers. I have an in-use and in-production django "project" in which the "main" application is called "pages" ... for reasonably dumb reasons. My problem is now to add mezzanine ... which has a sub-module mezzanine.pages (seems to be required .... but I'm pretty sure I need it). mezzanine.pages apparently conflicts with "pages" ... Now ... my pages contains a slew of non-trivial models including one that extends user (One-to-One ref), and many references to other app's tables (fortunately only outbound, ForeignKey). It also has management/commands and about 20 migrations of it's own history. I gather I either have to changes pages to mypages or is there another route (seemingly changing mezzanine.pages seems wrong-headed). for reference, The project is on Django 1.8 right now, so the preferred answer includes migrations.
Does using a vpn interrupts python sessions requests which are using the same cookies over and over?
42,079,994
0
1
397
0
python,web-scraping,session-cookies,vpn
HTTP 400 is returned, if the request is malformed. You should inspect the request being made, when you get the error. Perhaps, it is not properly encoded. VPN should not cause an HTTP 400.
0
0
1
0
2017-02-07T00:44:00.000
1
0
false
42,079,848
0
0
1
1
I am scraping data from peoplefinders.com, a website which is not accessible from my home country, so I am basically using a vpn client. I log in to this website with a session POST, and through the same session I get items from different pages of the same website. The problem is that I do the scraping in a for loop with GET requests, but for some reason I receive a 400 error response after several iterations. The error occurs after scraping 4-5 pages on average. Is it due to the fact that I am using a vpn connection? Don't all requests from the same session contain the same cookies and hence allow me to stay logged in while scraping different pages of the same website? Thank you
What is the best way to build and expose a Machine Learning model REST api?
69,476,803
0
5
5,677
0
java,python,rest,machine-learning,scikit-learn
I have been experimenting with this same task and would like to add another option, not using a REST API: The format of the Apache Spark models is compatible in both the Python and Java implementations of the framework. So, you could train and build your model in Python (using PySpark), export, and import on the Java side for serving/predictions. This works well. There are, however, some downsides to this approach: Spark has two separate ML packages (ML and MLLib) for different data formats (RDD and dataframes) The algorithms for training models in each of these packages are not the same (no model parity) The models and training classes don't have uniform interfaces. So, you have to be aware of what the expected format is and might have to transform your data accordingly for both training and inference. Pre-processing for both training and inference has to be the same, so you either need to do this on the Python side for both stages or somehow replicate the pre-processing on the Java side. So, if you don't mind the downsides of a REST API solution (availability, network latency), then this might be the preferable solution.
0
0
0
0
2017-02-07T02:19:00.000
6
0
false
42,080,598
0
1
1
3
I have been working on designing REST APIs using Spring Framework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning models and using them to make predictions using sklearn in Python. Now I have a use case wherein I want to expose a REST API which builds a Machine Learning model, and another REST API which makes the prediction. What architecture should help me to achieve the same? (An example of the same may be Amazon Machine Learning. They have exposed REST APIs for generating models and making predictions.) I searched around the internet and found the following ways: Write the whole thing in Java - ML model + REST API Write the whole thing in Python - ML model + REST API But playing around with Machine Learning, its models and predictions is really easier and more supported in Python with libraries like sklearn, rather than Java. I would really like to use Python for the Machine Learning part. I was thinking about an approach wherein I write the REST API using Java but use a sub-process to make Python ML calls. Will that work? Can someone help me regarding the probable architectural approaches that I can take? Also, please suggest the most feasible solution. Thanks in advance.
What is the best way to build and expose a Machine Learning model REST api?
46,918,647
0
5
5,677
0
java,python,rest,machine-learning,scikit-learn
I'm using Node.js as my REST service and I just call out to the system to interact with my Python that holds the stored model. You could always do that if you are more comfortable writing your services in Java; just make a call to Runtime exec or use ProcessBuilder to call the Python script and get the reply back.
0
0
0
0
2017-02-07T02:19:00.000
6
0
false
42,080,598
0
1
1
3
I have been working on designing REST APIs using Spring Framework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning models and using them to make predictions using sklearn in Python. Now I have a use case wherein I want to expose a REST API which builds a Machine Learning model, and another REST API which makes the prediction. What architecture should help me to achieve the same? (An example of the same may be Amazon Machine Learning. They have exposed REST APIs for generating models and making predictions.) I searched around the internet and found the following ways: Write the whole thing in Java - ML model + REST API Write the whole thing in Python - ML model + REST API But playing around with Machine Learning, its models and predictions is really easier and more supported in Python with libraries like sklearn, rather than Java. I would really like to use Python for the Machine Learning part. I was thinking about an approach wherein I write the REST API using Java but use a sub-process to make Python ML calls. Will that work? Can someone help me regarding the probable architectural approaches that I can take? Also, please suggest the most feasible solution. Thanks in advance.
What is the best way to build and expose a Machine Learning model REST api?
42,127,532
0
5
5,677
0
java,python,rest,machine-learning,scikit-learn
Well, it depends on the situation in which you use Python for ML. For classification models like random forest, use your training dataset to build the tree structures and export them as a nested dict. Whatever language you used, transform the model object into a plain data structure and then you can use it anywhere. But if your situation involves large-scale, real-time, distributed datasets, then as far as I know the best way may be to deploy the whole ML process on servers.
0
0
0
0
2017-02-07T02:19:00.000
6
0
false
42,080,598
0
1
1
3
I have been working on designing REST APIs using Spring Framework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning models and using them to make predictions using sklearn in Python. Now I have a use case wherein I want to expose a REST API which builds a Machine Learning model, and another REST API which makes the prediction. What architecture should help me to achieve the same? (An example of the same may be Amazon Machine Learning. They have exposed REST APIs for generating models and making predictions.) I searched around the internet and found the following ways: Write the whole thing in Java - ML model + REST API Write the whole thing in Python - ML model + REST API But playing around with Machine Learning, its models and predictions is really easier and more supported in Python with libraries like sklearn, rather than Java. I would really like to use Python for the Machine Learning part. I was thinking about an approach wherein I write the REST API using Java but use a sub-process to make Python ML calls. Will that work? Can someone help me regarding the probable architectural approaches that I can take? Also, please suggest the most feasible solution. Thanks in advance.
Django: best way to convert data from model to view
42,090,129
1
2
392
0
python,django
If you are using Django Rest Framework, then you can simply use serializers. But I don't think that is a case. What you want to accomplish seems very similar to the role of django forms, but as such they are only used (conventionally) for saving/updating models i.e. POST requests. Now either you can define a new class for filtering/rendering and use that in your view or just go ahead and use django forms which would automatically provide basic cleaning for different fields.
0
0
0
0
2017-02-07T12:30:00.000
2
1.2
true
42,089,967
0
0
1
1
My django app displays the objects from the database in a table view. The problem is that these objects (models) are pretty complex: they have 50+ fields. Nearly for each field I have to do some formatting: convert phone numbers from int 71234567689 to "+7 (123) 456789" display long prices with spaces: "7 000 000" instead of "7000000" construct the full address from several fields like "street", "house" and so on (the logic is pretty complex with several if-else-s) and so on Django templating language has several useful tags for simple cases but I guess it is not suitable in the general case (like mine) for serious formatting. Creating @property-s in the model class is also not an option because the question is about rendering and is not related to the model. So I guess I should do my conversions in the view: create a dict for each obj, fill it with converted data and pass it to the template. But! The model has a lot of fields and I don't want to copy them all :) Moreover, it would be great to preserve the model structure to use it in a django template (say, regroup) and query set laziness. So the greatest way would be to instruct django "how to render". Is it possible?
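The per-field formatting the question describes can live in small helper functions that the view calls while building its per-row data. These are hypothetical sketches; the exact phone grouping is an assumption, since the question's own example drops a digit:

```python
def format_phone(number):
    """Assumes an 11-digit number such as 71234567689."""
    s = str(number)
    return f"+{s[0]} ({s[1:4]}) {s[4:]}"

def format_price(price):
    """Insert spaces into long prices: 7000000 -> '7 000 000'."""
    return f"{price:,}".replace(",", " ")

print(format_phone(71234567689))  # +7 (123) 4567689
print(format_price(7000000))      # 7 000 000
```

Keeping such helpers in a separate module keeps both the model (no rendering logic) and the template (no complex expressions) clean.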
Race condition with AWS Lambda
42,102,617
3
0
3,045
0
python,amazon-web-services,amazon-dynamodb,aws-lambda,race-condition
Instead of deleting the hostname from DynamoDB, why not lock the hostname in DynamoDB? If each item in DynamoDB corresponds to a unique hostname, then you can use a conditional write like the following and only try to acquire a hostname if it is not already acquired. You condition on the instanceid attribute. Unused hostname: {hostname: 'tom-sawyer'} UpdateItem to do a conditional write on {hostname: 'tom-sawyer'} where the condition is attribute_not_exists(instanceid) and the update expression is SET instanceid = :instanceid and the ExpressionAttributeValues map is {:instanceid: 'deadbeef'}. Basically, you only allow DynamoDB to assign an instance to a hostname when it does not have an instanceid set. Used hostname: {hostname: 'tom-sawyer', 'instanceid'='deadbeef'} UpdateItem to do a conditional write on {hostname: 'tom-sawyer'} where the condition is attribute_exists(instanceid) AND instanceid = :instanceid and the update expression is REMOVE instanceid. Basically, you only allow DynamoDB to un-assign a specific instance when the instance id being removed is set and matches the record for that hostname.
0
0
0
0
2017-02-07T18:45:00.000
1
1.2
true
42,097,562
0
0
1
1
Workflow: I have a python AWS lambda function that basically looks up a pool of hostnames in DynamoDB (json), attaches one of them to an instance (that spins up) and then deletes that hostname from DynamoDB so it is not used again for another instance. Problem: As soon as an instance spins up, it sends a notification to the SNS service that triggers the lambda to assign it a hostname from the available hostnames. There are times when multiple instances come up together and they both trigger the same lambda function simultaneously (2 threads). There could be a race condition where both functions look at DynamoDB for available hostnames and assign the same one. How do I resolve this problem? Any ideas?
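The conditional-write semantics the answer describes can be sketched in plain Python. This is an illustration only: in real DynamoDB the check-and-set is enforced atomically server-side via a ConditionExpression such as attribute_not_exists(instanceid), which is exactly what makes it race-safe across concurrent lambdas:

```python
table = {"tom-sawyer": {}}  # hostname -> item attributes

def acquire(hostname, instance_id):
    """Claim a hostname iff no instance holds it yet (conditional write)."""
    item = table[hostname]
    if "instanceid" in item:           # attribute_not_exists(instanceid) fails
        return False                   # -> ConditionalCheckFailedException
    item["instanceid"] = instance_id   # SET instanceid = :instanceid
    return True

def release(hostname, instance_id):
    """Un-assign only if this instance is the current holder."""
    item = table[hostname]
    if item.get("instanceid") != instance_id:
        return False                   # condition attribute check fails
    del item["instanceid"]             # REMOVE instanceid
    return True

print(acquire("tom-sawyer", "deadbeef"))  # True
print(acquire("tom-sawyer", "cafebabe"))  # False: second lambda loses the race
print(release("tom-sawyer", "deadbeef"))  # True
```

Whichever lambda's write is applied second simply gets a failed condition and retries with the next free hostname.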
compare time.time() & System.Currenttimemillis()
42,098,334
0
2
1,633
0
java,python,timestamp
Note the differences: time.time() returns the time in seconds since the Unix epoch as a floating point number, while System.currentTimeMillis() returns the time in milliseconds since the same epoch as a long. Both values are epoch-based (the epoch is defined in UTC), so no timezone adjustment is needed. To compare the two you only need to convert the units: multiply the Python value by 1000 and round (or divide the Java value by 1000). After that you have both times in the same unit and can compare them directly.
0
0
0
0
2017-02-07T19:07:00.000
2
0
false
42,097,944
1
0
1
1
I am working on multiple projects in Python and Java. I have a timestamp from the Python project as time.time(). I need to compare it with the current timestamp in my Java project as System.currentTimeMillis(). How do I compare a time.time() value with a System.currentTimeMillis() value?
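On the Python side, the comparison reduces to a unit conversion, since both clocks count from the same Unix epoch:

```python
import time

# time.time() -> float seconds since the Unix epoch
# System.currentTimeMillis() -> long milliseconds since the same epoch
py_seconds = time.time()
py_millis = int(round(py_seconds * 1000))  # comparable to currentTimeMillis()
print(py_millis)
```

Equivalently, a Java millisecond value divided by 1000.0 can be compared against time.time() directly.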
How to access several ports of a Docker container inside the same container?
42,114,253
0
0
42
0
python-3.x,networking,nginx,docker
It's not a good practice to put a lot of applications into one container; normally you should split them up, one container per app, since that's the way containers are meant to be used. But if you absolutely want to run many apps in one container, you can use a proxy, or write a Dockerfile that exposes the ports itself.
0
1
0
0
2017-02-07T22:54:00.000
2
0
false
42,101,552
0
0
1
1
I am trying to put an application that listens to several ports inside a Docker image. At the moment, I have one docker image with a Nginx server with the front-end and a Python app: the Nginx runs on port 27019 and the app runs on 5984. The index.html file listens to localhost:5984 but it seems like it only listens to it outside the container (on the localhost of my computer). The only way I can make it work at the moment is by using the -p option twice in the docker run: docker run -p 27019:27019 -p 5984:5984 app-test. Doing so, I generate two localhost ports on my computer. If I don't put the -p 5984:5984 it doesn't work. I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port. How can I make an application inside the container (in this case the index.html at 27019) listen to another port inside the same container, without having to publish both of them? Can it be generalized to more than two ports? The final objective would be to have a complete application running on a single port on a server/computer, while listening to several ports inside Docker container(s).
Object initialization in IronPython
42,208,014
2
1
356
0
c#,python-2.7,ironpython,roslyn
Found it. IronPython can use C# classes via import, and the initializer invocation changes from value = new SomeObject { Name = name } to value = SomeObject(Name = name).
1
0
0
0
2017-02-08T10:00:00.000
1
1.2
true
42,109,930
0
0
1
1
Well, need to translate c# code into IronPython. The current problem is to find the best way to traslate initialization like this for example: case SomeObject.FieldCase: new SomeObject { Width = 600, Height = 400 }.Export(model_, stream); break; Do you have any ideas to make it similar? I'm interesting only in object initialization code, case statement was translated. For translation we use Roslyn, so we can get all syntax nodes. In other cases I make smth like that: model = new Model; model.SomeField = field; model.SomeField2 = field2; But this way is not so easy to develop.
Data transfer between Python and NodeJS in Raspberry Pi
42,114,793
0
0
648
0
node.js,python-2.7,stream,sensors,raspberry-pi3
Depending on the amount of data and the complexity/simplicity that you want to achieve, you can, e.g.: hit an HTTP endpoint of your Node server from the Python program every time there's new data; connect with WebSocket and send new data as messages; connect with TCP once and send new data as new lines; connect with TCP every time there's new data; send a UDP packet with every new piece of data; or, if the Node and Python programs are running on the same system, use IPC, named pipes, etc. There are more ways to do it. All of those can be done with Node and Python.
0
0
0
1
2017-02-08T13:13:00.000
1
0
false
42,114,174
0
0
1
1
I have an idea for a small project where I will try to transfer real-time sensor data, captured and converted to a digital signal using an MCP3008, to a NodeJS server installed on a Raspberry Pi. My question is: what is the most efficient and/or fastest way to transfer data from the Python program to the NodeJS server to be displayed in a webpage? Thanks for your advice.
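The UDP option from the answer is the lightest-weight one for same-host sensor readings. A hypothetical sketch follows; the Python sensor process sends a datagram that the Node server (dgram module) would receive, but both ends are Python here purely so the example is self-contained, and the JSON payload is made up:

```python
import socket

# Receiver: stands in for the Node side (dgram.createSocket("udp4").bind(...))
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0 -> OS picks a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

# Sender: the Python sensor loop would do this for every MCP3008 reading
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'{"adc_channel": 0, "value": 512}', ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
print(data.decode())  # {"adc_channel": 0, "value": 512}
```

UDP is fire-and-forget (no delivery guarantee), which is usually acceptable for a continuous sensor stream; switch to TCP or WebSocket if every sample must arrive.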
PyCharm PRO for Mac GAE upload not working
42,124,236
0
0
40
0
python,macos,google-app-engine,pycharm
I did a clean install recently, and not using the AppEngineLauncher anymore - not sure it even ships with the newer SDK. My GAE is located here: /usr/local/google-cloud-sdk/platform/google_appengine Looks like you might be using an older version of AppEngine SDK
0
1
0
0
2017-02-08T18:09:00.000
1
0
false
42,120,541
0
0
1
1
My colleague and I both have Macs, and we both have PyCharm Professional, same version (2016.3.2) and build (December 28, 2016). We use a repository to keep our project directories in sync, and they are currently identical. Under Preferences, we both have "Enable Google App Engine support" checked, and we both have the same directory shown as "SDK directory", with the same files in that directory. When I choose menu option Tools > Google App Engine > Upload App Engine app..., the App Config Tool panel appears at the bottom of my PyCharm window. The first line is: /usr/bin/python /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/appcfg.py update . and the last line is: appcfg.py > I can also run that update command from a Terminal window. Meanwhile, my colleague can also run the update command from a Terminal window. But when he runs menu option Tools > Google App Engine > Upload App Engine app..., the App Config Tool panel only shows: appcfg.py > We've researched this extensively and made many attempts at repair, no luck so far. Any help will be most appreciated.
Getting what I think is a part of the query string using python 2.7/CGI
42,122,277
0
0
33
0
python-2.7,url,cgi,query-string
Thanks for all the help on what was actually not too complicated a question. What I was looking for was a router/dispatcher that is usually handled by a framework fairly simply through an @route or something similar. Opting for a more efficient approach, all I had to do was import os and then look at os.environ.get('PATH_INFO', '') for all the data I could possibly need. For anyone else following the path I was, that is how I found my way.
0
0
0
0
2017-02-08T19:02:00.000
1
0
false
42,121,512
0
0
1
1
I know I am using the wrong search terms and that's why I haven't been able to suss out the answer myself. However, I cannot seem to figure out how to use the CGI module to pull what I think counts as a query string from the URL. Given a URL www.mysite.com/~usr/html/cgi.py/desired/path/info how would one get the desired/path/info out of the URL? I understand GET and POST requests and know I can use CGI's FieldStorage class to get at that data to fill my Jinja2 templates out and such. But now I want to start routing from a landing page with different templates to select before proceeding deeper into the site. I'm hoping the context is enough to see what I'm asking because I am lost in a sea of terms that I don't know. Even if it's just the right search term, I need something to help me out here.
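The PATH_INFO approach from the answer looks like this in practice. The environment variable is set here by hand to simulate what the web server would set for a request to /cgi.py/desired/path/info (the request itself is hypothetical):

```python
import os

# Simulate the CGI environment the web server would provide
os.environ["PATH_INFO"] = "/desired/path/info"

path_info = os.environ.get("PATH_INFO", "")
segments = [s for s in path_info.split("/") if s]  # drop empty pieces
print(segments)  # ['desired', 'path', 'info']
```

From the segments list a simple dispatcher can pick a template or handler, which is the @route-style routing the answer alludes to.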
How to get a file's timestamp in spotfire?
42,451,598
1
1
1,053
0
timestamp,ironpython,spotfire
To achieve this, you need to add an extra column to one of the Information Links you are using, or have a separate Information Link with only this column element, say "Schedule Refresh Time". Initially, do not be concerned about which DB table or column you use; just pull any column with data type DateTime. Now add this column to your Information Link and edit its SQL. Find the column you added in the SQL query and replace it, changing something like T1."Any Column" as "Schedule Refresh Time" to something like SYSDATE as "Schedule Refresh Time". Done. From now on, whenever the scheduled update pulls data from the Information Link, this column will capture the current server time, which is the time of the scheduled refresh.
0
0
0
0
2017-02-08T22:54:00.000
1
0.197375
false
42,125,195
0
0
1
1
I have a Spotfire template that runs every night using the scheduler, and I want to show its last refresh time on the template. Is there any way I can do it with native Spotfire functionality or using IronPython?
How do I get the first name and last name of a logged in user in Django?
42,127,741
4
8
17,555
0
python,django,views,models
You want the first_name and last_name attributes: request.user.first_name and request.user.last_name
0
0
0
0
2017-02-09T03:26:00.000
2
0.379949
false
42,127,684
0
0
1
1
I need to get the first name and last name of a user for a function in my views. How do I do that? What is the syntax for it? I used request.user.email for the email, but I don't see an option for a first name or last name. How do I go about doing this? Should I import the model into the views and then use it?
Django JSON file to Pandas Dataframe
42,133,075
2
2
820
0
python,json,django,pandas
You can also use pd.DataFrame.from_records() when you have a JSON object or dictionary: df = pd.DataFrame.from_records([ json_obj ]) or df = pd.DataFrame.from_records([ dict_obj ]). Otherwise, you need to provide iterables for the pandas DataFrame, e.g. df = pd.DataFrame({'column_1': [ values ], 'column_2': [ values ]})
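A runnable sketch of the first suggestion, assuming the JSON body has already been read (in Django that would be data = request.body):

```python
import json
import pandas as pd

# stand-in for request.body, which is a bytes string
body = b'{"username": "John", "subject": "i\'m good boy", "country": "UK", "age": 25}'

# parse the bytes first, then wrap the single dict in a list so
# from_records receives an iterable of row records
record = json.loads(body)
df = pd.DataFrame.from_records([record])

print(df.shape)  # -> (1, 4)
```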
0
0
0
0
2017-02-09T08:06:00.000
2
1.2
true
42,131,205
1
1
1
1
I have a simple JSON payload in Django. I catch it with data = request.body, and I want to convert it to a pandas dataframe. JSON: { "username":"John", "subject":"i'm good boy", "country":"UK","age":25} I already tried the pandas read_json method and json.loads from the json library, but they didn't work.
About Django's widget design principles
42,137,163
0
0
29
0
python,django,widget
A widget makes no sense in the absence of a field. The field is responsible for accepting the input and validating it, so it is the field that has to determine the name attribute; otherwise it would have no way of knowing what value to use in the data. But you should never need to render the widget directly by calling its render method. Again, that is the job of the field.
0
0
0
0
2017-02-09T11:49:00.000
1
1.2
true
42,135,962
0
0
1
1
This is a question I've had for some time, as I just don't understand why this decision was made. When we render a widget (e.g. because of using a form), its render function has a name arg. If an HTML tag's name is an attribute, why can't it be specified as part of the attrs dict passed to that function? It would make more sense to use the name arg only when no name attr is specified. To illustrate: if I set attrs to {"name": "no_one_knows[]"}, when I render the widget its name should be "no_one_knows[]", not the one passed by arg. That way I could have an HTML tag that can be parsed directly as a list (getlist(..)) on the server side, for example.
Exporting data from local standard environment and importing it in Datastore Emulator
42,145,088
1
0
344
0
google-app-engine,google-cloud-datastore,google-app-engine-python
In my tests, I found that the database files created by the App Engine dev server and the Datastore Emulator are compatible. I was able to copy local_db.bin from the App Engine database to replace the same file in the Datastore Emulator's data directory, and I was able to access the data.
0
1
0
0
2017-02-09T14:29:00.000
1
0.197375
false
42,139,307
0
0
1
1
We have two App Engine apps which read/save to the same datastore (that is, the same project). The datastore is actually the way they "transfer data" to each other. One of the apps is running on the standard environment, and the other is running in the flexible environment. In the flexible environment, to run local tests on my machine without using Google's datastore servers, I have to use the Datastore Emulator, which is configured already. What I would like now is to find a simple way to export data saved by the standard environment app (created using dev_appserver.py) and import it into the Datastore Emulator. I would NOT like to push the data to Google's servers and export from there if that can be avoided; I'd rather export from the database that ran on my local machine. Is there a feature/library which might help me with this task?
Can we set one database between Java and python?
42,142,621
-1
0
212
0
java,python,mysql,database,database-connection
Yes, you can share the DB; you'll have to install the corresponding dependencies for connecting Python to the DB, as well as for Java. I've done this with PostgreSQL, MySQL and MSSQL.
0
0
0
0
2017-02-09T16:57:00.000
2
-0.099668
false
42,142,533
0
0
1
1
I created a Python script that sends notifications when results are declared, but I want to make a website that takes a student's email id as input and stores it in a database. The problem is that I don't know the Django framework, so building the website that way would take time. In Java I can easily do the database connection, data insertion, and servlet calls. I want to know a way for Java/HTML/CSS to take input from the user and store it in a database, and then have the Python program retrieve that data. Hope you understand my question.
Difference between Java Interfaces and Python Mixin?
42,143,613
9
2
699
0
java,python,interface,mixins
Well, the 'abstract methods' part is quite important. Java is strongly typed. By specifying the interfaces in the type definition, you use them to construct the signature of the new type. After the type definition, you have promised that this new type (or some sub-class) will eventually implement all the functions that were defined in the various interfaces you specified. Therefore, an interface DOES NOT really add any methods to a class, since it doesn't provide a method implementation. It just adds to the signature/promise of the class. Python, however, is not strongly typed. The 'signature' of the type doesn't really matter, since it simply checks at run time whether the method you wish to call is actually present. Therefore, in Python the mixin is indeed about adding methods and functionality to a class. It is not at all concerned with the type signature. In summary: Java Interfaces -> Functions are NOT added, signature IS extended. Python mixins -> Functions ARE added, signature doesn't matter.
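A tiny illustration of the Python side of this: the mixin contributes a working implementation, and no signature is checked until the method is actually looked up at run time. The class names here are made up:

```python
import json

class JSONMixin:
    # adds a real method body, not just a promise to implement one
    def to_json(self):
        return json.dumps(self.__dict__, sort_keys=True)

class Point(JSONMixin):
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(p.to_json())  # -> {"x": 1, "y": 2}
```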
0
0
0
1
2017-02-09T17:33:00.000
1
1.2
true
42,143,261
1
0
1
1
I have been reading about Python mixins and have come to know that they add some features (methods) to a class. Similarly, Java interfaces also provide methods to a class. The only difference I could see is that Java interface methods are abstract, while a Python mixin carries an implementation. Are there any other differences?
Sharing an ORM between languages
42,144,831
0
0
79
1
python,.net,database,orm
I love questions like that. Here is what you have to consider: your web site has to be fast, and the bottleneck of most web sites is the database. The answer to your question would be: make it easy for .NET to work with SQL. That will require a little more work on the Python side, like specifying the names of the tables and maybe the columns. I think Django and SQLAlchemy are both good for that. Another solution could be to have a bridge between the database with gathered data and the database used to display data. In the background you can have a task/job to migrate collected data to your main database. That is also an option and will make your job easier; at least all database-specific and strange code will go to the third component. I worked with .NET for quite a long time before I switched to Python, and what you should know is that whatever strategy you choose, it will be possible to work with the data in both languages and ORMs. Do the hardest part of the job in the language you know better. If you are a Python developer, pick Python to handle the naming of tables and columns.
0
0
0
0
2017-02-09T18:53:00.000
1
0
false
42,144,698
1
0
1
1
I am making a database with data in it. That database has two customers: 1) a .NET webserver that makes the data visible to users somehow someway. 2) a python dataminer that creates the data and populates the tables. I have several options. I can use the .NET Entity Framework to create the database, then reverse engineer it on the python side. I can vice versa that. I can just write raw SQL statements in one or the other systems, or both. What are possible pitfalls of doing this one way or the other? I'm worried, for example, that if I use the python ORM to create the tables, then I'm going to have a hard time in the .NET space...
Django ID based dynamic URL with base64
42,149,106
0
0
724
0
python,django
Why would you want base64-encoded ids? It is a pretty bad choice for a URL string, as standard base64 contains characters that aren't URL friendly. You should extend your object with an extra field that contains, for instance, a randomly generated slug or a UUID, have that as the parameter in the URL instead of the id, and then query by that field in your view.
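For completeness, both options can be sketched with the standard library. Note that Python does offer a URL-safe base64 alphabet, although the randomly generated identifier suggested above remains the cleaner choice:

```python
import base64
import uuid

# Option recommended above: a random, URL-friendly identifier stored
# on the model in an extra field
slug = uuid.uuid4().hex  # 32 lowercase hex characters

# If base64 is used anyway, prefer the URL-safe alphabet ('-' and '_'
# instead of '+' and '/') and strip the '=' padding
def encode_id(pk):
    return base64.urlsafe_b64encode(str(pk).encode()).decode().rstrip('=')

def decode_id(token):
    padding = '=' * (-len(token) % 4)  # restore the stripped padding
    return int(base64.urlsafe_b64decode(token + padding))

print(encode_id(42))             # -> NDI
print(decode_id(encode_id(42)))  # -> 42
```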
0
0
0
0
2017-02-09T23:14:00.000
1
0
false
42,148,704
0
0
1
1
I'm working on a Django application that takes an ID-based dynamic URL, but rather than having the URL be straight up the ID, as it is right now: url(r'^history/(?P<id>[0-9]+)/$', views.history) I would like the URL to be the base64-encoded version of the object's ID, and I couldn't find a lot about encoding Django URLs.
Python - Using the django that is installed in the virtualenv
42,149,002
0
0
68
0
python,django,file,static,virtualenv
virtualenv is a tool to create isolated Python environments. It doesn't have anything to do with your code or STATIC_URL settings; the only things that differ are which packages you have installed there and which Django core version you are using.
0
0
0
0
2017-02-09T23:28:00.000
1
0
false
42,148,878
0
0
1
1
I've installed Django 1.8 in the virtualenv and I'm trying to use the static files from it. For example, if I want to edit the header color of the admin base.html, it keeps using the global Django files (1.7) even though I'm working with my virtualenv on. Shouldn't my STATIC_URL = '/static/' use the Django that is currently running in my virtualenv? Sorry for the bad English.
Combining Node JS and Python for CPU and IO intensive web applications
42,151,868
-1
0
182
0
javascript,python,node.js,multithreading,asynchronous
You could use a messaging queue such as RabbitMQ.
0
0
0
0
2017-02-10T04:40:00.000
1
-0.197375
false
42,151,653
1
0
1
1
Node JS seems to be a perfect fit for serving fast lightweight requests asynchronously. However, I'm not convinced that it is a good fit for intensive background work, despite the ability to deploy Node JS in a clustered fashion. I am considering using Node JS to interact with my template rendering engine (Express) and serve requests by building up a range of lightweight micro-services in Node. Further, I am then considering having Node JS pass off intensive work to Python (perhaps via some kind of in-memory technology such as Redis or a dedicated task queue). I am familiar with Python and, in particular, multi-threading. For example, on a 4-core machine, I might have two cores dedicated to running load-balanced background tasks and two cores dedicated to a Node JS cluster. Would this be a vaguely sensible approach in comparison to trying to "JavaScript all the things"?
Insert bulk data django using raw query
42,154,977
-1
1
311
1
python,django,postgresql
Your problem is not really about Django. You would be better off carrying the data to the server where you want to insert it (not necessary, but it could help) and creating a simple Python program or something similar to do the insert. Avoid inserting data of this size through an HTTP server.
0
0
0
0
2017-02-10T07:24:00.000
1
-0.197375
false
42,153,732
0
0
1
1
I am trying to insert about 1 million records into PostgreSQL. Since I create the table dynamically, I don't have any model associated with it, so I can't use Django's bulk_create. Is there any method of inserting the data in an efficient manner? I am trying to use single insert statements, but this is very time consuming and too slow.
How to use aws transcoder on videos saved in other cms apart from s3 bucket?
42,161,950
0
0
37
0
python,amazon-web-services,amazon-s3,amazon-elastic-transcoder
You have to copy the files to S3.
0
0
0
0
2017-02-10T10:18:00.000
1
0
false
42,156,850
0
0
1
1
I am planning to use AWS in my project to encode mp4 to a streaming video format, but my videos are not saved in an Amazon S3 bucket. When I tried to create a pipeline, I noticed that they ask for an S3 bucket name. Is it possible to use the AWS encoder in this scenario without downloading those videos from the other CMS to an S3 bucket?
How to safely store users' credentials to third party websites when no authentication API exists?
42,170,097
2
3
1,085
0
python,django,postgresql,security,encryption
There’s no such thing as a safe design when it comes to storing passwords/secrets. There’s only, how much security overhead trade-off you are willing to live with. Here is what I would consider the minimum that you should do: HTTPS-only (all passwords should be encrypted in transit) If possible keep passwords encrypted in memory when working with them except when you need to access them to access the service. Encryption in the data store. All passwords should be strongly encrypted in the data store. [Optional, but strongly recommended] Customer keying; the customer should hold the key to unlock their data, not you. This will mean that your communications with the third party services can only happen when the customer is interacting with your application. The key should expire after a set amount of time. This protects you from the rogue DBA or your DB being compromised. And this is the hard one, auditing. All accesses of any of the customer's information should be logged and the customer should be able to view the log to verify / review the activity. Some go so far as to have this logging enabled at the database level as well so all row access at the DB level are logged.
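As one concrete illustration of the auditing point, log entries can be chained with an HMAC so that tampering after the fact is detectable. This is a standard-library sketch under the assumption that the key comes from a proper secret store; it is not a complete design:

```python
import hashlib
import hmac
import json

SERVER_KEY = b'replace-with-a-managed-secret'  # assumption: loaded from a KMS/env var

def append_entry(log, event):
    """Append an audit event whose MAC covers the previous entry's MAC."""
    prev_mac = log[-1]['mac'] if log else ''
    payload = json.dumps(event, sort_keys=True) + prev_mac
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({'event': event, 'mac': mac})

def verify(log):
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev_mac = ''
    for entry in log:
        payload = json.dumps(entry['event'], sort_keys=True) + prev_mac
        expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry['mac']):
            return False
        prev_mac = entry['mac']
    return True

log = []
append_entry(log, {'user': 'alice', 'action': 'fetch', 'service': 'example-site'})
append_entry(log, {'user': 'alice', 'action': 'update', 'service': 'example-site'})
print(verify(log))  # -> True
```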
0
0
1
0
2017-02-10T22:45:00.000
2
1.2
true
42,169,854
0
0
1
1
I am developing a web app which depends on data from one or more third party websites. The websites do not provide any kind of authentication API, and so I am using unofficial APIs to retrieve the data from the third party sites. I plan to ask users for their credentials to the third party websites. I understand this requires users to trust me and my tool, and I intend to respect that trust by storing the credentials as safely as possible as well as make clear the risks of sharing their credentials. I know there are popular tools that address this problem today. Mint.com, for example, requires users' credentials to their financial accounts so that it may periodically retrieve transaction information. LinkedIn asks for users' e-mail credentials so that it can harvest their contacts. What would be a safe design to store users' credentials? In particular, I am writing a Django application and will likely build on top of a PostgreSQL backend, but I am open to other ideas. For what it's worth, the data being accessed from these third party sites is nowhere near the level of financial accounts, e-mail accounts, or social networking profiles/accounts. That said, I intend to treat this access with the utmost respect, and that is why I am asking for assistance here first.
Heroku: how to store a variable that mutates?
42,213,846
1
0
93
0
python,heroku
I have to agree with @KlausD; doing what you are suggesting is actually a bit complex, since you are trying to work with a filesystem that won't change while tracking state information (the last selected item) that you need to persist. Even if you were able to store the last item in some environment variable, a restart of the server would lose that information. Adding a db and connecting it to Python would literally take minutes on Heroku. There are plenty of well-documented libraries and ORMs available to create a simple model for you to store your list and your cursor. I normally recommend against storing pointers to information in preference to making the correct item obvious through the architecture, but that may not be possible in your case.
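A sketch of the shape that model could take, shown with sqlite3 so it runs anywhere; on Heroku the same two queries would go to Postgres (via psycopg2 and the DATABASE_URL config var), since the dyno's ephemeral filesystem rules out a local sqlite file:

```python
import sqlite3

def init(conn):
    conn.execute('CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)')

def get_last(conn):
    row = conn.execute("SELECT value FROM state WHERE key='last_item'").fetchone()
    return row[0] if row else None

def set_last(conn, item):
    conn.execute("INSERT OR REPLACE INTO state (key, value) VALUES ('last_item', ?)", (item,))
    conn.commit()

def pick_next(conn, items):
    """Pick any item except the one selected last time, then record it."""
    last = get_last(conn)
    candidates = [i for i in items if i != last]
    choice = candidates[0]  # swap in random.choice(candidates) if preferred
    set_last(conn, choice)
    return choice

conn = sqlite3.connect(':memory:')
init(conn)
print(pick_next(conn, ['a', 'b', 'c']))  # -> a
print(pick_next(conn, ['a', 'b', 'c']))  # anything except 'a'
```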
0
0
0
0
2017-02-11T01:28:00.000
1
1.2
true
42,171,188
1
0
1
1
I have deployed a small application to Heroku. The slug contains, among other things, a list in a textfile. I've set a scheduled job to, once an hour, run a python script that select an item from that list, and does something with that item. The trouble is that I don't want to select the same item twice in sequence. So I need to be able to store the last-selected item somewhere. It turns out that Heroku apparently has a read-only filesystem, so I can't save this information to a temporary or permanent file. How can I solve this problem? Can I use os.environ in python to set a configuration variable that stores the last-selected element from the list?
How to open dynamic links in new tab with web py framework?
42,183,038
0
1
486
0
python,web.py
This is not a web.py issue. It cannot be done server-side by any Python or non-Python framework; it must be done in the client. From the client, you can set target="_blank" in the HTML, or use JavaScript with something like window.open(url). JavaScript will also let you set the size and position of the second window.
0
0
0
0
2017-02-11T15:57:00.000
2
0
false
42,177,953
0
0
1
2
I tried web.seeother("link"), but this does not open it in a new tab. I can generate a link with a _blank target, but then the user has to click on the link separately; that is, one button generates the link and another button follows it. I want to perform both with a single click. A server-side method to do this would be best. I am using the web.py framework.
How to open dynamic links in new tab with web py framework?
42,178,882
0
1
486
0
python,web.py
As the documentation says, web.seeother() is used for redirecting the user to another page. So a clearer way of asking your question is: "how do I make web.seeother() open a link in a new tab?" As far as I have observed in the documentation, there is no way to do that server-side.
0
0
0
0
2017-02-11T15:57:00.000
2
0
false
42,177,953
0
0
1
2
I tried web.seeother("link"), but this does not open it in a new tab. I can generate a link with a _blank target, but then the user has to click on the link separately; that is, one button generates the link and another button follows it. I want to perform both with a single click. A server-side method to do this would be best. I am using the web.py framework.
How do I deploy a Kivy GUI Application as a WebApp in a Web Browser?
42,185,476
10
5
7,754
0
python,web-applications,kivy
Kivy does not currently support running in a browser. There are some experiments to do it, but the result is very slow to open and to use, and it doesn't work in all browsers. More work is needed, and it's not a priority for us. If you want a web app, use a web technology.
1
0
0
0
2017-02-12T00:47:00.000
1
1
false
42,183,014
0
0
1
1
I have developed a Kivy application and was wondering if it was possible to deploy it as a WebApp. I've tried using flask but it is running into some problems. I run the Kivy Application by calling the App builder class while flask does something similar. So can anyone direct me to any tutorials or other information about deploying a Kivy Application in a web browser? I just need the GUI to display in a web browser so I believe the html doesn't need to be too extravagant. Thank you!
Is Heroku blocking my local git commands?
42,214,216
2
0
110
0
python,django,heroku,heroku-toolbelt
I would start over. Destroy the heroku app heroku apps:destroy --app YOURAPPNAME Remove the whole repo (I would even remove the directory) Create new directory, copy files over (do NOT copy old git repo artifacts that may be left over, anything starting with .git) Initialize your git repo, add files, and commit, then push upstream to your remote (like github, if you're using one) git init && git add . && git commit -m 'initial commit' and optionally git push origin master Then perform the heroku create That should remove the conflict.
0
0
0
1
2017-02-13T02:57:00.000
1
1.2
true
42,195,983
0
0
1
1
So I've run a heroku create command on my Django repo, and currently it is living on Heroku. What I didn't do prior was create my own local git repo. I run git init, create a .gitignore to filter out my PyCharm IDE files, all the fun stuff. I go to run git add . to add everything to the initial commit. Odd... it returns: [1] 4270 killed git add. So I run git add . again and get back this: fatal: Unable to create '/Users/thefromanguard/thefromanguard/app/.git/index.lock': File exists. "Another git process seems to be running in this repository, e.g. an editor opened by 'git commit'. Please make sure all processes are terminated then try again. If it still fails, a git process may have crashed in this repository earlier: remove the file manually to continue." So I go and destroy the file, run it again; same error. Removed the whole repo, repeated the process, still got the same message. Is Heroku running in the background where ps can't see it?
Installing python modules in production meteor app hosted with galaxy
42,284,125
0
1
192
0
python,node.js,meteor,meteor-galaxy
It really depends on how horrible you want to be :) No matter what, you'll need a well-specified requirements.txt or setup.py. Once you can confirm your scripts can run on something other than a development machine, perhaps by using a virtualenv, you have a few options: I would recommend hosting your Python scripts as their own independent app. This sounds horrible, but in reality, with Flask, you can basically make them executable over the Internet with very, very little IT. Indeed, Flask is supported as a first-class citizen in Google App Engine. Alternatively, you can poke at what version of Linux the Meteor containers are running and ship a binary built with PyInstaller in your private directory.
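A minimal sketch of the Flask option: wrap the script in one HTTP endpoint so the Meteor server can call it with an ordinary HTTP request instead of exec. The endpoint name and the script function are made up:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_my_script(arg):
    # stand-in for the real logic from the private/ scripts
    return {'input': arg, 'result': arg.upper()}

@app.route('/run')
def run():
    arg = request.args.get('arg', '')
    return jsonify(run_my_script(arg))

if __name__ == '__main__':
    app.run(port=5000)  # the Meteor app would GET http://host:5000/run?arg=...
```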
0
1
0
1
2017-02-14T02:02:00.000
1
0
false
42,216,640
0
0
1
1
I have a meteor project that includes python scripts in our private folder of our project. We can easily run them from meteor using exec, we just don't know how to install python modules on our galaxy server that is hosting our app. It works fine running the scripts on our localhost since the modules are installed on our computers, but it appears galaxy doesn't offer a command line or anything to install these modules. We tried creating our own command line by calling exec commands on the meteor server, but it was unable to find any modules. For example when we tried to install pip, the server logged "Unable to find pip". Basically we can run the python scripts, but since they rely on modules, galaxy throws errors and we aren't sure how to install those modules. Any ideas? Thanks!
django-1.10 still contains deprecated and removed features
42,239,415
1
0
74
0
python,django,python-2.7,virtualenv
The problem was not with the Django-core but with django-user-accounts app that was included with pinax. Upgrading the django-user-accounts app fixed the issue. Thanks to @Selcuk for the solution.
0
0
0
0
2017-02-15T01:20:00.000
1
1.2
true
42,239,173
0
0
1
1
I am trying to run an existing django app. The app has been built in django-1.10. I set up a new virtualenv and installed the requirements and everything. However, I get errors like the following: from django.utils import importlib ImportError: cannot import name importlib Now, the above is from the following source - .virtualenvs/crowd/lib/python2.7/site-packages/account/conf.py When I manually fix the conf.py file, I still keep getting errors to fix either deprecated or removed features from older django versions. Any idea as to how to fix this? I thought the purpose of working in virtualenvs was to avoid such errors. Any suggestions would be much appreciated. Thanks in advance! This is how the question is different: Even after I fix the importlib import statement, it keeps giving me errors like that of the usage of SubFieldBase and so on.
How can python + selenium + chromedriver use mouse wheel?
42,261,282
1
2
418
0
python,selenium,selenium-chromedriver
I'd look at the ajax load event listener (the code that loads more <li>s). You need to trigger whatever that listens for. (aka: does it watch for something entering the view port, or something's y-offset, or a MouseEvent, or a scroll()?) Then you need to trigger that kind of event on the element it listens to.
0
0
1
0
2017-02-15T02:11:00.000
2
0.099668
false
42,239,544
0
0
1
2
There is a <ul> tag in a webpage, with many <li> tags inside it. The <li> tags are loaded by ajax automatically as the mouse wheel scrolls down continuously. The loading of the <li> tags works well if I use the mouse wheel. I want to use selenium to get the loaded info in the <li> tags, but the javascript document.getElementById(/the id of ul tag/).scrollTop=200; does not work, as the new <li> elements are not loaded by ajax either in the Chrome console or via selenium's execute_script. So, is there an API in selenium that behaves like a mouse wheel scrolling down? Or is there any other way to solve this problem?
How can python + selenium + chromedriver use mouse wheel?
42,455,875
0
2
418
0
python,selenium,selenium-chromedriver
Since there is no reasonable solution for now, I am closing this question.
0
0
1
0
2017-02-15T02:11:00.000
2
1.2
true
42,239,544
0
0
1
2
There is a <ul> tag in a webpage, with many <li> tags inside it. The <li> tags are loaded by ajax automatically as the mouse wheel scrolls down continuously. The loading of the <li> tags works well if I use the mouse wheel. I want to use selenium to get the loaded info in the <li> tags, but the javascript document.getElementById(/the id of ul tag/).scrollTop=200; does not work, as the new <li> elements are not loaded by ajax either in the Chrome console or via selenium's execute_script. So, is there an API in selenium that behaves like a mouse wheel scrolling down? Or is there any other way to solve this problem?
Firebase Authentication and Django/Djangae
53,589,283
0
7
1,473
0
python,django,firebase-authentication
Firebase authentication only supports login/signup, password reset, and email handling, and for that you need Firebase admin credentials. For any other fields you need a local model. There is no problem with using Django, but there is also no existing integration I'm aware of, so you'd have to hook it up yourself. If you want an auth system like Firebase plus other functionality, you can use social-django-restframework. You can integrate all login systems with your Django app and control users with the built-in user model.
0
0
0
0
2017-02-15T07:07:00.000
1
0
false
42,242,719
0
0
1
1
I am evaluating Firebase authentication to see if it works well with Django/Djangae. Here comes some context: I require email/password authentication, the ability to add an additional field like job title, and basic things like reset-password emails. I use the Djangae framework (Django that uses the datastore as data storage) and App Engine. It would be really good to make use of the built-in authentication tools provided by Django, like sessions, require-login, etc. Drop-in authentication seems to be a candidate. Does it work with Django authentication features like permissions, groups, etc.? Thanks in advance.
accessing mysql from within flask
42,281,576
3
1
100
1
python,mysql,flask
Packages like flask-mysql or Flask-SQLAlchemy provide useful defaults and extra helpers that make it easier to accomplish common CRUD tasks. Such packages are good at handling relationships between objects. You only need to create the objects, and then the objects contain all the functions and helpers you need to deal with the database; you don't have to implement such code yourself, and you don't need to worry about the performance of the queries. I worked on a Django project (and I believe the theory in Flask is similar) and its ORM is really amazing; all I needed to do was write models and encapsulate the business logic. All CRUD commands are handled by the built-in ORM, so as developers we don't worry about the SQL statements. Another benefit is that it makes database migration much easier. You can switch from MySQL to PostgreSQL with minimal code modifications, which will speed up development.
0
0
0
0
2017-02-16T17:48:00.000
1
0.53705
false
42,281,212
0
0
1
1
I noticed that most examples for accessing mysql from flask suggest using a plugin that calls init_app(app). I was just wondering why that is as opposed to just using a mysql connector somewhere in your code as you need it? Is it that flask does better resource management with request life cycles?
Stackdriver debug appengine error: python module not found
42,302,731
2
2
120
0
python,google-app-engine,debugging,stackdriver
This usually happens when you try to take a snapshot in source code that is not part of the executing service/version. For example when the code you are using belongs to another running service in the same project.. Please use the console Feedback tool or email [email protected] with this issue. We will help you figure this one out. thanks, .Erez
0
0
0
0
2017-02-17T04:34:00.000
1
1.2
true
42,289,465
0
0
1
1
I am trying to debug and take a snapshot in the Stackdriver Debug App Engine tool. It shows me the code, including the error line, in Stackdriver, but when I try to take a snapshot, after a few seconds the message "python module not found" appears in red. Any ideas?
Scrapy - How to put a check into checkboxes in a url then scrape
42,297,186
0
3
1,305
0
python,xpath,web-scraping,scrapy,web-crawler
You are wrong; Scrapy cannot emulate real browser-like behavior. From the image you linked, I saw you are scraping Amazon, so open that link in a browser and click on a checkbox; you will notice the URL in the browser also changes according to the new filter set. Then put that URL in your Scrapy code and do your scraping. IF YOU WANT REAL BROWSER-LIKE BEHAVIOR, use Python Selenium or PhantomJS or CasperJS.
0
0
1
0
2017-02-17T06:49:00.000
2
1.2
true
42,291,191
0
0
1
1
I need to scrape a URL which has checkboxes in it. I want to click some of the checkboxes and scrape, and then scrape again with some other checkboxes clicked. For instance, I want to click New and then scrape, and then I want to scrape the same URL with Used and Very Good clicked. Is there a way to do this without making more than the one request that is done to get the URL? I guess the HTML changes when you click one of the boxes, since the listing changes when you refine the search. Any thoughts? Any suggestions? Best, Can
Django print labels
42,426,037
2
0
754
0
django,python-2.7,ms-word,labels
Not sure if anybody will find this helpful. I had an extremely tough time finding a way to print out mailing labels from my Django app. Instead I decided to export an spreadsheet using the xlwt library. Then you can use MS Word's Mail Merge functions to get Avery labels for each of your contacts.
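The same Mail Merge flow also works from a plain CSV, which Word accepts as a data source. Here is a standard-library sketch with made-up field names (the answer above used xlwt to write an .xls instead):

```python
import csv
import io

families = [
    {'name': 'The Smith Family', 'street': '12 Oak St', 'city': 'Springfield', 'zip': '12345'},
    {'name': 'The Jones Family', 'street': '9 Elm Ave', 'city': 'Shelbyville', 'zip': '54321'},
]

def export_for_mail_merge(rows, fileobj):
    # Word's Mail Merge reads the header row as the merge field names
    writer = csv.DictWriter(fileobj, fieldnames=['name', 'street', 'city', 'zip'])
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()  # in a Django view, this could be the HttpResponse object
export_for_mail_merge(families, buf)
print(buf.getvalue())
```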
0
0
0
0
2017-02-17T18:35:00.000
1
1.2
true
42,305,145
1
0
1
1
I have a database full of families with addresses. I want to print out mailing labels for each family. I have various avery labels to use, is there an easy way to do this task? Is there a library or some tutorials you know of that others have used to accomplish this? I used a project that was ported to python 2.6 and used pyPDF to make a pdf with labels of specific dimensions, but I think it may be outdated. The labels printed don't line up. Do I just need to adjust these or is there an easier way to save the data and do a mail merge in Word? If there is not another way, I guess I'll just create a spreadsheet with the fields to import into Word.
python/video - removing frames without re-encoding
50,184,791
0
0
1,097
0
python,video,ffmpeg,moviepy
Since modern video coding standards use inter-frame prediction, a general solution for removing frames without re-encoding does not exist. (Removing a reference picture breaks inter prediction, so only non-reference pictures can be removed.)
0
0
0
0
2017-02-18T10:30:00.000
2
0
false
42,313,951
1
0
1
1
I understand this might be a trivial question, but so far I had no luck with the various solutions I tried and I'm sure there must be a convenient way to achieve this. How would I proceed removing frames/milliseconds from a video file without slicing, merging and re-encoding? All solutions I found involved exporting various times to various formats, and I'm hopeful there will be no need to do so. With ffmpeg/avconv it's necessary to convert the temporary streams to .ts, then concatenate, and finally re-encode in the original format. Python library MoviePy seemed to do quite exactly what I needed but: The cutout function returns a file which can not be exported, as the write_videofile function tries and fails to fetch the removed frames If I instead slice the original file into various clips and then merge them with concatenate_videoclips, the export doesn't fail but takes twice the length of the video. The resulting video has then a faster frame-rate, with only the cue points for the concatenated videos being timely placed, and audio playing at normal speed. It's also worth noting that the output file, despite being 5-7% shorter in duration, is about 15% bigger. Is there any other library I'm not aware of I might look into? What I'm trying to imagine is an abstraction layer providing easy access to the most common video formats, giving me the ability to pop the unwanted frames and update the file headers without delving into the specifics of each and every format.
Dynamic length Django model field
42,319,231
0
1
1,722
0
python,django,django-models
No, you can't. The length of string that can be accepted depends on the structure of your database, so you'd have to migrate your database on every max_length change! The solution to your problem is to measure the likely maximum size of the values you'd like to save (for example, against a dev environment), then set max_length accordingly.
0
0
0
0
2017-02-18T18:40:00.000
2
0
false
42,319,189
0
0
1
1
I have one Django model field definition like this: landMark = models.CharField(max_length=50, blank=True) Can I define the landMark field dynamically so that it is capable of holding a string of varying size?
Not able to install scrapy in my windows 10 x64 machine
45,332,173
1
1
1,285
0
python,windows,scrapy,pypi
Use pip3 instead of pip, since you are using Python 3.
0
1
0
0
2017-02-18T20:15:00.000
3
1.2
true
42,320,197
1
0
1
1
I ran pip install scrapy in cmd; it said Collecting scrapy, and after a few seconds I got the following error: Command "c:\python35\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\DELL\\AppData\\Local\\Temp\\pip-build-2nfj5t60\\Twisted\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\DELL\AppData\Local\Temp\pip-0bjk1w93-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\DELL\AppData\Local\Temp\pip-build-2nfj5t60\Twisted\ I am not able to understand the error.
Start over with django migrations
42,348,573
2
1
678
0
python,django,database-migration
If an app doesn't have a migrations/ directory with an __init__.py file (even on Python 3), Django won't create any migrations if you don't specify the app name. You either have to create the directories and __init__.py files, or you have to call ./manage.py makemigrations <app_name> for each app, which forces Django to create the directory. It doesn't. It may connect to the database during the makemigrations command, but the migrations are purely based on your models.
0
0
0
0
2017-02-20T15:33:00.000
1
1.2
true
42,348,514
0
0
1
1
I have 2 related questions. I have deleted all my migrations, including the migrations directory, from all the apps in my project. I've started with a clean database. But still, when I run ./manage.py makemigrations, django says there are no changes to be made. How do I completely start over with migrations? No squashing, just starting over. It seems that when I call makemigrations, django consults the database. I'd like my codebase to be the only source of truth for my migrations. Is there a way to do this?
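The fix from the answer can be sketched in a few shell commands: recreate the migrations package in each app (the app name myapp is a placeholder), after which makemigrations will pick the app up again:

```shell
# Recreate the package Django looks for when auto-creating migrations
mkdir -p myapp/migrations
touch myapp/migrations/__init__.py
ls myapp/migrations
# then: python manage.py makemigrations myapp   (one call per app)
```

Repeat for every app in the project; with an empty database and fresh migrations directories, makemigrations starts over from the models alone.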
PyCharm doesn't start up
42,440,420
0
1
2,139
0
python,pycharm,ide
Thanks for the answers, guys! Apparently the problem was that I had once set a VM RAM size (also in the system environment) which prevented PyCharm from starting.
0
0
0
0
2017-02-20T16:17:00.000
2
0
false
42,349,418
1
0
1
1
I'm new to the Python community and intend to use PyCharm for the Django framework. However, after installing PyCharm (32-bit launcher, same as my OS), the software can't be launched. After double-clicking the PyCharm icon nothing happens; I also tried pycharm.exe, pycharm.bat, and pycharm64.exe, and they all failed to start. I have searched the system requirements and I believe everything fits. My OS is 32-bit, with 4 GB RAM and 100+ GB of storage available. I hope someone can give me hints about this issue. Thank you!
Delete time from datetime format in django
42,351,357
1
0
1,390
0
python,django,datetime,time,format
So apparently the only thing I needed to do was to replace "SHORT_DATETIME_FORMAT" with "SHORT_DATE_FORMAT", and that way we get rid of the time part. Thanks guys!
0
0
0
0
2017-02-20T16:58:00.000
4
0.049958
false
42,350,259
0
0
1
1
I need to send an email with a date, and the date should be "mm/dd/yyyy". From the database I get the date in this format: 2017-02-26T23:00:00Z. So I added from django.utils import formats along with the other imports, and then in my function I added final_date = formats.date_format(input_date, "SHORT_DATETIME_FORMAT"). The thing is that I get exactly what I want with the date, but also the time. Is there any way to get rid of the time within my formatting? I know I can use the function split(), but that seems an ugly way to achieve this. TL;DR What I have --> 02/20/2017 4:40 p.m. What I want --> 02/20/2017
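The accepted fix amounts to dropping the time component when formatting. Django's SHORT_DATE_FORMAT for the default en-us locale is equivalent to this plain strftime pattern, shown here with the stdlib so it runs outside a Django project:

```python
import datetime

# The value from the database, e.g. parsed from "2017-02-26T23:00:00Z"
input_date = datetime.datetime(2017, 2, 26, 23, 0, 0)

# SHORT_DATE_FORMAT ("m/d/Y" in en-us) keeps only the date part
final_date = input_date.strftime("%m/%d/%Y")
print(final_date)
```

Inside Django, `formats.date_format(input_date, "SHORT_DATE_FORMAT")` produces the same mm/dd/yyyy result while staying locale-aware.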
Updating a module's model in Odoo 10
42,359,847
0
0
1,475
1
python,ubuntu,module,openerp,odoo-10
Please check that there is no duplicate folder with the same name in the addons path. Sometimes, if there is a zip file with the same name in the addons path, updates don't take effect.
0
0
0
0
2017-02-20T21:50:00.000
2
0
false
42,354,852
0
0
1
2
I'm having trouble updating a model in Odoo: the tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it. Is there a way to synchronize the database with my model?
Updating a module's model in Odoo 10
42,359,793
0
0
1,475
1
python,ubuntu,module,openerp,odoo-10
If you save changes to the module, restart the server, and upgrade the module - all changes should be applied. Changes to tables (e.g. fields) should only require the module to be upgraded, not a server reboot. Python changes (e.g. contents of a method) require a server restart, not a module upgrade. If the changes are not occurring, then it is possible that you have a different problem. I would look at things like: are you looking at the correct database/tables, are you saving your changes, are the changes being made to the correct files/in the correct locations.
0
0
0
0
2017-02-20T21:50:00.000
2
0
false
42,354,852
0
0
1
2
I'm having trouble updating a model in Odoo: the tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it. Is there a way to synchronize the database with my model?
Install LAMP Stack into Virtual Environment
42,356,311
1
1
708
1
php,python,mysql,virtualenv,lamp
Read about Docker if you want to make separate environments without a virtual machine.
0
0
0
0
2017-02-20T23:53:00.000
2
0.099668
false
42,356,276
0
0
1
1
I want to isolate my LAMP installment into a virtual environment, I tried using virtualbox but my 4GB of RAM is not helping. My question is if I run sudo apt-get install lamp-server^ while in "venv"... would it install the mysql-server, apache2 and PHP into the virtualenv only or is the installation scope system-wide. I really want a good solution for isolating these dev environments and their dependencies, and am hence exploring simple and efficient options given my system constraints. I have another Django (and mysql and gcloud) solution on the same computer and would like for these new installations to not mess with this. I'm using: OS: Ubuntu 16.04 LTS Python: 2.7
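A hedged sketch of what the Docker route suggested in the answer might look like, using a compose file with standard Docker Hub images (the service names, port, and password are placeholders):

```yaml
# docker-compose.yml -- one isolated LAMP-style environment, no VM required
services:
  web:
    image: php:7-apache          # Apache + PHP in one official image
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
```

Starting this with docker compose up keeps the whole stack inside containers, so the existing Django/MySQL/gcloud setup on the host stays untouched. (Note that `sudo apt-get install lamp-server^` inside an activated virtualenv would still install system-wide; virtualenv only isolates Python packages.)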
not able to install pip install robotframework on jython 2.7.0 and 2.7.1b3.Getting error as "jython.exe: No module named pip."
42,432,425
0
2
644
0
python,pip,jython,robotframework
In Jython, pip is not a module you install separately; it is already included as an executable at c:\Jython\bin\pip.exe.
0
0
0
0
2017-02-22T20:53:00.000
2
1.2
true
42,402,076
1
0
1
1
Could anyone please tell me which Jython package you are using that includes all the pip packages too, or share the folder location? Whatever version of Jython I am using, I get an error while running the command: "jython -m pip install robotframework". Error: jython.exe: No module named pip. N.B.: I have both Jython 2.7.0 and jython-installer-2.7.1b3.
couldn't import django in virtualenv but works when deactivated
42,462,927
2
1
13,406
0
python,django,ubuntu,virtualenv,pythonpath
OK, I found out what the problem was. It turns out that when I created my virtualenv I used the sudo command, but when I pip installed my packages I didn't, which caused a permission problem of some sort when installing the packages. That made Django not show up on the path. When creating a virtualenv, never use the sudo command...
0
0
0
0
2017-02-23T01:41:00.000
2
0.197375
false
42,405,551
0
0
1
2
I am trying to deploy my Django Projects on Amazon AWS using Ubuntu 16.04. I am running python version 2.7.12 and Django 1.10.5. I created my virtualenv named venv and then activated it. I get this error when I try to run python manage.py runserver. Traceback (most recent call last): File "manage.py", line 17, in "Couldn't import Django. Are you sure it's installed and " ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment? Then I realize Django might not be in my python path. So I added export PYTHONPATH="/usr/local/lib/python2.7/dist-packages/django" into my venv/bin/activate script. Now with the virtualenv activated I can go into python and type import sys sys.path ['', '/usr/local/lib/python2.7/dist-packages/django', '/home/ubuntu/TravelBuddy/venv/lib/python2.7', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/plat-x86_64-linux-gnu', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-old', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/local/lib/python2.7/site-packages', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/site-packages'] As you can see now django is indeed in my python path. I thought this was going to fix the problem but it didn't: it still says couldn't import Django. Now I am confused because when I deactivate my virtualenv and import Django it does work. this is what prints out when I deactivate my virtualenv and do sys.path ['', '/usr/local/lib/python2.7/dist-packages/django', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
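The takeaway from the accepted answer, as a short shell sketch: create and use the env as a regular user, with no sudo anywhere, so pip installs land inside the env with the right ownership and no PYTHONPATH hack is needed:

```shell
# No sudo anywhere -- both steps run as the same regular user
python3 -m venv venv           # or: virtualenv venv
. venv/bin/activate
command -v python              # resolves inside venv/bin
# pip install django==1.10.5   # now installs into the env, not system-wide
```

Once Django is installed inside the env, `python manage.py runserver` finds it without touching PYTHONPATH in the activate script.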
couldn't import django in virtualenv but works when deactivated
44,610,084
0
1
13,406
0
python,django,ubuntu,virtualenv,pythonpath
1. Install Python 3: brew install python3. 2. Install Django: pip3 install django.
0
0
0
0
2017-02-23T01:41:00.000
2
0
false
42,405,551
0
0
1
2
I am trying to deploy my Django Projects on Amazon AWS using Ubuntu 16.04. I am running python version 2.7.12 and Django 1.10.5. I created my virtualenv named venv and then activated it. I get this error when I try to run python manage.py runserver. Traceback (most recent call last): File "manage.py", line 17, in "Couldn't import Django. Are you sure it's installed and " ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment? Then I realize Django might not be in my python path. So I added export PYTHONPATH="/usr/local/lib/python2.7/dist-packages/django" into my venv/bin/activate script. Now with the virtualenv activated I can go into python and type import sys sys.path ['', '/usr/local/lib/python2.7/dist-packages/django', '/home/ubuntu/TravelBuddy/venv/lib/python2.7', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/plat-x86_64-linux-gnu', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-old', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/local/lib/python2.7/site-packages', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/site-packages'] As you can see now django is indeed in my python path. I thought this was going to fix the problem but it didn't: it still says couldn't import Django. Now I am confused because when I deactivate my virtualenv and import Django it does work. this is what prints out when I deactivate my virtualenv and do sys.path ['', '/usr/local/lib/python2.7/dist-packages/django', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
django values function strange behaviors?
42,421,652
0
0
37
0
python,django
The first query does not return a dictionary with two keys. On the contrary, it returns a ValuesQuerySet; each element of that queryset is a dictionary. The ValuesQuerySet, like any other queryset, retains a connection with the model, and it is therefore able to add any other elements to the query as necessary. The query as a whole is not executed until the queryset is iterated.
0
0
0
0
2017-02-23T16:19:00.000
3
0
false
42,421,036
1
0
1
1
I have spent 5+ hours trying to understand how: Item.objects.values('type', 'state') returns dictionaries that contain only two keys, yet Item.objects.values('type', 'state').annotate(nb=Count('id')) works! How does the interpreter know that the id attribute exists if it's not returned by the values function?
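The laziness the answer describes can be illustrated outside Django with a toy stand-in (this is an analogy, not Django's actual implementation): the object returned by values() is only a description of a query, so annotate() can still extend it before any SQL runs:

```python
class LazyValuesQuery:
    """Toy analogy for a ValuesQuerySet: just a description of a query."""

    def __init__(self, fields, annotations=None):
        self.fields = list(fields)
        self.annotations = dict(annotations or {})

    def annotate(self, **kwargs):
        # Nothing has touched the database yet -- we only extend the description
        return LazyValuesQuery(self.fields, {**self.annotations, **kwargs})

    def sql(self):
        # Only here (on iteration, in real Django) is SQL actually built
        cols = self.fields + [f"COUNT(id) AS {name}" for name in self.annotations]
        return ("SELECT " + ", ".join(cols) + " FROM item GROUP BY "
                + ", ".join(self.fields))

q = LazyValuesQuery(["type", "state"]).annotate(nb="Count('id')")
print(q.sql())
```

Because the query is only executed on iteration, the model (and its id column) is still fully available when annotate() runs.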
How can I persist an authenticated API object across different Celery tasks?
42,439,583
1
1
152
0
python,django,python-requests,celery
You can put this data into the database/memcache and fetch it by user id as a key. If the data is stateless, that's fine: concurrent processes take the authentication parameters, construct the request, and send it. If it changes state (unique incrementing request id, changing token, etc.) after each request (or after some requests), you need to implement a singleton manager to provide correct credentials per request. All tasks should request credentials from this manager; it can also limit the rate, for example. If you would like to pass this object to the task as a parameter, then you need to serialize it. Just make sure it is serializable.
0
0
1
0
2017-02-24T12:44:00.000
1
1.2
true
42,438,998
0
0
1
1
How can I persist an API object across different Celery tasks? I have one API object per user with an authenticated session (python requests) to make API calls. A user_id, csrftoken, etc. is sent with each request. I need to schedule different tasks in Celery to perform API requests without re-authenticating for each task. How can I do this?
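A minimal sketch of the serialization idea from the answer: store only the stateless auth artifacts (cookies, CSRF token) keyed by user id, and let each task rebuild its requests session from them. The field names and the in-memory cache stand in for memcache/the database and are made up for illustration:

```python
import json

# Stand-in for memcache or a database table, keyed by user id
cache = {}

def save_auth_state(user_id, cookies, csrftoken):
    # Serialize only plain data, never the live Session object itself
    cache[user_id] = json.dumps({"cookies": cookies, "csrftoken": csrftoken})

def load_auth_state(user_id):
    # Each Celery task calls this and rebuilds a session, e.g.:
    #   s = requests.Session(); s.cookies.update(state["cookies"])
    return json.loads(cache[user_id])

save_auth_state(42, {"sessionid": "abc123"}, "tok456")
state = load_auth_state(42)
print(state["csrftoken"])
```

Since every task rebuilds the session from the cached cookies, no re-authentication request is needed as long as the stored session is still valid server-side.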
Interactive shell in Django
42,443,116
0
6
11,245
0
python,django,shell
Go to the root of the Django project, then run python manage.py shell and perform the view actions there.
0
0
0
0
2017-02-24T15:58:00.000
4
0
false
42,443,006
0
0
1
1
I would like to get an interactive shell with the code, but I don't even know if such a thing exists. Could anyone help me at this point? EDIT: I already know we could use python manage.py shell, but I would like something we could insert in the code in such a way that we do not have to re-import all the libraries in the shell.
Query string for messages list returns inconsistent results compared to web interface
42,448,329
2
1
47
0
python,gmail,gmail-api
Yes, the message body is searched. For a domain, try: "from:example.com OR to:example.com". No, Gmail UI and API search is not case-sensitive. Be aware that service.users().threads().list() is more consistent with Gmail UI search, assuming the user has conversations enabled, which is the Gmail UI default. in:anywhere expands the search to Trash and Spam, which are not normally included; archived messages are normally included.
0
0
0
0
2017-02-24T18:17:00.000
1
1.2
true
42,445,637
0
0
1
1
When I use code similar to the example code from the API documentation, query strings which return results in the web interface don't work. This is listing messages, not retrieving them, so I don't think full vs raw matters. The scope granted is gmail.readonly. Is it possible to search on the message body with this function? Is there a way to search on a domain name (i.e. all messages from or to *@example.com)? Is the search case-sensitive? service.users().messages().list(userId=user_id, pageToken=page_token, q=query).execute() I use 'me' for the user_id, and I checked that it's certainly the same email. A query for in:anywhere on its own returns the full mail list. Thanks for the help! EDIT: The query in question is a single word like a name. Some of them sometimes work with 'name is:anywhere' but not consistently.
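The query strings from the answer are plain strings built exactly like the ones typed into the Gmail search box. The actual list call requires an authorized service object, so it is left as a comment matching the call shape shown in the question:

```python
# Domain-wide search: matches mail from or to anyone @example.com
domain = "example.com"
query = f"from:{domain} OR to:{domain}"
print(query)

# With an authorized `service`, per the question's own call:
# service.users().messages().list(userId="me", q=query).execute()
# or, to mirror the UI's conversation view more closely:
# service.users().threads().list(userId="me", q=query).execute()
```

Adding `in:anywhere` to the same string would extend the search to Trash and Spam, just as in the web UI.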