Dataset schema (column, dtype, observed min/max). Each record below lists its fields in this column order, one field per line.

    Title                               stringlengths   11 .. 150
    A_Id                                int64           518 .. 72.5M
    Users Score                         int64           -42 .. 283
    Q_Score                             int64           0 .. 1.39k
    ViewCount                           int64           17 .. 1.71M
    Database and SQL                    int64           0 .. 1
    Tags                                stringlengths   6 .. 105
    Answer                              stringlengths   14 .. 4.78k
    GUI and Desktop Applications        int64           0 .. 1
    System Administration and DevOps    int64           0 .. 1
    Networking and APIs                 int64           0 .. 1
    Other                               int64           0 .. 1
    CreationDate                        stringlengths   23 .. 23
    AnswerCount                         int64           1 .. 55
    Score                               float64         -1 .. 1.2
    is_accepted                         bool            2 classes
    Q_Id                                int64           469 .. 42.4M
    Python Basics and Environment       int64           0 .. 1
    Data Science and Machine Learning   int64           0 .. 1
    Web Development                     int64           1 .. 1
    Available Count                     int64           1 .. 15
    Question                            stringlengths   17 .. 21k
How can I create an API for my web app and the mobile version of that web app in python?
40,246,643
1
0
60
0
python,django,api
An API does not care whether the client that sends the requests is a mobile app or a browser (unless, of course, you send and use that information on purpose). For example, if your API exposes the "www.myapp.com/registeruser/" URL and requires a POST with username and password, you can call this URL with those parameters from any client that is able to send them. If what you want is to use the same client-side code for both desktop and mobile (trying to understand what you need!), you can look at responsive websites. A package like django-bootstrap3 works very well with Django and is easy to use.
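As a minimal sketch of that idea (the view name, field names and JSON response shape are my own choices, not from the question), a single Django view can serve browser and mobile clients alike:

    # views.py: one endpoint; any HTTP-capable client can call it
    from django.contrib.auth.models import User
    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt  # mobile clients have no CSRF cookie; add real auth in production
    def register_user(request):
        if request.method != "POST":
            return JsonResponse({"error": "POST required"}, status=405)
        username = request.POST.get("username")
        password = request.POST.get("password")
        if not username or not password:
            return JsonResponse({"error": "username and password required"}, status=400)
        user = User.objects.create_user(username=username, password=password)
        return JsonResponse({"id": user.id, "username": user.username}, status=201)

Hooked up to "www.myapp.com/registeruser/" via urls.py, the same view answers a desktop browser, a mobile app, or curl.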
0
0
0
0
2016-10-25T17:33:00.000
2
0.099668
false
40,246,207
0
0
1
1
I'm new to the API concept and I have a doubt about APIs. I created a web app in the Python Django framework, and I need to create an API for this web application. I also need to use this same app on mobile as a mobile app. How can I make this possible? Should I create a separate API for the mobile app as well? I searched this on Google but couldn't find a clear answer. Please help me.
Proxy Error 408 when running a script written in Scrapy Python
40,296,417
1
0
497
0
python,proxy,web-scraping,scrapy,web-crawler
Thanks, I figured it out. The problem is that some proxy locations don't work with HTTPS, so I just changed the proxy location and now it is working.
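For anyone hitting the same error, this is roughly what setting the proxy per request looks like in Scrapy; the proxy host below is a placeholder, and the key point is that an HTTPS target needs a proxy endpoint that supports CONNECT tunnels:

    import scrapy

    class ProxiedSpider(scrapy.Spider):
        name = "proxied"
        start_urls = ["https://example.com/"]

        def start_requests(self):
            for url in self.start_urls:
                # HttpProxyMiddleware picks up the per-request proxy from meta
                yield scrapy.Request(url, meta={"proxy": "http://us-wa.proxymesh.com:31280"})

        def parse(self, response):
            self.logger.info("fetched %s", response.url)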
0
0
1
0
2016-10-26T15:24:00.000
1
1.2
true
40,266,219
0
0
1
1
I am using a proxy (from ProxyMesh) to run a spider written in Scrapy (Python). The script runs normally when I don't use the proxy, but when I use it I get the following error message: Could not open CONNECT tunnel with proxy fr.proxymesh.com:31280 [{'status': 408, 'reason': 'request timeout'}] Any clue about how to figure this out? Thanks in advance.
WSGI with RESTful post-processing
40,368,532
0
1
253
0
python-3.x,wsgi,restful-architecture
I implemented a solution by creating a new Python thread and attaching the second transaction to it. To ensure it kicks off after the first transaction, I put a small delay in the thread before it starts the second transaction. Hoping there are no issues introduced by threading.
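A rough sketch of that approach, under the assumption that the second transaction is an HTTP POST (the endpoint and payload here are invented):

    import threading
    import time
    import requests

    def second_transaction(payload):
        time.sleep(0.5)  # small delay so the first transaction finishes first
        requests.post("https://api.example.com/notify", json=payload, timeout=10)

    def application(environ, start_response):
        body = b"acknowledged"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        # fire-and-forget: the response is returned while the thread runs on
        threading.Thread(target=second_transaction,
                         args=({"status": "ok"},), daemon=True).start()
        return [body]

One caveat with daemon threads under Apache/mod_wsgi: if the worker process is recycled before the thread finishes, the POST is lost, which is why a task queue is often preferred.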
0
0
0
0
2016-10-26T23:06:00.000
1
0
false
40,273,524
0
0
1
1
We have a WSGI application with Python 3 running under Apache on Linux. We want to interact with an external API after acknowledging a request / notification received via the web server. Sample WSGI Python code:

    def application(environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path == "/ProcessTransact":
            import sys
            sys.stderr.write("Entering /ProcessTransact, Checking validity ...\n")
            # Get the context of the notification/request from POST parameters etc., assume it is valid ...
            status = '200 OK'
            body = b"Acknowledge the valid submit with '200 OK'"
            response_headers = [
                ('Content-Type', 'text/html'),
                ('Content-Length', str(len(body)))
            ]
            start_response(status, response_headers)
            return [body]  # we have acknowledged the context of the above request
            # we want to do an HTTP POST based on the context
            # when we return [body], we lose the processing thread
            import requests  # or maybe something else
            sys.stderr.write("POST RESTful transactions here after acknowledging the request (we never get here).\n")

Our code is slightly different from the sample code (using Werkzeug). What is the best way to solve this? We are purposefully not using any frameworks (except Werkzeug) and we want to avoid large changes in architecture (thousands of lines of code). Thank you, Kris
Flask: return nothing instead of re-rendering the template
40,277,255
0
0
795
0
python,flask
If you want the user to stay in place, you should send the form using JavaScript, asynchronously. That way the browser won't try to fetch and render a new page. You won't be able to get this behavior from the Flask end alone: you can return effectively nothing, but the browser will still try to fetch it and render that nothing for the client.
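On the Flask side, the usual pairing with an asynchronous (fetch/XMLHttpRequest) submit is to return 204 No Content, which the browser treats as "done, nothing to render". A minimal sketch, with a made-up route name:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/process", methods=["POST"])
    def process():
        data = request.form.to_dict()  # the submitted fields
        # ... process the data ...
        # 204 keeps an XHR/fetch client on the same page; a plain HTML form
        # submit would still navigate, so the client must submit via JS.
        return "", 204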
0
0
0
0
2016-10-27T06:01:00.000
1
0
false
40,277,199
0
0
1
1
I have written a Python function using the Flask framework to process some data submitted via a web form. However, I don't want to re-render the template; I really just want to process the data and leave the web form in the state it was in when the POST request was created. Not sure how to do this ... any suggestions?
Django: broken migrations
40,289,636
0
0
1,968
0
python,django,python-2.7,django-south,django-1.4
Answering this question: "So the only solution I can think of is exporting the database from the old machine, where it is working, to the new one. Would that work?" Yes, this can work if you are sure that your database is in sync with your models. It is actually the way to go if you want to be best prepared for updating your production environment:

- get a dump from the current production machine
- create a new database and load the dump
- check whether there are differences between the models and the migration history (this is more reliable with the new Django migrations; South was an external tool and did not have all of the possibilities), e.g. ./manage.py showmigrations (1.10) or ./manage.py migrate --list (1.7-1.9 and South)

If you are confident that no migrations have to be run but the listing shows differences, then do ./manage.py migrate --fake. Note that in newer versions you can just run ./manage.py migrate and it will report that everything is in order if the models and the migrations are in sync. This can be a sanity check before you deploy onto production.
0
0
0
0
2016-10-27T16:01:00.000
1
1.2
true
40,289,327
0
0
1
1
I am trying to set up a Django app locally on a new machine, but the migrations seem to be totally broken. They need to be performed in a particular order, which worked on the first machine I set the environment up in a couple of months ago, but now there are inconsistencies (although I am pretty sure no new migrations were generated). So the only solution I can think of is exporting the database from the old machine, where it is working, to the new one. Would that work? This would not solve the broken migrations issue, but at least I could work on the code until there's a proper solution.
Upload questions to Retrieve and Rank using cURL, visible in webtool
40,302,783
1
0
85
0
python,curl,ibm-watson,retrieve-and-rank
Sorry, no - there isn't a public supported API for submitting questions for use in the tool. (That wouldn't stop you looking to see how the web tool does it and copying that, but I wouldn't encourage that as the auth step alone would make that fairly messy).
0
0
1
1
2016-10-27T20:14:00.000
1
0.197375
false
40,293,466
0
0
1
1
Is there a way to upload questions to "Retrieve and Rank" (R&R) using cURL and have them be visible in the web tool? I started testing R&R using the web tool (which I find very intuitive). Now I have started testing the command line interface (CLI) for more efficient uploading of question-and-answer pairs using train.py. However, I would still like to have the questions visible in the web tool so that other people can enter the collection and perform training there as well. Is this possible in the present state of R&R?
How to debug a Django view with the Atom python-debugger?
40,298,315
0
0
1,402
0
python,django,python-3.x,atom-editor
Django doesn't ship a debugger itself, but you can set up a virtual environment and run the development server:

    $ mkdir myvenv
    $ cd myvenv
    $ python3 -m venv myvenv
    $ source myvenv/bin/activate

Now your prompt is: (myvenv)diego@AspireM1640 ~/www/myvenv $
Go to your project folder and run the debug server with python manage.py runserver, or for intranet access on a specific IP: python manage.py runserver 192.168.1.33:8000
0
0
0
0
2016-10-28T02:38:00.000
1
0
false
40,297,196
0
0
1
1
I'm using Atom for Python development. I created a simple project with Python and Django, and I have already installed python-debugger. But how can I debug a Django view with it?
How to use PHP/HTML as interface and Java/Python as function in background?
40,332,050
0
0
177
0
java,php,python,html,desktop-application
You could expose data from Java or Python as JSON via a GET request and use PHP to access it. There are multiple libraries for each of these languages, both for writing and reading JSON. The GET request can take parameters if needed.
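A minimal sketch of the Python side of that suggestion (Flask assumed; the endpoint and parameters are invented):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/api/add")
    def add():
        # GET parameters carry the inputs; JSON carries the result back
        a = request.args.get("a", default=0, type=int)
        b = request.args.get("b", default=0, type=int)
        return jsonify(result=a + b)

The PHP/HTML interface could then read http://localhost:5000/api/add?a=2&b=3 with file_get_contents or curl and decode the JSON.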
0
0
0
1
2016-10-30T18:07:00.000
2
0
false
40,332,032
0
0
1
1
I'm thinking about writing a desktop application whose GUI is made with either HTML or PHP, but whose functions are run by separate Java or Python code. Is there anything I can look into as a heads-up?
Python Pyro running on Linux to open a COM object on a remote windows machine, is it possible?
40,347,576
0
1
433
0
python,linux,windows,com,pyro
Yes, this is a perfect use case for Pyro: to create a platform-independent wrapper around your COM access code. At least, I assume you have some existing Python code (using ctypes or pywin32?) that is able to invoke the COM object locally? You wrap that in a Pyro interface class and expose it to your Linux box. I think the only gotcha is that you have to make sure you pythoncom.CoInitialize() things properly in your Pyro server class to be able to deal with the multithreading in the server, or use the non-threaded multiplex server.
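A rough sketch of such a wrapper with Pyro4, assuming pywin32 on the Windows side; the COM ProgID and method here are placeholders:

    import Pyro4
    import pythoncom
    import win32com.client

    @Pyro4.expose
    class ComBridge(object):
        def get_value(self, item):
            pythoncom.CoInitialize()  # each server thread needs its own COM init
            try:
                app = win32com.client.Dispatch("Vendor.Application")  # hypothetical ProgID
                return app.Lookup(item)  # hypothetical COM call
            finally:
                pythoncom.CoUninitialize()

    daemon = Pyro4.Daemon(host="0.0.0.0")   # reachable from the Linux box
    uri = daemon.register(ComBridge())
    print("Pyro URI:", uri)                 # pass this URI to the Django side
    daemon.requestLoop()

On the Linux side, the Django app would then call Pyro4.Proxy(uri).get_value(...).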
0
1
0
1
2016-10-31T02:34:00.000
1
0
false
40,335,862
0
0
1
1
I have a project that requires the use of COM objects running on a Windows machine. The machine running the Python Django project is a Linux box. I want to use Pyro and the Django app to call COM objects on the remote Windows machine. Is that possible? Any suggestion is appreciated.
Setting up a server to host multiple domains using django, virtualenv, gunicorn and nginx
40,339,161
0
1
709
0
python,django,postgresql,nginx,virtualenv
I wouldn't consider it advisable. By doing that, you are creating a dependency between the projects, which means you'll never be able to upgrade one without all the others. That would be a massive PITA. Eventually it would get to a point where you could never upgrade, because project A's dependency foo doesn't work with Django 1.N but project B's dependency bar requires at least 1.N. At that point you fall back to the cleaner solution anyway: separate environments. That applies to the Django side of things at least; it may work slightly better with Postgres and Nginx.
0
0
0
0
2016-10-31T09:01:00.000
3
0
false
40,339,098
0
0
1
3
I am setting up a new server machine which will host multiple Django websites. I must point out that I own (developed and am in absolute control of) all the websites that will run on the server. I am pretty certain that ALL of the websites will be using the same versions of django, gunicorn, nginx, postgreSQL and psycopg2 (although some websites will be using geospatial and other extensions). The only thing that I know will differ between the Django applications is the set of python modules used (which may have implications for the version of python required). I can understand using virtualenv to manage instances where a project has specific python modules (or even python version requirements), but it seems pretty wasteful to me (in terms of resources) to have each project (via virtualenv) keep separate installations of django, nginx, gunicorn ... etc. My question then is this: is it 'acceptable' (or considered best practice in scenarios such as that outlined above) to globally install django, gunicorn, nginx, postgreSQL and psycopg2 and simply use virtualenv to manage only the parts (e.g. python modules/versions) that differ between projects? Note: in this scenario there'll be one nginx server handling multiple domains. Last but not least, is it possible to use virtualenv to manage different postgreSQL extensions in different projects?
Setting up a server to host multiple domains using django, virtualenv, gunicorn and nginx
40,339,173
1
1
709
0
python,django,postgresql,nginx,virtualenv
No. It would probably work, but it would be a bad idea. Firstly, it's not clear what kind of "resources" you think would be wasted. The only relevant thing is disk space, and we're talking about a few megabytes only; not even worth thinking about. Secondly, you'd now make it impossible to upgrade any of them individually; for anything beyond a trivial upgrade, you'd need to test and release them all together, rather than just doing what you need and deploying that one on its own.
0
0
0
0
2016-10-31T09:01:00.000
3
1.2
true
40,339,098
0
0
1
3
I am setting up a new server machine which will host multiple Django websites. I must point out that I own (developed and am in absolute control of) all the websites that will run on the server. I am pretty certain that ALL of the websites will be using the same versions of django, gunicorn, nginx, postgreSQL and psycopg2 (although some websites will be using geospatial and other extensions). The only thing that I know will differ between the Django applications is the set of python modules used (which may have implications for the version of python required). I can understand using virtualenv to manage instances where a project has specific python modules (or even python version requirements), but it seems pretty wasteful to me (in terms of resources) to have each project (via virtualenv) keep separate installations of django, nginx, gunicorn ... etc. My question then is this: is it 'acceptable' (or considered best practice in scenarios such as that outlined above) to globally install django, gunicorn, nginx, postgreSQL and psycopg2 and simply use virtualenv to manage only the parts (e.g. python modules/versions) that differ between projects? Note: in this scenario there'll be one nginx server handling multiple domains. Last but not least, is it possible to use virtualenv to manage different postgreSQL extensions in different projects?
Setting up a server to host multiple domains using django, virtualenv, gunicorn and nginx
40,340,613
0
1
709
0
python,django,postgresql,nginx,virtualenv
I would suggest using Docker virtualization, so that every project has its own scope and doesn't interfere with other projects. I currently have such a configuration on multiple servers and I'm really happy with it, because I'm really flexible and, what is really important, more secure: if any project has critical bugs in it, the other projects are still safe.
0
0
0
0
2016-10-31T09:01:00.000
3
0
false
40,339,098
0
0
1
3
I am setting up a new server machine which will host multiple Django websites. I must point out that I own (developed and am in absolute control of) all the websites that will run on the server. I am pretty certain that ALL of the websites will be using the same versions of django, gunicorn, nginx, postgreSQL and psycopg2 (although some websites will be using geospatial and other extensions). The only thing that I know will differ between the Django applications is the set of python modules used (which may have implications for the version of python required). I can understand using virtualenv to manage instances where a project has specific python modules (or even python version requirements), but it seems pretty wasteful to me (in terms of resources) to have each project (via virtualenv) keep separate installations of django, nginx, gunicorn ... etc. My question then is this: is it 'acceptable' (or considered best practice in scenarios such as that outlined above) to globally install django, gunicorn, nginx, postgreSQL and psycopg2 and simply use virtualenv to manage only the parts (e.g. python modules/versions) that differ between projects? Note: in this scenario there'll be one nginx server handling multiple domains. Last but not least, is it possible to use virtualenv to manage different postgreSQL extensions in different projects?
Django-rest-framework-jwt won't return JWToken for nonstaff accounts (django admin error?)
40,354,393
0
0
171
0
python,django,rest,django-rest-framework,jwt
Found the issue. My account API allowed bad passwords to fall through, so my user model wasn't able to log in with that password.
0
0
0
0
2016-10-31T12:38:00.000
1
0
false
40,342,364
0
0
1
1
So I've created a register page that allows visitors to register an account on my website. These accounts have no staff status or administrative privileges. I've also created a login page that takes the username and password and sends an AJAX POST request to an auth URL. The URL links to obtain_jwt_token (django-rest-framework-jwt's view), which checks the username and password and then returns a JWT token to the visitor's localStorage. This is all fine and dandy, and it works well. The only problem is... well, it works only for administrator accounts. For some reason the accounts with no staff status aren't validated; JSON Web Tokens aren't returned for these accounts. Is this an issue with django.admin.auth, or is it an issue with drf-jwt? Is drf-jwt using the Django admin page to authenticate users? Because that's not what I want. I don't just want admins to be able to log in to my website.
Does User ID on Xing OAuth change when using Test Key?
40,347,385
0
2
81
0
oauth,python-social-auth,xing
Sorry, found the answer. The user ID is tied to the Xing app that asks for the user, which can only be configured by one developer account... So you need to use the same credentials when requesting users, in order to be able to find the same user by that user ID again.
0
0
0
0
2016-10-31T16:25:00.000
1
1.2
true
40,346,352
0
0
1
1
I get slightly changing user IDs on sign-up/login when authenticating with Xing OAuth, something like 37578662_a467ef and 37578662_76a7fe. Does somebody know if the user ID changes when using a Xing test key? Or whether I can rely on the first part (before the underscore) to be equal and consistent on login? Using python-social-auth and Django. Best, Johannes
How to handle ordering of flask error handlers
40,360,531
5
3
453
0
python,python-3.x,flask
Error handlers follow the exception class MRO, or method resolution order, and a handler is looked up in that order: the specific exception type first, then its direct parent class, etc., all the way down to BaseException and object. There is no need to order anything; if you registered a handler for Exception, it'd be used for any exception for which no more specific handler was found.
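A small illustration of that lookup (newer Flask versions resolve handlers this way; the class names here are invented):

    from flask import Flask

    app = Flask(__name__)

    class AppError(Exception):
        pass

    class DatabaseError(AppError):
        pass

    @app.errorhandler(AppError)
    def handle_app_error(exc):
        return "app error", 500

    @app.errorhandler(DatabaseError)
    def handle_db_error(exc):
        return "db error", 503

    @app.route("/boom")
    def boom():
        # handled by handle_db_error: the MRO walk finds the most specific
        # registered handler, regardless of registration order
        raise DatabaseError()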
0
0
0
0
2016-11-01T11:59:00.000
1
0.761594
false
40,359,630
0
0
1
1
How does one ensure that the Flask error handlers get the most specific exception? From some simple tests and from looking at the source code, it looks like the Flask error handling code just takes the first registered error handler for a given exception type, instead of the most specific type possible. I guess the answer is to put the error handler for Exception at the very end?
Jupyter notebook browser crashed but running in command prompt
40,380,231
0
1
199
0
python,jupyter-notebook
I think you are out of luck here; the one thing you can do is check .ipynb_checkpoints/ to see if there is a recent checkpoint you can recover.
0
0
0
0
2016-11-02T11:59:00.000
1
1.2
true
40,379,402
1
0
1
1
I was scraping the web using a Jupyter notebook and it had been running for 20 hours. Because it was taking up a lot of RAM, the browser eventually crashed; however, the command prompt instance is still running. Is there a way to get the browser contents back, with the data that was already scraped?
Python 2.7 - b'' appears in front of a string in output file
40,503,197
0
1
358
0
python,string,unicode,encoding
I added this to my code:

    reload(sys)
    sys.setdefaultencoding('UTF8')

then removed the str() call applied to my string before output, and now it works fine, thanks!
0
0
0
0
2016-11-03T16:24:00.000
1
1.2
true
40,406,618
0
0
1
1
I wrote a Python script that runs a SQL query and creates an external file from the output. It works well on my computer, but when I try to run the exact same script on another computer the output file is different. On mine the content of the output file looks like this: FR, DE, CA, and on the other computer it looks like this: b'FR', b'DE', b'CA'. There is this b'' around the strings and I don't know what I should configure on the 2nd computer to remove it. Both computers are using Python 2.7.11. I noticed the b'' thing appears on the 2nd computer after I use the smart_str function from django.utils.encoding. Before I pass the string to the output file I do str(x), but the b'' is not removed. Thanks in advance for your help!
Running R models using the rpy2 interface on Docker: facing an issue related to opening a port
41,900,410
0
0
59
0
python,r,docker,rpy2
Finally resolved this problem myself. It is very specific to my Python script: in the R command call from Python, I just needed to change the TBATS and BATS functions. (A very specific problem, relevant if someone works with the R time series library.)
0
1
0
0
2016-11-03T16:59:00.000
1
0
false
40,407,263
0
0
1
1
First, I ran R models on a Windows system using the rpy2 Python interface, and it ran fine. Then I migrated it to a Linux environment using Docker. Now, executing the same code with the docker run command, I'm facing "rpy2.rinterface.RRuntimeWarning: port 11752 cannot be opened". Note: my application runs four R models using rpy2, which means it creates four R objects, so I think they may be using the same port at the same time; however, I'm not sure. Help with this issue is really appreciated. Thanks in advance.
Wordpress - Integrating custom web application
40,452,079
0
0
62
0
php,python,wordpress,web-applications
After consulting with someone who has developed with Wordpress before, he recommended building a plugin. And since I have no experience with Wordpress, he helped me build it. It was literally 3 lines of PHP. Thank you all.
0
0
0
0
2016-11-04T13:21:00.000
1
1.2
true
40,423,769
0
0
1
1
I was tasked to create a file upload workflow that integrates with Wordpress. I created a backend that is called via REST that does a lot of custom workflows. Thus, I cannot use the current plugins. It is a single page application that accepts a file as well as some metadata. My current dilemma: I need to integrate this web application within Wordpress and have no clue where to start.
Transfer Data from Click Event Between Bokeh Apps
40,429,660
1
0
157
0
javascript,python,cookies,bokeh
The cookies idea might work fine. There are a few other possibilities for sharing data:

- a database (e.g. redis or something else that can trigger async events the app can respond to)
- direct communication between the apps (e.g. with zeromq or similar); the Dask dashboard uses this kind of communication between remote workers and a Bokeh server
- files plus timestamp monitoring, if there is a shared filesystem (not great, but sometimes workable in very simple cases)

Alternatively, if you can run both apps on the same single server (even though they are separate apps), then you could probably communicate by updating some mutable object in a module that both apps import. But this would not work in a scale-out scenario with more than one Bokeh server running. For any/all of these somewhat advanced usages, a working example would make a great contribution to the docs so that others can learn from it.
0
1
0
0
2016-11-04T15:03:00.000
1
1.2
true
40,425,856
0
0
1
1
I have two Bokeh apps (on Ubuntu \ Supervisor \ Nginx): one is a dashboard containing a Google map, and the other is an account search tool. I'd like to be able to click a point in the Google map (representing a customer) and have the account search tool open with info from the point. My problem is that I don't know how to get the data from A to B in the current framework. My ideas at the moment: (1) have an event handler for the click that both saves a cookie and opens the account web page, then have some sort of JS that reads the cookie and loads the account; or (2) throw my hands up, try to put both apps together, and just find a way to pass it in the back end.
Create AWS RDS on specific VPC
40,433,014
1
0
214
1
python,amazon-web-services,deployment,amazon-ec2,boto3
It is certainly best practice to have your Amazon EC2 instances in the same VPC as the Amazon RDS database. Recommended security:

- create a security group for your web application EC2 instances (Web-SG)
- launch your Amazon RDS instance in a private subnet in the same VPC
- configure the security group on the RDS instance to allow incoming MySQL (3306) traffic from the Web-SG security group

If your RDS instance is currently in a different VPC, you can take a snapshot and then create a new database from the snapshot. If you are using an Elastic Load Balancer, you could even put your Amazon EC2 instances in a private subnet, since all access will be via the load balancer.
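Since the question mentions boto3, here is a rough sketch of creating the RDS instance inside the VPC; the identifiers, sizes and security group ID are placeholders:

    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="myapp-db",
        Engine="mysql",
        DBInstanceClass="db.t2.micro",
        MasterUsername="admin",
        MasterUserPassword="change-me",
        AllocatedStorage=20,
        DBSubnetGroupName="myapp-private-subnets",     # subnet group in the EC2 VPC
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # allows 3306 from Web-SG only
        PubliclyAccessible=False,
    )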
0
0
1
0
2016-11-04T15:50:00.000
2
0.099668
false
40,426,863
0
0
1
1
I have a web application which is dynamically deployed on EC2 instances (scalable). I also have an RDS MySQL instance which is dynamically created with Python and boto3. Right now port 3306 of the RDS instance is public, but I want to allow connections only from my EC2 instances in a specific VPC. Can I create the RDS instance in a specific VPC (the same one as the EC2 instances)? What is the best practice for creating such an EC2 + RDS setup?
Running flask app on DigitalOcean: should I keep the ssh console open all the time?
40,433,397
0
1
282
0
python,ssh,flask,digital-ocean
You needn't keep the console open; the app will still be running after you close the console on your computer. But you may want to set up a log to monitor it.
0
1
0
1
2016-11-04T23:45:00.000
3
0
false
40,433,243
0
0
1
1
I created a droplet that runs a Flask application. My question is: when I ssh into the droplet and restart the apache2 server, do I have to keep the console open all the time (that is, not shut down my computer) for the application to stay live? What if I have a dynamic application that runs scripts in the background: do I have to keep the console open all the time for the dynamic parts to work? P.S.: there's a similar question on SO about a Node.js app, but some parts of the answer provided there are irrelevant to my Flask app.
Use external python libraries on Pyramid
40,450,003
5
1
107
0
python,pyramid
Yes, all of those frameworks simply run Python code to handle requests. Within limits, you can use external libraries just fine. The limits are usually dictated by the WSGI server and the nature of HTTP requests: if your library changes the event model (like gevent), relies heavily on changing interpreter state (global state, localization), or takes a long, long time to produce results, then you may need to do more work to integrate it.
0
0
0
0
2016-11-06T13:38:00.000
1
0.761594
false
40,449,935
0
0
1
1
Can I use any external libraries that are developed for Python with Pyramid? I mean, is it the 'normal Python' into which I can import external libraries, as I do with the standard Python downloaded from python.org? What is the situation for Django, Flask and Bottle? My intention is to create a backend for a mobile app. I want to do it specifically in Python because I need to learn Python. The app is a native Android app, therefore there is no need to respond with nice HTML code. I just want Django/Flask/Pyramid to direct HTTP requests to the relevant Python functions. Everything else, including user auth and the database, is handled by code I write. Is there a better, simpler way to map HTTP requests/responses to the relevant functions without using these 3 platforms? And if I use one of these, can I still use my own libraries?
Can we send multiple responses at intervals to a single request, using Django as the server?
40,451,675
2
0
2,762
0
python,django,rest
No, it is not possible, my friend. It is not related to Django or any other web framework; those are the HTTP rules and you can't change them: every HTTP request has exactly one HTTP response.
0
0
0
0
2016-11-06T15:59:00.000
2
1.2
true
40,451,372
0
0
1
1
I am working on an app which sends a request to the server asking it to report the details of a device every 10 seconds for the next 60 seconds. I am using the Django framework as the backend server. Can we send multiple responses to a single request from the app? If yes, can you point me in the right direction?
How to structure a very small Django Project?
40,452,589
3
1
318
0
python,django
Since, as you say, it's really static: use nginx to serve static files. Do not use Django. You can set up a project structure when it becomes required.
0
0
0
0
2016-11-06T17:50:00.000
3
0.197375
false
40,452,529
0
0
1
2
It is a bit of an oxymoron that, now that I am making a small Django project, it is hard to decide how to structure it. Before, I would have at least 10 to 100 apps per project. Now my project is just a website that presents information about a company, with no database use, meaning it's really static, with only 10 to 20 pages. Now, how do you start? Do you create an app for such a project?
How to structure a very small Django Project?
40,452,810
1
1
318
0
python,django
Frankly, I wouldn't use Django in that case; I would use Flask for such small projects. It's easy to learn and to set up a small website with. PS: I use Flask in small and large apps!
0
0
0
0
2016-11-06T17:50:00.000
3
0.066568
false
40,452,529
0
0
1
2
It is a bit of an oxymoron that, now that I am making a small Django project, it is hard to decide how to structure it. Before, I would have at least 10 to 100 apps per project. Now my project is just a website that presents information about a company, with no database use, meaning it's really static, with only 10 to 20 pages. Now, how do you start? Do you create an app for such a project?
foobar Google - Error 403 permission denied - programming challenge
60,907,357
0
1
604
0
python
You have to sign in and associate your foobar account with your Gmail; then you should be able to request a new challenge.
0
0
1
0
2016-11-07T00:55:00.000
2
0
false
40,456,337
0
0
1
2
I am doing a challenge for Google FooBar and am having trouble submitting my code. My code is correct: I have checked my program output against the answers provided by Google, and my output matches. However, when I try to submit, I get an "Error 403: Permission denied" message. I cannot submit feedback either, because I receive the same error message. Does anyone have any advice?
foobar Google - Error 403 permission denied - programming challenge
44,912,746
0
1
604
0
python
I also faced the same issue. You can solve it by closing the current foobar session and opening a new one in another tab. This will definitely solve the problem.
0
0
1
0
2016-11-07T00:55:00.000
2
0
false
40,456,337
0
0
1
2
I am doing a challenge for Google FooBar and am having trouble submitting my code. My code is correct: I have checked my program output against the answers provided by Google, and my output matches. However, when I try to submit, I get an "Error 403: Permission denied" message. I cannot submit feedback either, because I receive the same error message. Does anyone have any advice?
Generate 2D barcodes and arrange them on a grid for bulk printing
40,477,133
1
1
657
0
python,pdf,printing,barcode,barcode-printing
Just FWIW, Code-128 is NOT a 2D barcode; it is a "simple" 1D barcode. That said, there are Code-128 fonts around, which means you can use them in PDF form fields, which you can fill, maybe flatten the document, and send to the printer. No need to fiddle around with layout after you have created your base PDF. To fill, you could use command line tools, such as FDFMerge by Appligent, where you can easily create data files from your database system and merge that data with the base PDF.
0
0
0
0
2016-11-07T14:14:00.000
2
0.099668
false
40,467,306
0
0
1
2
What tools do I need to render 60,000+ unique Code-128 barcodes and arrange them in a grid in a PDF file for volume printing? Printing this many barcodes digitally seems like a challenge on its own, so there must be some lore from folks who have dealt with warehousing and bulk labelling. Existing projects and commercial products focus on barcode generation instead of layout and printing. I messed around with some Python that renders a PDF, but the tough part is dealing with various labelling templates and understanding that printers print better or worse barcodes depending on the rotation of the heads. Should I even be using PDF for this? I have spent too much time already trying to line up the output of an HTML page for a crappy labelling template. I would appreciate a link to an open source library or even a commercial tool for laying out barcodes at this scale.
Generate 2D barcodes and arrange them on a grid for bulk printing
41,408,481
0
1
657
0
python,pdf,printing,barcode,barcode-printing
We use LaTeX with the textpos package to get absolute positioning, and the pst-barcode package to create the actual barcode symbols. We generate the LaTeX source file in a scripting language and then run pdflatex to get the PDF with the symbols. It is really easy when using LaTeX.
0
0
0
0
2016-11-07T14:14:00.000
2
0
false
40,467,306
0
0
1
2
What tools do I need to render 60,000+ unique Code-128 barcodes and arrange them in a grid in a PDF file for volume printing? Printing this many barcodes digitally seems like a challenge on its own, so there must be some lore from folks who have dealt with warehousing and bulk labelling. Existing projects and commercial products focus on barcode generation instead of layout and printing. I messed around with some Python that renders a PDF, but the tough part is dealing with various labelling templates and understanding that printers print better or worse barcodes depending on the rotation of the heads. Should I even be using PDF for this? I have spent too much time already trying to line up the output of an HTML page for a crappy labelling template. I would appreciate a link to an open source library or even a commercial tool for laying out barcodes at this scale.
Google App Engine ndb memcache when to use memcache
40,473,578
0
0
148
0
python,google-app-engine
One case would be inside a transaction in which you want to read some related entity values but you don't care whether you access those particular entities consistently or not (in the context of that transaction). In such a case, reading from the datastore would unnecessarily include those related entities in the transaction, which contributes to datastore contention and could potentially cause various per-transaction limits to be exceeded. Reading memcached values for those related entities instead would not include the entities in the transaction itself. Now, I'm not 100% certain whether this is applicable to ndb's memcache copy of an entity (I don't even know how to access that); I used my own memcache copies of such entities, updated whenever I modify those entities.
0
1
0
0
2016-11-07T17:27:00.000
1
1.2
true
40,471,023
0
0
1
1
If reads/writes to the ndb datastore are automatically cached both in-context and via memcache, in what cases would you want to call the memcache API directly (in the context of the datastore)? To elaborate: would I ever need to set the memcache for a particular datastore read/write and serve reads from memcache instead of the datastore directly?
Django - beginner- what is the process for passing information to a view via a url?
40,474,452
2
2
47
0
python,django
Yes, generally POST is a better way of submitting data than GET. There is a bit of confusion about terminology in Django: while Django is indeed MVC, models are models, but "views" are in fact controllers, and templates are the views. Since you are going to use AJAX to submit and retrieve the data, you don't care about templates. So what you most likely want is something like this. In your urls.py, as part of your urlpatterns variable:

    url(r'mything/$', MyView.as_view())

In your views.py:

    from django.views import View
    from django.http import HttpResponse

    class MyView(View):
        def post(self, request):
            data = request.POST
            ... do your thing ...
            return HttpResponse(results)

And in your javascript:

    jQuery.post('/mything/', data, function() { whatever you do here })
0
0
0
0
2016-11-07T17:33:00.000
2
1.2
true
40,471,132
0
0
1
1
I am working on my first Django project, which is also my first backend project. In the tutorials/reading I have completed, I haven't come across passing information back to Django without a ModelForm. My intention is to calculate a value on a page using JavaScript and pass it to Django when a user hits a submit button on that page. The submit button will also be a link to another page. I know I could process the information in a view via the URL if I knew how to pass the information back to Django. I'm aware that Django uses MVC, and as I have my models and views in place, I am led to believe that this has something to do with controllers. Basically, I would like to know how to pass information from a page to Django as a user follows a link to another page. I understand that this isn't the place for long step-by-step tutorials on specific topics, but I would appreciate any links to resources on this subject; I don't know what this process is even called, so I can't search the documentation for it. EDIT: from further reading, I think I want to use the submit button to GET or POST the value. In this particular case, POST is probably better. Could someone confirm that this is true?
Ways to avoid wiping the previous form on reload in Tornado
40,517,136
0
0
47
0
python,angularjs,templates,tornado
This is not really a Tornado question, as this is simply how the web works. One possible solution is to have only one form, but display its fields so that they look like two forms; in addition, have two separate submit buttons, each with its own name and value. Now, when you click either button the whole form will be submitted, but in the handler you can process only the fields associated with the clicked button, while still displaying values in all the fields.
0
1
0
0
2016-11-08T08:31:00.000
1
0
false
40,482,242
0
0
1
1
I have two forms. When I submit form #1 I get some corresponding file, but when I then submit form #2, its corresponding file gets shown and form #1 goes empty. Basically, I want something like a SPA (e.g. Angular), but I am treating form #1 and form #2 as separate request routes, each of which renders my index.html every time, so form #2 is wiped when I submit form #1 and vice versa. I don't want working code, just ideas on how to do that with Tornado (not Angular, or say Tornado + Angular?). I think one way, for example, is to handle these requests via a controller and do an AJAX post to the corresponding Tornado handler, which, after the file is rendered, serves that very file back again. But this uses AngularJS as a SPA. Any other solution possible? Thanks in advance.
Is request data already sanitized by Flask?
40,491,639
8
7
3,019
0
python,flask
Flask does nothing to request data besides parsing it from the raw HTTP request. It has no way to know what constraints an arbitrary function has, so it's up to you to check any constraints. All data will be strings by default. Don't use eval or exec. Use your database driver's parametrized queries to avoid SQL injection. If you render a template with Jinja, it will escape data for use in HTML by default.
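A small sketch of the parametrized-query point (sqlite3 used for brevity; the table and route are invented):

    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/user")
    def user():
        name = request.args.get("name", "")  # untrusted string from the client
        conn = sqlite3.connect("app.db")
        # the driver binds the value; no string formatting, no SQL injection
        row = conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchone()
        conn.close()
        return str(row)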
0
0
0
0
2016-11-08T15:51:00.000
1
1.2
true
40,491,145
0
0
1
1
Should data which comes from the user (like cookie values, variable parts in a route, query args) be treated as insecure and processed in a particular way? Does Flask already sanitize/escape input data, so that passing it to a function test(input_data) is secure?
How to install pip and selenium and phantomjs on ubuntu
40,514,085
5
0
4,788
0
python,selenium,phantomjs,pip
Here are the answers:

    1) sudo apt-get install python-pip
    2) sudo pip install selenium
    3) sudo apt-get install phantomjs

Tested and working. I hope it helps you.
0
0
1
0
2016-11-09T18:57:00.000
1
0.761594
false
40,514,084
1
0
1
1
I ran a Python program that uses selenium and phantomjs and got errors 2) and 3) below; then, when I ran pip install selenium, I got error 1):

1) The program 'pip' is currently not installed.
2) ImportError: No module named 'selenium'
3) selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.

All done on Ubuntu 14.04 x64.
Installing Django in a virtualenv: do I have to re-install it every time I make another project?
40,517,103
0
0
499
0
python,django,installation,virtualenv
You must activate the new venv (run ...Scripts/activate) and install Django inside the venv; after that, go to your project and run python manage.py runserver. That's all.
0
0
0
0
2016-11-09T20:42:00.000
1
0
false
40,515,674
1
0
1
1
Sorry if this is an obvious or dumb question, but the thing is that I've been having problems with the installation of Django and with virtualenvs. I'm a Windows 10 user and I've been following a series of Django tutorials in which they create a virtualenv and, inside of it, proceed with the installation of the framework using pip. The problem is that I dropped the old project/virtualenv which had Django installed and started a new one, a new virtualenv (creating a new folder and typing virtualenv .), and reinstalled Django in it. But now, when I go through cmd to the directory J:\project2\Scripts\django-admin.py, I receive an error:

    Traceback (most recent call last):
      File "J:\project2\Scripts\django-admin.py", line 2, in <module>
        from django.core import management
    ImportError: No module named django.core

Is it because I re-installed Django in another new virtualenv? Thanks to all :)
How to prevent Django messages from leaking out to other modules?
40,524,619
0
0
42
0
python,django,messages
You don't have to iterate over messages to expire them; Django does that for you. When one request adds a message, it is iterated over during the next request, displayed if the template allows it, and removed from the request data. That means it's shown once and then removed. The only way to get a message from your email module displayed in the account module is to redirect the user to an account page directly after the action that adds the message has completed (after an email has been sent, for example). You have complete control over this from your views.
0
0
0
0
2016-11-10T06:50:00.000
1
1.2
true
40,521,509
0
0
1
1
I am currently using the built-in Django messages framework of Django version 1.10. However, since the messages are stored in the request, and therefore not "namespaced" for different modules, I am concerned that this might lead to circumstances where messages created by one module (e.g. a messaging framework: "your message has been sent") bleed into another. Is there a way to "namespace" these messages so we don't get this unintended effect? In addition, the documentation says that messages expire once they are iterated over; does that mean that if I forget to iterate over them, they can build up over multiple requests?
How to decode a token and get back information with the djangorestframework-jwt package for Django
70,026,089
1
7
11,026
0
django,python-2.7,django-rest-framework,jwt
Do this:

    import jwt
    from django.conf import settings

    jwt.decode(token, settings.SECRET_KEY, algorithms=['HS256'])
0
0
0
0
2016-11-10T07:33:00.000
4
0.049958
false
40,522,177
0
0
1
1
I have started using the djangorestframework-jwt package instead of PyJWT, and I just cannot figure out how to decode the incoming token (I know there is a verify-token method). All I need to know is how to decode the token and get back the info encoded in it.
Django - Two Users Accessing The Same Data
40,536,028
1
0
401
0
python,django,multithreading
Don't share in-memory objects if you're going to mutate them. Concurrency is super hard to do right, and premature optimization is evil. Give each user their own view of the data and only share data via the database (using transactions to make your updates atomic). Keep and increment counters in your database every time you make an update, and make transactions fail if those numbers have changed since the data was read (i.e. somebody else has mutated it). Also, don't make important architectural decisions when tired! :)
0
0
0
0
2016-11-10T18:07:00.000
2
0.099668
false
40,534,282
0
0
1
2
Let's say that I have a Django web application with two users. My web application has a global variable that exists on the server (a Pandas DataFrame created from data from an external SQL database). Let's say that a user makes an update request to that DataFrame, and the DataFrame is now being updated. While the DataFrame is being updated, the other user makes a get request for it. Is there a way to 'lock' the DataFrame until user 1 is finished with it and then finish the request made by user 2? EDIT: So the order of events should be: user 1 makes an update request, the DataFrame is locked, user 2 makes a get request, the DataFrame finishes updating, the DataFrame is unlocked, user 2 gets his/her request. Lines of code would be appreciated!
Django - Two Users Accessing The Same Data
40,534,608
2
0
401
0
python,django,multithreading
Ehm... Django is not a server. It has a single-threaded development server in it, but that should not be used for anything beyond development, and maybe not even for that. Django applications are deployed using WSGI. The WSGI server running your app is likely to start several separate worker processes and will be killing and restarting them according to the rules in its configuration. This means that you cannot rely on multiple requests hitting the same process. The Django app lifecycle runs between getting a request and returning a response; anything that is not explicitly made persistent between those two events should be considered gone. So, when one of your users updates a global variable, that variable only exists in the one process this user randomly hit. The second user might or might not hit the same process, and therefore might or might not get the same copy of the variable. More than that, the process will sooner or later be killed by the WSGI server, and all the updates will be gone. What I am getting at is that you might want to rethink your architecture before you bother with the atomic update problems.
0
0
0
0
2016-11-10T18:07:00.000
2
0.197375
false
40,534,282
0
0
1
2
Let's say that I have a Django web application with two users. My web application has a global variable that exists on the server (a Pandas DataFrame created from data from an external SQL database). Let's say that a user makes an update request to that DataFrame, and the DataFrame is now being updated. While the DataFrame is being updated, the other user makes a get request for it. Is there a way to 'lock' the DataFrame until user 1 is finished with it and then finish the request made by user 2? EDIT: So the order of events should be: user 1 makes an update request, the DataFrame is locked, user 2 makes a get request, the DataFrame finishes updating, the DataFrame is unlocked, user 2 gets his/her request. Lines of code would be appreciated!
Is there a way to scrape Facebook comments and IDs from a Facebook page like nytimes or the guardian for analytical purposes?
68,362,541
-1
1
827
0
python,web-scraping,facebook-apps
To use their API, you'll need to "verify" your app to get access to "pages_read_user_content" or "Page Public Content Access". Using the API you can GET the page ID / page post ID / permalink to a post on your own, but to scrape the comments with the API you'll need a verified business account.
0
0
1
0
2016-11-11T23:16:00.000
2
-0.099668
false
40,557,678
0
0
1
1
Is there a way to scrape Facebook comments and IDs from a Facebook page, like nytimes or the guardian, for analytical purposes?
Django print in prod' server
40,560,572
3
3
1,138
0
python,django,pythonanywhere
On a production server your print statements will send their output to your web server's log files. In the case of PythonAnywhere there are three log files:

- Access log: yourusername.pythonanywhere.com.access.log
- Error log: yourusername.pythonanywhere.com.error.log
- Server log: yourusername.pythonanywhere.com.server.log

These logs are accessible from your web tab page. The output you are looking for will be in the server log.
0
0
0
0
2016-11-12T07:04:00.000
2
1.2
true
40,560,439
1
0
1
1
I've gotten used to using print in my Python code to show the contents of variables and checking the shell output, but I have now migrated all my work onto an online server, PythonAnywhere, and I don't have the foggiest idea how to do the same there. Can someone point me in the right direction? Print to a web console? To a file? Or even to the shell session? Thanks
Creation of basic "stock-program"
40,565,682
0
0
204
0
python,html,web-scraping,yahoo-finance,google-finance
It's always best to use the provided API if you can get all the information you need from it. If the API doesn't exist or is not good enough, then you go down the scraping path, which is usually more work than using an API. So I would definitely try using the APIs first.
0
0
1
0
2016-11-12T17:31:00.000
2
0
false
40,565,660
0
0
1
1
I'm relatively new to Python, hence the perhaps low level of my question. Anyway, I am trying to create a basic program for just displaying a couple of key statistics for different stocks (beta value, 30-day high/low, P/E, P/S, etc.). I have the GUI finished, but I'm not sure how to proceed with my project. I have been researching for a few hours but can't seem to decide which way to go. Would you recommend HTML scraping or the Yahoo/Google Finance APIs, or anything else, for downloading the data? After I have it downloaded I am pretty much just going to print it in the GUI.
How to deploy a python client script on heroku?
40,571,033
0
0
45
0
python,heroku
Define a "worker" process type in your Procfile that invokes your script.
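A minimal sketch of what that looks like (the script name is invented):

    # Procfile, one line:
    #   worker: python fetch_tweets.py

    # fetch_tweets.py: the worker loops itself instead of relying on a scheduler
    import time

    def fetch_and_store():
        ...  # query the hashtag via python-twitter and write to the database

    while True:
        fetch_and_store()
        time.sleep(30)

After deploying, start the worker with heroku ps:scale worker=1; no web dyno is needed for a script like this.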
0
0
0
1
2016-11-13T02:55:00.000
1
1.2
true
40,570,092
0
0
1
1
Basically, I have a Python script which, using the python-twitter API, fetches tweets for a particular hashtag and stores them in a database. The script does this every 30 seconds. How do I deploy the script so that it runs on Heroku?
Managing databases in Django models, sqlite and mongoengine
40,752,618
0
0
1,468
1
python,django,mongodb,sqlite,mongoengine
I found a solution; it's very simple. If you want a model to use the MongoDB database, just create the model class with Document as its base class (or EmbeddedDocument), for example class Magazine(Document):. But if you prefer the default database you defined, just create the class as in the Django documentation, for example class Person(models.Model):.
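A side-by-side sketch of the two base classes (the field names are invented):

    from mongoengine import Document, StringField
    from django.db import models

    class Magazine(Document):               # persisted in MongoDB via mongoengine
        title = StringField(required=True)

    class Person(models.Model):             # persisted in the default (sqlite) database
        name = models.CharField(max_length=100)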
0
0
0
0
2016-11-15T05:28:00.000
2
1.2
true
40,602,640
0
0
1
1
I'm developing a project in Django: something to manage assets in warehouses. I want to use two databases for this. The first is an sqlite database, which contains all the data about users. The second is a MongoDB database, in which I want to store all the data related to assets. The question is: how do I tell my model classes which database they should use (models responsible for user registration etc.: sqlite; models responsible for managing asset data: MongoDB)? I read about DATABASE_ROUTERS and using Meta classes, but those are solutions for databases supported by Django (or maybe I'm missing something), and I don't know if it's good, or even possible, to integrate that with mongoengine. Thanks for any tips!
How to display data depending on country in Django
40,617,047
1
0
24
0
python,django,django-models
Based on what you're describing, you should probably set up parallel stacks and use either your DNS, Apache, or whatever your HTTP routing tech of choice is to do the separation. Use a separate database, possibly even a separate server (or WSGI configuration), and keep your code clean. Creating duplicate "models" based on the value of a field, like you're describing, breaks a lot of Python's DRY principles.
0
0
0
0
2016-11-15T17:16:00.000
1
1.2
true
40,615,878
0
0
1
1
So I have a Django site that works perfectly and displays everything I want it to in the US; it automatically displays the data from the US data model. What I want is basically an exact clone of my site, maybe under something like mysite.com/canada, that displays the data for Canada. One approach would be to just add all the data into the database with a field that says which country it's from, but I'd rather have each country's data in a completely different model. With pure HTML/CSS this would be easy: I would just copy the entire site directory into a subdirectory, and that would be it for the country. I was wondering if there is something similar I can do with Django.
Do I Need to Migrate to Link my Database to Django
40,616,612
0
0
38
1
python,mysql,django
SHORT ANSWER: Yes.

MEDIUM ANSWER: Yes, but you will have to figure out how Django would have created the table, and do it by hand. That's not terribly hard. Django may also spit out some warnings on startup about migrations being needed... but those are warnings, and if the app works, then you're OK.

LONG ANSWER: Yes. But for the sake of your sanity and sleep quality, get a completely separate development environment and test your backups. (But you knew that already.)
0
0
0
0
2016-11-15T17:24:00.000
1
0
false
40,616,036
0
0
1
1
I'm working on a project that I inherited, and I want to add a table to my database that is very similar to one that already exists. Basically, we have a table to log users of our website, and I want to create a second table specifically to log users for whom our site fails to complete a task. Since I didn't write the site myself and am pretty new to both SQL and Django, I'm a little paranoid about running a migration (we have a lot of really sensitive data that I'm paranoid about wiping). Instead of having a Django migration create the table itself, can I create the second table in MySQL, and the corresponding model in Django, and then have this model "recognize" the SQL table, without explicitly running a migration?
AWS lambda function to retrieve any uploaded files from s3 and upload the unzipped folder back to s3 again
40,632,303
0
0
1,257
0
java,python,amazon-web-services,amazon-s3,aws-lambda
Lambda would not be a good fit for the actual processing of the files, for the reasons mentioned by other posters. However, since it integrates with S3 events, it could be used as a trigger for something else: it could send a message to SQS, where another process running on EC2 (ECS, Elastic Beanstalk, etc.) could handle the messages in the queue and then process the files from S3.
0
0
0
1
2016-11-16T08:38:00.000
3
0
false
40,627,395
0
0
1
1
I have an S3 bucket which is used for users to upload zipped directories, often 1 GB in size. The zipped directory holds images in subfolders, and more. I need to create a Lambda function that will be triggered upon new uploads, unzip the file, and upload the unzipped content back to an S3 bucket, so I can access the individual files via HTTP. But I'm pretty clueless as to how to write such a function. My concerns are: Python or Java probably performs better than Node.js for this? And avoiding running out of memory when unzipping files of a GB or more (can I stream the content back to S3?).
Passing and receiving multiple arguments with Python in Flask RestAPI
40,684,339
0
0
171
0
python,rest,curl
I am nominally embarrassed. The issue was NOT the Python code at all; it was with curl. So I both switched to HTTPie and changed the format to Schema=LONGSCHEMANAME. All of my tests started working, so clearly I was not specifying the right string in curl; the -d option was beating me. I apologize for wasting your time. Thanks
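For reference, a sketch of how multiple arguments can look on both sides (Flask assumed; the parameter names mirror the question's -s/-q/-r flags):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/query")
    def query():
        schema = request.args.get("schema")
        name = request.args.get("query")
        rows = request.args.get("rows", default=10, type=int)
        return jsonify(schema=schema, query=name, rows=rows)

    # each argument is just another &key=value pair; quote the URL for the shell:
    #   curl "http://localhost:5000/query?schema=SALES&query=daily&rows=50"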
0
0
1
0
2016-11-16T10:19:00.000
1
0
false
40,629,548
0
0
1
1
I am clearly confused, but not sure whether I am screwing up the code or curl. I would like to use a REST API to pass a schema name, a query name, and a number of rows. I've written the Python code using a simple -s schemaname -q queryname -r rows structure; that seems easy enough. But I am having trouble finding a good example of passing multiple arguments in a REST API. No matter which version of the todos example I choose as a model, I just cannot figure out how to extend it to the second and third arguments. If it uses a different structure (JSON) for input, I am fine; the only requirement is that it run from curl. I can find examples of passing lists, but not multiple arguments. If there is a code example that does this and I have missed it, please send it along. As long as it has a curl example, I am good. Thank you
How to scrape AEM forms?
40,650,411
1
0
85
0
python,selenium,beautifulsoup,aem
I would recommend Selenium, as it provides a complete browser interface and is mostly used for automation. Selenium will make this easier to implement and, most importantly, to maintain.
0
0
1
0
2016-11-17T08:42:00.000
1
1.2
true
40,650,154
0
0
1
1
I'm trying to figure out how to scrape dynamic AEM sign-in forms using python. The thing is I've been trying to figure out which module would be best to use for a sign-in form field that dynamically pops up over a webpage. I've been told Selenium is a good choice, but so is BeautifulSoup. Any pointers to which one would be best to use for dynamically scraping these?
Using same webInstance which executing a testsuite
40,651,174
1
1
35
0
python,unit-testing,selenium,selenium-webdriver
This is how it is supposed to work. Tests should be independent, else they can influence each other. I think you would want a clean browser each time rather than having to clean the session/cookies yourself - maybe not now, but you will once you have a larger suite. Each scenario starts the browser and closes it at the end; you would have to research which methods do this and do some overriding, which is not recommended at all.
0
0
1
0
2016-11-17T09:26:00.000
1
0.197375
false
40,651,064
0
0
1
1
I have created a testsuite which has 2 testcases that are recorded using selenium in firefox. Both of those test cases are in separate classes with their own setup and teardown functions, because of which each test case opens the browser and closes it during its execution. I am not able to use the same web browser instance for every testcase called from my test suite. Is there a way to achieve this?
Is there a security risk to serve Django admin page on regular http rather than Https?
40,658,108
2
0
139
0
python,django
Whenever you have a login form in a browser and transfer user credentials from the browser to the webserver, it is highly recommended to use https, because otherwise the credentials can easily be read by others. This applies to everything, not just django admin.
0
0
0
0
2016-11-17T14:53:00.000
2
0.197375
false
40,658,012
0
0
1
1
I just finished my first experience with Django on a real application and we are running it on apache2. Since I am a newbie, I am wondering if it is acceptable to have the admin page served over http? Is https a better solution? How much risk am I taking by not running it over https?
Database and Data Framework
40,658,564
1
0
66
1
python,sql,database,pandas
A database is a place where you store a collection of data. You can manipulate the data with DML statements, and some manipulations can be difficult in plain SQL (like pivots or functions). A data framework is a tool that makes your computations, pivots and other manipulations much easier (for example with a drag-and-drop option, or a few lines of code).
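As a rough illustration of the difference, here is the same aggregation expressed as a SQL statement (what you would run against a database) and as a pandas pivot (what a data framework gives you); the column names are made up:

import pandas as pd

# SQL equivalent: SELECT region, SUM(sales) FROM orders GROUP BY region;
df = pd.DataFrame({"region": ["N", "S", "N"], "sales": [10, 20, 5]})
summary = df.pivot_table(values="sales", index="region", aggfunc="sum")
print(summary)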
0
0
0
0
2016-11-17T15:05:00.000
1
1.2
true
40,658,287
0
0
1
1
I am interested in building databases and have been reading about SQL engines and the Pandas framework, and how they interact, but am still confused about the difference between a database and a data framework. I wonder if somebody could point me to links which clarify the distinction between them, and which is the best starting point for a data analysis project.
Add query string parameter in Flask POST response
40,671,727
1
0
1,516
0
python,post,flask
Short answer: The client should add the query parameter when submitting the form data (e.g. in the action parameter of the form tag). Explanation: The server is responding to a request to a particular URL. There is no way for the server to "change the URL" of the request. The only thing the server can do is ask the client to send another request to a different URL by returning a redirect. The problem with this approach, as you mentioned, is that the form data will be lost. You could save the form data using cookies or some similar mechanism, but it's much easier to just have the client submit the form to the correct URL in the first place.
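A minimal sketch of that idea in Flask (route and field names are hypothetical); the form posts to the URL that already carries the query parameter, and the thank-you content is only rendered for a POST, so a user manually visiting /contact?aftersubmit still gets the plain form:

from flask import Flask, request, render_template_string

app = Flask(__name__)

FORM = '<form method="post" action="/contact?aftersubmit"><input name="msg"><button>Send</button></form>'

@app.route("/contact", methods=["GET", "POST"])
def contact():
    if request.method == "POST":  # only a real submit sees the post-submit content
        return "Thank you!"
    return render_template_string(FORM)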
0
0
0
0
2016-11-18T07:30:00.000
1
1.2
true
40,671,611
0
0
1
1
I have a page with a form. It works fine: after the form is submitted, it is replaced with a "thank you" message. Initially the form is accessible at the url http://localhost:5000/contact, and after submit it keeps the same URL. I want the url to change to http://localhost:5000/contact?aftersubmit after submit, i.e. add a query string parameter on the server side. I know that I can do it with a redirect, but then I lose the post-submit rendered content. Also I do not want a user who manually enters http://localhost:5000/contact?aftersubmit to see the post-submit content, i.e. I cannot analyze the query string on the client side and update the HTML. It must be done on the server side. How can this be done?
How to filter against multiple values for ForeignKey using DjangoFilterBackend
40,674,813
1
0
365
0
python,django,django-rest-framework,django-filter
However if I actually pass this url it returns rows that filters against last parameter value This is because ForeignKey fields default to ModelChoiceFilter, which just takes a single value from the GET QueryDict. If you declare your fields as ModelMultipleChoiceFilter they will take the list of values you require.
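A sketch of the FilterSet this describes, assuming django-filter 2.x (in 1.x the field_name kwarg was called name); the app, model and queryset names are hypothetical, while the filter names mirror the question:

import django_filters
from myapp.models import Shipment, Port, Supplier  # hypothetical

class ShipmentFilter(django_filters.FilterSet):
    loading_port__name = django_filters.ModelMultipleChoiceFilter(
        field_name="loading_port__name", to_field_name="name", queryset=Port.objects.all())
    supplier__name = django_filters.ModelMultipleChoiceFilter(
        field_name="supplier__name", to_field_name="name", queryset=Supplier.objects.all())

    class Meta:
        model = Shipment
        fields = ["loading_port__name", "supplier__name"]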
0
0
0
0
2016-11-18T08:29:00.000
1
1.2
true
40,672,436
0
0
1
1
I have model with following fields: loading_port discharge_port carrier supplier All these fields are ForeignKey to models that contains name field. Also I have viewset, which uses DjangoFilter backend for filtering. At this moment I want to make possible filtering multiple values for each field, like: loading_port__name=PORT_1&loading_port__name=PORT_2&supplier__name=SUPP_NAME_1&supplier__name=SUPP_NAME_2 and so on. However if I actually pass this url it returns rows that filters against last parameter value (in this example for loading_port - PORT_2, for supplier - SUPP_NAME_2). How can I fix filtering so it will meet my requirements?
Automate python test with testlink and hudson
40,722,012
0
0
299
0
python,automated-tests,hudson,testlink
I found out how to do it: I used testLink-API-Python-client.
0
0
0
1
2016-11-18T14:53:00.000
1
1.2
true
40,680,022
0
0
1
1
I want to run automated tests on my python script using Hudson and testlink. I configured Hudson with my testlink server but the test results are always "not run". Do you know how to do this?
Use custom domain for flask app with Google authenticated login
40,699,877
0
0
447
0
python
I was able to solve this just now! I went through my DNS settings of my domain and pointed the DNS A record to the IP address that my flask application is running on. Previously, I was using a redirect on the domain, which was not working.
0
0
0
0
2016-11-18T20:15:00.000
1
1.2
true
40,685,275
0
0
1
1
I have built a flask web application that makes use of Google's authenticated login to authenticate users. I currently have it running on localhost 127.0.0.1:5000; however, I would like to point a custom domain name at it, which I would like to purchase. I have used custom domains with Flask applications before; I'm just not sure how to do it with this one. I'm confused as to what I should do with my oauth callback. My callback is set to http://127.0.0.1:5000/authorized in my Google oauth client credentials. I don't think it would just be as easy as running the app on 0.0.0.0. I would need to be able to match the flask routes to the domain, i.e. be able to access www.mydomain.com/authorized.
Software as a service in Django - many companies should be able to have the same users
40,691,076
2
1
50
0
python,django
The solution is to have a ManyToMany relation between User and Company. Every User is the admin of their own Company (created when they register), but in addition they can be candidates of other companies. They can add Items for all companies they belong to, but only invite new people for the company they own, all using the same user account. You'll need some way to switch which company they're currently working as, or show all of them on the same screen, etc.
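A minimal sketch of that schema in Django (model and field names are illustrative):

from django.conf import settings
from django.db import models

class Company(models.Model):
    name = models.CharField(max_length=100)
    # the user who created the company and administers it
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="owned_companies")
    # the same user account can be a candidate in many companies
    members = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="companies")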
0
0
0
0
2016-11-19T08:57:00.000
1
0.379949
false
40,691,002
0
0
1
1
Workflow: A user in the registration form gives an email, password and company name. A company with that name is automatically created during the registration process (model Company). The user automatically becomes the admin of this company (in the User model I have a role field). A company admin can invite candidates: in a form he gives the candidate's email, first and last name. The application sends an email with an activation link to the candidate. The candidate, by clicking the link, is transferred to a page with a form where he sets his password, and is then redirected to the login page. The candidate can log in and add new items to the database (model Item). The problem is that many companies should be able to have the same user (the same email address). Currently the application returns that the email is already in use (in another company, but it shouldn't work like that). So this is something like Software as a Service. Any ideas how to solve this problem?
Python Jupyter Notebook: Specify cell execution order
40,695,507
3
10
2,975
0
python,jupyter-notebook,jupyter
Such functionality is not available in Jupyter as of yet, to my knowledge. However, if you are really worried about having a lot of function definitions at the beginning and want to hide them, you can do the following instead: define the functions in a Python script, add the script execution to the first code cell of your notebook, add the remainder of the code to the consecutive cells of the notebook, and optionally show the script's contents at the end of the notebook for viewers' convenience.
0
0
0
0
2016-11-19T16:44:00.000
4
0.148885
false
40,695,393
1
0
1
2
I have a Jupyter notebook. In cell 1, I defined a lot of functions which need to run before other things. Then in the following cells, I start to present results. However, when I convert to HTML, this layout is ugly. Readers have to scroll a long time to see the results, and they may not care about the functions at all. But I have to put the code in that order because I need those functions. So my question is, is there a way I could control the run order of cells after I click run all? Or is there a way I could do something like the following: I put all my function definitions in cell 20, then in cell 1 I could tell Jupyter something like "run cell 20". Just curious if this is doable. Thanks.
Python Jupyter Notebook: Specify cell execution order
40,695,575
4
10
2,975
0
python,jupyter-notebook,jupyter
I would save the functions as a separate module, then import this module at the beginning.
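For example, if the functions live in a helpers.py file next to the notebook (a hypothetical name), the first cell shrinks to a single line:

import helpers  # or: from helpers import *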
0
0
0
0
2016-11-19T16:44:00.000
4
0.197375
false
40,695,393
1
0
1
2
I have a Jupyter notebook. In cell 1, I defined a lot of functions which need to run before other things. Then in the following cells, I start to present results. However, when I convert to HTML, this layout is ugly. Readers have to scroll a long time to see the results, and they may not care about the functions at all. But I have to put the code in that order because I need those functions. So my question is, is there a way I could control the run order of cells after I click run all? Or is there a way I could do something like the following: I put all my function definitions in cell 20, then in cell 1 I could tell Jupyter something like "run cell 20". Just curious if this is doable. Thanks.
Django Model Entries Not Available to Other Developers
40,712,191
0
0
24
0
python,django,migration
Is your database file also included with the "project files"? If you use the local sqlite3 file generated by Django, or really any other local database file that isn't in production, and the other developers don't have it, why would they see your updates to the DB?
0
0
0
0
2016-11-21T03:06:00.000
1
1.2
true
40,712,162
0
0
1
1
I have manually added entries (rows) to the models in my Django project through the admin interface. I have also ran the following commands python3 manage.py makemigrations & python3 manage.py migrate The issue is I am the only one that can see the data in the database, and other developers cannot see them. They are all using the same project files as present on my computer.
Freeswitch JWT Integration
40,747,794
1
1
533
0
python,lua,jwt,sip,freeswitch
It seems that the following solution should be used. In order to allow FS to work with JWT for authentication, send the JWT inside a custom header from the user agent to FS. It is also important to set some known password on the user agent. When the UA connects to FS and the directory is built dynamically using a lua script (xml-handler-script, xml-handler-bindings), it is possible to validate the JWT and provide the right directory entry for the user simply by reading the custom header fields. If the JWT is valid, the correct (known) password is returned, which allows FS to proceed; otherwise an invalid password is returned and FS drops the connection. Hope that helps somebody,
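A rough sketch of the validation step in Python, assuming PyJWT and an HS256 shared secret (both assumptions); how this gets wired into the xml-handler script depends on your FreeSWITCH setup:

import jwt  # PyJWT

SECRET = "shared-secret"        # hypothetical
KNOWN_PASSWORD = "fs-password"  # the password the directory entry exposes for valid users

def password_for(token):
    try:
        jwt.decode(token, SECRET, algorithms=["HS256"])
        return KNOWN_PASSWORD        # valid JWT: FS can authenticate the UA
    except jwt.InvalidTokenError:
        return "invalid-password"    # invalid JWT: FS will drop the connection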
0
0
0
0
2016-11-21T11:59:00.000
2
0.099668
false
40,719,659
0
0
1
1
I am trying to build an integration between a sip client and an FS system. The SIP client sends a JWT token as the password during the authentication stage. To authenticate a client, FS creates a directory entry with a password field and compares it to the password received from the client. In my case I need to override this behaviour by taking the "token" that appears as the password, verifying it, and returning the answer to FS about the result of the verification, so it will know whether to accept or to reject the user. I am not sure how to override this behaviour in FS without changing the source code. I would prefer to write a python or lua plugin to deal with it. Many thanks,
Install Scrapy on Mac OS X error SSL pip
40,731,300
0
0
490
0
python,macos,scrapy,pip
Temporarily (just for this module), you could install it manually. Download it from wherever you can, extract it if it is zipped, then run python setup.py install.
0
1
0
0
2016-11-21T21:46:00.000
2
0
false
40,729,995
0
0
1
1
Hi, I am currently trying to install Scrapy on my macOS but I keep running into problems. The first thing I enter in the terminal is: pip install scrapy And it returns: You are using pip version 7.0.1, however version 9.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Requirement already satisfied (use --upgrade to upgrade): scrapy in /usr/local/lib/python2.7/site-packages/Scrapy-1.2.1-py2.7.egg Collecting Twisted>=10.0.0 (from scrapy) Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to pypi.python.org timed out. (connect timeout=15)')': /simple/twisted/ Could not find a version that satisfies the requirement Twisted>=10.0.0 (from scrapy) (from versions: ) No matching distribution found for Twisted>=10.0.0 (from scrapy) Seeing the suggestion to upgrade, I do it: pip install --upgrade pip And it returns the following: You are using pip version 7.0.1, however version 9.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Requirement already up-to-date: pip in /usr/local/lib/python2.7/site-packages/pip-7.0.1-py2.7.egg The truth is that yesterday I was running a thousand tests and it gave me another type of error: "SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed" But it no longer shows me that last error.
Log into secured website, automatically print page as pdf
40,732,242
0
1
237
0
python,selenium,pdf,salesforce,pdfkit
1) Log in using requests; 2) use the requests session mechanism to keep track of the cookie; 3) use the session to retrieve the HTML page; 4) parse the HTML (use beautifulsoup); 5) identify img tags and css links; 6) download the images and css documents locally; 7) rewrite the img src attributes to point to the locally downloaded images; 8) rewrite the css links to point to the locally downloaded css; 9) serialize the new HTML tree to a local .html file; 10) use whatever "HTML to PDF" solution to render the local .html file.
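A sketch of the first few steps, assuming a plain form-based login; the URLs and field names are placeholders:

import requests
from bs4 import BeautifulSoup

session = requests.Session()
# the session keeps the login cookie for all subsequent requests
session.post("https://example.com/login", data={"username": "me", "password": "secret"})  # hypothetical
html = session.get("https://example.com/report").text  # hypothetical
soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    print(img.get("src"))  # candidates to download and rewrite locally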
0
0
1
0
2016-11-21T23:53:00.000
1
0
false
40,731,567
0
0
1
1
I have been exploring ways to use python to log into a secure website (e.g. Salesforce), navigate to a certain page and print (save) the page as pdf at a prescribed location. I have tried using: pdfkit.from_url: Use Requests to get a session cookie, parse it, then pass it as a cookie into wkhtmltopdf's options settings. This method does not work because pdfkit does not recognise the cookie I passed. pdfkit.from_file: Use requests.get to fetch the html of the page I want to print, then use pdfkit to convert the html file to pdf. This works, but the page formatting and images are all missing. Selenium: Use a webdriver to log in, then navigate to the wanted page and call the window.print function. This does not work because I can't pass any arguments to the window's SaveAs dialog. Does anyone have any idea how to get around these issues?
Is it possible to scrape a "dynamical webpage" with beautifulsoup?
40,733,402
0
4
278
0
python,html,selenium,beautifulsoup
It depends. If the data is already loaded when the page loads, then the data is available to scrape, it's just in a different element, or being hidden. If the click event triggers loading of the data in some way, then no, you will need Selenium or another headless browser to automate this. Beautiful soup is only an HTML parser, so whatever data you get by requesting the page is the only data that beautiful soup can access.
0
0
1
0
2016-11-22T02:35:00.000
1
0
false
40,732,906
0
0
1
1
I am currently beginning to use beautifulsoup to scrape websites. I think I have the basics down even though I lack theoretical knowledge about webpages; I will do my best to formulate my question. What I mean by a dynamic webpage is the following: a site whose HTML changes based on user action; in my case it's collapsible tables. I want to obtain the data inside some "div" tag, but when you load the page, the data seems unavailable in the html code. When you click on the table it expands, and the "class" of this "div" changes from something like "something blabla collapsible" to "something blabla collapsible active", and that I can scrape with my knowledge. Can I get this data using beautifulsoup? In case I can't, I thought of using something like selenium to click on all the tables and then download the html, which I could scrape. Is there an easier way? Thank you very much.
How to abort Django data migration?
40,750,681
2
0
621
0
python,django,django-migrations
A custom exception will give a slightly prettier message. On the other hand, migrations are usually one-off with basically no code dependencies with each other or the rest of the project, meaning you cannot reuse exception classes to ensure that each migration stands alone. The only purpose of any exception here is to provide immediate feedback since it will not be caught by anything (except possibly error logging in production environments... Where it really shouldn't be failing). I think the disadvantages outweigh the advantages - just raise a plain one-off Exception.
0
0
0
0
2016-11-22T19:30:00.000
1
1.2
true
40,750,130
0
0
1
1
I have a datamigration I actually want to roll back if certain condition happen. I know that migrations are automatically enclosed in a transaction, so I am safe just raising an exception, and then trust all changes to be rolled back. But which exception should I raise to abort my Django data migration? Should I write my own exception, or am I fine with raise Exception('My message explaining the problem')? What is best practice?
How to generate reports in Behave-Python?
72,303,866
0
7
27,370
0
python,report,bdd,python-behave
To make generating execution reports easy, we have implemented the following wrapper on top of Behave, called BehaveX, which generates reports not only in HTML format but also in xml and json formats. It also allows us to execute tests in parallel and provides some additional features that simplify the implementation of agile practices: https://github.com/hrcorval/behavex
0
0
0
1
2016-11-23T11:19:00.000
6
0
false
40,763,066
0
0
1
1
For Java there are external report generation tools like extent-report and testNG. The JUnit runner produces XML-format output for each individual feature file. To get a detailed report, I don't see an option or a widely used approach or solution within the Behave framework. How are reports produced in Behave? Do any other tools or frameworks need to be added for report generation in Behave?
Python processes, threads as compared to PHPs for web hosting
40,776,697
0
1
26
0
php,python,apache,lamp,uwsgi
The problem with scaling is always the shared data, i.e. how your processes are going to communicate with each other - so it's not a Python (GIL) problem.
0
0
0
1
2016-11-24T00:14:00.000
1
0
false
40,776,231
1
0
1
1
On a traditional LAMP stack it's easy to stack up quite a few web sites on a single VPS and get very decent performance, with the VPS serving lots of concurrent requests, thanks to the web server using processes and threads to make the best use of multi-core CPUs despite PHP (like Python) being single threaded. Is the management of processes and threads the same on a python web stack (uwsgi + nginx)? On such a properly configured python stack, is it possible to achieve the same result as the LAMP stack and put several sites on the same VPS with good reliability and performance, making the best use of cpu resources? Does the GIL make any difference here?
Upload a CSV file and read it in Bokeh Web app
40,795,462
1
12
6,493
0
javascript,python,file,upload,bokeh
As far as I know there is no widget native to Bokeh that will allow a file upload. It would be helpful if you could clarify your current setup a bit more. Are your plots running on a bokeh server or just through a Python script that generates the plots? Generally though, if you need this to be exposed through a browser you'll probably want something like Flask running a page that lets the user upload a file to a directory which the bokeh script can then read and plot.
0
0
0
0
2016-11-24T20:36:00.000
2
0.099668
false
40,794,180
0
1
1
1
I have a Bokeh plotting app, and I need to allow the user to upload a CSV file and modify the plots according to the data in it. Is it possible to do this with the available widgets of Bokeh? Thank you very much.
How to write a AWS lambda function with S3 and Slack integration
65,574,478
0
3
1,589
0
python,amazon-web-services,lambda
You can use S3 Event Notifications to trigger the lambda function. In the bucket's properties, create a new event notification for an event type of s3:ObjectCreated:Put and set the destination to a Lambda function. Then, for the lambda function, write code in either Python or NodeJS (or whatever you like) that parses the received event and sends it to the Slack webhook URL.
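A minimal sketch of that Lambda body using only the Python 3 standard library; the webhook URL is a placeholder:

import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

def handler(event, context):
    # post a short message to Slack for every uploaded object in the event
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        payload = json.dumps({"text": "New object: s3://%s/%s" % (bucket, key)}).encode()
        req = urllib.request.Request(WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)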
0
0
1
1
2016-11-25T08:40:00.000
2
0
false
40,800,757
0
0
1
1
I have a use case where I want to invoke my lambda function whenever an object has been pushed to S3, and then push this notification to Slack. I know this is vague, but how can I start doing so? How can I basically achieve this? I need to see the structure.
Odoo 9 Salary Rule based on country
40,805,427
0
0
265
0
python,openerp,odoo-9
employee.country_id will return the res.country object for the respective record, but you need to extend it to employee.country_id.name to get the character name field from the database record.
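Which would make the rule condition from the question look roughly like this (assuming the nationality really is stored on country_id):

result = employee.country_id.name == "Malaysia"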
0
0
0
0
2016-11-25T08:46:00.000
2
0
false
40,800,848
0
0
1
1
I am trying to define a salary rule in Odoo 9 Payroll. The rule condition has to be based on the employee's country. I tried the python expression code below but it does not work. result = (employee.country_id=="Malaysia") or False I'm aware that the field type of employee's country (nationality) is many2one with relation of res.country. I just couldn't figure out how it works.
Push data from backend (python) to JS
40,806,824
0
1
1,931
0
javascript,python
Try using socketio: the backend creates an event with data on socketio and your frontend receives the event and downloads the data. I solved a similar problem this way; I call the backend only when a socketio event has been created by the backend. You must set up a socketio server with nodejs somewhere.
0
0
1
0
2016-11-25T14:02:00.000
4
0
false
40,806,743
0
0
1
1
Front-end part: I have an AJAX request which tries to GET data from my back-end handler every second. If there is any data, I take it, add it to my HTML page (without reloading), and continue pulling data every second waiting for further changes. Back-end part: I parse web pages every minute with Celery, extract data from them and append it to an array (which is the trigger for the AJAX request that there is new data). Question: It seems to me that there should be another solution for this. I don't want to ask for data from JS to the back-end; I want to push data from the back-end to JS whenever there are changes, but without a page reload. How can I do this?
Integrating GAE Search API with Datatstore
40,816,106
1
0
68
0
python,google-app-engine,google-cloud-datastore,google-search-api
There is no first class support for this, your best bet is to make the document id match the datastore key and route all put/get/search requests through a single DAO/repository tier to ensure some level of consistency. You can use parallel Async writes to keep latency down, but there's not much you can do about search not participating in transactions. It also has no defined consistency, so assume it is eventual, and probably much slower than datastore index propagation.
0
1
0
0
2016-11-25T21:27:00.000
2
1.2
true
40,812,470
0
0
1
2
When a document is stored in both the Cloud Datastore and a Search index, is it possible, when querying the Search index, to return each corresponding entity from the Cloud Datastore rather than the Search index documents? In other words, I essentially want my search query to return what a datastore query would return. More background: When I create an entity in the datastore, I pass the entity id, name, and description parameters. A search document is built so that its doc id is the same as the corresponding entity id. The goal is to create a front-end search implementation that will utilize the full-text search api to retrieve all relevant documents based on the text query. However, I want to return all details of each document, which are stored in the datastore entity. Would the only way to do this be to create a key for each search doc_id returned from the query, and then use get_multi(keys) to retrieve all relevant datastore entities?
Integrating GAE Search API with Datatstore
40,820,702
0
0
68
0
python,google-app-engine,google-cloud-datastore,google-search-api
You can store any information that you need in the Search API documents, in addition to their text content. This will allow you to retrieve all data in one call at the expense of, possibly, storing some duplicate information both in the Search API documents and in the Datastore entities. Obviously, having duplicate data is not ideal, but it may be a good option for rarely changing data (e.g. document timestamp, author ID, title, etc.) as it can offer a significant performance boost.
0
1
0
0
2016-11-25T21:27:00.000
2
0
false
40,812,470
0
0
1
2
When a document is stored in both the Cloud Datastore and a Search index, is it possible, when querying the Search index, to return each corresponding entity from the Cloud Datastore rather than the Search index documents? In other words, I essentially want my search query to return what a datastore query would return. More background: When I create an entity in the datastore, I pass the entity id, name, and description parameters. A search document is built so that its doc id is the same as the corresponding entity id. The goal is to create a front-end search implementation that will utilize the full-text search api to retrieve all relevant documents based on the text query. However, I want to return all details of each document, which are stored in the datastore entity. Would the only way to do this be to create a key for each search doc_id returned from the query, and then use get_multi(keys) to retrieve all relevant datastore entities?
Should I use Redis or Neo4J for the following use case?
40,814,518
0
0
169
1
python,django,postgresql,neo4j,redis
Neo4J is a graph database, which is good for multi-hop relation searches. Say you want to get the top N posts of A's brother's friend's sister... AFAIK it's a standalone instance; you CANNOT partition your data over several nodes, since otherwise a relation between two people might cross machines. Redis is a key-value store, which is good for searching by key. Say you want to get the friend list of A, or get the top N posts of A. You can have a Redis cluster to distribute your data over several machines. Which is better? It depends on your scenario. It seems that you don't need multi-hop relation search, so Redis might be better. You can have a SET to save the friend list of each person, and a LIST to save the post ids of each person. When you need to show posts for user A, call SMEMBERS or SSCAN to get the friend list, and then call LRANGE for each friend to get the top N post ids.
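A sketch of that layout with redis-py (key names are illustrative):

import redis

r = redis.Redis()
r.sadd("friends:A", "B", "C")  # A's friend list as a SET
r.lpush("posts:B", "post:42")  # each user's newest post ids in a LIST

def top_posts(user, n=10):
    post_ids = []
    for friend in r.smembers("friends:%s" % user):
        # lrange returns the n most recently pushed post ids per friend
        post_ids.extend(r.lrange("posts:%s" % friend.decode(), 0, n - 1))
    return post_ids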
0
0
0
0
2016-11-25T23:41:00.000
1
0
false
40,813,514
0
0
1
1
I am building a social network where each user has 3 different profiles - Profile 1, Profile 2 and Profile 3. This is my use case: User A follows Users B, C and D in Profile 1. User A follows Users C, F and G in Profile 2. User C follows Users A and E in Profile 3. Another requirement is that any user on each of these profiles needs to see the latest (or, say, top N) posts of the users they are following on their respective profiles (whether it is Profile 1, 2 or 3). How can we best store the above information? Context: I am using the Django framework and a Postgres DB to store users' profile information. Users' posts are being stored on and retrieved from a cloud CDN. What is the best way to implement these use cases, i.e. the choice of technologies best suited to this scenario? Scalability is another important factor that comes into play here.
How to disable robots.txt when you launch scrapy shell?
40,823,612
13
10
10,247
0
python,scrapy,web-crawler,robots.txt,scrapy-shell
In the settings.py file of your scrapy project, look for ROBOTSTXT_OBEY and set it to False.
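That is, in settings.py:

ROBOTSTXT_OBEY = False

For a one-off shell session, passing the setting on the command line should also work: scrapy shell -s ROBOTSTXT_OBEY=False 'http://www.example.com'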
0
0
0
0
2016-11-26T21:49:00.000
2
1.2
true
40,823,516
0
0
1
1
I use Scrapy shell without problems with several websites, but I run into problems when the robots.txt does not allow access to a site. How can I make Scrapy ignore robots.txt (disregard its existence)? Thank you in advance. I'm not talking about a project created by Scrapy, but the Scrapy shell command: scrapy shell 'www.example.com'
What is the right place to initialize third-party API in Django project for later Celery use
40,824,208
1
0
117
0
python,django,celery
With Celery, it is better to initialize it in the task function.
0
0
0
0
2016-11-26T22:40:00.000
1
1.2
true
40,823,881
0
0
1
1
I have a twitter API used in my Django project. Right now I keep initialization in my settings.py and in tasks.py I just import API from django.conf.settings. I am not sure if it's a good practice as I've never seen it. Do I need to create API instance somewhere in celery.py or even in the task function?
How to create a thread safe method in django
40,839,961
0
0
555
0
python,django,multithreading,python-3.x,http
It depends on how you deploy the Django app. See Gunicorn or uWSGI; usually there is a pool of worker processes. Maybe a db transaction could help you.
0
0
0
0
2016-11-28T08:16:00.000
2
0
false
40,839,757
0
0
1
1
I am using django 1.10.2 with python 3.5.2 on a Linux machine. I have 2 questions that are related: What is spawned when a client connects to django? Is it a new thread for every client or a new process for every client? I need to have a method in django that may only be accessed by one client at a time. Basically this must be a thread-safe method, perhaps with a lock mechanism. How do I accomplish this in django? Thanks in advance!
Cancel makemigrations Django
40,847,290
1
0
1,952
0
python,django,django-models
You can check the django_migrations table in your database to see what migrations are applied and delete the other ones from yourapp/migrations
0
0
0
0
2016-11-28T14:49:00.000
1
1.2
true
40,847,099
0
0
1
1
I'm developing a website in django, but yesterday I did a bad thing to my models. I ran the "makemigrations" command, but when I tried to run the "migrate" command, it did not work. So, I would like to cancel all my "makemigrations" that have not been migrated. Is that possible? Thanks!
Cannot upload huge file in IPython notebook
65,730,461
0
3
1,817
0
ipython,ipython-notebook
I was able to upload a 6 GB file in a later version of the Anaconda distribution; I think this was fixed in later versions. I am currently using conda 4.5.11.
0
0
0
0
2016-11-28T15:29:00.000
1
0
false
40,847,935
1
0
1
1
I am trying to upload a 500MB weblog file in my IPython notebook, but I get the error "Cannot upload the file >25Mb". Is there a way I can overcome this error? Any help would be appreciated. Thanks.
How to visit html static site inside web2py project
40,870,054
2
0
110
0
python,web,web2py
web2py serves static files from the application's /static folder, so just put the files in there. If you need to generate links to them, you can use the URL helper: URL('static', 'path/to/static_file.html') (where the second argument represents the path within the /static folder).
0
0
0
0
2016-11-29T02:14:00.000
1
1.2
true
40,856,714
0
0
1
1
Say I have some comped html files a designer gave me, and I want to use them right away in a web2py website running on 127.0.0.1, within the web2py MVC structure. How can I achieve that?
Python BS4 Scraping Script Timer
40,860,108
0
0
243
0
python,web-scraping,beautifulsoup,bs4
Try using multithreading or multiprocessing to spawn workers; I think it will spawn a thread for every request, and it won't skip over the url if it's taking too long.
0
0
1
0
2016-11-29T04:26:00.000
1
0
false
40,857,813
0
0
1
1
I have been trying to get this web scraping script working properly, and am not sure what to try next. Hoping someone here knows what I should do. I am using BS4, and the problem is that whenever a URL takes a long time to load it skips over that URL (leaving an output file with fewer entries when page load times are high). I have been trying to add a timer so that it only skips over the url if it doesn't load in x seconds. Can anyone point me in the right direction? Thanks!
How can I "catch" action edit in odoo
40,881,979
1
1
299
0
python,openerp,odoo-9
If you don't want to use the modules in the addons store, you can create a new class that inherits from models.Model and override the create and write methods to save an audit record in another model. Then create your new models so they inherit from this new class instead of models.Model; whenever a create or write happens, it will go through the create and write of that parent class, which records the history.
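A rough sketch of that override pattern for Odoo 9's new API; the audit model name is made up:

from openerp import api, models

class AuditedModel(models.AbstractModel):
    _name = 'audited.model'

    @api.multi
    def write(self, vals):
        for rec in self:
            # record the attempted changes before the edit is applied
            self.env['my.audit.log'].create({'model': rec._name, 'res_id': rec.id, 'changes': str(vals)})
        return super(AuditedModel, self).write(vals)

Your concrete models would then list 'audited.model' in their _inherit so every write passes through this method.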
0
0
0
0
2016-11-29T11:16:00.000
2
1.2
true
40,864,539
0
0
1
1
I want to create a new module that saves the history of a record when someone edits it, but I couldn't find any documentation about how to catch an edit action. Does anyone know how to do it?
Python threading or multiprocessing for web-crawler?
40,894,613
3
0
1,352
0
python,multithreading,web-crawler,python-multithreading
The rule of thumb when deciding whether to use threads in Python or not is to ask the question, whether the task that the threads will be doing, is that CPU intensive or I/O intensive. If the answer is I/O intensive, then you can go with threads. Because of the GIL, the Python interpreter will run only one thread at a time. If a thread is doing some I/O, it will block waiting for the data to become available (from the network connection or the disk, for example), and in the meanwhile the interpreter will context switch to another thread. On the other hand, if the thread is doing a CPU intensive task, the other threads will have to wait till the interpreter decides to run them. Web crawling is mostly an I/O oriented task, you need to make an HTTP connection, send a request, wait for response. Yes, after you get the response you need to spend some CPU to parse it, but besides that it is mostly I/O work. So, I believe, threads are a suitable choice in this case. (And of course, respect the robots.txt, and don't storm the servers with too many requests :-)
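A bare-bones sketch of the queue-based version with threads, assuming requests and a parse_links() helper you already have (hypothetical name); the Queue's task counting answers the "do I need Pools, Queues?" question from the OP:

import threading
from queue import Queue
import requests

start = "http://example.com"
visited = {start}
lock = threading.Lock()
q = Queue()
q.put(start)

def worker():
    while True:
        url = q.get()
        try:
            html = requests.get(url, timeout=10).text
            for link in parse_links(html):  # hypothetical: returns a set of urls
                with lock:
                    if link not in visited:
                        visited.add(link)
                        q.put(link)
        finally:
            q.task_done()

for _ in range(8):
    threading.Thread(target=worker, daemon=True).start()
q.join()  # blocks until every queued url has been processed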
0
0
1
0
2016-11-30T17:23:00.000
2
1.2
true
40,894,487
1
0
1
1
I've made a simple web-crawler with Python. So far all it does is create a set of urls that should be visited and a set of urls that have already been visited. While parsing a page it adds all the links on that page to the should-be-visited set and the page url to the already-visited set, and it keeps going while the length of should_be_visited is > 0. So far it does everything in one thread. Now I want to add parallelism to this application, so I need to have the same kind of sets of links and a few threads/processes, where each will pop one url from should_be_visited and update already_visited. I'm really lost on threading and multiprocessing: which should I use, and do I need Pools or Queues?
GCloud App needs FTP - do I need a VM or can I create an FTP app?
40,904,184
0
0
50
0
php,python,google-app-engine,ftp
App Engine projects are not based on server virtual machines. App Engine is a platform as a service, not infrastructure as a service. Your code is packaged up and served on Google App Engine in a manner that can scale easily. App Engine is not a drop-in replacement for your old school web hosting; it's quite a bit different. That said, FTP is just a mechanism to move files. If your files just need to be processed by a job, you can look at providing an upload for your users where the files end up residing on Google Cloud Storage, and then your cron job reads from that location and does any processing that is needed. Whatever results from that processing might bring further considerations. Don't look at FTP as a requirement, but rather as a means of moving files, and you'll probably have plenty of options.
0
1
0
0
2016-12-01T03:40:00.000
1
1.2
true
40,902,238
0
0
1
1
I'm running a PHP app on GCloud (Google App Engine). This app will require users to submit files for processing via FTP. A python cron job will process them. Given that dev to prod is via the GAE deployment, I'm assuming there is no FTP access to the app folder structure. How would I go about providing simple one-way FTP to my users? Can I deploy a Python project that will be a server? Or do I need to run a VM? I've done some searching which suggests the VM option, but surely there are other options?
Optional NFC login to web based system
40,928,045
0
0
1,332
0
jquery,python,json,linux,nfc
The process of logging in: 1) The user starts the browser, goes to your website and, instead of manually entering credentials, clicks "log in via NFC". 2) The server stores an identification session for that IP and date (and maybe other info about the client hardware, for safety) in the database and "expects" incoming NFC data. 3) On the client PC/phone you'll have to install your application/service, which will be able to receive data from the NFC scanner (which usually works as a keyboard) and send it to your server, e.g. via ASP.NET WebAPI or another REST endpoint... 4) The server accepts the data from that IP, finds the database record for that IP and performs the login (+ a time limit? + checking client hardware for safety?). 5) On the server side you then have a confirmed logon and the user can proceed (you can redirect him to your secure site). Note 1: The critical point is to correctly and safely pair the identified client browser session with the PC/mobile application which reads the NFC tags. Note 2: You will need to select an appropriate NFC scanner, ideally one with standardized drivers built into the Win/Linux OS (otherwise you will often have to deal with missing or non-functional NFC drivers).
0
0
1
0
2016-12-02T08:17:00.000
1
1.2
true
40,927,573
0
0
1
1
I have a web system where staff can log in with a username and password, then enter data. Is there a way to add the option for users to seamlessly log in just by swiping the card against an NFC scanner? The idea is to have multiple communal PCs people can walk up to and quickly authenticate. It's important that the usual text login form works too for people using the site on PCs or phones without the NFC option. The web client PCs with an NFC scanner could be linux or windows. (The web system is a bootstrap/jquery site which gets supplied with JSON data from a python web.py backend. I'm able to modify the server and the client PCs.)
How to use visual studio code to debug django
63,690,785
0
26
35,237
0
python,django,web-applications,visual-studio-code,atom-editor
Nothing worked for me until I disabled auto reload (--noreload as an argument is crucial; I'm not really sure why auto reload causes problems with debugging).
0
0
0
0
2016-12-02T17:10:00.000
5
0
false
40,937,544
1
0
1
1
I'm new at django development and come from desktop/mobile app development with Xcode and related IDE. I have to use Django and I was wondering if there was an efficient way to debug it using Visual Studio Code (or Atom). Any help related to Django IDE would be helpful too.
Difference between Pluggable Views and Blueprint in Python Flask
58,601,756
2
2
3,114
0
python,flask
As far as routing goes, Pluggable Views (aka class-based views) are far superior to Blueprints, which are just a bunch of view functions with route decorators. The Pluggable Views paradigm facilitates code reuse by organizing view logic in classes and subclassing them. URL routes are registered with an app.add_url_rule() call, which is great because it follows the S in the SOLID principles (separation of concerns). In the Blueprints approach, each front-end view's logic is encapsulated within its view function, which is not as well suited to code reuse.
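A small side-by-side sketch of the two styles (Flask assumed; route names are illustrative):

from flask import Flask, Blueprint
from flask.views import MethodView

app = Flask(__name__)
bp = Blueprint("pages", __name__)

@bp.route("/hello")  # Blueprint style: a function with a route decorator
def hello():
    return "hello from a blueprint"

class HelloView(MethodView):  # Pluggable style: a class registered separately
    def get(self):
        return "hello from a pluggable view"

app.register_blueprint(bp)
app.add_url_rule("/hello2", view_func=HelloView.as_view("hello2"))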
0
0
0
0
2016-12-03T03:21:00.000
2
0.197375
false
40,943,923
0
0
1
1
What's the difference between PluggableViews and Blueprint in Python Flask?
How to bundle python pip dependencies with project so don't need to run pip install
40,963,832
0
1
2,017
0
python,pip
I've done this before. I created a virtualenv for my project so all dependencies (including the python executable) are contained within the project sub-directory tree. Then just zip up that directory tree. To install elsewhere, just unzip and run it.
0
0
0
0
2016-12-04T21:19:00.000
3
0
false
40,963,775
1
0
1
1
For a project, I can't let users use pip install before running the app. My project is a python flask app that I used pip to grab the dependencies. How do I bundle it so the apps can run without using pip install?
ldap3 bind syntax error in flask application
40,977,871
0
0
1,094
0
python-3.x,flask,compiler-errors
I got this sorted out. Perhaps it was because I had set raise_exceptions=True; setting it to False resolved the issue.
0
0
0
0
2016-12-05T10:52:00.000
1
1.2
true
40,972,550
0
0
1
1
I am trying to validate users' usernames and passwords in a flask app using ldap3 (the plain ldap package does not install under Python 3.5). The user enters a username and password through a login form; I am trying to authenticate the user with that username/password and allow them to access the index page if authentication succeeds. Does the authentication return true or false, so that I can redirect to the next page based on the outcome? The LDAP_PROVIDER_URL = "ldaps://appauth.corp.domain.com:636"; Please help me with the code for this. When I type appauth.corp.domain.com or corp.domain.com as HOST I get the following error: (r_web) C:\Users\dasa17\r_web\RosterWeb\RosterWeb>python Roster.py Traceback (most recent call last): File "Roster.py", line 10, in s = Server(appauth.corp.domain.com, port=636, get_info=ALL) NameError: name 'appauth' is not defined (r_web) C:\Users\dasa17\r_web\RosterWeb\RosterWeb>python Roster.py Traceback (most recent call last): File "Roster.py", line 10, in s = Server(corp.domain.com, port=636, get_info=ALL) NameError: name 'corp' is not defined I made some modifications; now I am able to run it by giving a dummy username and password. However, I am getting a different error now. >>> c = Connection(s,user='dasa17',password='',check_names=True, lazy=False,raise_exceptions=False) c.open() Traceback (most recent call last): File "", line 1, in c.open() File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 57, in open self.connection.refresh_server_info() File "C:\Python35\lib\site-packages\ldap3\core\connection.py", line 1017, in refresh_server_info self.server.get_info_from_server(self) File "C:\Python35\lib\site-packages\ldap3\core\server.py", line 382, in get_info_from_server self._get_dsa_info(connection) File "C:\Python35\lib\site-packages\ldap3\core\server.py", line 308, in _get_dsa_info get_operational_attributes=True) File "C:\Python35\lib\site-packages\ldap3\core\connection.py", line 571, in search response = self.post_send_search(self.send('searchRequest', request, controls)) File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 140, in post_send_search responses, result = self.get_response(message_id) File "C:\Python35\lib\site-packages\ldap3\strategy\base.py", line 298, in get_response responses = self._get_response(message_id) File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 158, in _get_response responses = self.receiving() File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 92, in receiving raise communication_exception_factory(LDAPSocketReceiveError, exc) (self.connection.last_error) ldap3.core.exceptions.LDAPSocketReceiveError: error receiving data: [WinError 10054] An existing connection was forcibly closed by the remote host
NZ Property for sale API
41,084,870
0
0
453
0
python,rest,gis
realestate.co.nz seems to have both Javascript and Ruby APIs. I'm going to investigate the possibility of building a Python port as their code is on github/realestate.co.nz I have no financial interest in either TradeMe or realestate.co.nz, for the record. Just a guy trying to avoid screen scraping.
0
0
0
1
2016-12-05T12:07:00.000
2
0
false
40,973,950
0
0
1
2
I failed to get approval for my application that I started to write against the TradeMe API. My API access was not approved. I'm therefore looking for alternatives. Any NZ property for sale APIs out there? I have seen realestate.co.nz which according to the github repo, might provide something in PHP and Ruby, but the Ruby repo hasn't been touched in several years. Google API perhaps? I'm specifically interested in obtaining geo-location information for the properties on sale.
NZ Property for sale API
41,067,476
0
0
453
0
python,rest,gis
The sandbox should let you access trademe without the need to access the main server.
0
0
0
1
2016-12-05T12:07:00.000
2
0
false
40,973,950
0
0
1
2
I failed to get approval for my application that I started to write against the TradeMe API. My API access was not approved. I'm therefore looking for alternatives. Any NZ property for sale APIs out there? I have seen realestate.co.nz which according to the github repo, might provide something in PHP and Ruby, but the Ruby repo hasn't been touched in several years. Google API perhaps? I'm specifically interested in obtaining geo-location information for the properties on sale.
How do I find the latest migration created w/ flask-migrate?
64,959,830
0
2
1,521
0
python,flask,alembic,flask-migrate
You can also check in your database and the current version should be displayed in a table called alembic_version.
0
0
0
0
2016-12-05T18:16:00.000
2
0
false
40,980,731
0
0
1
2
My flask application now has 20+ migrations built with flask-migrate and they all have hashed file names like: 389d9662fec7_.py I want to double check the settings on the latest migration that I ran, but don't want to open every file to look for the correct one. I could create a new dummy migration and look at what it references as the down_revision but that seems clunky. I'm using flask-script, flask-migrate, and flask-sqlalchemy My question is: How can I quickly find the latest migration that I created?
How do I find the latest migration created w/ flask-migrate?
40,980,983
3
2
1,521
0
python,flask,alembic,flask-migrate
./manage.py db history -r current: will show the migrations in the order they will be applied. -r current: shows only the migrations since the currently applied one. ./manage.py db heads will show the most recent migration for each branch (typically there's only one branch). ./manage.py db upgrade would apply all migrations to get to the head. Use the -v flag to get verbose output, including the full path to the migration.
0
0
0
0
2016-12-05T18:16:00.000
2
1.2
true
40,980,731
0
0
1
2
My flask application now has 20+ migrations built with flask-migrate and they all have hashed file names like: 389d9662fec7_.py I want to double check the settings on the latest migration that I ran, but don't want to open every file to look for the correct one. I could create a new dummy migration and look at what it references as the down_revision but that seems clunky. I'm using flask-script, flask-migrate, and flask-sqlalchemy My question is: How can I quickly find the latest migration that I created?
How to start afresh with a new database in Django?
40,987,703
4
2
3,214
0
python,django,sqlite
Delete all the folders named 'migrations'. Then go to the terminal and run ./manage.py makemigrations and ./manage.py migrate --run-syncdb.
0
0
0
0
2016-12-06T02:51:00.000
1
0.664037
false
40,987,039
0
0
1
1
I deleted my database. I want to start afresh with a new database. How can I do that? I tried making a new datasource, but it gives me an error while applying migrations/migrating that it couldn't find the tables - which is true, because it's an empty database. A similar scenario would be when someone pulls a version of my code: he wouldn't have the migrations or the database (untracked). How would he run the application?
How to run manage.py inside venv?
41,004,545
3
0
4,324
0
python,django,bash,python-venv
Run virtualenv venv in your desired directory. After installing, to activate it run: source your_folder/venv/bin/activate. Now you should see (venv) before $ in the shell; that means your env is active. To install packages run pip install package_name, and run pip freeze to list the installed packages. Go to the project folder that includes the manage.py file and run python manage.py runserver to make sure everything runs fine. To access the django shell, run python manage.py shell.
0
0
0
0
2016-12-06T17:40:00.000
2
1.2
true
41,001,551
0
0
1
1
I have been given an existing project to work on and I am really struggling to get the environment set up. The project folder contains manage.py, which I use as an entry point to run the server. There is also a venv folder which contains all the modules etc. that I need. So when I do runserver on manage.py, I get "No module named sqlserver_ado.base". This happens even when I have activated the virtual environment and am in bash... this module, for instance, is in the venv folder under venv\Lib\site-packages. I am so very confused. I have also tried copying whatever modules are said to be missing and have run into other issues this way too.
Optimize displaying results with django-haystack RealTimeSignalProcessor
41,016,811
1
0
78
0
python,django,django-models,solr,django-haystack
There is no need to keep updating an expires_in field in your database - keep an expires_at field with the time when the ad expires, and calculate the time left in your retrieval method in your model or in your view. This way you'll avoid having to write more data to your database as traffic increases, and if the expiry date changes you won't run into a possible race condition when people view the page while you're updating the expiry time.
0
0
0
0
2016-12-06T23:06:00.000
1
1.2
true
41,006,591
0
0
1
1
I use Django as the backend for my web app and django-haystack (with Solr) for searching and displaying results. I use the RealTimeSignalProcessor from django-haystack, but I have one problem: I have an Auction model with an expires DateTimeField. When displaying the results I do it similar to eBay (e.g. Expires in: 1h 23m 5s). The problem is that on the page where all Auctions are displayed, if you want to update the Expires in parameter every time you visit this view (as I've read in the django-haystack documentation), you have to use the object.save() method to update the Solr index. But if I do that for 30 results every time I go to the view where all auctions are listed, it's very slow and not efficient. Is there any other solution? What do you suggest?
For distributing calculation task, which is better celery or spark
41,021,060
2
2
3,004
0
python,apache-spark,celery,distributed,jobs
Adding to the above answer, there are other areas to consider: integration with your existing big data stack, if you have one, and the data pipeline for ingestion. You mentioned "backend for web application"; I assume it's for read operations. The response times of any batch application might not be a good fit for a web application. The choice of streaming can help you get the data into the cluster faster, but it will not guarantee the response times needed for a web app; you should look at HBase and Solr (if you are searching). Spark is undoubtedly better and faster than other batch frameworks; in streaming there may be a few others. As I mentioned above, you should base your choice on these parameters.
0
1
0
0
2016-12-07T06:09:00.000
2
0.197375
false
41,010,560
0
1
1
2
Problem: the calculation task can be parallelized easily, but a real-time response is needed. There are two approaches: 1. using Celery: run the job in parallel from scratch; 2. using Spark: run the job in parallel with the Spark framework. I think Spark is better from a scalability perspective. But is Spark OK as the backend of a web application?
For distributing calculation task, which is better celery or spark
41,012,633
1
2
3,004
0
python,apache-spark,celery,distributed,jobs
Celery is really good technology for distributed streaming, and it supports Python, which is itself strong in computation and easy to write; the streaming side of Celery supports many features as well, with little CPU overhead. Spark supports various programming languages - Java, Scala, Python - but it is not pure streaming; it is micro-batch streaming, per the Spark documentation. If your task can be fulfilled by streaming alone and you don't need SQL-like features, then Celery will be best. But if you need various features along with streaming, then Spark will be better. In that case, consider how many batches of data per second your application will generate.
0
1
0
0
2016-12-07T06:09:00.000
2
1.2
true
41,010,560
0
1
1
2
Problem: the calculation task can be parallelized easily, but a real-time response is needed. There are two approaches: 1. using Celery: run the job in parallel from scratch; 2. using Spark: run the job in parallel with the Spark framework. I think Spark is better from a scalability perspective. But is Spark OK as the backend of a web application?
Django: call python function when clicking on button
41,021,233
0
0
981
0
python,django,django-templates,django-views
In your views you can handle any incoming GET/POST request, and in the handler for that button (the button obviously must send something to the server, e.g. via AJAX) you can call any function.
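A minimal sketch of that wiring, assuming jQuery on the page and a URL pattern routed to this view; all names are illustrative:

from django.http import JsonResponse

def send_mail_view(request):
    if request.method == "POST":
        send_email_to_myself()  # your existing function (hypothetical name)
        return JsonResponse({"status": "sent"})
    return JsonResponse({"status": "ignored"})

And on the template side, something like: $("#mail-btn").click(function () { $.post("/send-mail/", {csrfmiddlewaretoken: "{{ csrf_token }}"}, function (data) { alert(data.status); }); });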
0
0
0
1
2016-12-07T15:03:00.000
2
0
false
41,020,807
0
0
1
1
Let's say that I have a python function that only sends an email to myself, which I want to call whenever the user clicks on a button in a template, without any redirection (maybe just a popup message). Is there a way to do that?