Dataset columns (name: type, observed range):
- Question: string, length 25 to 7.47k
- Q_Score: int64, 0 to 1.24k
- Users Score: int64, -10 to 494
- Score: float64, -1 to 1.2
- Data Science and Machine Learning: int64, 0 to 1
- is_accepted: bool, 2 classes
- A_Id: int64, 39.3k to 72.5M
- Web Development: int64, 0 to 1
- ViewCount: int64, 15 to 1.37M
- Available Count: int64, 1 to 9
- System Administration and DevOps: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- Q_Id: int64, 39.1k to 48M
- Answer: string, length 16 to 5.07k
- Database and SQL: int64, 1 to 1
- GUI and Desktop Applications: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- Title: string, length 15 to 148
- AnswerCount: int64, 1 to 32
- Tags: string, length 6 to 90
- Other: int64, 0 to 1
- CreationDate: string, length 23 to 23
I have a python script (on my local machine) that queries a Postgres database and updates a Google sheet via the Sheets API. I want the python script to run on opening the sheet. I am aware of Google Apps Script, but I am not quite sure how I can use it to achieve what I want. Thanks
0
2
0.197375
0
false
42,327,384
0
6,619
1
0
0
42,218,932
You will need several changes. First, you need to move the script to the cloud (see Google Compute Engine) and be able to access your databases from there. Then, from Apps Script, look at the onOpen trigger. From there you can use UrlFetchApp to call your Python server to start the work. You could also add a custom "refresh" menu to the sheet that calls your server, which is nicer than having to reload the sheet. Note that onOpen runs server side on Google, so it is impossible for it to access files on your local machine.
1
0
0
Running python script from Google Apps script
2
python,google-apps-script,google-sheets,google-spreadsheet-api
0
2017-02-14T06:00:00.000
Now the question is a little tricky: I have 2 tables that I want to compare for their content. The tables have the same number of columns, the same column names and the same ordering of columns (if there is such a thing). I want to compare their contents, but the trick is that the ordering of their rows can be different, i.e., row no. 1 in table 1 can be present as row no. 1000 in table 2. I want to compare their contents such that the ordering of the rows doesn't matter. Also remember that there is no such thing as a primary key here. I can design my own data structures or I can use an existing library to do the job, and I would prefer to use an existing API (if any). Can anyone point me in the right direction?
0
0
0
0
false
42,227,865
0
1,752
2
0
0
42,227,567
You'd need to be more precise about how you intend to compare the tables' content and what the expected outcome is. SQLite itself is a good tool for comparison, and you can easily query the comparison results you wish to get. If these tables are located in different databases, you can dump them into a temporary database using Python's built-in sqlite3 module. You can also dump the query results into a data collection such as a list and then perform your comparison, but again, we can't help you if we don't know the expected outcome.
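As a rough sketch of the "query the comparison in SQLite" idea: assuming both tables live in (or have been dumped into) the same database file and share a schema, an order-insensitive comparison can be expressed with EXCEPT. The table and file names below are placeholders, and note that EXCEPT collapses duplicate rows.

```python
# Minimal sketch: order-insensitive comparison of two SQLite tables with the
# same schema. "table_a", "table_b" and "data.db" are placeholders.
import sqlite3

conn = sqlite3.connect("data.db")

# Rows present in one table but missing from the other (duplicates are collapsed).
only_in_a = conn.execute(
    "SELECT * FROM table_a EXCEPT SELECT * FROM table_b").fetchall()
only_in_b = conn.execute(
    "SELECT * FROM table_b EXCEPT SELECT * FROM table_a").fetchall()

if not only_in_a and not only_in_b:
    print("Tables contain the same rows (ignoring order and duplicates).")
else:
    print("Rows only in table_a:", only_in_a)
    print("Rows only in table_b:", only_in_b)

conn.close()
```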
1
0
0
Comparing two sqlite3 tables using python
3
python,database,sqlite
0
2017-02-14T13:34:00.000
Now the question is a little tricky: I have 2 tables that I want to compare for their content. The tables have the same number of columns, the same column names and the same ordering of columns (if there is such a thing). I want to compare their contents, but the trick is that the ordering of their rows can be different, i.e., row no. 1 in table 1 can be present as row no. 1000 in table 2. I want to compare their contents such that the ordering of the rows doesn't matter. Also remember that there is no such thing as a primary key here. I can design my own data structures or I can use an existing library to do the job, and I would prefer to use an existing API (if any). Can anyone point me in the right direction?
0
0
0
0
false
42,228,061
0
1,752
2
0
0
42,227,567
You say "there is no PRIMARY KEY". Does this mean there is truly no way to establish the identity of the item represented by each row? If that is true, your problem is insoluble since you can never determine which row in one table to compare with each row in the other table. If there is a set of columns that establish identity, then you would read each row from table 1, read the row with the same identity from table 2, and compare the non-identity columns. If you find all the table 1 rows in table 2, and the non-identity columns are identical, then you finish up with a check for table 2 rows with identities that are not in table 1. If there is no identity and if you don't care about identity, but just whether the two tables would appear identical, then you would read the records from each table sorted in some particular order. Compare row 1 to row 1, row 2 to row 2, etc. When you hit a row that's different, you know the tables are not the same. As a shortcut, you could just use SQLite to dump the data into two text files (again, ordered the same way for both tables) and compare the file contents. You may need to include all the columns in your ORDER BY clause if there is not a subset of columns that guarantee a unique sort order. (If there is such a subset of columns, then those columns would constitute the identity for the rows and you would use the above algorithm).
1
0
0
Comparing two sqlite3 tables using python
3
python,database,sqlite
0
2017-02-14T13:34:00.000
I am trying to move an Excel sheet say of index 5 to the position of index 0. Right now I have a working solution that copies the entire sheet and writes it into a new sheet created at the index 0, and then deletes the original sheet. I was wondering if there is another method that could push a sheet of any index to the start of the workbook without all the need of copy, create and write.
0
0
0
0
false
42,244,164
0
2,015
1
0
0
42,243,861
Maybe the xlrd module can help you: you can get a sheet's contents by index like this: worksheet = workbook.sheet_by_index(5), and then copy that content out to a sheet at a different index. Note, though, that xlrd only reads workbooks, so the actual write-back (e.g. at index 0) would have to be done with a separate writer library.
1
0
0
Python - Change the sheet index in excel workbook
1
python,excel
0
2017-02-15T08:12:00.000
I want to import and use the dataset package of Python on AWS Lambda. The dataset package is for MySQL connections and executing queries. But when I try to import it, there is an error: "libmysqlclient.so.18: cannot open shared object file: No such file or directory". I think the problem is that the MySQL client package is necessary, but there is no MySQL package on the AWS Lambda machine. How do I add the third-party dependency, and how do I link it?
1
0
0
0
false
42,268,813
0
131
1
0
0
42,267,553
You should install your packages in your Lambda folder: $ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER. Then compress the whole directory into a zip and upload it to your Lambda.
1
0
0
How to use the package written by another language in AWS Lambda?
3
mysql,python-2.7,amazon-web-services,aws-lambda
1
2017-02-16T07:30:00.000
I noticed that most examples for accessing mysql from flask suggest using a plugin that calls init_app(app). I was just wondering why that is as opposed to just using a mysql connector somewhere in your code as you need it? Is it that flask does better resource management with request life cycles?
1
3
0.53705
0
false
42,281,576
1
100
1
0
0
42,281,212
Packages like flask-mysql or Flask-SQLAlchemy provide useful defaults and extra helpers that make it easier to accomplish common CRUD tasks. Such packages are also good at handling relationships between objects: you only need to create the objects, and then the objects contain all the functions and helpers you need to deal with the database, so you don't have to implement such code yourself and you don't need to worry about the performance of the queries. I worked on a Django project (I believe the theory in Flask is similar) and its ORM is really amazing; all I needed to do was write models and encapsulate business logic. All CRUD commands are handled by the built-in ORM, so as a developer you don't worry about the SQL statements. Another benefit is that it makes database migration much easier: you can switch from MySQL to PostgreSQL with minimal code modifications, which speeds up development.
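For reference, the init_app() pattern the question asks about looks roughly like this with Flask-SQLAlchemy; the model and connection URI below are illustrative only.

```python
# Minimal sketch of the init_app pattern; names and URI are placeholders.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # created once, not yet bound to any app

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)

def create_app():
    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "mysql+pymysql://user:password@localhost/mydb"
    db.init_app(app)  # ties connection handling to the app/request lifecycle
    return app
```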
1
0
0
accessing mysql from within flask
1
python,mysql,flask
0
2017-02-16T17:48:00.000
I'm having trouble updating a model in Odoo: the tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it. Is there a way to make the database synchronized with my model?
0
0
0
0
false
42,359,847
1
1,475
2
0
0
42,354,852
Please check that there is no duplicate folder with the same name in the addons path. Sometimes, if there is a zip file with the same name in the addons path, the update does not take effect.
1
0
0
Updating a module's model in Odoo 10
2
python,ubuntu,module,openerp,odoo-10
0
2017-02-20T21:50:00.000
I'm having trouble updating a model in Odoo: the tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it. Is there a way to make the database synchronized with my model?
0
0
0
0
false
42,359,793
1
1,475
2
0
0
42,354,852
If you save changes to the module, restart the server, and upgrade the module - all changes should be applied. Changes to tables (e.g. fields) should only require the module to be upgraded, not a server reboot. Python changes (e.g. contents of a method) require a server restart, not a module upgrade. If the changes are not occurring, then it is possible that you have a different problem. I would look at things like: are you looking at the correct database/tables, are you saving your changes, are the changes being made to the correct files/in the correct locations.
1
0
0
Updating a module's model in Odoo 10
2
python,ubuntu,module,openerp,odoo-10
0
2017-02-20T21:50:00.000
I want to isolate my LAMP installation in a virtual environment. I tried using VirtualBox, but my 4GB of RAM is not helping. My question is: if I run sudo apt-get install lamp-server^ while in "venv", would it install mysql-server, apache2 and PHP into the virtualenv only, or is the installation scope system-wide? I really want a good solution for isolating these dev environments and their dependencies, and am hence exploring simple and efficient options given my system constraints. I have another Django (and MySQL and gcloud) solution on the same computer and would like these new installations to not mess with it. I'm using: OS: Ubuntu 16.04 LTS, Python: 2.7.
1
1
0.099668
0
false
42,356,311
1
708
1
0
0
42,356,276
Read about Docker if you want to create separate environments without a virtual machine.
1
0
0
Install LAMP Stack into Virtual Environment
2
php,python,mysql,virtualenv,lamp
0
2017-02-20T23:53:00.000
I'm working on a project with Python and openpyxl. In an Excel file there are some cells with conditional formatting that change their fill color when the value changes. I need to extract the color from the cell. The "normal" method, worksheet["F11"].fill.start_color.index, doesn't work: Excel doesn't treat the fill color from conditional formatting as a regular fill, so I get back '00000000' for no fill. Does anyone know how to get the fill color? Thanks!
2
1
1.2
0
true
42,372,863
0
496
1
0
0
42,372,121
This isn't possible without you writing some of your own code. To do this you will have to write code that can evaluate conditional formatting because openpyxl is a library for the file format and not a replacement for an application like Excel.
1
0
0
Python/openpyxl get conditional format
1
python-3.x,spyder,openpyxl
0
2017-02-21T15:58:00.000
I have a Google Drive folder with hundreds of workbooks. I want to cycle through the list and update data. For some reason, gspread can only open certain workbooks but not others. I only recently had this problem. It's not an access issue because everything is in the same folder. I get raise SpreadsheetNotFound when I call open_by_key(key). But when I take the key and paste it into a URL, the sheet opens, which means it's not the key. What's going on here? I'm surprised other people are not encountering this error. Have I hit my limit on the number of Google Sheets I can have? I have about 2 thousand. Update: I find that if I go into the workbook and poke around, the sheet is then recognized. What does this mean? It doesn't recognize the sheet if the sheet hasn't been recently active? Also, if I try using Google Apps Script's SpreadsheetApp.openById, the key is recognized. So the sheet is there, I just can't open it with gspread. I have to use Google Apps Script to write something to the sheet first before it is recognized by gspread. I'm able to open the sheet using pygsheets, but since it is new and buggy, I can't use it. It looks like an API v4 issue? Some sheets can't be opened with API v3? Update: here is another observation. Once you open the workbook with API v4, you can no longer open it with v3.
12
4
0.26052
0
false
42,867,483
0
3,398
1
0
0
42,382,847
I've run into this issue repeatedly. The only consistent fix I've found is to "re-share" the file with the api user. It already lists the api user as shared (since it's in the same shared folder as everything else), but after "re-sharing" I can connect with gspread no problem. Based on this I believe it may actually be a permissions issue (Google failing to register the correct permission for the API user when accessing it through APIv3).
1
0
0
gspread "SpreadsheetNotFound" on certain workbooks
3
python,google-sheets,gspread
0
2017-02-22T04:40:00.000
I'm embarking on a software project, and I have a bit of an idea on how to attack it, but would really appreciate some general tips, advice or guidance on getting the task done. Project is as follows: My company has an ERP (Enterprise Resource Planning) system that we use to record all our business activity (i.e. create purchase orders, receive shipments, create sales orders, manage inventory etc..). All this activity is data entry into the ERP system that gets stored in a SQL Server database. I would like to push this activity to certain Slack channels via text messages. For example, when the shipping department creates a 'receipt entry' (they receiving in a package) in the ERP system, then production team would get a text saying 'item X has been received in' in their Slack channel. My current napkin sketch is this: For a given business activity, create a function that executes a SQL query to return the most recent data entry. Store this in my own external database. Routinely execute these calls (Maybe create a Windows scheduler to execute a program that runs through all the functions every 30 minutes or so??), which will compare the data from the query to the data last saved in my external database. If the same, do nothing. But if they're different: Replace the data from my external database with this new data, then use Slacks API to post a message of this new data to Slack. I'm not too certain about the mechanics of executing a program to check for new activity in the ERP system, and also uncertain about using a second database as a means of remembering what was sent to Slack previously. Any advice would be greatly appreciated. Thanks! Josh
1
2
0.197375
0
false
42,415,248
0
192
1
0
0
42,394,615
Epicor ERP has a powerful extension system built in. I would create a Business Process Method (BPM) for ReceiptEntry.Update. This wouldn't check for added rows, but more specifically for where the Received flag has been changed to set. This will prevent you getting multiple notifications every time a user saves an incomplete record. In the BPM you can reference external assemblies and call the Slack APIs from there. I strongly recommend you avoid trying to do this at the database level instead of the application level. The schema can change, and it is much harder to maintain the system if someone has been adding code to the database. If it isn't done carefully it can break the Data Model Regeneration in the Epicor Administration Console and prevent you from adding UD fields or upgrading your database.
1
0
0
Project Advice: push ERP/SQL transaction data to Slack
2
python,sql,architecture,slack-api,epicorerp
0
2017-02-22T14:43:00.000
I generally use Pandas to extract data from MySQL into a dataframe. This works well and allows me to manipulate the data before analysis. This workflow works well for me. I'm in a situation where I have a large MySQL database (multiple tables that will yield several million rows). I want to extract the data where one of the columns matches a value in a Pandas series. This series could be of variable length and may change frequently. How can I extract data from the MySQL database where one of the columns of data is found in the Pandas series? The two options I've explored are: Extract all the data from MySQL into a Pandas dataframe (using pymysql, for example) and then keep only the rows I need (using df.isin()). or Query the MySQL database using a query with multiple WHERE ... OR ... OR statements (and load this into Pandas dataframe). This query could be generated using Python to join items of a list with ORs. I guess both these methods would work but they both seem to have high overheads. Method 1 downloads a lot of unnecessary data (which could be slow and is, perhaps, a higher security risk) whilst method 2 downloads only the desired records but it requires an unwieldy query that contains potentially thousands of OR statements. Is there a better alternative? If not, which of the two above would be preferred?
0
0
0
1
false
42,406,043
0
352
1
0
0
42,405,493
I am not familiar with pandas, but strictly speaking from a database point of view you could just have your pandas values inserted into a PANDA_VALUES table and then join that PANDA_VALUES table with the table(s) you want to grab your data from. Assuming you have some indexes in place on both the PANDA_VALUES table and the table with your column, the JOIN would be quite fast. Of course you will have to have a process in place to keep the PANDA_VALUES table updated as the business needs change. Hope it helps.
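A minimal sketch of that idea using pandas and SQLAlchemy; the lookup table name, column names, and connection string are placeholders, not anything from the original setup.

```python
# Sketch: push the pandas Series into a lookup table and JOIN against it,
# instead of building a huge WHERE ... OR ... query. Names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")

wanted = pd.Series([101, 205, 309], name="key_value")

# Write the Series to a (temporary) lookup table.
wanted.to_frame().to_sql("panda_values", engine, if_exists="replace", index=False)

query = """
    SELECT b.*
    FROM big_table AS b
    JOIN panda_values AS p ON b.key_column = p.key_value
"""
df = pd.read_sql(query, engine)
```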
1
0
0
Selecting data from large MySQL database where value of one column is found in a large list of values
1
python,mysql,sql,python-3.x,pandas
0
2017-02-23T01:36:00.000
I have about 4-5 tables of roughly 3GB in Google BigQuery and I want to export these tables to Postgres. Reading the docs, I found I have to do the following steps: create a job that extracts the data to CSV in a Google Cloud Storage bucket; copy from Google Storage to local storage; parse all the CSVs into the database. Is there an efficient way to do all this? I know that steps 1 and 2 can't be skipped, but for step 3, from reading online, I found that it can take 2-3 hours. Can anyone suggest a more efficient way to do this?
1
0
0
0
false
42,454,600
0
691
1
0
0
42,443,016
Make an example project and see what times you get; if you can accept those times, it's too early to optimize. I see all of this being possible in about 3-5 minutes if you have 1Gbit internet access and a server running on SSD.
1
0
0
Dump Data from bigquery to postgresql
1
python,postgresql,google-bigquery
0
2017-02-24T15:58:00.000
I have an instance of an object (with many attributes) which I want to duplicate. I copy it using deepcopy() then modify couple of attributes. Then I save my new object to the database using Python / PeeWee save() but the save() actually updates the original object (I assume it is because that the id was copied from the original object). (btw no primary key is defined in the object model) How do I force save the new object? can I change its id? Thanks.
2
4
0.379949
0
false
42,451,623
0
1,374
1
0
0
42,449,783
Turns out that I can set the id to None (obj.id = None) which will create a new record when performing save().
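A small sketch of that fix with peewee; the Widget model and database file here are made up for illustration.

```python
# Sketch of the "reset the id before saving" trick described above.
import copy
from peewee import SqliteDatabase, Model, CharField

db = SqliteDatabase("app.db")

class Widget(Model):
    name = CharField()

    class Meta:
        database = db

db.connect()
db.create_tables([Widget], safe=True)

original = Widget.create(name="original")

duplicate = copy.deepcopy(original)
duplicate.name = "copy"
duplicate.id = None   # clear the primary key that deepcopy carried over
duplicate.save()      # with no primary key set, save() performs an INSERT
```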
1
0
0
Copy object instance and insert to DB using peewee creates duplicate ID
2
python,mysql,peewee
0
2017-02-24T23:06:00.000
I'm starting my Python journey with a particular project in mind; The title explains what I'm trying to do (make json api calls with python3.6 and sqlite3). I'm working on a mac. My question is whether or not this setup is possible? Or if I should use MySQL, PostgreSQL or MongoDB? If it is possible, am I going to have to use any 3rd party software to make it run? Sorry if this is off topic, I'm new to SO and I've been trying to research this via google and so far no such luck. Thank you in advance for any help you can provide.
1
1
1.2
0
true
42,489,154
0
451
1
0
0
42,489,060
Python 3.6 and sqlite both work on a Mac; whether your json api calls will depends on what service you are trying to make calls to (unless you are writing a server that services such calls, in which case you are fine). Any further recommendations are either a) off topic for SO or b) dependent on what you want to do with these technologies.
1
0
1
Python3 & SQLite3 JSON api calls
1
python-3.x,sqlite,json-api
0
2017-02-27T15:06:00.000
Could someone give me an example of using Whoosh with a sqlite3 database? I want to index my database. Just a simple example of connecting and searching through the database would be great. I searched online and was not able to find any examples for sqlite3.
3
0
0
0
false
51,001,220
0
466
1
0
0
42,493,984
You need to add a post-save function (say, index_data) to your database writers. This post-save hook should take the data being written to the database, normalize it and index it. The searcher could then be an independent script, given an index and the queries to search for.
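A rough sketch of what that could look like: index rows from a SQLite table into a Whoosh index and then search it. The "docs" table, its columns, and the index directory are all made-up names.

```python
# Sketch: build a Whoosh index over rows pulled from SQLite, then search it.
import os
import sqlite3
from whoosh.index import create_in
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

schema = Schema(row_id=ID(stored=True, unique=True), body=TEXT(stored=True))

os.makedirs("indexdir", exist_ok=True)
ix = create_in("indexdir", schema)

conn = sqlite3.connect("data.db")
writer = ix.writer()
for row_id, body in conn.execute("SELECT id, body FROM docs"):
    writer.add_document(row_id=str(row_id), body=body)
writer.commit()
conn.close()

with ix.searcher() as searcher:
    query = QueryParser("body", ix.schema).parse("python")
    for hit in searcher.search(query, limit=10):
        print(hit["row_id"], hit["body"])
```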
1
0
0
Using Whoosh with a SQLITE3.db (Python)
1
python,python-2.7,indexing,sqlite,whoosh
0
2017-02-27T19:16:00.000
How do I remove and add a completely new db.sqlite3 database to a Django project written in PyCharm? I did something wrong and I need a completely new database. The 'flush' command just removes data from the database, but it doesn't remove the table schema. So the question is: how do I get my database back to the starting point (no data, no SQL tables)?
1
4
1.2
0
true
42,515,036
1
1,356
1
0
0
42,514,902
A SQLite database is just a file. To drop the database, simply remove the file. When using SQLite, python manage.py migrate will automatically create the database if it doesn't exist.
1
0
0
How to remove and add a completely new db.sqlite3 to a Django project written in PyCharm?
1
python,sql,django
0
2017-02-28T17:14:00.000
The ping service that I have in mind allows users to keep easily track of their cloud application (AWS, GCP, Digital Ocean, etc.) up-time. The part of the application's design that I am having trouble with is how to effectively read a growing/shrinking list of hostnames from a database and ping them every "x" interval. The service itself will be written in Python and Postgres to store the user-inputted hostnames. Keep in mind that the list of hostnames to ping is variable since a user can add and also remove hostnames at will. How would you setup a system that checks for the most up-to-date list of hostnames, executes pings across said list of hostnames, and store the results, at a specific interval? I am pretty new to programming. Any help or pointers in the right direction will be greatly appreciated
1
0
0
0
false
42,525,826
0
45
1
1
0
42,524,336
Let me put it like this: in the simplest design you could keep a table of users and a table of hostnames, where the hostnames table has these columns: a foreign key to users, hostname, last_update, and a boolean is_running. You will need the following operations. UPDATE: run this periodically over the whole table; you could optimize it by filtering on the last_update column. INSERT and DELETE: these happen when the user adds or removes hostnames; when inserting, also ping the hostname and set last_update to the current time. Each of these three operations takes a lock on the rows it touches, and after the INSERT/DELETE operations you can notify the user. Finally, READ: this is whenever the user wants to see the status of their hostnames. If they have added or removed a hostname recently, they will be notified only after the commit; otherwise do a SELECT * FROM hostnames WHERE user_id = x and send them the result. Every time they hit refresh you can run this query. You could also put indices on both tables, since the read operation is the one that has to be fastest; you can afford slightly slower times on the other two operations. Do let me know if this works or if you've done it differently. Thank you.
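As a very rough illustration of the periodic UPDATE pass: the table layout, connection string, and interval below are assumptions, and in practice you would likely drive this from cron or a scheduler rather than a sleep loop.

```python
# Sketch of the polling pass, assuming a "hostnames" table with
# (id, hostname, is_running, last_update) in Postgres.
import subprocess
import time
from datetime import datetime

import psycopg2

def ping(host):
    # One ICMP echo request; returncode 0 means the host answered.
    return subprocess.call(["ping", "-c", "1", host],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def check_all():
    conn = psycopg2.connect("dbname=pinger user=pinger")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, hostname FROM hostnames")
        for row_id, hostname in cur.fetchall():
            up = ping(hostname)
            cur.execute(
                "UPDATE hostnames SET is_running = %s, last_update = %s WHERE id = %s",
                (up, datetime.utcnow(), row_id),
            )
    conn.close()

while True:
    check_all()
    time.sleep(300)  # every 5 minutes
```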
1
0
0
Designing a pinging service
1
python,architecture
0
2017-03-01T06:01:00.000
I'm working with SQLAlchemy and Oracle, but I don't want to store the database password directly in the connection string. How can I use an encrypted password instead?
2
0
0
0
false
70,789,442
0
3,865
1
0
0
42,776,941
Encrypting the password isn't necessarily very useful, since your code will have to contain the means to decrypt it. Usually what you want to do is store the credentials separately from the codebase and have the application read them at runtime. For example*:
- read them from a file
- read them from command line arguments or environment variables (note there are operating system commands that can retrieve these values from a running process, or they may be logged)
- use a password-less connection mechanism, for example Unix domain sockets, if available
- fetch them from a dedicated secrets management system
You may also wish to consider encrypting the connections to the database, so that the password isn't exposed in transit across the network.
* I'm not a security engineer: these examples are not exhaustive and may have other vulnerabilities in addition to those mentioned.
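For instance, a minimal sketch of the environment-variable option with SQLAlchemy and cx_Oracle; the variable names, host, port and service name are placeholders.

```python
# Sketch: build the connection URL from environment variables at runtime
# instead of hard-coding credentials in the codebase.
import os
from sqlalchemy import create_engine

user = os.environ["DB_USER"]
password = os.environ["DB_PASSWORD"]
host = os.environ.get("DB_HOST", "localhost")
service = os.environ.get("DB_SERVICE", "ORCLPDB1")

engine = create_engine(
    f"oracle+cx_oracle://{user}:{password}@{host}:1521/?service_name={service}"
)
```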
1
0
0
How to use encrypted password in connection string of sqlalchemy?
3
python,oracle,sqlalchemy
0
2017-03-14T02:53:00.000
In the AWS CLI we can set the output format to json or table. I can get JSON output with json.dumps; is there any way to achieve output in table format? I tried prettytable but had no success.
0
1
0.197375
0
false
42,952,740
0
980
1
0
1
42,787,327
Python Boto3 does not return the data in a tabular format. You will need to parse the data and use another Python library to output it as a table. PrettyTable works well for me; read the PrettyTable docs and debug your code.
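For example, a small sketch that feeds boto3 output into PrettyTable; it lists EC2 instances, but the same pattern applies to any describe/list call, and the chosen fields are just examples.

```python
# Sketch: parse a boto3 response and render it with PrettyTable.
import boto3
from prettytable import PrettyTable

ec2 = boto3.client("ec2")
response = ec2.describe_instances()

table = PrettyTable(["InstanceId", "State", "Type"])
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        table.add_row([
            instance["InstanceId"],
            instance["State"]["Name"],
            instance["InstanceType"],
        ])

print(table)
```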
1
0
0
Is it possible to get Boto3 | python output in tabular format
1
json,python-2.7,boto3,aws-cli,prettytable
0
2017-03-14T13:27:00.000
I have an .ods file that contains many links that must be updated automatically. As I understand there is no easy way to do this with macros or libreoffice command arguments, so I am trying to make all links update upon opening the file and then will save the file and exit. All links are DDE links which should be able to update automatically (and are set to do so in Edit > Links), and I have also enabled this in Tools > Options > Calc > General > Always Update Links When Opening, as well as Tools > Options > Calc > Formulas > Always Recalculate. However, I am still being prompted with a popup to manually update links upon opening, and links will not be up to date if I do not select Update. I need these DDE links to update automatically, why isn't this working? If there is no solution there, I am also willing to try to update links via Python. Will Uno work with libreoffice to do this without ruining any preexisting graphs in the file like openpyxl does?
0
0
0
0
false
42,806,325
0
986
1
0
0
42,788,839
The API does not provide a method to suppress the prompt upon opening the file! I've tried running StarBasic code to update DDE links on "document open" event, but the question keeps popping up. So, I guess you're out of luck: you have to answer "Yes" if you want the actual values. [posted the comment to OP's question here again as answer, as suggested by @Jim K]
1
0
0
Libreoffice - update links automatically upon opening?
2
python,libreoffice,dde
0
2017-03-14T14:34:00.000
I'm attempting to access a Google Cloud SQL instance stored on one Cloud Platform project from an App Engine application on another project, and it's not working. Connections to the SQL instance fail with this error: OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38") I followed the instructions in Google's documentation and added the App Engine service account for the second project to the IAM permissions list for the project housing the Cloud SQL instance (with "Cloud SQL Editor" as the role). The connection details and configuration I'm using in my app are identical to those being used in a perfectly functioning App Engine app housed in the same project as the Cloud SQL instance. The only thing that seems off about my configuration is that in my second GCP project, while an App Engine service account that looks like the default one ([MY-PROJECT-NAME]@appspot.gserviceaccount.com) appears in the IAM permissions list, this service account is not listed under the Service Accounts tab of IAM & Admin. The only service account listed is the Compute Engine default service account. I haven't deleted any service accounts; there's never been an App Engine default service account listed here, but apart from the MySQL connection the App Engine app runs fine. Not sure if it's relevant, but I'm running a Python 2.7 app on the App Engine Standard Environment, connecting using MySQLdb.
3
6
1
0
false
42,827,972
1
3,103
1
1
0
42,826,560
Figured it out eventually - perhaps this will be useful to someone else encountering the same problem. Problem: The problem was that the "Cloud SQL Editor" role is not a superset of the "Cloud SQL Client", as I had imagined; "Cloud SQL Editor" allows administration of the Cloud SQL instance, but doesn't allow basic connectivity to the database. Solution: Deleting the IAM entry granting Cloud SQL Editor permissions and replacing it with one granting Cloud SQL Client permissions fixed the issue and allowed the database connection to go through.
1
0
0
Can't access Google Cloud SQL instance from different GCP project, despite setting IAM permissions
1
python,mysql,google-app-engine,google-cloud-sql
0
2017-03-16T06:14:00.000
How to achieve a read-only connection to the secondary nodes of the MongoDB. I have a primary node and two secondary nodes. I want a read-only connection to secondary nodes. I tried MongoReplicaSetClient but did not get what I wanted. Is it possible to have a read-only connection to primary node?
1
1
0.099668
0
false
42,850,060
0
1,793
1
0
0
42,849,056
Secondaries are read-only by default. However, you can specify the read preference to read from secondaries; by default, reads go to the primary. This can be achieved with readPreference=secondary in the connection string.
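A short pymongo sketch of both ways to set it; the host names, replica set name, database and collection are placeholders.

```python
# Sketch: read from secondaries with pymongo.
from pymongo import MongoClient, ReadPreference

# Either put it in the connection string ...
client = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017/"
    "?replicaSet=rs0&readPreference=secondary"
)

# ... or set it per database/collection.
db = client.get_database("mydb", read_preference=ReadPreference.SECONDARY)
docs = list(db.mycollection.find({"status": "active"}))
```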
1
0
0
How to achieve a read only connection using pymongo
2
python,mongodb,pymongo
0
2017-03-17T04:06:00.000
I understand you can do DELETE FROM table WHERE condition, but I was wondering if there was a more elegant way? Since I'm iterating over every row with c.execute('SELECT * FROM {tn}'.format(tn=table_name1)), the cursor is already on the row I want to delete.
0
2
0.379949
0
false
42,889,578
0
346
1
0
0
42,888,269
A cursor is a read-only object, and cursor rows are not necessarily related to table rows. So this is not possible. And you must not change the table while iterating over it. SQLite computes result rows on demand, so deleting the current row could break the computation of the next row.
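A common workaround sketch, consistent with the warning above: collect the keys in one pass and delete in a second pass, so the table is never modified while the SELECT cursor is iterating. The table name and condition are placeholders, and rowid assumes an ordinary rowid table.

```python
# Sketch: two-pass delete so the table isn't modified during iteration.
import sqlite3

conn = sqlite3.connect("data.db")

# Pass 1: decide which rows to drop (placeholder condition and table).
to_delete = []
for rowid, value in conn.execute("SELECT rowid, value FROM items"):
    if value is None:  # placeholder condition
        to_delete.append(rowid)

# Pass 2: delete them after the iteration has finished.
conn.executemany("DELETE FROM items WHERE rowid = ?",
                 [(rid,) for rid in to_delete])
conn.commit()
conn.close()
```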
1
0
0
While iterating over the rows in an SQLite table, is it possible to delete the cursor's row?
1
python,python-3.x,sqlite,sql-delete,delete-row
0
2017-03-19T15:16:00.000
I need help with the pyodbc Python module. I installed it via Canopy package management, but when I try to import it, I get an error (no module named pyodbc). Why? Here's the output from my Python interpreter:
import pyodbc
Traceback (most recent call last):
  File "", line 1, in
    import pyodbc
ImportError: No module named 'pyodbc'
0
0
0
0
false
42,963,908
0
62
1
0
0
42,935,980
For the record: the attempted import was in a different Python installation. It is never good, and usually impossible, to use a package which was installed into one Python installation, in another Python installation.
1
0
1
No Module After Install Package via Canopy Package Management
1
python,pyodbc,canopy
0
2017-03-21T19:00:00.000
I downloaded the pyodbc module as a zip and installed it manually using the command python setup.py install. Although I can find the folder inside the Python directory where I pasted it, when importing I get the error: ImportError: No module named pyodbc. I am trying to use this to connect with MS SQL Server. Help
1
0
1.2
0
true
42,947,078
0
8,261
1
0
0
42,944,116
As the installation error showed, installing Visual C++ 9.0 solves the problem, because setup.py tries to compile some C++ libraries while installing the package. I think Cygwin C++ would also work, judging by the contents of setup.py.
1
0
0
ImportError: No module named pyodbc
1
python,sql-server,database-connection,pyodbc
0
2017-03-22T06:20:00.000
I am trying to generate flask-sqlalchemy models for an existing MySQL db. I used the following command: flask-sqlacodegen --outfile rcdb.py mysql://username:password@hostname/tablename. The project uses Python 3.4. Any clues?
```
Traceback (most recent call last):
  File "/var/www/devaccess/py_api/ds/venv/bin/flask-sqlacodegen", line 11, in
    sys.exit(main())
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/main.py", line 59, in main
    args.flask, ignore_cols, args.noclasses)
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/codegen.py", line 606, in init
    model = ModelClass(table, links[table.name], inflect_engine, not nojoined)
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/codegen.py", line 335, in init
    relationship_ = ManyToManyRelationship(self.name, target_cls, association_table, inflect_engine)
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/codegen.py", line 501, in init
    self.kwargs['secondary'] = repr(assocation_table.schema + '.' + assocation_table.name)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
1
1
0.197375
0
false
49,161,491
1
362
1
0
0
43,008,166
Try specifying your database schema with the --schema option.
1
0
0
flask-sqlacodegen suports python 3.4?
1
python-3.x,sqlalchemy,flask-sqlalchemy,sqlacodegen
0
2017-03-24T20:00:00.000
What I want is that, once I have looked up a user in a table, I can list all the file URLs that the user has access to. My first thought was to have a field in the table with a list of file URLs; however, I have now understood that there is no such field type. I was then thinking that maybe foreign keys might work, but I am having trouble getting my head around it. Another solution might be to have one table for each user, with each row representing a file. What would you say is best practice in this case? I am also going to expand into having shared files, but thought I'd address this issue first.
0
0
0
0
false
43,179,890
1
25
1
0
0
43,066,877
Use 2 tables: user and user_uri_permission, with 2 columns in the second: userID and URI. When the user-URI pair is in the table, the user has access.
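A small sketch of that layout using MySQLdb (the mysql-python driver from the question's tags); the table, column names and credentials are placeholders, and it assumes an existing user table with an id column.

```python
# Sketch: permission table keyed on (user_id, uri) plus the lookup query.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="files")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS user_uri_permission (
        user_id INT NOT NULL,
        uri     VARCHAR(500) NOT NULL,
        PRIMARY KEY (user_id, uri),
        FOREIGN KEY (user_id) REFERENCES user(id)
    )
""")

# All URIs a given user can access:
cur.execute("SELECT uri FROM user_uri_permission WHERE user_id = %s", (42,))
urls = [row[0] for row in cur.fetchall()]
```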
1
0
0
Mapping users to all of their files(URLs) in a mysql database.
2
mysql,mysql-python
0
2017-03-28T10:18:00.000
I have a big-ol' dbm file that's being created and used by my Python program. It saves a good amount of RAM, but it's getting big, and I suspect I'll have to gzip it soon to lower the footprint. I guess usage will involve un-gzipping it to disk, using it, and erasing the extracted dbm when I'm done. I was wondering whether there exists some nice way of compressing the dbm and continuing to work with it somehow. In my specific use case, I only need to read from it. Thanks.
1
0
0
0
false
43,080,032
0
109
1
0
0
43,069,291
You can gzip the values, or use a key/value store that supports compression, like WiredTiger.
1
0
0
recipe for working with compressed (any)dbm files in python
1
python,compression,gzip,key-value-store,dbm
0
2017-03-28T12:14:00.000
After updating PyCharm (version 2017.1), PyCharm no longer displays sqlite3 database tables. I've tested the connection and it's working; in a SQLite client I can list all tables and run queries. Has anyone else had this problem, and if so, how did you solve it?
2
1
0.099668
0
false
43,075,527
0
3,401
1
0
0
43,075,420
Go to View => Tools => Window => Database, click on the green plus icon and then Data Source => Sqlite (Xerial). Then, in the window that opens, install the driver it proposes (Sqlite (Xerial)); the link is underneath the Test Connection button. That should do it for both db.sqlite3 and identifier.sqlite. I have never had any problem showing SQLite databases in the PyCharm IDE.
1
0
0
Pycharm does not display database tables
2
python,django,sqlite,pycharm
0
2017-03-28T16:50:00.000
Is it possible to mix 2 ORMs in the same web app, and if so, how optimal would it be? Why do I ask? I'm working on a web app in Flask using flask-mysqldb, and I came to a point where I need to implement an auth system; with flask-mysqldb there's no secure way to do it. With that said, I'm now trying to implement flask-security, but it only works with flask-sqlalchemy, so I'm trying to mix SQLAlchemy with mysqldb, and before doing that I want to know whether it's optimal and whether it works. That would lead to using SQLAlchemy for user auth and mysqldb for the other data. Thanks!
0
1
1.2
0
true
43,098,951
1
145
2
0
0
43,098,668
It's possible, but not recommended. Consider this:
- Half of your app will not benefit from anything a proper ORM offers.
- Adding a field to the table means editing raw SQL in many places, and then changing the model. Don't forget to keep them in sync.
Alternatively, you can port everything that uses raw mysqldb to use SQLAlchemy:
- Need to add a field to your table? Just change the model in one place.
- Don't like the SQL queries the ORM generates for you? You still have low-level control over this.
1
0
0
Is it possible to mix 2 ORMS in same web app?
2
python,flask,flask-sqlalchemy,flask-mysql
0
2017-03-29T16:03:00.000
Is it possible to mix 2 ORMs in the same web app, and if so, how optimal would it be? Why do I ask? I'm working on a web app in Flask using flask-mysqldb, and I came to a point where I need to implement an auth system; with flask-mysqldb there's no secure way to do it. With that said, I'm now trying to implement flask-security, but it only works with flask-sqlalchemy, so I'm trying to mix SQLAlchemy with mysqldb, and before doing that I want to know whether it's optimal and whether it works. That would lead to using SQLAlchemy for user auth and mysqldb for the other data. Thanks!
0
3
0.291313
0
false
43,098,934
1
145
2
0
0
43,098,668
You can have a module for each orm. One module can be called auth_db and the other can be called data_db. In your main app file just import both modules and initialize the database connections. That being said, this approach will be harder to maintain in the future, and harder for other developers to understand what's going on. I'd recommend moving your flask-mysqldb code to sqlalchemy so that you are only using one ORM.
1
0
0
Is it possible to mix 2 ORMS in same web app?
2
python,flask,flask-sqlalchemy,flask-mysql
0
2017-03-29T16:03:00.000
I am thinking of using AWS API Gateway and AWS Lambda (Python) to create a serverless API, but while designing it I was thinking about aspects like pagination, security, caching, versioning, etc. So my question is: what is the best approach, performance- and cost-wise, to implement API pagination with very big data (1 million records)? Should I implement the pagination in the PostgreSQL db (I think this would be slow)? Or should I skip PostgreSQL pagination, cache all the results I get from the db into AWS ElastiCache, and then do server-side pagination in Lambda? I appreciate your help, guys.
1
3
0.53705
0
false
43,126,859
1
1,679
1
0
1
43,113,198
If your data is going to live in a PostgreSQL database anyway, I would start with your requests hitting the database and profile the performance. You've made assumptions about it being slow, but you haven't stated what your requirements for latency are or what your schema is, so any assertions people make about whether or not it would fit your case are completely speculative. If you decide after profiling that it is not fast enough, then adding a cache would make sense, though storing the entire contents in the cache seems wasteful unless you can guarantee your clients will always iterate through all results. You may want to consider a mechanism that prefetches blocks of data that would service a few requests, rather than trying to cache the whole data set. TL;DR: Don't prematurely optimize your solution. Quantify how you want your system to respond, and test and validate your assumptions.
1
0
0
AWS API Gateway & Lambda - API Pagination
1
python,postgresql,amazon-web-services,aws-lambda,aws-api-gateway
0
2017-03-30T09:04:00.000
I am working on a python/tornado web application. I have several options to save in my app. Those options can be changed by the user, and they will be accessed very often. I have created an SQLite database, but that involves disk operations, so I am asking: what is the best location for those options? Does Tornado embed a feature for custom user options? Thanks
0
0
0
0
false
43,171,572
1
28
1
1
0
43,145,705
Yes, there is the tornado.options package, which does pretty much what you need. Keep in mind, however, that the values saved here are not persisted between requests; if you need that kind of functionality, you will have to implement an external persistence solution, which you already have done with SQLite.
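A tiny sketch of tornado.options; the option names here are just examples, and as noted above the values live in memory only, so persistent per-user settings still belong in SQLite.

```python
# Sketch: defining and reading runtime options with tornado.options.
from tornado.options import define, options, parse_command_line

define("port", default=8888, type=int, help="port to listen on")
define("theme", default="light", help="default UI theme")

if __name__ == "__main__":
    parse_command_line()          # also accepts --port=9000 --theme=dark
    print(options.port, options.theme)
```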
1
0
0
Where should i save my tornado custom options
1
python,tornado
0
2017-03-31T16:39:00.000
I use phpMyAdmin to view and edit a database, and a Flask + SQLAlchemy app uses a table from this database. Everything is working fine and I can read/write to the database from the Flask app. However, if I make a change through phpMyAdmin, this change is not detected by SQLAlchemy. The only way to get those changes is by manually refreshing the SQLAlchemy connection. My question is: how do I tell SQLAlchemy to reload/refresh its database connection?
0
0
0
0
false
43,703,427
1
665
1
0
0
43,149,092
I suggest you look at Server-Sent Events (SSE). I am still looking for SSE code for Postgres, MySQL, etc.; it is available for Redis.
1
0
0
Flask App using SQLAlcehmy: How to detect external changes committed to the database?
2
python,flask,sqlalchemy,flask-sqlalchemy
0
2017-03-31T20:22:00.000
Are there any solutions (preferably in Python) that can repair pdfs with damaged xref tables? I have a pdf that I tried to convert to a png in Ghostscript and received the following error: **** Error: An error occurred while reading an XREF table. **** The file has been damaged. This may have been caused **** by a problem while converting or transfering the file. However, I am able to open the pdf in Preview on my Mac and when I export the pdf using Preview, I am able to convert the exported pdf. Is there any way to repair pdfs without having to manually open them and export them?
6
1
1.2
0
true
43,154,410
0
8,136
1
0
0
43,149,372
If the file renders as expected in Ghostscript then you can run it through GS to the pdfwrite device and create a new PDF file which won't be damaged. Preview is (like Acrobat) almost certainly silently repairing the problem in the background. Ghostscript will be doing the same, but unlike other applications we feel you need to know that the file has a problem. Firstly so that you know its broken, secondly so that if the file renders incorrectly in Ghostscript (or indeed, other applications) you know why. Note that there are two main reasons for a damaged xref; firstly the developer of the application didn't read the specification carefully enough and the file offsets in the xref are correct, but the format is incorrect (this is not uncommon and a repair by GS will be harmless), secondly the file genuinely has been damaged in transit, or by editing it. In the latter case there may be other problems and Ghostscript will try to warn you about those too. If you don't get any other warnings or errors, then its probably just a malformed xref table.
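If you want to drive that rewrite from Python, a minimal sketch is just a subprocess call to Ghostscript's pdfwrite device; this assumes the gs binary is on your PATH, and the file names are placeholders.

```python
# Sketch: rewrite (and thereby repair) a PDF with Ghostscript's pdfwrite device.
import subprocess

def rewrite_pdf(src, dst):
    subprocess.run(
        ["gs", "-o", dst, "-sDEVICE=pdfwrite", src],
        check=True,
    )

rewrite_pdf("damaged.pdf", "repaired.pdf")
```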
1
0
0
Repairing pdfs with damaged xref table
1
python,pdf,ghostscript
0
2017-03-31T20:42:00.000
I am directing this question to experienced Django developers. As the subject says, I have been learning Django since September 2016, but I started learning it without any knowledge of database syntax. I know the basic concepts and definitions, so I can easily implement Django models. Summarizing: do I have to know SQL to create web apps in Django? Thanks in advance.
1
1
0.099668
0
false
43,164,854
1
1,361
1
0
0
43,161,718
You do not have to be a wizard at it, but understanding relations between data sets can be extremely helpful, especially if you have a complicated data hierarchy. Just learn as you go. If you want, you can inspect the SQL Django executes for each migration of an app (for example with manage.py sqlmigrate).
1
0
0
Do I need to know SQL when I work with Django
2
python,sql,django
0
2017-04-01T20:33:00.000
I'm creating and writing into an excel file using xlsxwriter module. But when I open the excel file, I get this popup: We found a problem with some content in 'excel_sheet.xlsx'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes. If I click Yes, it says Repaired Records: String properties from /xl/sharedStrings.xml part (Strings) and then I can see the contents. I found that this occurs because of the cells I wrote using write_rich_string. my_work_sheet.write_rich_string(row_no, col_no,format_1, "Some text in format 1", format_2, "Text in format 2", format_1, "Again in format 1") If I write it using write_string this doesn't occur. format_1 and format_2 has font name, color, size and vertical align set. Can anyone suggest what goes wrong here?
1
2
1.2
0
true
43,248,203
0
1,256
1
0
0
43,199,359
I was trying to recreate the problem (thanks to @jmcnamara) and I could figure out where it went wrong. In my call to write_rich_string, it was sometimes trying to format an empty string: my_work_sheet.write_rich_string(row_no, col_no, format_1, string_1, format_2, string_2, format_1, string_3). It turned out that at some point the value of one among string_1, string_2 and string_3 becomes ''. Now I use write_rich_string only after ensuring none of them are ''.
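A small sketch of that guard; the formats and fragments here are invented for illustration, and the code assumes a strict format/string alternation as in the question.

```python
# Sketch: drop empty fragments before calling write_rich_string.
import xlsxwriter

workbook = xlsxwriter.Workbook("demo.xlsx")
worksheet = workbook.add_worksheet()
red = workbook.add_format({"font_color": "red"})
blue = workbook.add_format({"font_color": "blue"})

fragments = [red, "first part ", blue, "", red, "last part"]  # note the empty string

# Remove any (format, "") pairs before writing.
cleaned = []
for fmt, text in zip(fragments[0::2], fragments[1::2]):
    if text:
        cleaned.extend([fmt, text])

if len(cleaned) >= 4:                      # rich strings need at least two fragments
    worksheet.write_rich_string(0, 0, *cleaned)
elif cleaned:
    worksheet.write_string(0, 0, cleaned[1], cleaned[0])

workbook.close()
```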
1
0
0
Python xlsxwriter Repaired Records: String properties from /xl/sharedStrings.xml part (Strings)
2
python-3.x,xlsxwriter
0
2017-04-04T05:59:00.000
So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels in Grafana and adding a delay to one, but this doesn't give a good representation as the graphs are not on the same panel so it is more difficult to see the differences. I am currently working on a script to copy the databases in question and alter the timestamps so that the two newly created databases look like the data was taken at the same time. I am wondering if anyone has any idea how to change the timestamp, and if so, what would be the best way to to do so with a large amount of data points? Thanks.
0
1
0.099668
1
false
52,570,244
0
569
2
0
0
43,215,443
I believe this is currently available via kapacitor, but assume a more elegant solution will be readily accomplished using FluxQL. Consuming the influxdb measurements into kapacitor will allow you to force equivalent time buckets and present the data once normalized.
1
0
0
InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel
2
python,influxdb,grafana
0
2017-04-04T18:55:00.000
So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels in Grafana and adding a delay to one, but this doesn't give a good representation as the graphs are not on the same panel so it is more difficult to see the differences. I am currently working on a script to copy the databases in question and alter the timestamps so that the two newly created databases look like the data was taken at the same time. I am wondering if anyone has any idea how to change the timestamp, and if so, what would be the best way to to do so with a large amount of data points? Thanks.
0
0
0
1
false
43,306,424
0
569
2
0
0
43,215,443
I can confirm from my Grafana instance that it's not possible to add a shift to one time series and not the other in one panel. To change the timestamps, I'd simply do it the obvious way: load a few thousand entries at a time into Python, change the timestamps, and write them to a new measurement (and indicate the shift in the measurement name).
1
0
0
InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel
2
python,influxdb,grafana
0
2017-04-04T18:55:00.000
Currently I am using celery to build a scheduled database synchronization feature, which periodically fetch data from multiple databases. If I want to store the task results, would the performance be better if I store them in Redis instead of a RDB like MySQL?
2
2
0.379949
0
false
43,264,780
0
931
1
1
0
43,264,701
Performance-wise it's probably going to be Redis but performance questions are almost always nuance based. Redis stores lists of data with no requirement for them to relate to one another so is extremely fast when you don't need to use SQL type queries against the data it contains.
1
0
0
Celery: Is it better to store task results in MySQL or Redis?
1
python,mysql,django,redis,celery
0
2017-04-06T19:57:00.000
I have a Django project with 5 different PostgreSQL databases. The project was preemptively separated in terms of model routing, but has proven quite problematic so now I'm trying to reverse it. Unfortunately, there's some overlap of empty, migrated tables so pg_dump's out of the question. It looks like django-dumpdb may suit my needs but it doesn't handle per-database export/import. Additionally, Django's dumpdata/loaddata are installing 0 of the records from generated fixtures. Can I have some suggestions as to the least painful way to merge the data?
2
1
0.099668
0
false
43,267,208
1
650
1
0
0
43,266,059
There's always dumpdata from Django, which is pretty easy to use. Or you could do this manually: if the 2 databases share the same data (they mirror one another) and the same table structure, you could just run a syncdb from Django to create the new table structure and then dump and import the old database into the new one (I'm assuming you're using MySQL, but the general idea is the same). If the two databases hold different data (still with the same structure), you should import every single row of the two databases: this way you'll keep relations etc., but your unique ids will be updated to the new, single db. If the two databases are different in both data and structure, you'll have to run two syncdbs and two imports, but this doesn't seem to be your case.
1
0
0
How to intelligently merge Django databases?
2
python,django,postgresql,django-models,django-database
0
2017-04-06T21:27:00.000
I need some help. I am new to Postgres and Django. I am creating a project in Django where there will be n clients, and their data is saved into the database on a monthly basis. My doubt is: should I go with only a single table and save all the data in it, or do I have the option to create individual tables dynamically as each user arrives and then save the values into those tables?
0
1
0.197375
0
false
43,369,451
1
324
1
0
0
43,367,732
In fact you do not need to create a special table for each customer. SQL databases are designed to keep all similar data in one table; it is much easier to work with them that way. For the moment I'd recommend reading about relational databases to better understand the ways to store data in them. Then you'll see how to better design your application and data storage.
1
0
0
Creating dynamic tables in postgres using django
1
python,django,postgresql
0
2017-04-12T11:01:00.000
I'm using Python 2.7 and flask framework with flask-sqlalchemy module. I always get the following exception when trying to insert : Exception Type: OperationalError. Exception Value: (1366, "Incorrect string value: \xF09... I already set MySQL database, table and corresponding column to utf8mb4_general_ci and I can insert emoji string using terminal. Flask's app config already contains app.config['MYSQL_DATABASE_CHARSET'] = 'utf8mb4', however it doesn't help at all and I still get the exception. Any help is appreciated
4
0
0
0
false
43,557,984
1
1,485
1
0
0
43,557,926
In your main config file, set the charset to 'utf8mb4'. You also have to edit the field in which you want to store emoji and set its collation to utf8mb4_unicode_ci.
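In a Flask-SQLAlchemy setup, the charset is usually passed on the connection URI itself; a minimal sketch (driver, credentials and database name are placeholders):

```python
# Sketch: force utf8mb4 on the connection, in addition to the table/column collation.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = (
    "mysql+pymysql://user:password@localhost/mydb?charset=utf8mb4"
)
db = SQLAlchemy(app)
```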
1
0
0
Flask SQLAlchemy can't insert emoji to MySQL
4
python,mysql,flask
0
2017-04-22T10:08:00.000
Can I perform a PATCH request to collection? Like UPDATE table SET foo=bar WHERE some>10 in SQL.
0
0
0
0
false
43,589,357
0
229
1
0
0
43,581,457
No, that is not supported, and probably should not be (see Andrey Shipilov's comment).
1
0
0
Bulk PATCH in python eve
1
python,mongodb,eve
0
2017-04-24T06:45:00.000
I use cqlengine with Django. On some occasions Cassandra throws an error indicating that the user has no permission to do something. Sometimes this is a select, sometimes an update, and sometimes something else. I have no code to share, because there is no specific line that does this. I am very sure that the user has all the permissions, and sometimes it works; if the user did not have the permissions it should always throw a no-permission error. So what might be the reasons behind this, and how can I find the problem?
0
0
0
0
false
43,630,653
1
292
2
0
0
43,622,277
Is the system_auth keyspace RF the same as the number of nodes? Did you already try to run a repair on the system_auth keyspace? If not, do so. To me it sounds like a consistency issue.
1
0
0
Cassandra: occasional permission errors
2
python,django,cassandra,permissions,cqlengine
0
2017-04-25T22:54:00.000
I use cqlengine with Django. On some occasions Cassandra throws an error indicating that the user has no permission to do something. Sometimes this is a select, sometimes an update, and sometimes something else. I have no code to share, because there is no specific line that does this. I am very sure that the user has all the permissions, and sometimes it works; if the user did not have the permissions it should always throw a no-permission error. So what might be the reasons behind this, and how can I find the problem?
0
0
0
0
false
43,645,204
1
292
2
0
0
43,622,277
If you have authentication enabled, make sure you set an appropriate RF for the keyspace system_auth (it should be equal to the number of nodes). Secondly, make sure the user you have created has the following permissions on all keyspaces: {'ALTER', 'CREATE', 'DROP', 'MODIFY', 'SELECT'}. If the user is a superuser, make sure you add 'AUTHORIZE' as a permission along with the ones listed above. Thirdly, you can set off a read-repair job for all the data in the system_auth keyspace by running: CONSISTENCY ALL; SELECT * FROM system_auth.users; SELECT * FROM system_auth.permissions; SELECT * FROM system_auth.credentials; Hope this resolves the issue!
1
0
0
Cassandra: occasional permission errors
2
python,django,cassandra,permissions,cqlengine
0
2017-04-25T22:54:00.000
I use sqlite3 in Python because my school computers don't allow us to install anything, so I use the preinstalled sqlite3 module. I'm working on a program whose back end relies on an sqlite3 database; however, the databases are created and stored on their computer. Is it possible for me to "host" an sqlite3 database on, let's say, a server and allow my script to access it remotely (so my script could edit the database from my school computer)? By the way, I'm using Python 3.x. EDIT: I made a database API that runs in Python 3, called TaliffDb; to install it, type pip3 install TaliffDb in your terminal. I'm working on documentation, but please do comment if you have any questions.
1
1
1.2
0
true
43,647,246
0
1,615
1
0
0
43,647,227
Write an API on the remote server, yes. This could be hosted by a web framework of your choice; you won't get a direct network connection to the file itself. A minimal sketch follows below.
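For illustration only, a minimal Flask endpoint exposing read access to a server-side SQLite file; the file path, table and column names are assumptions:

    # runs on the server next to the .db file; the client calls it over HTTP
    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)
    DB_PATH = 'data.db'  # hypothetical path to the SQLite file on the server

    @app.route('/items')
    def items():
        conn = sqlite3.connect(DB_PATH)
        rows = conn.execute('SELECT id, name FROM items').fetchall()
        conn.close()
        return jsonify(items=[{'id': r[0], 'name': r[1]} for r in rows])

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)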
1
0
0
How to connect to SQLite3 database in python remotely
1
python,python-3.x,sqlite
0
2017-04-27T01:37:00.000
I am deploying a Django app using Elastic Beanstalk on AWS. The app has a function whereby users can register their details. The problem is that when I make small changes to my app and deploy the new version, I lose the registered users, since their information isn't in my local database (only in the database on AWS). Is there any way to download the modifications to the database during production so that I can keep these changes when I redeploy? I'm not using AWS RDS; I simply bundle the .sqlite file with my source code and deploy to Elastic Beanstalk. Thanks in advance.
0
0
0
0
false
43,666,611
1
137
1
0
0
43,650,204
Don't bundle the development .sqlite file with the production stuff. It needs to have its own .sqlite file and you just need to run migrations on the production one.
1
0
0
Update sqlite database based on changes in production
1
python,django,sqlite,amazon-web-services,amazon-elastic-beanstalk
0
2017-04-27T06:31:00.000
I have written a piece of Python code that scrapes the odds of horse races from a bookmaker's site. I now wish to: run the code at prescribed, increasingly frequent times as the race draws closer; and store the scraped data in a database fit for extraction and statistical analysis in R. Apologies if the question is poorly phrased/explained - I'm entirely self-taught and so have no formal training in computer science. I don't know how to tell a piece of Python code to run itself every, say, n minutes, and I also have no idea how to correctly approach building such a database or what factors I should be considering. Can someone point me in the right direction for getting started on the above?
0
0
0
1
false
43,670,482
0
73
1
0
0
43,670,334
On Windows you can use Task Scheduler, or on Linux crontab. You can configure these to run Python with your script at set intervals. This way you don't have a Python script continuously running, which prevents a hang-up in a single call from impacting all subsequent attempts to scrape or store in the database. To store the data there are many options: write to a flat text file, save a Python binary (use shelve or pickle), or install an actual database like MySQL or PostgreSQL, among many others. Additionally, an ORM like SQLAlchemy may make controlling, querying, and updating your database a little easier, since it handles tables as objects and creates the SQL itself, so you don't need to code all queries as strings. A small sketch of the cron-plus-database approach follows below.
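A rough sketch under assumed names: the cron line runs the scraper every five minutes, and the script appends each scraped value to a local SQLite table (odds.db and its columns are made up):

    # crontab entry (edit with `crontab -e`): run the scraper every 5 minutes
    #   */5 * * * * /usr/bin/python3 /home/me/scrape_odds.py

    # scrape_odds.py
    import sqlite3
    from datetime import datetime

    def store(horse, odds):
        conn = sqlite3.connect('odds.db')
        conn.execute('CREATE TABLE IF NOT EXISTS odds '
                     '(horse TEXT, odds REAL, scraped_at TEXT)')
        conn.execute('INSERT INTO odds VALUES (?, ?, ?)',
                     (horse, odds, datetime.utcnow().isoformat()))
        conn.commit()
        conn.close()

    if __name__ == '__main__':
        store('Example Horse', 4.5)  # replace with the real scraping result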
1
0
0
How to run python code at prescribed time and store output in database
1
python,database-design,web-scraping
0
2017-04-28T01:07:00.000
I'm using boto3 and trying to upload files. It would be helpful if anyone could explain the exact difference between the file_upload() and put_object() S3 bucket methods in boto3. Is there any performance difference? Does either of them handle the multipart upload feature behind the scenes? What are the best use cases for both?
49
51
1.2
0
true
43,744,495
0
16,616
1
0
1
43,739,415
The upload_file method is handled by the S3 Transfer Manager; this means that it will automatically handle multipart uploads behind the scenes for you, if necessary. The put_object method maps directly to the low-level S3 API request. It does not handle multipart uploads for you; it will attempt to send the entire body in one request. A short comparison sketch follows below.
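A minimal side-by-side sketch; the bucket, keys and file names are placeholders:

    import boto3

    s3 = boto3.client('s3')

    # upload_file: managed transfer, switches to multipart for large files
    s3.upload_file('big_video.mp4', 'my-bucket', 'videos/big_video.mp4')

    # put_object: single low-level PUT request, whole body sent in one go
    with open('small_config.json', 'rb') as f:
        s3.put_object(Bucket='my-bucket', Key='configs/small_config.json', Body=f)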
1
0
0
What is the Difference between file_upload() and put_object() when uploading files to S3 using boto3
3
python,amazon-web-services,amazon-s3,boto3
0
2017-05-02T13:40:00.000
I process a report that consists of date fields. There are some instances wherein the date seen in the cell is not a number (how do I know? I use the isnumber() function in Excel to check whether a date value is really a number). Using a recorded macro, for all the date columns I apply Excel's text-to-columns function to make these dates pass the isnumber() validation, and then I continue further processing using my Python script. But now I need to replicate the text-to-columns action from Excel in Python openpyxl. I naively tried int(cell.value), but this didn't work. So to sum up: is there a way in Python to convert a date represented as text into a date represented as a number?
0
0
0
0
false
43,783,883
0
940
1
0
0
43,779,887
Sounds like you might want to take advantage of the type guessing in openpyxl. If so, open the workbook with guess_types=True and see if that helps. NB. this feature is more suited to working with text sources like CSV and is likely to be removed in future releases.
1
0
1
How to convert date formatted as string to a number in excel using openpyxl
1
python,excel,openpyxl
0
2017-05-04T10:07:00.000
When attempting to connect to a PostgreSQL database with ODBC I get the following error: ('08P01', '[08P01] [unixODBC]ERROR: Unsupported startup parameter: geqo (210) (SQLDriverConnect)') I get this with two different ODBC front-ends (pyodbc for Python and ODBC.jl for Julia), so it's clearly coming from the ODBC library itself. Is there a way to stop it from passing this "geqo" parameter? An example in pyodbc would be very useful. Thanks.
1
-1
-0.099668
0
false
43,872,906
0
913
1
1
0
43,789,951
Configure SSL Mode: allow in the PostgreSQL ODBC driver (driver version 9.3.400).
1
0
0
unsupported startup parameter geqo when connecting to PostgreSQL with ODBC
2
python,postgresql,odbc,julia
0
2017-05-04T18:08:00.000
I have some pretty big, multi-level documents with LOTS of fields (over 1500 fields). While I want to save the whole document in Mongo, I do not want to define the whole schema. Only a handful of fields are important, and I also need to index those "important" fields. Is this something that can be done? Thank you
0
1
0.197375
0
false
43,792,616
0
96
1
0
0
43,792,282
Nevermind... found it... (ALLOW_UNKNOWN)
1
0
0
Is it possible to define a partial schema Python-eve?
1
python,eve
0
2017-05-04T20:31:00.000
After doing a bit of research I am finding it difficult to find out how to use mysql timestamps in matplotlib. Mysql fields to plot X-axis: Field: entered Type: timestamp Null: NO Default: CURRENT TIMESTAMP Sample: 2017-05-08 18:25:10 Y-axis: Field: value Type: float(12,6) Null: NO Sample: 123.332 What date format is matplotlib looking for? How do I convert to this format? I found out how to convert from unix timestamp to a format that is acceptable with matplotlib, is unix timestamp better than the timestamp field type I am using? Should I convert my whole table to unix timestamps instead? Would appreciate any help!
0
0
1.2
1
true
43,860,724
0
202
1
0
0
43,859,988
You can use the datetime module. Although I use the now() function to extract datetimes from MySQL, I believe the format is the same. For instance: import datetime as dt. Put the datetime data into a list named datelist, and then use the datetime.strptime function to convert the date format to what you want: dates = [dt.datetime.strptime(d, '%Y-%m-%d %H:%M:%S') for d in datelist]. Finally, you can pass the list named dates to the plot's X-axis. I hope this helps; a fuller sketch follows below.
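A small end-to-end sketch under assumed names (a table readings with columns entered and value, pymysql as the driver, placeholder credentials):

    import pymysql
    import matplotlib.pyplot as plt

    conn = pymysql.connect(host='localhost', user='user',
                           password='password', db='mydb')
    with conn.cursor() as cur:
        cur.execute('SELECT entered, value FROM readings ORDER BY entered')
        rows = cur.fetchall()
    conn.close()

    # pymysql already returns datetime objects for TIMESTAMP columns,
    # so they can be passed straight to matplotlib
    dates = [r[0] for r in rows]
    values = [r[1] for r in rows]

    plt.plot(dates, values)
    plt.gcf().autofmt_xdate()  # tilt the date labels so they stay readable
    plt.show()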
1
0
0
Python - convert mysql timestamps type to matplotlib and graph
1
python,mysql,matplotlib
0
2017-05-09T02:17:00.000
I am trying to access a Spreadsheet on a Team Drive using gspread. It is not working. It works if the spreadsheet is on my Google Drive. I was wondering if gspread has the new Google Drive API v3 capability available to open spreadsheets on Team Drives. If so, how do I specify the fact I want to open a spreadsheet on a Google Team Drive and not my own Google drive? If not, when will that functionality be available? Thanks!
1
0
0
0
false
65,546,184
0
426
1
0
0
43,897,009
Make sure you're using the latest version of gspread. The one that is e.g. bundled with Google Colab is outdated: !pip install --upgrade gspread This fixed the error in gs.csv_import for me on a team drive.
1
0
0
Does gspread Support Accessing Spreadsheets on Team Drives?
2
python,google-apps-script,gspread
0
2017-05-10T15:34:00.000
I have an AWS Lambda implemented using Python/pymysql with an AWS RDS MySQL instance as the backend. It connects and works well, and I can also access the Lambda from my Android app. The problem I have is that after I insert a value into the RDS MySQL tables successfully using MySQL Workbench on my local machine and then run the Lambda function from the AWS console, it does not show the newly inserted value immediately. In the Python AWS Lambda code I am not closing the connection or cursor. But if I edit the Lambda function in the AWS console (by edit I mean just inserting a space) and run the Lambda again from the AWS console, it fetches the newly inserted value. How do I configure/code the Lambda to fetch DB values in real time?
3
0
0
0
false
56,725,577
0
878
1
0
0
43,944,404
AWS recommends making a global connection (before your handler function definition) in order to increase performance. The idea is that a new connection does not have to be established and the previous connection to the DB is reused, even when multiple instances of the Lambda run in quick succession. But if your use case involves referencing MySQL tables through Lambda, especially if the table is regularly updated, I'd recommend initiating the connection object locally (inside the handler function) and then closing it after you run your queries. This is much in tandem with @dnevins' response and was the only way it worked for me as well. A sketch of that pattern follows below. Hope this helps!
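For illustration, a minimal handler that opens and closes the connection per invocation; the host, credentials and query are placeholders:

    # assumes pymysql is bundled in the deployment package
    import pymysql

    def lambda_handler(event, context):
        conn = pymysql.connect(host='mydb.xxxxx.rds.amazonaws.com',
                               user='user', password='password', db='mydb')
        try:
            with conn.cursor() as cur:
                cur.execute('SELECT id, name FROM items ORDER BY id DESC LIMIT 10')
                rows = cur.fetchall()
        finally:
            conn.close()  # closing per call avoids serving a stale snapshot
        return {'rows': [list(r) for r in rows]}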
1
0
0
Python Aws lambda function not fetching the rds Mysql table value in realtime
2
python,amazon-web-services,aws-lambda,amazon-rds
0
2017-05-12T18:31:00.000
Ubuntu 14.04.3, PostgreSQL 9.6 Maybe I can get the plpythonu source code from the PostgreSQL 9.6 source code or somewhere else, put it into the /contrib directory, make it and CREATE EXTENSION after that!? Or something like that. Don't want to think that PostgreSQL reinstall is my only way.
2
2
0.379949
0
false
44,186,978
0
2,374
1
0
0
43,984,705
You can simply run, for Python 2, sudo apt-get install postgresql-contrib postgresql-plpython-9.6, or for Python 3, sudo apt-get install postgresql-contrib postgresql-plpython3-9.6. Then check that the extension is available: SELECT * FROM pg_available_extensions WHERE name LIKE '%plpython%'; To apply the extension to the database, use CREATE EXTENSION plpython2u; for Python 2 or CREATE EXTENSION plpython3u; for Python 3.
1
0
1
Is there a way to install PL/Python after the database has been compiled without "--with-python" parameter?
1
postgresql,plpython
0
2017-05-15T16:36:00.000
I'm having issues connecting to a working SQL\Express database instance using Robot Framework's DatabaseLibrary. If I use either Connect To Database with previously defined variables or Connect To Database Using Custom Params with a connection string, I get the following results: pyodbc: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied. (17) (SQLDriverConnect); [01000] [Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionOpen (Connect()). (53)') pymssql:: InterfaceError: Connection to the database failed for an unknown reason. The connection string I'm using is the following: 'DRIVER={SQL Server};SERVER=localhost\SQLExpress;UID=sa;PWD=mypass;DATABASE=MyDb' I copied several examples from guides and tutorials and all of them yield the same result, so my guess is that there is something wrong on my end, but I just can't figure out what. I can access the database using the Microsoft SQL Server Management Studio just fine, so the database is running. Any guidance will be greatly appreciated!
2
2
1.2
0
true
44,121,400
0
1,854
1
0
0
43,988,892
I was able to connect using @Goralight's approach: Connect To Database Using Custom Params pymssql ${DBConnect}, where ${DBConnect} contained database, user, password, host and port.
1
0
0
Cannot connect to SQL\Express with pyodbc/pymssql and Robot Framework
1
python,robotframework,pyodbc,pymssql
0
2017-05-15T21:11:00.000
I have an SQLite db running on my server. I want to access it using client-side JavaScript in the browser. Is this possible? As of now, I am using Python to access the db and calling Python scripts for db operations.
0
1
0.197375
0
false
44,001,568
1
158
1
0
0
44,000,687
It's not a good idea to allow clients to access the db directly. If you have to do it, be careful not to give the account you use full read/write access to the db, or any malicious client could erase, modify or steal information from it. An implementation with server-side client identification and a REST API to return or modify the DB is safer.
1
0
0
Access sqlite in server through client side javascript
1
javascript,python,sql,sqlite
0
2017-05-16T11:51:00.000
I am trying to install mysqlclient on mac to use mysql in a django project. I have made sure that setup tools is installed and that mysql connector c is installed as well. I keep getting the error Command "python setup.py egg_info" failed with error code 1 in. This is my first django project since switching from rails. Is there something I am missing? I am using python 3 and I use pip install mysqlclient.
0
0
1.2
0
true
44,036,652
1
599
1
0
0
44,008,037
I was able to fix this by running pip install mysql. I do not understand why this worked because I already had MySQL installed on my system and had been using it. I am going to assume it is because Python uses environments and MySQL wasn't installed in the environment but I would like to know for sure.
1
0
0
Mysqlclient fails to install
1
python,mysql,django,pip
0
2017-05-16T17:32:00.000
I am looking for a method for hiding all rows in an excel sheet using python's openpyxl module. I would like, for example, to hide all rows from the 10th one to the end. Is it possible in openpyxl? For instance in xlsxwriter there is a way to hide all unused rows. So I am looking for a similar funcitonality in openpyxl, but I can not find it in docs or anywhere else, so any help will be much appreciated. I know it can be easily done by iterating over rows, but this approach is awfully slow, so I would like to find something faster.
1
1
1.2
0
true
45,254,364
0
413
1
0
0
44,028,186
As far as I know, there is no such feature in openpyxl at the moment. However, this can easily be done in an optimized way with the xlsxwriter module; a sketch follows below.
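A brief sketch of how this might be done with xlsxwriter: set_default_row(hide_unused_rows=True) hides every row that was never written, so only the populated rows stay visible (the file name and data are made up):

    import xlsxwriter

    workbook = xlsxwriter.Workbook('hidden_rows.xlsx')
    worksheet = workbook.add_worksheet()

    # write the first 10 rows, then hide everything that was never used
    for row in range(10):
        worksheet.write(row, 0, 'row %d' % row)

    worksheet.set_default_row(hide_unused_rows=True)
    workbook.close()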
1
0
0
python openpyxl: hide all rows from to the end
2
python,excel,openpyxl,xlsxwriter
0
2017-05-17T14:48:00.000
Getting neo4j.v1.api.CypherError: Internal error - should have used fall back to execute query, but something went horribly wrong when using python neomodel client with neo4j community edition 3.2.0 server. And the neo4j server logs has the below errors: 2017-05-16 12:54:24.187+0000 ERROR [o.n.b.v.r.ErrorReporter] Client triggered an unexpected error [UnknownError]: Internal error - should have used fall back to execute query, but something went horribly wrong, reference 4c32d6e0-a66a-4db4-830c-b8d03ce6f1e3. 2017-05-16 12:54:24.187+0000 ERROR [o.n.b.v.r.ErrorReporter] Client triggered an unexpected error [UnknownError]: Internal error - should have used fall back to execute query, but something went horribly wrong, reference 4c32d6e0-a66a-4db4-830c-b8d03ce6f1e3. Internal error - should have used fall back to execute query, but something went horribly wrong org.neo4j.cypher.internal.ir.v3_2.exception.CantHandleQueryException: Internal error - should have used fall back to execute query, but something went horribly wrong
0
2
1.2
0
true
44,043,172
0
213
1
0
0
44,043,171
This seems to be an issue with neo4j version 3.2.0. Setting cypher.default_language_version to 3.1 in neo4j.conf and restarting the server should fix this.
1
0
0
neo4j.v1.api.CypherError: Internal error - should have used fall back to execute query, but something went horribly wrong
1
python,python-3.x,neo4j
0
2017-05-18T09:00:00.000
I am building a python 3.6 AWS Lambda deploy package and was facing an issue with SQLite. In my code I am using nltk which has a import sqlite3 in one of the files. Steps taken till now: Deployment package has only python modules that I am using in the root. I get the error: Unable to import module 'my_program': No module named '_sqlite3' Added the _sqlite3.so from /home/my_username/anaconda2/envs/py3k/lib/python3.6/lib-dynload/_sqlite3.so into package root. Then my error changed to: Unable to import module 'my_program': dynamic module does not define module export function (PyInit__sqlite3) Added the SQLite precompiled binaries from sqlite.org to the root of my package but I still get the error as point #2. My setup: Ubuntu 16.04, python3 virtual env AWS lambda env: python3 How can I fix this problem?
23
6
1
0
false
44,076,628
0
7,936
2
0
0
44,058,239
This isn't a solution, but I have an explanation why. Python 3 has support for sqlite in the standard library (stable to the point of pip knowing and not allowing installation of pysqlite). However, this library requires the sqlite developer tools (C libs) to be on the machine at runtime. Amazon's linux AMI does not have these installed by default, which is what AWS Lambda runs on (naked ami instances). I'm not sure if this means that sqlite support isn't installed or just won't work until the libraries are added, though, because I tested things in the wrong order. Python 2 does not support sqlite in the standard library, you have to use a third party lib like pysqlite to get that support. This means that the binaries can be built more easily without depending on the machine state or path variables. My suggestion, which you've already done I see, is to just run that function in python 2.7 if you can (and make your unit testing just that much harder :/). Because of the limitations (it being something baked into python's base libs in 3) it is more difficult to create a lambda-friendly deployment package. The only thing I can suggest is to either petition AWS to add that support to lambda or (if you can get away without actually using the sqlite pieces in nltk) copying anaconda by putting blank libraries that have the proper methods and attributes but don't actually do anything. If you're curious about the latter, check out any of the fake/_sqlite3 files in an anaconda install. The idea is only to avoid import errors.
1
0
0
sqlite3 error on AWS lambda with Python 3
8
python-3.x,amazon-web-services,sqlite,aws-lambda
1
2017-05-18T21:43:00.000
I am building a python 3.6 AWS Lambda deploy package and was facing an issue with SQLite. In my code I am using nltk which has a import sqlite3 in one of the files. Steps taken till now: Deployment package has only python modules that I am using in the root. I get the error: Unable to import module 'my_program': No module named '_sqlite3' Added the _sqlite3.so from /home/my_username/anaconda2/envs/py3k/lib/python3.6/lib-dynload/_sqlite3.so into package root. Then my error changed to: Unable to import module 'my_program': dynamic module does not define module export function (PyInit__sqlite3) Added the SQLite precompiled binaries from sqlite.org to the root of my package but I still get the error as point #2. My setup: Ubuntu 16.04, python3 virtual env AWS lambda env: python3 How can I fix this problem?
23
1
0.024995
0
false
49,342,276
0
7,936
2
0
0
44,058,239
My solution may or may not apply to you (as it depends on Python 3.5), but hopefully it may shed some light for similar issue. sqlite3 comes with standard library, but is not built with the python3.6 that AWS use, with the reason explained by apathyman and other answers. The quick hack is to include the share object .so into your lambda package: find ~ -name _sqlite3.so In my case: /home/user/anaconda3/pkgs/python-3.5.2-0/lib/python3.5/lib-dynload/_sqlite3.so However, that is not totally sufficient. You will get: ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory Because the _sqlite3.so is built with python3.5, it also requires python3.5 share object. You will also need that in your package deployment: find ~ -name libpython3.5m.so* In my case: /home/user/anaconda3/pkgs/python-3.5.2-0/lib/libpython3.5m.so.1.0 This solution is likely not work if you are using _sqlite3.so that is built with python3.6, because the libpython3.6 built by AWS will likely not support this. However, this is just my educational guess. If anyone has successfully done, please let me know.
1
0
0
sqlite3 error on AWS lambda with Python 3
8
python-3.x,amazon-web-services,sqlite,aws-lambda
1
2017-05-18T21:43:00.000
I haven't been able to find any direct answers, so I thought I'd ask here. Can ETL, say for example AWS Glue, be used to perform aggregations to lower the resolution of data to AVG, MIN, MAX, etc over arbitrary time ranges? e.g. - Given 2000+ data points of outside temperature in the past month, use an ETL job to lower that resolution to 30 data points of daily averages over the past month. (actual use case of such data aside, just an example). The idea is to perform aggregations to lower the resolution of data to make charts, graphs, etc display long time ranges of large data sets more quickly, as we don't need every individual data point that we must then dynamically aggregate on the fly for these charts and graphs. My research so far only suggests that ETL be used for 1 to 1 transformations of data, not 1000 to 1. It seems ETL is used more for transforming data to appropriate structure to store in a db, and not for aggregating over large data sets. Could I use ETL to solve my aggregation needs? This will be on a very large scale, implemented with AWS and Python.
0
0
1.2
0
true
44,107,589
0
588
1
0
0
44,074,550
The 'T' in ETL stands for 'Transform', and aggregation is one of most common ones performed. Briefly speaking: yes, ETL can do this for you. The rest depends on specific needs. Do you need any drill-down? Increasing resolution on zoom perhaps? This would affect the whole design, but in general preparing your data for presentation layer is exactly what ETL is used for.
1
0
0
Using ETL for Aggregations
1
python,amazon-web-services,etl,aws-glue
0
2017-05-19T16:09:00.000
I am building a web application that allows users to login and upload data files that would eventually be used to perform data visualisation and data mining features - Imagine a SAS EG/Orange equivalent on the web. What are the best practices to store these files (in a database or on file) to facilitate efficient retrieval and processing of the data and the pros and cons of each method?
0
0
0
0
false
44,206,133
1
299
1
0
0
44,094,933
This depends on what functionality you can offer. Many very interesting data mining tools will read raw data files only, so storing the data in a database does not help you anything. But then you won't want to run them "on the web" anyway, as they easily eat all your resources. Either way, first get your requirements straight and explicit, then settle on the analysis tools, then decide on the storage backend depending on what the tools can use.
1
0
0
Storing data files on Django application
1
python,django,database,data-visualization,data-mining
0
2017-05-21T08:50:00.000
I am using Python 2.7 and SQLite3. When I start working with the DB I want to check whether my database is empty or not, i.e. whether it already has any tables. My idea is to run a simple SELECT against any table and wrap this select in a try/except block, so that if an exception is raised then my DB is empty. Maybe someone knows a better way of checking?
2
4
1.2
0
true
44,098,371
0
1,822
1
0
0
44,098,235
SELECT name FROM sqlite_master, run while connected to your database, will give you all the table names. You can then do a fetchall and check the size, or even the contents, of the list. No try/except is necessary (the list will be empty if the database doesn't contain any tables). A short sketch follows below.
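A minimal sketch of that check; the database file name is a placeholder:

    import sqlite3

    conn = sqlite3.connect('mydatabase.db')
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
    conn.close()

    if not tables:
        print('Database is empty')
    else:
        print('Tables:', [t[0] for t in tables])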
1
0
0
How could I check - does my SQLite3 database is empty?
1
python,sqlite
0
2017-05-21T14:50:00.000
I was running Django 1.11 with Python 3.5 and I decided to upgrade to Python 3.6. Most things worked well, but I am having issues connecting to AWS S3. I know that they have a new boto version, boto3, and that django-storages is a little outdated, so now there is django-storages-redux. I've been trying multiple combinations of boto/boto3 and django-storages-redux/django-storages to see if it works. But I'm getting a lot of errors, from the SSL connection failing to the whole website being offline due to server errors. The newest is my website throwing a 400 Bad Request for all URLs. My app does run on Python 3.5, so I'm confident that the issue is around collectstatic and S3. Is there anybody here who made a similar update work and can tell me what configuration was used? Thanks a lot!
0
-1
1.2
0
true
44,122,528
1
149
1
0
0
44,121,989
Found the issue. Django-storages-redux was temporarily replacing django-storages since its development had been interrupted. Now the django-storages team has resumed supporting it. That means that the correct configuration to use is: django-storages + boto3
1
0
0
Django: Upgrading to python3.6 with Amazon S3
1
django,python-3.x,amazon-s3,boto
0
2017-05-22T20:53:00.000
I'm making a web crawler in Python that collects redirects/links, adds them to a database, and enters them as a new row if the link doesn't already exist. I would like to use multi-threading but am having trouble because I have to check in real time whether there is an entry with a given URL. I was initially using sqlite3 but realised I can't use it simultaneously on different threads. I don't really want to use MySQL (or something similar) as it needs more disk space and runs as a separate server. Is there any way to make sqlite3 work with multiple threads?
3
0
0
0
false
44,123,745
0
7,557
1
0
0
44,123,678
One solution could be to acquire a lock in your program around all database access. In this way the multiple threads or processes will wait for the others to finish inserting a link before performing their own request; a sketch follows below.
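A rough sketch of the lock-around-sqlite idea; the table and column names are made up:

    import sqlite3
    import threading

    # one shared connection plus a lock guarding every use of it
    db_lock = threading.Lock()
    conn = sqlite3.connect('links.db', check_same_thread=False)
    conn.execute('CREATE TABLE IF NOT EXISTS links (url TEXT PRIMARY KEY)')

    def add_if_new(url):
        """Return True if the URL was new and has been inserted."""
        with db_lock:
            exists = conn.execute(
                'SELECT 1 FROM links WHERE url = ?', (url,)).fetchone()
            if exists is None:
                conn.execute('INSERT INTO links (url) VALUES (?)', (url,))
                conn.commit()
            return exists is None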
1
0
0
Getting SQLite3 to work with multiple threads
3
python,multithreading,sqlite,multiprocessing
0
2017-05-22T23:47:00.000
I have pretty simple model. User defines url and database name for his own Postgres server. My django backend fetches some info from client DB to make some calculations, analytics and draw some graphs. How to handle connections? Create new one when client opens a page, or keep connections alive all the time?(about 250-300 possible clients) Can I use Django ORM or smth like SQLAlchemy? Or even psycopg library? Does anyone tackle such a problem before? Thanks
0
0
0
0
false
44,139,838
1
77
1
0
0
44,139,772
In your case, I would rather go with Django's internal implementation and use the Django ORM, as you will not need to worry about handling connections and the different exceptions that may arise in your own DAO-style implementation. As per your requirement to access each user's database, there is still overhead for individual users to create a db and set up something to connect with your codebase. So I think sticking with Django will be more practical.
1
0
0
Handle connections to user defined DB in Django
1
python,django,database,postgresql,orm
0
2017-05-23T16:00:00.000
Trying to install cx_Oracle on Solaris11U3 but getting ld: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error python setup.py build running build running build_ext building 'cx_Oracle' extension cc -DNDEBUG -KPIC -DPIC -I/oracle/database/rdbms/demo -I/oracle/database/rdbms/public -I/usr/include/python2.7 -c cx_Oracle.c -o build/temp.solaris-2.11-sun4v.32bit-2.7-11g/cx_Oracle.o -DBUILD_VERSION=5.2.1 "SessionPool.c", line 202: warning: integer overflow detected: op "<<" cc -G build/temp.solaris-2.11-sun4v.32bit-2.7-11g/cx_Oracle.o -L/oracle/database/lib -L/usr/lib -lclntsh -lpython2.7 -o build/lib.solaris-2.11-sun4v.32bit-2.7-11g/cx_Oracle.so ld: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error: command 'cc' failed with exit status 2 Tried all available information on the internet: Installed gcc Installed solarisstudio12.4 Installed instantclient-basic-solaris.sparc64-12.2.0.1.0, instantclient-odbc-solaris.sparc64-12.2.0.1.0 Set LD_LIBRARY_PATH to oracle home directory:instantclient_12_2/ Same issue seen while installing DBD:Oracle perl module.
0
0
0
0
false
44,171,743
0
268
1
1
0
44,155,943
You cannot mix 32-bit and 64-bit together. Everything (Oracle client, Python, cx_Oracle) must be 32-bit or everything must be 64-bit. The error above looks like you are trying to mix a 64-bit Oracle client with a 32-bit Python.
1
0
0
Python module cx_Oracle ld installation issue on Solaris11U3 SPARC: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error
1
python,cx-oracle
0
2017-05-24T10:37:00.000
I recently ran inspectdb on our old database, which is on MySQL. We want to move to Postgres. Now we want the inspected models to migrate to a different schema; is there a migrate command to achieve this, so that different apps use different schemas in the same database?
0
0
1.2
0
true
47,838,958
1
204
1
0
0
44,161,614
so simple! python manage.py migrate "app_name"
1
0
0
how to specify an app to migrate to a schema django
1
postgresql,python-3.5,django-migrations,django-1.8,django-postgresql
0
2017-05-24T14:39:00.000
I have a table which I am working on and it contains 11 million rows or thereabouts... I need to run a migration on this table, but since Django tries to store it all in cache I run out of RAM or disk space, whichever comes first, and it comes to an abrupt halt. I'm curious to know if anyone has faced this issue and has come up with a solution to essentially "paginate" migrations, maybe into blocks of 10-20k rows at a time? Just to give a bit of background, I am using Django 1.10 and Postgres 9.4 and I want to keep this automated if possible (which I still think it can be). Thanks, Sam
5
5
1.2
0
true
44,167,568
1
995
1
0
0
44,167,386
The issue comes from PostgreSQL, which rewrites each row when a new column (field) with a default is added. What you would need to do is write your own migration in the following way: add the new column with null=True (in this case the data will not be rewritten and the migration will finish pretty fast), migrate it, then add the default value and migrate again. That is basically a simple pattern for dealing with adding a new column to a huge Postgres table.
1
0
0
Django migration 11 million rows, need to break it down
1
python,django,postgresql,django-migrations
0
2017-05-24T19:55:00.000
in my app I have a mixin that defines 2 fields like start_date and end_date. I've added this mixin to all table declarations which require these fields. I've also defined a function that returns filters (conditions) to test a timestamp (e.g. now) to be >= start_date and < end_date. Currently I'm manually adding these filters whenever I need to query a table with these fields. However sometimes me or my colleagues forget to add the filters, and I wonder whether it is possible to automatically extend any query on such a table. Like e.g. an additional function in the mixin that is invoked by SQLalchemy whenever it "compiles" the statement. I'm using 'compile' only as an example here, actually I don't know when or how to best do that. Any idea how to achieve this? In case it works for SELECT, does it also work for INSERT and UPDATE? thanks a lot for your help Juergen
1
0
0
0
false
44,395,701
1
182
1
0
0
44,207,726
I tried extending Query but had a hard time. Eventually (and unfortunately) I moved back to my previous approach of little helper functions returning filters and applying them to queries. I still wish I would find an approach that automatically adds certain filters if a table (Base) has certain columns. Juergen
1
0
0
sqlalchemy automatically extend query or update or insert upon table definition
2
python,sqlalchemy
0
2017-05-26T18:06:00.000
Yesterday, I installed an Apache web server and phpMyAdmin on my Raspberry Pi. How can I connect my Raspberry Pi to the databases in phpMyAdmin with Python? Can I use MySQL? Thanks, I hope you understand my question and sorry for my bad English.
0
0
0
0
false
44,215,522
0
110
1
0
0
44,215,404
Your question is quite unclear. But from my understanding, here is what you should try doing: (Note: I am assuming you want to connect your Pi to a database to collect data and store in an IoT based application) Get a server. Any Basic server would do. I recommend DigitalOcean or AWS LightSail. They have usable servers for just $5 per month. I recommend Ubuntu 16.04 for ease of use. SSH into the server with your terminal with the IP address you got when you created the server Install Apache, MySQL, Python, PHPMyAdmin on the server. Write your web application in any language/framework you want. Deploy it and write a separate program to make HTTP calls to the said web server. MySQL is the Database server. Python is the language that is used to execute any instructions. PHPMyAdmin is the interface to view MySQL Databases and Tables. Apache is the webserver that serves the application you have written to deal with requests. I strongly recommend understanding the basics of Client-Server model of computing over HTTP. Alternatively, you could also use the approach of Using a DataBase-as-a-service from any popular cloud service provider(Eg., AWS RDS), to make calls directly into the DB.
1
0
0
connect my raspbery-pi to MySQL
1
python,mysql,apache,raspberry-pi
1
2017-05-27T09:52:00.000
I'm currently programming in Python: a graphical interface, and my own simple client-server based on sockets (also in Python). The main purpose here is to receive a message (composed of several fields), apply any changes to it, and then send the message back to the client. What I want to achieve is to link the modification of a field of this message with some script (I just saw radamsa, which could do the job; I still need to test it). For example the message John/smith/London would become John/(request by script)/London. I don't know if it's even possible to use just a part of the code of something like sqlmap (or something else, I don't mind), without the whole thing. I hope I have given enough details; if not, feel free to ask for any =) Edit: the use of sqlmap is not mandatory, it's just one of the few fuzzing scripts I was aware of.
0
0
0
0
false
44,585,660
0
156
1
0
0
44,262,826
Update: I finally managed to do what I had in mind. In sqlmap, the "dictionary" can be found in the "/xml" directory. By using several greps I was able to follow the flow of payload creation, then build the global payload and finally split it into the real SQL payloads.
1
0
0
How to link sqlmap with my own script
1
python,python-3.x,sockets,unix,sqlmap
0
2017-05-30T12:51:00.000
I'm calling a database, and returning a datetime.datetime object as the 3rd item in a tuple. The SQL statement is: SELECT name,name_text,time_received FROM {}.{} WHERE DATE(time_received)=CURRENT_DATE LIMIT 1 If I do print(mytuple[2]) it returns something like: 2017-05-31 17:21:19+00:00 but always with that "+00:00" at the end for every value. How do I remove that trailing "+00:00" from the datetime.datetime object? I've tried different string-stripping methods like print(mytuple[2][:-6]) but it gives me an error saying that datetime.datetime object is not subscriptable. Thanks for any help!
2
0
0
0
false
44,292,411
0
4,484
1
0
0
44,292,280
I imagine that in order to use string-stripping methods you must first convert it to a string, strip it, and then convert it back to whatever format you want to use. A couple of sketches follow below. Cheers
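For illustration, two common ways to drop the UTC offset: format the value as a string, or strip the tzinfo so the object itself prints without it (the datetime below stands in for mytuple[2]):

    from datetime import datetime, timezone

    dt = datetime(2017, 5, 31, 17, 21, 19, tzinfo=timezone.utc)

    # option 1: format it as a string without the offset
    print(dt.strftime('%Y-%m-%d %H:%M:%S'))   # 2017-05-31 17:21:19

    # option 2: make the datetime naive so it no longer carries +00:00
    naive = dt.replace(tzinfo=None)
    print(naive)                              # 2017-05-31 17:21:19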
1
0
1
How to remove "+00:00" from end of datetime.datetime object. (not remove time)
3
sql,database,python-3.x,datetime,psycopg2
0
2017-05-31T18:31:00.000
There is a data model (SQL) where one input table is fed into the prediction model (Python), some output variables are generated into another table, and a final join is done between the input table and the output table to get the complete table (SQL). Note: there is a chance that the output table does not have predicted values for every primary key in the input table. Is it a good approach to create one table that is fed as input to the prediction model and have that same table updated with the output variables, i.e. we pass those columns as all null on input and they are updated as the values are predicted? If not, what are the disadvantages of doing so?
0
1
1.2
0
true
44,295,921
0
123
1
0
0
44,295,661
It's difficult to say without knowing what the database load and latency requirements are. I would typically avoid using the same table for source and output: I would worry about the cost and contention of simultaneously reading from the table and then writing back to the table but this isn't going to be a problem for low load scenarios. (If you're only running a single process that is doing both the reading and writing then this isn't a problem at all.) Separating Input/Output may also make the system more understandable, especially if something unexpected goes wrong.
1
0
0
Best way to use input and output tables
1
python,sql,postgresql
0
2017-05-31T22:30:00.000
I have set up a database using MySQL Community Edition to log serial numbers of HDDs and file names. I am instructed to find a way to integrate Python scripting with the database so that the logs can be entered through Python programming instead of manually (as doing it manually would take a ridiculous amount of time). PyCharm was specified as the programming tool that will be used. I have done research for the past few days and haven't found any solid way this should be done; the Python connector doesn't appear to work with PyCharm. Any suggestions?
0
0
1.2
0
true
44,332,632
0
173
1
0
0
44,331,043
Reading between the lines here, I believe what you are being asked to do is called ETL. If somebody asked me to do the above, my approach would be: force an agreed-upon format for the incoming data (probably a .csv); write a Python application to (a) read the data from the csv, (b) condition the data if necessary, and (c) write the results to the database. PyCharm would be the tool I would use to write my Python code; it has nothing to do with MySQL itself. The ETL process should be initiated from the command line so you can automate it. So you need to research the following for Python: reading files from a csv, parsing command-line arguments, and connecting and writing to a MySQL database. A rough sketch follows below. Again, I'm doing some guessing here as your question is vague. SteveJ
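A condensed sketch of that ETL flow; the CSV layout, table name and credentials are all assumptions:

    # usage: python load_logs.py drives.csv
    import csv
    import sys
    import pymysql

    def main(csv_path):
        conn = pymysql.connect(host='localhost', user='user',
                               password='password', db='inventory')
        with open(csv_path) as f, conn.cursor() as cur:
            for row in csv.DictReader(f):   # expects columns: serial, filename
                cur.execute('INSERT INTO hdd_logs (serial, filename) VALUES (%s, %s)',
                            (row['serial'], row['filename']))
        conn.commit()
        conn.close()

    if __name__ == '__main__':
        main(sys.argv[1])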
1
0
0
What is the best way to integrate Pycharm Python into a MySQL CE Database?
1
python,mysql,pycharm,mysql-python
0
2017-06-02T14:12:00.000
I wonder whether it is a good habit to have some logic in the MySQL database (triggers etc.) instead of in the Django backend. I'm aware that some functionality can be implemented both in the backend and in the database, but I would like to follow good practices. I'm also not sure whether I should do some things manually or whether the whole database should be generated by Django (is that possible)? What are the best rules for doing this as well as possible? I would like to know the opinion of experienced people.
0
1
0.197375
0
false
44,354,531
1
133
1
0
0
44,353,532
It is true that if you used the database for your business logic you could get the maximum possible performance and security optimizations. However, you would also risk many things, such as no separation of concerns, being bound to the database vendor, etc. Also, whatever logic you write in your database won't be version controlled with your app; thus, whenever you change your database, you will have to create all those things once again. Instead, use the Django ORM. It will create and manage your database based on your models by itself. As a result, whenever you recreate your database, you will just have to run migrations with one single command and you are done. This will cover most situations, and whenever you need the speed of stored procedures, the Django ORM has you covered as well. In short, I believe that business logic should be kept out of the database as much as possible.
1
0
0
Django - backend logic vs database logic
1
python,mysql,django,django-models,django-rest-framework
0
2017-06-04T11:24:00.000
I am writing software that manipulates Excel sheets. So far, I've been using xlrd and xlwt to do so, and everything works pretty well. It opens a sheet (xlrd) and copies select columns to a new workbook (xlwt) It then opens the newly created workbook to read data (xlrd) and does some math and formatting with the data (which couldn't be done if the file isn't saved once) - (xlwt saves once again) However, I am now willing to add charts in my documents, and this function is not supported by xlwt. I have found that xlsxwriter does, but this adds other complications to my code: xlsxwriter only has xlsxwriter.close(), which saves AND closes the document. Does anyone know if there's any workaround for this? Whenever I use xlsxwriter.close(), my workbook object containing the document I'm writing isn't usable anymore.
4
4
1.2
0
true
44,417,110
0
1,717
1
0
0
44,387,732
Fundamentally, there is no reason you need to read twice and save twice. For your current (no charts) process, you can just read the data you need using xlrd; then do all your processing; and write once with xlwt. Following this workflow, it is a relatively simple matter to replace xlwt with XlsxWriter.
1
0
0
Saving XlsxWriter workbook more than once
1
python,excel,xlrd,xlsxwriter
0
2017-06-06T10:37:00.000
I'm running pandas read_sql_query and cx_Oracle 6.0b2 to retrieve data from an Oracle database I've inherited to a DataFrame. A field in many Oracle tables has data type NUMBER(15, 0) with unsigned values. When I retrieve data from this field the DataFrame reports the data as int64 but the DataFrame values have 9 or fewer digits and are all signed negative. All the values have changed - I assume an integer overflow is happening somewhere. If I convert the database values using to_char in the SQL query and then use pandas to_numeric on the DataFrame the values are type int64 and correct. I'm using Python 3.6.1 x64 and pandas 0.20.1. _USE_BOTTLENECK is False. How can I retrieve the correct values from the tables without using to_char?
0
1
1.2
1
true
44,519,380
0
577
1
0
0
44,392,676
Removing pandas and just using cx_Oracle still resulted in an integer overflow so in the SQL query I'm using: CAST(field AS NUMBER(19)) At this moment I can only guess that any field between NUMBER(11) and NUMBER(18) will require an explicit CAST to NUMBER(19) to avoid the overflow.
1
0
0
pandas read_sql_query returns negative and incorrect values for Oracle Database number field containing positive values
1
python,sql,oracle,pandas,dataframe
0
2017-06-06T14:24:00.000
I have a PostgreSQL database that is being used by a front-end application built with Django, but being populated by a scraping tool in Node.js. I have made a sequence that I want to use across two different tables/entities, which can be accessed by a function (nextval(serial)) and is called on every insert. This is not the primary key for these tables, but simply a way to maintain order through some metadata. Using it in Node.js during the insertion of the data into the tables is trivial, as I am using raw SQL queries. However, I am struggling with how to represent this using Django models. There does not seem to be any way to associate this Postgres function with a model's field. Question: Is there a way to use a Postgres function as the default value of a Django model field?
3
1
1.2
0
true
44,806,040
1
2,587
1
0
0
44,444,385
My eventual solution: override the save method of the model, using a raw query to SELECT nextval('serial') inside the override, setting that as the value of the necessary field, and then calling save on the parent (super(PARENT, self).save()). A sketch of this follows below.
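A rough sketch of that override; the model and field names are placeholders, while 'serial' is the sequence name from the question:

    from django.db import connection, models

    class Record(models.Model):
        order_no = models.BigIntegerField(editable=False, null=True)

        def save(self, *args, **kwargs):
            if self.order_no is None:
                with connection.cursor() as cursor:
                    cursor.execute("SELECT nextval('serial')")
                    self.order_no = cursor.fetchone()[0]
            super(Record, self).save(*args, **kwargs)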
1
0
0
Postgres Sequences as Default Value for Django Model Field
2
python,django,postgresql,orm
0
2017-06-08T19:47:00.000
In my file there are MAX and MIN formulas in a row. Sample cells: | A | B | C | D | E | F | G | H | with the row of formulas: | MAX | MIN | MIN | MAX | MIN | MIN | MAX | MIN | MIN |. When the Excel sheet is opened, a green triangle is displayed with the warning message "Inconsistent Formula".
0
1
0.197375
0
false
44,523,601
0
523
1
0
0
44,521,638
This is a standard Excel warning to alert users to the fact that repeated and adjacent formulas are different since that may be an error. It isn't possible to turn off this warning in XlsxWriter.
1
0
0
How to Ignore "Inconsistent Formula" warning showing in generated .xlsx file using the python xlsxwriter?
1
python,excel,python-2.7,xlsxwriter
0
2017-06-13T12:31:00.000
I'm making an ajax request that calls a PHP file, which in turn calls a Python file. My main problem is with the imports in the Python scripts. I'm working locally, on Linux. When I run "$ php myScript.php" (which calls the Python script inside) it works, but when the call comes from ajax the imports in the Python files do not work. So I moved some libraries into the current folder of the PHP and Python scripts. First, the import only works if the library is in a folder; it's impossible to call a function from my other Python script. Then I can't do "import tweepy" even though the library is in the current folder. But pymongo worked, because I do "from pymongo import MongoClient". All my scripts work when called from PHP or when executed with Python through the command line. Those libraries are also in my normal Python folder on Linux, but through the ajax call it never looks there. I specify this at the beginning of each Python file: "#!/usr/bin/env python2.7". Here is the layout of my files folder: script.php, script.py, pymongo [FOLDER], tweepy [FOLDER]. PS: Sorry, English is not my main language.
0
0
0
0
false
44,577,194
1
104
1
0
0
44,558,269
I finally succeeded. In fact tweepy uses the library called "six", which was not in my current folder. So I imported all the Python libraries into my folder, and I no longer get an error. But I still don't understand why Python does not search for the library in its normal folder instead of the current folder.
1
0
0
executing python throw ajax , import does not work
1
php,python,ajax,import,directory
1
2017-06-15T03:56:00.000
I have data in CSV in which one column is for the fiscal year, e.g. 2017 - 2019. Please specify how to form the CREATE TABLE query and the INSERT query with the fiscal year as a field.
0
0
1.2
0
true
44,560,766
0
130
1
0
0
44,560,315
Since it seems like a range of years for a fiscal position, I would suggest using two integer fields to store the data. Years have 4 digits, so use type SMALLINT; this way you use half of the storage space of an INT field.
1
0
0
How to store fiscal year (eg. 2017-2020) in mysql?
1
python,mysql,csv
0
2017-06-15T06:37:00.000
I'm trying to load an XML file into Google BigQuery; can anyone please help me solve this? I know we can load JSON, CSV and AVRO files into BigQuery. I need a suggestion: is there any way I can load an XML file into BigQuery?
0
1
0.197375
0
false
44,590,333
0
834
1
0
1
44,588,770
The easiest option is probably to convert your XML file either to CSV or to JSON and then load it. Without knowing the size and shape of your data it's hard to make a recommendation, but you can find a variety of converters if you search online for them. A tiny conversion sketch follows below.
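For illustration only, a tiny XML-to-newline-delimited-JSON conversion with the standard library; the element and field names (record, name, value) are assumptions about the XML layout:

    import json
    import xml.etree.ElementTree as ET

    tree = ET.parse('input.xml')
    with open('output.json', 'w') as out:
        for rec in tree.getroot().findall('record'):
            row = {'name': rec.findtext('name'), 'value': rec.findtext('value')}
            out.write(json.dumps(row) + '\n')  # one JSON object per line for BigQuery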
1
0
0
how to load xml file into big query
1
xml,python-2.7,google-bigquery
0
2017-06-16T12:00:00.000
Firstly, this question isn't a request for code suggestions; it's more of a question about the general approach others would take for a given problem. I've been given the task of writing a web application in Python to allow users to check the content of media files held on a shared server. There will also likely be a Postgres database from which records for each file will be gathered. I want the web app to: 1) suggest the next file to check (from files that have yet to be checked) and have a link to the next unchecked file once the result of the previous check has been submitted; 2) prevent the app from suggesting the same file to multiple users simultaneously. If it was just one user checking the files it would be easier, but I'm having trouble conceptualising how I'm going to achieve the two points above with multiple simultaneous users. As I say, this isn't a code request; I'm just interested in what approach/tools others feel would be best suited to this type of project. If there are any Python libraries that could be useful I'd be interested to hear any recommendations. Thanks
0
1
1.2
0
true
44,608,243
1
49
1
0
0
44,606,342
These requirements are more or less straightforward to meet, given that you will have a persistent database that can share the state of each file with multiple sessions - and even multiple deploys - of your system, and that is more or less a given with Python + PostgreSQL. I'd suggest you create a Python class with a few fields you can use for the whole process, and use an ORM like SQLAlchemy or Django's to bind those to a database. The fields you will need are more or less: filename, filepath, timestamp, check_status, and some extras like "locked_for_checking" and "checker" (which might be a foreign key to a Users collection). On presenting a file as a suggestion to a given user, you set the "locked_for_checking" flag, and for the overall listing you create a list that excludes files that are "checked" or "locked_for_checking" (and sort the files by timestamp/size or other metadata that meets your requirements). You will need some logic to "unlock for checking" if the first user does not complete the checking in a given time frame, but that is it.
1
0
0
Python web app ideas- incremental/unique file suggestions for multiple users
1
python,postgresql,python-3.x,flask
0
2017-06-17T15:37:00.000
I have a table of size 15 GB in DynamoDB. Now I need to transfer some data, based on timestamps (which are in the db), to another DynamoDB table. What would be the most efficient option here? a) Transfer to S3, process with pandas or in some other way, and put into the other table (the data is huge; I feel this might take a lot of time). b) Through Data Pipeline (I've read a lot but don't think we can run queries there). c) Through EMR and Hive (this seems to be the best option, but is it possible to do everything through a Python script? Would I need to create an EMR cluster and keep using it, or create and terminate one every time? How can EMR be used efficiently and cheaply as well?)
1
1
0.197375
0
false
44,612,782
0
831
1
0
0
44,608,785
I would suggest going with the data pipeline into S3 approach. And then have a script to read from S3 and process your records. You can schedule this to run on regular intervals to backup all your data. I don't think that any solution that does a full scan will offer you a faster way, because it is always limited by read throughput. Another possible approach is to use dynamoDB stream and lambdas to maintain second table in real time. Still you will first need to process existing 15 GB once using approach above, and then switch to lambdas for keeping them in sync
1
0
0
Data transfer from DynamoDB table to another DynamoDB table
1
python,hive,amazon-emr,amazon-data-pipeline
0
2017-06-17T19:48:00.000
I connect to a postgres database hosted on AWS. Is there a way to find out the number of open connections to that database using python API?
0
1
1.2
0
true
44,621,822
0
60
1
0
0
44,621,606
I assume this is for RDS. There is no direct way via the AWS API. You could potentially get it from CloudWatch, but you'd be better off connecting to the database and getting the count that way by querying pg_stat_activity; a small sketch follows below.
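A small sketch with psycopg2; the connection parameters are placeholders:

    import psycopg2

    conn = psycopg2.connect(host='mydb.xxxxx.rds.amazonaws.com',
                            dbname='mydb', user='user', password='password')
    with conn.cursor() as cur:
        cur.execute('SELECT count(*) FROM pg_stat_activity')
        print('Open connections:', cur.fetchone()[0])
    conn.close()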
1
0
0
Finding number of open connections to postgres database
1
python,postgresql,amazon-web-services
0
2017-06-19T02:52:00.000
I want to find the same words in two different Excel workbooks. I have two workbooks (data.xls and data1.xls). If data.xls has words that also appear in data1.xls, I want to print the rows of data1.xls that contain the same words as data.xls. I hope you can help me. Thank you.
0
0
0
0
false
44,626,892
0
67
1
0
0
44,626,578
I am assuming that both Excel sheets have a list of words, with one word in each cell. The best way to write this program would be something like this: open the first Excel file (you might find it easier if you export it as a CSV first); create a dictionary to store word/cell-index pairs; iterate over each cell/word and add the word to the dictionary as the key, with the cell reference as the value; open the second Excel file; iterate over each cell/word and check whether the word is present in the dictionary; if it is, you can print out the corresponding cells or store them however you want. A sketch of this follows below.
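A short sketch with xlrd, assuming one word per cell on the first sheet of each workbook:

    import xlrd

    def words_with_positions(path):
        sheet = xlrd.open_workbook(path).sheet_by_index(0)
        return {sheet.cell_value(r, c): (r, c)
                for r in range(sheet.nrows)
                for c in range(sheet.ncols)
                if sheet.cell_value(r, c)}

    words_in_first = words_with_positions('data.xls')
    second = xlrd.open_workbook('data1.xls').sheet_by_index(0)

    for r in range(second.nrows):
        row_words = [second.cell_value(r, c) for c in range(second.ncols)]
        if any(w in words_in_first for w in row_words):
            print('Row %d of data1.xls:' % r, row_words)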
1
0
1
python- how to find same words in two different excel workbooks
1
excel,windows,python-2.7
0
2017-06-19T09:17:00.000
I'm not able anymore to change my database on arangodb. If I try to create a collection I get the error: Collection error: cannot create collection: invalid database directory If I try to delete a collection I get the error: Couldn't delete collection. Besides that some of the collections are now corrupted. I've been working with this db for 2 months and I'm only getting these errors now. Thanks.
1
1
0.197375
0
false
44,649,139
1
111
1
0
0
44,632,365
If anyone gets the same error anytime in life, it was just a temporary error due to server overload.
1
0
0
Python ArangoDB
1
python,arangodb
0
2017-06-19T13:47:00.000
I'm using Google Cloud SQL for applying advanced search on people data to fetch a list of users. In Datastore, data is already stored there in 2 models: the first is used to track the current data of users and the other model is used to track the historical timeline. The current data stored in Google Cloud SQL amounts to millions of rows across all users. Now I want to implement advanced search on historical data, including between dates, by adding all the history data to the cloud. Can anyone suggest a better structure for this historical model? I've gone through many links and articles but cannot find a proper solution, as I have to take care of search performance (in the current search, the time taken to fetch results is normal, but when history is fetched it scans all the records, which slows queries down because of the complex JOINs needed). The queries used to fetch data from Cloud SQL are built dynamically based on the users' needs. For example, a user wants the list of employees whose manager is "[email protected]"; using Python code, the query is built accordingly. Now a user wants to find users whose manager WAS "[email protected]" with effectiveFrom 2016-05-02 to 2017-01-01. I've found some use cases for the structure, as below: 1) The same model as the current structure, with a new flag column isCurrentData (the status of the data, i.e. whether it is historical or active). Disadvantages: queries slow down while fetching data, as they scan all records, and duplication of data might increase; all these disadvantages affect the performance of advanced search by increasing the time taken. A solution to this problem is to partition the whole table into different tables. 2) Partition based on year. As time passes, this will generate too many tables. 3) Maintain 2 tables: the first for current data and the second for history. But when a user wants to search data across both models, this complicates building the query. So, I need suggestions for structuring the historical timeline with improved performance and effective data handling. Thanks in advance.
1
0
0
0
false
44,715,500
1
81
2
1
0
44,654,127
@Kevin Malachowski: Thanks for guiding me with your info and questions; it gave me a new way of thinking. Historical data records will number 0.3-0.5 million at most. Now I'll use BigQuery for historical advanced search. For live data, Cloud SQL will be used, as we must focus on performance for the fetched data. There will still be some performance issues for historical search when a user wants results from both live and historical data (BigQuery takes around 5-6 seconds, or more, in the worst case), but it can be optimized according to the data and the structure of the model.
1
0
0
Google CloudSQL : structuring history data on cloudSQL
2
python,google-app-engine,google-cloud-sql
0
2017-06-20T13:14:00.000
I'm using Google Cloud SQL for applying advanced search on people data to fetch a list of users. In Datastore, data is already stored there in 2 models: the first is used to track the current data of users and the other model is used to track the historical timeline. The current data stored in Google Cloud SQL amounts to millions of rows across all users. Now I want to implement advanced search on historical data, including between dates, by adding all the history data to the cloud. Can anyone suggest a better structure for this historical model? I've gone through many links and articles but cannot find a proper solution, as I have to take care of search performance (in the current search, the time taken to fetch results is normal, but when history is fetched it scans all the records, which slows queries down because of the complex JOINs needed). The queries used to fetch data from Cloud SQL are built dynamically based on the users' needs. For example, a user wants the list of employees whose manager is "[email protected]"; using Python code, the query is built accordingly. Now a user wants to find users whose manager WAS "[email protected]" with effectiveFrom 2016-05-02 to 2017-01-01. I've found some use cases for the structure, as below: 1) The same model as the current structure, with a new flag column isCurrentData (the status of the data, i.e. whether it is historical or active). Disadvantages: queries slow down while fetching data, as they scan all records, and duplication of data might increase; all these disadvantages affect the performance of advanced search by increasing the time taken. A solution to this problem is to partition the whole table into different tables. 2) Partition based on year. As time passes, this will generate too many tables. 3) Maintain 2 tables: the first for current data and the second for history. But when a user wants to search data across both models, this complicates building the query. So, I need suggestions for structuring the historical timeline with improved performance and effective data handling. Thanks in advance.
1
0
0
0
false
44,662,852
1
81
2
1
0
44,654,127
Depending on how often you want to do live queries vs historical queries and the size of your data set, you might want to consider placing the historical data elsewhere. For example, if you need quick queries for live data and do many of them, but can handle higher-latency queries and only execute them sometimes, you might consider periodically exporting data to Google BigQuery. BigQuery can be useful for searching a large corpus of data but has much higher latency and doesn't have a wire protocol that is MySQL-compatible (although its query language will look familiar to those who know any flavor of SQL). In addition, while for Cloud SQL you pay for data storage and the amount of time your database is running, in BigQuery you mostly pay for data storage and the amount of data scanned during your query executions. Therefore, if you plan on executing many of these historical queries it may get a little expensive. Also, if you don't have a very large data set, BigQuery may be a bit of overkill. How large is your "live" data set and how large do you expect your "historical" data set to grow over time? Is it possible to just increase the size of the Cloud SQL instance as the historical data grows, until the point at which it makes sense to start exporting to BigQuery?
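If the export route is taken, a historical query in BigQuery might look like this sketch — the project, dataset, and table names are placeholders, and it assumes the google-cloud-bigquery client library with default application credentials:

from google.cloud import bigquery

client = bigquery.Client()  # relies on default application credentials

sql = """
SELECT employee_email, manager_email, effective_from, effective_to
FROM `my-project.people_history.manager_history`  -- placeholder table name
WHERE effective_from BETWEEN DATE '2016-05-02' AND DATE '2017-01-01'
"""

# result() blocks until the query job finishes, then yields rows.
for row in client.query(sql).result():
    print(row.employee_email, row.manager_email)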
1
0
0
Google CloudSQL : structuring history data on cloudSQL
2
python,google-app-engine,google-cloud-sql
0
2017-06-20T13:14:00.000
My group and I are currently working on a school project where we need to use an online Python compiler, since we are not allowed to install or download any software on the school's computers. The project requires me to read data from a .xlsx file. Is there any online IDE with xlrd that can read a file stored on the school's computer? I've been looking at a few but can't seem to find any with this support: on tutorialspoint.com it is possible to upload the Excel file but not to import xlrd, while other sites have xlrd but don't allow uploading files to the site.
0
-1
-0.066568
0
false
72,494,283
0
2,482
2
0
0
44,686,664
import tabula

# Read a PDF file
df = tabula.read_pdf("file:///C:/Users/tanej/Desktop/salary.pdf", pages='all')[0]

# Convert the PDF into CSV
tabula.convert_into("file:///C:/Users/tanej/Desktop/salary.pdf", "file:///C:/Users/tanej/Desktop/salary.csv", output_format="csv", pages='all')

print(df)
1
0
1
Read excel file with an online Python compiler with xlrd
3
python,xlsx,xlrd,online-compilation
0
2017-06-21T21:36:00.000
My group and I are currently working on a school project where we need to use an online Python compiler, since we are not allowed to install or download any software on the school's computers. The project requires me to read data from a .xlsx file. Is there any online IDE with xlrd that can read a file stored on the school's computer? I've been looking at a few but can't seem to find any with this support: on tutorialspoint.com it is possible to upload the Excel file but not to import xlrd, while other sites have xlrd but don't allow uploading files to the site.
0
0
0
0
false
44,687,122
0
2,482
2
0
0
44,686,664
Could the pandas package and its pandas.read_clipboard function help? You'd need to copy the content of the file manually to the clipboard before starting your script. Alternatively - is it considered cheating to just rent a server? Pretty cheap these days. Finally: you don't usually require admin rights to install Python... so assuming it's not a violation of school policy, the Anaconda distribution for instance is very happy to be installed for the local user only.
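A minimal sketch of that first suggestion (assuming the relevant rows have been copied from Excel, which puts them on the clipboard as tab-separated text):

import pandas as pd

# read_clipboard parses whatever tabular text is currently on the clipboard into a DataFrame.
df = pd.read_clipboard(sep="\t")
print(df.head())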
1
0
1
Read excel file with an online Python compiler with xlrd
3
python,xlsx,xlrd,online-compilation
0
2017-06-21T21:36:00.000
I am trying to retrieve a large amount of data (more than 7 million rows) from a database and save it as a flat file. The data is retrieved with Python code (Python calls a stored procedure), but I am having a problem: the process eats up so much memory that the Unix machine kills it automatically. I am using read_sql_query to read the data and to_csv to write the flat file. I wanted to ask if there is a way to solve this, perhaps by reading only a few thousand rows at a time, saving them, and moving on to the next batch. I even tried the chunksize parameter, but it does not seem to resolve the issue. Any help or suggestion will be greatly appreciated.
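For reference, chunksize only avoids loading everything if the result is consumed as an iterator, with each chunk appended to the file before the next one is fetched — roughly like the sketch below (connection string, query, and chunk size are placeholders; note that some database drivers still buffer the full result unless a server-side cursor is used):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host/dbname")  # placeholder connection string

first = True
# With chunksize set, read_sql_query yields DataFrames lazily instead of one huge frame.
for chunk in pd.read_sql_query("SELECT * FROM big_table", engine, chunksize=50000):
    chunk.to_csv("output.csv", mode="a", header=first, index=False)
    first = False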
3
0
0
0
false
44,707,209
0
6,717
1
0
0
44,706,706
Rather than using the pandas library, make a database connection directly (using psycopg2, pymysql, pyodbc, or other connector library as appropriate) and use Python's db-api to read and write rows concurrently, either one-by-one or in whatever size chunks you can handle.
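A minimal sketch of that approach with psycopg2 (connection string, query, and chunk size are assumptions; a named cursor is used so PostgreSQL streams rows instead of returning them all at once):

import csv
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")   # placeholder connection string
cur = conn.cursor(name="export_cursor")          # named cursor = server-side, rows are streamed
cur.execute("SELECT * FROM big_table")           # or invoke the stored procedure as appropriate

with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    wrote_header = False
    while True:
        rows = cur.fetchmany(10000)              # pull a bounded chunk into memory
        if not rows:
            break
        if not wrote_header:
            # For named cursors, description is only populated after the first fetch.
            writer.writerow([col[0] for col in cur.description])
            wrote_header = True
        writer.writerows(rows)

cur.close()
conn.close()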
1
0
0
Reading and writing large volume of data in Python
3
python,sql,pandas
0
2017-06-22T18:15:00.000
I have a basic personal project website that I am looking to use to learn some web dev fundamentals and database (SQL) fundamentals as well (if SQL is even the right technology to use?). I have the basic skeleton up and running, but as I am new to this, I want to make sure I am doing it in the most efficient and "correct" way possible. Currently the site has a main index (landing) page, and from there the user can select one of a few subpages. For the sake of understanding, each of these subpages represents a different surf break and displays relevant info about that particular break, e.g. wave height, wind, tide. Since I have already been able to successfully scrape this data, my main questions are: how would I go about inserting this data into a database for future use (historical graphs, trends)? How would I ensure data is added to the database on a continuous schedule (once a day)? How would I use data that was scraped at an earlier time, say at noon, to be displayed/used at 12:05 PM rather than scraping it again? Any other tips, guidance, or resources you can point me to are much appreciated.
0
2
1.2
0
true
44,715,054
1
181
1
0
0
44,714,345
This kind of data is called time series. There are specialized database engines for time series, but with a not-extreme volume of observations - (timestamp, wave height, wind, tide, which break it is) tuples - a SQL database will be perfectly fine.
Try to model your data as a table in Postgres or MySQL. Start by making a table and manually inserting some fake data in a GUI client for your database. When it looks right, you have your schema. The corresponding CREATE TABLE statement is your DDL. You should be able to write SELECT queries against your table that yield the data you want to show on your webapp. If these queries are awkward, it's a sign that your schema needs revision. Save your DDL; it's (sort of) part of your source code. I imagine two tables: a listing of surf breaks, and a listing of observations, where each row in the observations table references the surf breaks table. If you're on a Mac, Sequel Pro is a decent tool for playing around with a MySQL database, and playing around is probably the best way to learn to use one.
Next, try to insert data into the table from a Python script. Starting with fake data is fine, but mold your Python script to read from your upstream source (the result of scraping) and insert into the table. What does your scraping code output? Is it a function you can call? A CSV you can read? That'll dictate how this script works. It'll help if this import script is idempotent: you can run it multiple times and it won't make a mess by inserting duplicate rows. It'll also help if it is incremental: once your dataset grows large, it will be very expensive to recompute the whole thing, so try to deal with importing a specific interval at a time. A command-line tool is fine; you can specify the interval as a command-line argument, or figure it out from the current time. The general problem here, loading data from one system into another on a regular schedule, is called ETL. You have a very simple case of it, and can use very simple tools, but if you want to read about it, that's what it's called. If instead you could get a continuous stream of observations - say, straight from the sensors - you would have a streaming ingestion problem.
You can use cron, the standard Unix job scheduler, to make this script run on a schedule. You'll want to know whether it ran successfully - this opens a whole other can of worms about monitoring and alerting. There are various open-source systems that will let you emit metrics from your programs, basically a "hey, this happened" tick, see these metrics plotted on graphs, and ask to be emailed/texted/paged if something is happening too frequently or too infrequently. (These systems are, incidentally, one of the main applications of time-series databases.) Don't get bogged down with this upfront, but keep it in mind. Statsd, Grafana, and Prometheus are some names to get you started Googling in this direction. You could also simply have your script send an email on success or failure, but people tend to start ignoring such emails.
You'll have written some functions to interact with your database engine. Extract these into a Python module; this forms the basis of your Data Access Layer. Reuse it in your Flask application, which will be easiest if you keep all this stuff in the same Git repository. You can use your chosen database engine's Python client directly, or you can use an abstraction layer like SQLAlchemy. This decision is controversial and people will have opinions, but just pick one. Whatever database API you pick, please learn what a SQL injection attack is and how to use user-supplied data in queries without opening yourself up to SQL injection; your database API's documentation should cover the latter.
The / page of your Flask application will be based on a SQL query like SELECT * FROM surf_breaks. Render a link to the break-specific page for each one. You'll have another page like /breaks/n, where n identifies a surf break (an integer that increments as you insert surf break rows is customary). This page will be based on a query like SELECT * FROM observations WHERE surf_break_id = n. In each case, you'll call functions in your Data Access Layer for a list of rows, and then, in a template, iterate through those rows and render some HTML. There are various JavaScript and Python graphing libraries you can feed this list of rows into and get graphs out of (client side or server side). If you're interested in something like a week-over-week change, you should be able to express that in one SQL query and get that dataset directly from the database engine.
For performance, try not to get into a situation where more than one SQL query happens during a page load. By default, you'll be doing some unnecessary work by going back to the database and recomputing the page every time someone requests it. If this becomes a problem, you can add a reverse proxy cache in front of your Flask app. In your case this is easy, since nothing users do to the app causes its content to change: simply invalidate the cache when you import new data.
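To make the suggested shape concrete, here is a minimal sketch of the data access layer plus the two routes — table, column, and template names are all assumptions, and sqlite3 is used only to keep the example self-contained (the advice about Postgres/MySQL above still applies):

import sqlite3
from flask import Flask, render_template

app = Flask(__name__)
DB_PATH = "surf.db"  # placeholder

# Schema (run once):
#   CREATE TABLE surf_breaks (id INTEGER PRIMARY KEY, name TEXT);
#   CREATE TABLE observations (id INTEGER PRIMARY KEY,
#                              surf_break_id INTEGER REFERENCES surf_breaks(id),
#                              observed_at TEXT, wave_height REAL, wind TEXT, tide TEXT);

# --- data access layer ---
def get_breaks():
    with sqlite3.connect(DB_PATH) as conn:
        return conn.execute("SELECT id, name FROM surf_breaks").fetchall()

def get_observations(break_id):
    with sqlite3.connect(DB_PATH) as conn:
        # Parameterized query: user input is never pasted into the SQL string.
        return conn.execute(
            "SELECT observed_at, wave_height, wind, tide "
            "FROM observations WHERE surf_break_id = ? ORDER BY observed_at",
            (break_id,),
        ).fetchall()

# --- routes (index.html and break.html templates are assumed to exist) ---
@app.route("/")
def index():
    return render_template("index.html", breaks=get_breaks())

@app.route("/breaks/<int:break_id>")
def break_detail(break_id):
    return render_template("break.html", observations=get_observations(break_id))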
1
0
0
Flask website backend structure guidance assistance?
1
python,sql,web,flask
0
2017-06-23T06:20:00.000