Dataset schema (one row per question/answer pair): Question (string), Q_Score (int), Users Score (int), Score (float), Data Science and Machine Learning (int), is_accepted (bool), A_Id (int), Web Development (int), ViewCount (int), Available Count (int), System Administration and DevOps (int), Networking and APIs (int), Q_Id (int), Answer (string), Database and SQL (int), GUI and Desktop Applications (int), Python Basics and Environment (int), Title (string), AnswerCount (int), Tags (string), Other (int), CreationDate (string).
I have a table in a PostgreSQL database.
I'm writing data to this table (using some computation with Python and psycopg2 to write results down in a specific column in that table).
I need to update some existing cell of that column.
Until now, I was able either to delete the complete row before writing this single cell (because all the other cells in the row were written back at the same time) or to delete the entire column, for the same reason.
Now I can't do that anymore because that would mean long computation time to rebuild either the row or the column for only a few new values to be written in some cell.
I know the update command. It works well for that.
But if I had existing values in some cells, and a new computation no longer gives a result for those cells, I would like to "clear" the existing values so the table stays up to date with the last computation I've done.
Is there a simple way to do that? UPDATE doesn't seem to work (it seems to keep the old values).
To stress again, I'm using psycopg2 to write things to my table. | 0 | 1 | 0.197375 | 0 | false | 44,718,475 | 0 | 878 | 1 | 0 | 0 | 44,718,379 | You simply update the cell with the value NULL in SQL - psycopg2 will insert NULL into the database when you update your column with a None value from Python. | 1 | 0 | 0 | Update field with no-value | 1 | python,sql,postgresql,psycopg2 | 0 | 2017-06-23T09:50:00.000
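As a minimal sketch of that answer (the table and column names here are hypothetical, not from the original question): binding None as the parameter makes psycopg2 write SQL NULL, which clears the cell.
import psycopg2
conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection string
cur = conn.cursor()
# None on the Python side becomes NULL in PostgreSQL, clearing the existing value
cur.execute("UPDATE my_table SET result = %s WHERE id = %s", (None, 42))
conn.commit()
cur.close()
conn.close()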
I am trying to connect to a MySQL database using PyQt5 on Python 3.6 for 64-bit Windows. When I call QSqlDatabase.addDatabase('MYSQL') and run my utility, it shows up with this error message:
QSqlDatabase: QMYSQL driver not loaded
QSqlDatabase: available drivers: QSQLITE QMYSQL QMYSQL3 QODBC QODBC3 QPSQL QPSQL7
This confuses me since according to the error message, the QMYSQL driver is loaded. I installed PyQt through the default installer, so the MySQL plugin should be installed. Has anyone else experienced this problem or does someone know the cause of this? | 0 | 1 | 1.2 | 0 | true | 44,992,670 | 0 | 1,820 | 1 | 0 | 0 | 44,753,724 | The driver is listed as available, but you need to rebuild the QMYSQL driver plugin from the Qt source code against the MySQL client library. | 1 | 0 | 0 | PyQt QSqlDatabase: QMYSQL driver not loaded | 2 | python,mysql,qt,pyqt | 0 | 2017-06-26T05:49:00.000
When I'm using pymysql to perform operations on MySQL database, it seems that all the operations are temporary and only visible to the pymysql connection, which means I can only see the changes through cur.execute('select * from qiushi') and once I cur.close() and conn.close() and log back in using pymysql, everything seems unchanged.
However, when I'm looking at the incremental id numbers, it does increased, but I can't see the rows that were inserted from pymysql connection. It seems that they were automatically deleted?!
Some of my code is here:
import pymysql
try:
    conn = pymysql.connect(host='127.0.0.1',port=3306,user='pymysql',passwd='pymysql',charset='utf8')
    cur = conn.cursor()
    #cur.execute('CREATE TABLE qiushi (id INT NOT NULL AUTO_INCREMENT, content_id BIGINT(10) NOT NULL, content VARCHAR(1000), created TIMESTAMP DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY(id));')
    #cur.execute('DESCRIBE content')
    #cur.fetchall()
    cur.execute('USE qiushibaike')
    for _ in range(0,len(content_ids)):
        cur.execute("INSERT INTO qiushi (content,content_id) VALUES (\"%s\",%d)"%(jokes[_],int(content_ids[_])))
finally:
    cur.close()
    conn.close() | 0 | 1 | 0.197375 | 0 | false | 44,758,048 | 0 | 377 | 1 | 0 | 0 | 44,756,118 | I solved the problem by myself...
Because the connection does not autocommit by default, we have to commit the changes after the SQL statements.
Approach 1:
add conn.commit() after the cur.execute() calls (commit is a method of the connection, not the cursor)
Approach 2:
edit the connection config, add autocommit=True | 1 | 0 | 0 | Unable to INSERT with Pymysql (incremental id changes though) | 1 | python,mysql,pymysql | 0 | 2017-06-26T08:56:00.000 |
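A small sketch of both approaches, adapted from the question's code (the inserted values are made up); note that commit() lives on the connection object, not the cursor.
import pymysql
# Approach 1: commit explicitly on the connection after the inserts
conn = pymysql.connect(host='127.0.0.1', port=3306, user='pymysql', passwd='pymysql', db='qiushibaike', charset='utf8')
cur = conn.cursor()
cur.execute("INSERT INTO qiushi (content, content_id) VALUES (%s, %s)", ("some joke", 12345))
conn.commit()  # without this, the insert is rolled back when the connection closes
cur.close()
conn.close()
# Approach 2: ask the connection to commit automatically after every statement
conn = pymysql.connect(host='127.0.0.1', port=3306, user='pymysql', passwd='pymysql', db='qiushibaike', charset='utf8', autocommit=True)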
I have some records data with \n. When I do a SELECT query using psycopg2 the result comes with \n escaped like this \\n. I want the result have literal \n in order to use splitlines(). | 2 | 0 | 1.2 | 0 | true | 44,766,725 | 0 | 972 | 2 | 0 | 0 | 44,763,758 | The point is that values were edited with pgadmin3 (incorrectly, the correct way is shift+enter to add a new line). I asked the user to use phppgadmin (easier for him, multiline fields are edited with textarea control) and now everything is working properly.
So psycopg2 WORKS fine, I'm sorry I thought it was the culprit.
He was putting literals \n in order to put new lines. | 1 | 0 | 1 | I don't want psycopg2 to escape new line character (\n) in query result | 2 | python,python-3.x,psycopg2,psycopg | 0 | 2017-06-26T15:56:00.000 |
I have some records data with \n. When I do a SELECT query using psycopg2 the result comes with \n escaped like this \\n. I want the result have literal \n in order to use splitlines(). | 2 | -1 | -0.099668 | 0 | false | 44,763,994 | 0 | 972 | 2 | 0 | 0 | 44,763,758 | Try this: object.replace("\\n", "\n"), which replaces the literal backslash-n sequence with a real newline
Hope this helped :) | 1 | 0 | 1 | I don't want psycopg2 to escape new line character (\n) in query result | 2 | python,python-3.x,psycopg2,psycopg | 0 | 2017-06-26T15:56:00.000 |
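For illustration, a tiny sketch of that idea with a made-up value: if the cell really holds a literal backslash-n sequence, replacing it with a real newline makes splitlines() work.
raw = "first line\\nsecond line"   # what comes back when the cell stores a literal backslash-n
fixed = raw.replace("\\n", "\n")   # turn the two-character sequence into a real newline
print(fixed.splitlines())          # ['first line', 'second line']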
I want to migrate from SQLite3 to MySQL in Django. I have worked with Oracle and MS SQL Server, and I know that I can catch an exception and retry over and over again until the statement succeeds... However, these are inserts into the same table where the data must be INSERTED right away, because users will not be happy waiting their turn to INSERT into the same table.
So I was wondering, will the deadlock happen on table if to many users make insert in same time and what should I do to bypass that, so that users don't sense it? | 1 | 1 | 1.2 | 0 | true | 44,789,450 | 1 | 397 | 1 | 0 | 0 | 44,789,046 | I don't think you can get deadlock just from rapid insertions. Deadlock occurs when you have two processes that are each waiting for the other one to do something before they can make the change that the other one is waiting for. If two processes are just inserting, the database will simply process them in the order that they're received, there's no dependency between them.
If you're using InnoDB, it uses row-level locking. So unless two inserts both try to insert the same unique key, they shouldn't even lock each other out, they can be done concurrently. | 1 | 0 | 0 | MySql: will it deadlock on to many insert - Django? | 1 | python,mysql,django | 0 | 2017-06-27T20:10:00.000 |
I have one program that downloads time series (ts) data from a remote database and saves the data as csv files. New ts data is appended to old ts data. My local folder continues to grow and grow and grow as more data is downloaded. After downloading new ts data and saving it, I want to upload it to a Google BigQuery table. What is the best way to do this?
My current workflow is to download all of the data to CSV files, gzip them on my local machine, and then use gsutil to upload the gzip files to Google Cloud Storage. Next, I manually delete any existing table in Google BigQuery and create a new one by loading the data from Google Cloud Storage. I feel like there is room for significant automation/improvement, but I am a Google Cloud newbie.
Edit: Just to clarify, the data that I am downloading can be thought of downloading time series data from Yahoo Finance. With each new day, there is fresh data that I download and save to my local machine. I have to uploading all of the data that I have to Google BigQUery so that I can do SQL analysis on it. | 0 | 1 | 1.2 | 1 | true | 44,814,853 | 0 | 642 | 1 | 0 | 0 | 44,804,051 | Consider breaking up your data into daily tables (or partitions). Then you only need to upload the CVS from the current day.
The script you have currently defined otherwise seems reasonable.
Extract your new day of CSVs from your source of time series data.
Gzip them for fast transfer.
Copy them to GCS.
Load the new CSVs into the current daily table/partition (see the sketch below).
This avoids the need to delete existing tables and reduces the amount of data and processing that you need to do. As a bonus, it is easier to backfill a single day if there is an error in processing. | 1 | 0 | 0 | Python/Pandas/BigQuery: How to efficiently update existing tables with a lot of new time series data? | 1 | python,pandas,google-bigquery,google-cloud-platform,gsutil | 0 | 2017-06-28T13:34:00.000 |
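A sketch of the daily load step from the answer above, using the google-cloud-bigquery client (bucket, project, dataset and table names are placeholders, and the exact client API varies a little between library versions): it appends one day of gzipped CSV from GCS into a single partition instead of recreating the table.
from google.cloud import bigquery
client = bigquery.Client()
job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.CSV
job_config.skip_leading_rows = 1                                  # skip the CSV header row
job_config.write_disposition = bigquery.WriteDisposition.WRITE_APPEND
# "$YYYYMMDD" targets one partition of an ingestion-time partitioned table
table_id = "my_project.my_dataset.prices$20170628"
job = client.load_table_from_uri("gs://my-bucket/prices_20170628.csv.gz", table_id, job_config=job_config)
job.result()  # wait for the load job to finish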
i'm developing an app for my company, using Python 2.7 and MariaDB. I have created a function which backs up our main database server to another database server. I use this command to do it: mysqldump -h localhost -P 3306 -u root -p mydb | mysql -h bckpIPsrv -P 3306 -u root -p mydb2 .
I want to know if it's possible to see some kind of verbose mode or a percentage of the job and display it on screen.
thank you. | 2 | -1 | -0.099668 | 0 | false | 54,635,754 | 0 | 6,336 | 1 | 0 | 0 | 44,824,517 | dumpcmd = "mysqldump -h " + DB_HOST + " -u " + DB_USER + " -p" + DB_USER_PASSWORD + " " + DB_NAME + "| pv | gzip > " + pipes.quote(
BACKUP_PATH) + "/" + FILE_NAME + ".sql" | 1 | 0 | 0 | is there a way to have mysqldump progress bar which shows the users the status of their backups? | 2 | mysql,python-2.7,mariadb | 0 | 2017-06-29T11:57:00.000 |
Trying to install a postgresql database which resides on Azure for my python flask application; but the installation of psycopg2 package requires the pg_config file which comes when postgresql is installed. So how do I export the pg_config file from the postgresql database which also resides on azure? Is pg_config all psycopg2 need for a successful installation? | 0 | 2 | 1.2 | 0 | true | 44,915,875 | 1 | 164 | 1 | 0 | 0 | 44,911,066 | You don't need the specific pg_config from the target database. It's only being used to compile against libpq, the client library for PostgreSQL, so you only need the matching PostgreSQL client installed on your local machine.
If you're on Windows I strongly advise you to install a pre-compiled PostgreSQL. You can just install the whole server, it comes with the client libraries.
If you're on Linux, you'll probably need the PostgreSQL -devel or -dev package that matches your PostgreSQL version. | 1 | 0 | 0 | How to retrieve the pg_config file from Azure postgresql Database | 1 | python,postgresql,azure,psycopg2 | 0 | 2017-07-04T16:59:00.000 |
What I want is to execute the SQL
select * from articles where author like "%steven%".
For the sake of safety, I used it this way:
cursor.execute('select * from articles where %s like %s', ('author', '%steven%'))
Then the result is just empty: not a syntax error, just an empty set.
But I am pretty sure there is something inside; I can get results using the first SQL. Is there anything wrong with my code? | 1 | 1 | 0.099668 | 0 | false | 44,937,097 | 0 | 221 | 1 | 0 | 0 | 44,937,003 | The problem here is in fact a minor mistake. Thanks to @Asad Saeeduddin, when I used print cursor._last_executed to check what had happened, I found that what was in fact executed was
SELECT * FROM articles WHERE 'title' LIKE '%steven%'. Look at the quotation marks around the column name; that's the reason why I got an empty set.
So always remember the string after formatting will have a quotation around | 1 | 0 | 0 | Could not format sql correctly in pymysql | 2 | python,mysql,pymysql | 0 | 2017-07-05T22:30:00.000 |
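A sketch of the safe version of that query: only values can be bound as parameters, so the column name has to live in the SQL text itself and just the pattern is passed to the driver (connection details below are placeholders).
import pymysql
conn = pymysql.connect(host='localhost', user='user', passwd='secret', db='mydb', charset='utf8')
cur = conn.cursor()
# The driver quotes the bound pattern as a value, so this runs as ... WHERE author LIKE '%steven%'
cur.execute("SELECT * FROM articles WHERE author LIKE %s", ('%steven%',))
rows = cur.fetchall()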
I am relatively new to Django.
I have managed to create a basic app and all that without problems and it works fine.
The question probably has been asked before.
Is there a way to update existing Django models already mapped to existing databases when the underlying database is modified?
To be specific, I have mysql database that I use for my Django app as well as some standalone python and R scripts. Now, it is much easier to update the mysql database with, say, daily stock prices, everyday from my existing scripts outside Django models. Ideally, what I would like is to have my Django models that are already mapped to these tables to reflect the updated data.
I know there is $ python manage.py inspectdb for creating models from existing databases. But that is not the objective.
From what I have gathered so far from the docs and online searches, it is imperative to update the backend database through Django models, not outside of them. Is that really the case? As long as the table structure doesn't change, I really don't see why this should not be allowed. A database is meant to serve multiple clients, isn't it? With Django being one of them.
And I can not provide a reproducible example as it is a conceptual question.
If this functionality doesn't exist, imho, it really should.
Thanks,
Kaustubh | 1 | 4 | 1.2 | 0 | true | 44,954,903 | 1 | 514 | 1 | 0 | 0 | 44,954,521 | You don't need to update models if you just added new data. Models are related to a database structure only. | 1 | 0 | 0 | Django models update when backend database updated | 1 | python,django,django-models | 0 | 2017-07-06T16:33:00.000 |
I'm working with PyCharm on a project to read SQL databases. I'm working on a Windows 10 64-bit workstation and I'm trying to install the pymssql module. I have already installed VS2015 to get all the requirements, but now each time I try to install it I get the message:
error: command 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe' failed with exit status 2
I saw on message details the error in:
_mssql.c(266): fatal error C1083: Cannot open include file: 'sqlfront.h': No such file or directory
How can I figure it out? Thanks | 4 | 0 | 0 | 0 | false | 50,968,882 | 0 | 3,976 | 2 | 0 | 0 | 44,955,927 | I had the same problem, but I fixed it this way:
Copied "rc.exe" and "rcdll.dll" from "C:\Program Files (x86)\Windows Kits\8.1\bin\x86"
Pasted "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin" | 1 | 0 | 1 | Install pymssql 2.1.3 in Pycharm | 5 | sql,python-3.x,pycharm | 0 | 2017-07-06T17:57:00.000 |
I'm working with PyCharm on a project to read SQL databases. I'm working on a Windows 10 64-bit workstation and I'm trying to install the pymssql module. I have already installed VS2015 to get all the requirements, but now each time I try to install it I get the message:
error: command 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe' failed with exit status 2
I saw on message details the error in:
_mssql.c(266): fatal error C1083: Cannot open include file: 'sqlfront.h': No such file or directory
How can I figure it out? Thanks | 4 | 0 | 0 | 0 | false | 69,630,619 | 0 | 3,976 | 2 | 0 | 0 | 44,955,927 | In my case, rolling back to Python 3.8 helped. I had the same problem on 3.10 x64. | 1 | 0 | 1 | Install pymssql 2.1.3 in Pycharm | 5 | sql,python-3.x,pycharm | 0 | 2017-07-06T17:57:00.000
I am new to pyspark. I want to plot the result using matplotlib, but not sure which function to use. I searched for a way to convert sql result to pandas and then use plot. | 15 | 1 | 0.099668 | 1 | false | 66,233,233 | 0 | 30,940 | 1 | 0 | 0 | 45,003,301 | For small data, you can use .select() and .collect() on the pyspark DataFrame. collect will give a python list of pyspark.sql.types.Row, which can be indexed. From there you can plot using matplotlib without Pandas, however using Pandas dataframes with df.toPandas() is probably easier. | 1 | 0 | 0 | How to use matplotlib to plot pyspark sql results | 2 | python,pandas,matplotlib,pyspark-sql | 0 | 2017-07-10T03:15:00.000 |
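A minimal sketch of the toPandas() route from the answer (the table and column names are invented): run the SQL in Spark, convert the small result to a pandas DataFrame, then plot it with matplotlib.
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# Keep the result small: toPandas() collects everything onto the driver
pdf = spark.sql("SELECT day, SUM(sales) AS total FROM sales_table GROUP BY day ORDER BY day").toPandas()
pdf.plot(x="day", y="total", kind="bar")
plt.show()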
I am trying to write a data migration script moving data from one database to another (Teradata to snowflake) using JDBC cursors.
The table I am working on has about 170 million records and I am running into the issue where when I execute the batch insert a maximum number of expressions in a list exceeded, expected at most 16,384, got 170,000,000.
I was wondering if there was any way around this or if there was a better way to batch migrate records without exporting the records to a file and moving it to s3 to be consumed by the snowflake. | 1 | 1 | 0.197375 | 0 | false | 46,125,739 | 0 | 215 | 1 | 0 | 0 | 45,012,005 | If your table has 170M records, then using JDBC INSERT to Snowflake is not feasible. It would perform millions of separate insert commands to the database, each requiring a round-trip to the cloud service, which would require hundreds of hours.
Your most efficient strategy would be to export from Teradata into multiple delimited files -- say with 1 - 10 million rows each. You can then either use the Amazon's client API to move the files to S3 using parallelism, or use Snowflake's own PUT command to upload the files to Snowflake's staging area for your target table. Either way, you can then load the files very rapidly using Snowflake's COPY command once they are in your S3 bucket or Snowflake's staging area. | 1 | 0 | 0 | JDBC limitation on lists | 1 | python,jdbc,teradata,snowflake-cloud-data-platform | 0 | 2017-07-10T12:25:00.000 |
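A rough sketch of that file-based route using the snowflake-connector-python package (account, file paths and table name are placeholders): upload the exported files with PUT, then bulk-load them with COPY INTO.
import snowflake.connector
conn = snowflake.connector.connect(user='me', password='secret', account='my_account',
                                   warehouse='my_wh', database='my_db', schema='public')
cur = conn.cursor()
# PUT uploads (and compresses) the local export files into the table's internal stage
cur.execute("PUT file:///exports/big_table_*.csv @%big_table")
# COPY INTO loads all staged files in parallel
cur.execute("COPY INTO big_table FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"')")
cur.close()
conn.close()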
Background:
I have an application written in Python to monitor the status of tools. The tools send their data from specific runs and it all gets stored in an Oracle database as JSON files.
My Problem/Solution:
Instead of connecting to the DB and then querying it repeatedly when I want to compare the current run data to the previous run's data, I want to make a copy of the database query so that I can compare the new run data to the copy that I made instead of to the results of the query.
The reason I want to do this is because constantly querying the server for the previous run's data is slow and puts unwanted load/usage on the server.
For the previous run's data there are multiple files associated with it (because there are multiple tools) and therefore each query has more than one file that would need to be copied. Locally storing the copies of the files in the query is what I intended to do, but I was wondering what the best way to go about this was since I am relativity new to doing something like this.
So any help and suggestions on how to efficiently store the results of a query, which are multiple JSON files, would be greatly appreciated! | 1 | 1 | 1.2 | 0 | true | 45,044,543 | 0 | 465 | 1 | 0 | 0 | 45,036,714 | As you described querying the db too many times is not an option. OK in that case I would do this the following way :
When your program starts you get the data for all tools as a set of JSON files per tool, right? I am not sure whether you get the data by querying the tools directly or by querying the db; it does not matter.
Check whether you have old data in the "cache dictionary" for that tool. If yes, do your comparison and then store the "new data" as the "previous data" in the cache, ready for the next run. Do this for all tools. This loops forever :-)
This "cache dictionary" can be implemented in memory or on disk. For your amount of data I think memory is just fine.
With that approach you do not have to query the db for the old data. The case that you cannot do the compare if you do not have old data in the "cache" at program start could be handled that you try to get it from db (risking long query times but what to do :-) | 1 | 0 | 0 | How To Store Query Results (Using Python) | 1 | python,json,database,oracle | 0 | 2017-07-11T14:00:00.000 |
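A bare-bones sketch of that cache idea in plain Python (function and key names are invented for illustration): keep each tool's previous run in a dictionary, compare, then overwrite it with the new run.
previous_runs = {}  # tool name -> data from the last run (could also be persisted to disk)

def process_run(tool, new_data):
    old_data = previous_runs.get(tool)
    if old_data is not None:
        compare(old_data, new_data)      # your existing comparison logic goes here
    previous_runs[tool] = new_data       # the new run becomes the "previous" one for next time

def compare(old_data, new_data):
    pass  # placeholder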
Currently we are uploading the data retrieved from vendor APIs into Google Datastore. Wanted to know what is the best approach with data storage and querying the data.
I will be need to query millions of rows of data and will be extracting custom engineered features from the data. So wondering whether I should load the data into BigQuery directly and query it for faster processing or store it in Datastore and then move it to BigQuery for querying?. I will be using pandas for performing statistics on stored data. | 0 | 0 | 0 | 1 | false | 45,395,282 | 0 | 577 | 1 | 0 | 0 | 45,061,306 | As far as I can tell there is no support for Datastore in Pandas. This might affect your decision. | 1 | 0 | 0 | Is Google Cloud Datastore or Google BigQuery better suited for analytical queries? | 3 | python,pandas,google-cloud-datastore,google-bigquery,google-cloud-platform | 0 | 2017-07-12T15:00:00.000 |
I am trying to export data from Aurora into S3, I have created a stored procedure to perform this action. I can schedule this on the Aurora Scheduler to run at a particular point in time.
However, I have multiple tables - could go up to 100; so I want my process controller which is a python script sitting in Lambda to send a Queue Message - Based on this Queue message the stored procedure in Aurora will be started
I am looking at this for the following reasons
I do not want too much time lag between starting two exports
I also do not want two exports overlapping in execution time | 0 | 0 | 0 | 0 | false | 45,124,304 | 1 | 679 | 2 | 0 | 0 | 45,098,004 | There isn't any built-in integration that allows SQS to interact with Aurora.
Obviously you can do this externally, with a queue consumer that reads from the queue and invokes the procedures, but that doesn't appear to be relevant, here. | 1 | 0 | 0 | Does anyone know if we can start a storedprocedure in Aurora based on SQS | 3 | python,amazon-web-services,amazon-sqs,amazon-aurora | 0 | 2017-07-14T08:14:00.000 |
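A sketch of such an external consumer, using boto3 for SQS and PyMySQL to call the Aurora stored procedure (queue URL, credentials and the procedure name are all placeholders). Handling one message at a time also keeps the exports from overlapping.
import boto3
import pymysql

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/export-jobs'  # placeholder
conn = pymysql.connect(host='aurora-endpoint', user='user', passwd='secret', db='mydb')

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        table_name = msg['Body']  # e.g. which table to export
        with conn.cursor() as cur:
            cur.callproc('export_table_to_s3', (table_name,))  # hypothetical stored procedure
        conn.commit()
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])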
I am trying to export data from Aurora into S3, I have created a stored procedure to perform this action. I can schedule this on the Aurora Scheduler to run at a particular point in time.
However, I have multiple tables - could go up to 100; so I want my process controller which is a python script sitting in Lambda to send a Queue Message - Based on this Queue message the stored procedure in Aurora will be started
I am looking at this for the following reasons
I do not want too much time lag between starting two exports
I also do not want two exports overlapping in execution time | 0 | 0 | 0 | 0 | false | 55,167,030 | 1 | 679 | 2 | 0 | 0 | 45,098,004 | I have used lambda with alembic package to create schema and structures. I know we could create users and execute other database commands - the same way execute a stored procedure
Lambda could prove to be expensive - we probably could have an container to do it | 1 | 0 | 0 | Does anyone know if we can start a storedprocedure in Aurora based on SQS | 3 | python,amazon-web-services,amazon-sqs,amazon-aurora | 0 | 2017-07-14T08:14:00.000 |
In my pyramid app it's useful to be able to log in as any user (for test/debug, not in production). My normal login process is just a simple bcrypt check against the hashed password.
When replicating user-submitted bug reports I found it useful to just clone the sqlite database and run a simple script which would change everyone's password to a fixed string (just for local testing). Now that I'm switching over to postgresql that's less convenient to do, and I'm thinking of installing a backdoor to my login function.
Basically I wish to check os.environ (set from the debug.wsgi file which is loaded by apache through mod_wsgi) for a particular variable 'debug'. If it exists then I will allow login using any password (for any user), bypassing the password check.
What are the security implications of this? As I understand it, the wsgi file is sourced once when apache loads up, so if the production.wsgi file does not set that particular variable, what's the likelihood of an attacker (or incompetent user) spoofing it? | 2 | 1 | 1.2 | 0 | true | 45,113,051 | 1 | 64 | 1 | 0 | 0 | 45,112,983 | In order to instantiate the server application with that debug feature in environment, the attacker would have to have the hand over your webserver, most probably with administrative privileges.
From an outside process, an attacker cannot modify the environment of the running server, which is loaded into memory, without at least debug capabilities and a good payload for rewriting memory. It would be easier to just reload the server or try executing a script within it.
I think you are safe the way you go. If you are paranoid, ensure to isolate (delete) the backdoor from the builds to production. | 1 | 0 | 0 | Security implications of a pyramid/wsgi os.environ backdoor? | 1 | python,security,pyramid,environment,dev-to-production | 1 | 2017-07-14T23:42:00.000 |
I'm sending data back and forth Python and Cassandra. I'm using both builtin float types in my python program and the data type for my Cassandra table. If I send a number 955.99 from python to Cassandra, in the database it shows 955.989999. When I send a query in python to return the value I just sent, it is now 955.989990234375.
I understand the issue with precision loss in python, I just wanted to know if there's any built-in mechanisms in Cassandra that could prevent this issue. | 4 | 1 | 0.099668 | 0 | false | 50,065,729 | 0 | 639 | 1 | 0 | 0 | 45,139,240 | Also if you cannot change your column definition for some reason, converting your float value to string and passing str to the cassandra-driver will also solve your problem.
It will be able to generate the precise decimal values form str. | 1 | 0 | 0 | Python Cassandra floating precision loss | 2 | python,cassandra,floating-point,precision,cassandra-python-driver | 0 | 2017-07-17T08:23:00.000 |
I have a python script that execute a gbq job to import a csv file from Google cloud storage to an existing table on BigQuery.
How can I set the job properties to import to the right columns provided in the first row of the csv file?
I set parameter 'allowJaggedRows' to TRUE, but it import columns in order regardless of column names in the header of csv file. | 0 | 2 | 0.379949 | 1 | false | 45,156,763 | 0 | 3,297 | 1 | 0 | 0 | 45,155,117 | When you import a CSV into BigQuery the columns will be mapped in the order the CSV presents them - the first row (titles) won't have any effect in the order the subsequent rows are read.
To be noted, if you were importing JSON files, then BigQuery would use the name of each column, ignoring the order. | 1 | 0 | 0 | How to import CSV to an existing table on BigQuery using columns names from first row? | 1 | python,google-bigquery,import-from-csv | 0 | 2017-07-17T23:23:00.000 |
I need to store json objects on the google cloud platform. I have considered a number of options:
Store them in a bucket as a text (.json) file.
Store them as text in datastore using json.dumps(obj).
Unpack it into a hierarchy of objects in datastore.
Option 1: Rejected because it has no organising principles other than the filename and cannot be searched across.
Option 2: Is easy to implement, but you cannot search using dql.
Option 3: Got it to work after a lot of wrangling with the key and parent key structures. While it is searchable, the resulting objects have been split up and held together by the parent key relationships. It is really ugly!
Is there any way to store and search across a deeply structured json object on the google cloud platform - other than to set up mongodb in a compute instance? | 1 | 0 | 0 | 0 | false | 45,204,510 | 1 | 1,421 | 1 | 1 | 0 | 45,184,482 | I don't know what your exact searching needs are, but the datastore API allows for querying that is decently good, provided you give the datastore the correct indexes. Plus it's very easy to go take the entities in the datastore and pull them back out as .json files. | 1 | 0 | 1 | Storing json objects in google datastore | 1 | json,python-2.7,google-app-engine,google-cloud-datastore | 0 | 2017-07-19T08:05:00.000 |
I encountered the following irregularities and wanted to share my solution.
I'm reading a sql table from Microsoft SQL Server in Python using Pandas and SQLALCHEMY. There is a column called "occurtime" with the following format: "2017-01-01 01:01:11.000". Using SQLAlchemy to read the "occurtime" column, everything was returned as NaN. I tried to set the parse_date parameter in the pandas.read_sql() method but with no success.
Is anyone else encountering issue reading a datetime column from a SQL table using SQLAlchemy/Pandas? | 0 | 0 | 0 | 1 | false | 45,197,852 | 0 | 752 | 1 | 0 | 0 | 45,197,851 | I had to work around the datetime column from my SQL query itself just so SQLAlchemy/Pandas can stop reading it as a NaN value.
In my SQL query, I used CONVERT() to cast the datetime column to a string. This was read with no issue, and then I used pandas.to_datetime() to convert it back into a datetime.
Anyone else with a better solution or know what's really going on, please share your answer, I'd really appreciate it!!! | 1 | 0 | 0 | Pandas read_sql | 1 | python,sql-server,pandas,sqlalchemy | 0 | 2017-07-19T17:59:00.000 |
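A sketch of that workaround (table, column and connection string are placeholders): cast the datetime to text in T-SQL, read it with pandas, then convert it back.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:secret@my_dsn")  # placeholder connection
query = "SELECT CONVERT(varchar(23), occurtime, 121) AS occurtime, value FROM my_table"
df = pd.read_sql(query, engine)
df["occurtime"] = pd.to_datetime(df["occurtime"])  # back to real datetimes, e.g. 2017-01-01 01:01:11.000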
As a user of the database, are there any quicker ways of exporting data from Filemaker using languages like python or java? Perhaps to an Excel.
My job involves exporting selected data constantly from our company's Filemaker database. However, the software is super slow, and the design of our app is bad which makes selecting which data to export a pain. (I have to manually select data one by one by opening the full record of each data. There's no batch export function.)
Please provide me with alternative methods. I feel very stupid in doing this. | 1 | 0 | 0 | 0 | false | 45,215,938 | 0 | 1,900 | 1 | 0 | 0 | 45,205,162 | You can also save records as a spreadsheet for use in Microsoft Excel. For more information, see Saving and sending records as an Excel file in the FileMaker Help file. Use export when you want to export records in the current found set or export in a format other than an Excel spreadsheet. Use Save as Excel when you want to create an Excel spreadsheet that contains all the fields you have access to on the current layout.
If your FileMaker Pro source file contains summary fields, you can group by a sorted field in order to export subsummary values, such as subtotals generated by a report with grouped data. This process exports one record for each group. For example, if you have a report that totals sales by region, you can export one summary value for each region. | 1 | 0 | 0 | Any quick way to export data from Filemaker? | 2 | python,database,excel,filemaker,data-extraction | 0 | 2017-07-20T04:25:00.000 |
I need to send keys to excel to refresh formulas.
What are my best options?
I am already using Openpyxl but it does not satisfy all my needs. | 1 | 1 | 0.197375 | 0 | false | 55,856,869 | 0 | 1,564 | 1 | 0 | 0 | 45,225,010 | If this still helps, you can use from pywin32 (which should be a default package) to use win32com.client.
Sample code:
import win32com.client
xl = win32com.client.Dispatch("Excel.Application")
xl.sendkeys("^+s") # sends Ctrl+Shift+S (use "^s" for a plain Ctrl+S save)
Use "%" to access alt so you can get hotkeys. | 1 | 0 | 0 | Using python to send keys to active Excel Window | 1 | python,excel,keyboard | 0 | 2017-07-20T20:56:00.000 |
I am developing a skill for Amazon Alexa and I'm using DynamoDB for storing information about the users favorite objects. I would like 3 columns in the database:
Alexa userId
Object
Color
I currently have the Alexa userId as the primary key. The problem that I am running into is that if I try to add an entry into the db with the same userId, it overwrites the entry already in there. How can I allow a user to have multiple objects associated with them in the db by having multiple rows? I want to be able to query the db by the userId and receive all the objects that they have specified.
If I create a unique id for every entry, and there are multiple users, I can't possibly know the id to query by to get the active users' objects. | 9 | 1 | 0.039979 | 0 | false | 45,266,031 | 1 | 10,290 | 2 | 0 | 0 | 45,227,546 | DynamoDB is a NoSQL-like document database, or key-value store; that means you may need to think about your tables differently than in an RDBMS. From what I understand from your question, for each user you want to store information about their preferences on a list of objects; therefore, keep your primary key simple, that is, the user ID. Then, have a single "column" where you store all the preferences. That can either be a list of tuples (object, color) OR a dictionary of unique {object: color}.
When you explore the items in the web UI, it will show these complex data structures as json-like documents which you can expand as you will. | 1 | 0 | 0 | How to create multiple DynamoDB entries under the same primary key? | 5 | python,amazon-web-services,amazon-dynamodb,alexa-skills-kit | 0 | 2017-07-21T01:26:00.000 |
I am developing a skill for Amazon Alexa and I'm using DynamoDB for storing information about the users favorite objects. I would like 3 columns in the database:
Alexa userId
Object
Color
I currently have the Alexa userId as the primary key. The problem that I am running into is that if I try to add an entry into the db with the same userId, it overwrites the entry already in there. How can I allow a user to have multiple objects associated with them in the db by having multiple rows? I want to be able to query the db by the userId and receive all the objects that they have specified.
If I create a unique id for every entry, and there are multiple users, I can't possibly know the id to query by to get the active users' objects. | 9 | 0 | 0 | 0 | false | 45,693,902 | 1 | 10,290 | 2 | 0 | 0 | 45,227,546 | you cannot create multiple entries with same primary key. Please create composite keys (multiple keys together as primary key). Please note you cannot have multiple records of same combination | 1 | 0 | 0 | How to create multiple DynamoDB entries under the same primary key? | 5 | python,amazon-web-services,amazon-dynamodb,alexa-skills-kit | 0 | 2017-07-21T01:26:00.000 |
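A sketch of the composite-key idea with boto3 (table and attribute names are illustrative): userId as the partition key and the object as the sort key, so one user can own many items and a single query returns them all.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('FavoriteObjects')  # assumed: partition key 'userId', sort key 'object'

table.put_item(Item={'userId': 'amzn1.ask.account.ABC', 'object': 'car', 'color': 'red'})
table.put_item(Item={'userId': 'amzn1.ask.account.ABC', 'object': 'bike', 'color': 'blue'})

resp = table.query(KeyConditionExpression=Key('userId').eq('amzn1.ask.account.ABC'))
for item in resp['Items']:
    print(item['object'], item['color'])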
As far as I know Django apps can't start if any of the databases set in the settings.py are down at the start of the application. Is there anyway to make Django "lazyload" the initial database connection?
I have two databases configured and one of them is a little unstable and can sometimes be down for a few seconds, but it's only used for some specific use cases of the application. As you can imagine, I don't want the whole application to fail to start because of that. Is there any solution for this?
I'm using Django 1.6.11, and we also use Django South for database migrations (in case that it's related somehow). | 6 | 1 | 1.2 | 0 | true | 45,638,008 | 1 | 2,860 | 1 | 0 | 0 | 45,240,311 | I did some more tests and problem only happens when you are using de development server python manage.py runserver. In that case, it forces a connection with the database.
Using an actual WSGI server it doesn't happen as @Alasdair informed.
@JohnMoutafis in the end I didn't test your solution, but that could work. | 1 | 0 | 0 | Django: How to disable Database status check at startup? | 2 | python,mysql,django,django-models,django-south | 0 | 2017-07-21T14:32:00.000 |
I use Django 1.11, PostgreSQL 9.6 and Django migration tool. I couldn't have found a way to specify the column orders. In the initial migration, changing the ordering of the fields is fine but what about migrations.AddField() calls? AddField calls can also happen for the foreign key additions for the initial migration. Is there any way to specify the ordering or am I just obsessed with the order but I shouldn't be?
Update after the discussion
PostgreSQL DBMS doesn't support positional column addition. So it is practically meaningless to expect this facility from the migration tool for column addition. | 5 | 11 | 1.2 | 0 | true | 45,261,424 | 1 | 5,299 | 2 | 0 | 0 | 45,261,303 | AFAIK, there's no officially supported way to do this, because fields are supposed to be atomic and it shouldn't be relevant. However, it messes with my obsessive-compulsive side as well, and I like my columns to be ordered for when I need to debug things in dbshell, for example. Here's what I've found you can do:
Make a migration with python manage.py makemigrations
Edit the migration file and reorder the fields in migrations.CreateModel
Good luck! | 1 | 0 | 0 | Django Migration Database Column Order | 2 | python,django,postgresql,django-models,migration | 0 | 2017-07-23T03:45:00.000 |
I use Django 1.11, PostgreSQL 9.6 and Django migration tool. I couldn't have found a way to specify the column orders. In the initial migration, changing the ordering of the fields is fine but what about migrations.AddField() calls? AddField calls can also happen for the foreign key additions for the initial migration. Is there any way to specify the ordering or am I just obsessed with the order but I shouldn't be?
Update after the discussion
PostgreSQL DBMS doesn't support positional column addition. So it is practically meaningless to expect this facility from the migration tool for column addition. | 5 | 0 | 0 | 0 | false | 59,406,349 | 1 | 5,299 | 2 | 0 | 0 | 45,261,303 | I am not 100% sure about the PostgreSQL syntax but this is what it looks like in SQL after you have created the database. I'm sure PostgreSQL would have an equivalent:
ALTER TABLE yourtable.yourmodel
CHANGE COLUMN columntochange columntochange INT(11) NOT NULL AFTER columntoplaceunder;
Or if you have a GUI (mysql workbench in my case) you can go to the table settings and simply drag and drop colums as you wish and click APPLY. | 1 | 0 | 0 | Django Migration Database Column Order | 2 | python,django,postgresql,django-models,migration | 0 | 2017-07-23T03:45:00.000 |
I have a django sql explorer which is running with 5 queries and 3 users.
Query1
Query2
Query3
Query4
Query5
I want to give access of Query1 and Query5 to user1
and Query4 and Query2 to user2 and likewise.
my default url after somebody logins is url/explorer
Based on each user's permissions they should see only their queries, but as of now all users can see all queries.
I tried to search stackoverflow and also other places through google but there is no direct answer. Can someone point me to right resource or help me with doing this. | 1 | 0 | 0 | 0 | false | 47,134,025 | 1 | 233 | 1 | 0 | 0 | 45,320,643 | It's not possible to do with default implementation. You need to download the source code and customize as per your needs. | 1 | 0 | 0 | django sql explorer - user based query access | 1 | mysql,django,python-2.7 | 0 | 2017-07-26T07:50:00.000 |
I am writing a small program in python with pywin32 that manipulates some data in excel and I want to hide a row in order to obscure a label on one of my pivot tables.
According to MSDN the proper syntax is
Worksheet.Rows ('Row#').EntireRow.Hidden = True
When I try this in my code nothing happens - no error, nor hidden row. I have tried every combination I can think of of ranges to try and feed it but it will not hide the row in the output files.
Anyone know of a solution to this or if it is not handled by pywin?
EDIT:
Upon further debugging, I am finding that when I immediately check, the row's Hidden value is True but when I reach the save point the row is no longer hidden (another print reveals Hidden = False) | 3 | 0 | 0 | 0 | false | 45,335,421 | 0 | 1,117 | 2 | 0 | 0 | 45,334,926 | I'm not familiar with python syntax, but in VBA you dont put quotes around the row number... Ex: myWorksheet.Rows(10).EntireRow.Hidden = True | 1 | 0 | 0 | Hide row in excel not working - pywin32 | 2 | python,excel,vba,winapi,pywin32 | 0 | 2017-07-26T18:33:00.000 |
I am writing a small program in python with pywin32 that manipulates some data in excel and I want to hide a row in order to obscure a label on one of my pivot tables.
According to MSDN the proper syntax is
Worksheet.Rows ('Row#').EntireRow.Hidden = True
When I try this in my code nothing happens - no error, nor hidden row. I have tried every combination I can think of of ranges to try and feed it but it will not hide the row in the output files.
Anyone know of a solution to this or if it is not handled by pywin?
EDIT:
Upon further debugging, I am finding that when I immediately check, the row's Hidden value is True but when I reach the save point the row is no longer hidden (another print reveals Hidden = False) | 3 | 1 | 0.099668 | 0 | false | 45,335,753 | 0 | 1,117 | 2 | 0 | 0 | 45,334,926 | Turns out that a cell merge later in my program was undoing the hidden row - despite the fact that the merged cells were not in the hidden row. | 1 | 0 | 0 | Hide row in excel not working - pywin32 | 2 | python,excel,vba,winapi,pywin32 | 0 | 2017-07-26T18:33:00.000 |
What I can observe:
I am using Windows 7 64-bit.
My code (which establishes an ODBC connection with a SQL server on the network, simple reading operations only) is written in Python 3.6.2 32-bit.
I pip installed pyodbc, so I assume that was 32-bit as well.
I downloaded and installed the 64-bit "Microsoft® ODBC Driver 13.1 for SQL Server®" from the Microsoft website.
My Python code connects to other computers on the network, which run Server 2003 32-bit and either SQL Server 2005 (32-bit) or SQL Server 2008 (32-bit).
The setup works.
Moreover: cursory test shows that, the above setup can successfully connect to a computer with Microsoft server2008(64bit) running sql2012(64bit) with the configuration under "SQL Server Network Connection (32bit)" being empty (meaing, the 32bit dll is missing), while the default 64 bit network connection configuration contains the usual config options like ip adress and listening port info.
My own explanation:
[1] the client and the server's OS and ODBC interfaces can be of any 32/64 bit combination, but the protocol that travels thru the network between my computer and the sql computer will be identical.
[2] 32 bit python+pyodbc can talk to microsoft's 64bit odbc driver, because... 32 bit python knows how to use a 64 bit DLL...? | 0 | 1 | 1.2 | 0 | true | 45,365,583 | 0 | 1,620 | 1 | 0 | 0 | 45,362,440 | A 32bit application can NOT invoke a 64bit dll, so python 32bit can not talk to a 64bit driver for sure.
msodbc driver for sql server is in essence a dll file: msodbcsql13.dll
I just found out (which is not even mentioned by microsoft) that "odbc for sql server 13.1 x64" will install a 64bit msodbcsql13.dll in system32 and a 32bit msodbcsql13.dll in SysWOW64 ( 32bit version of "system32" on a 64bit windows system)
I can not however be certain that the network protocol between a 32bit client talking to 64bit sql server will be the same as a 64bit client talking to a 64bit sql server. But, I believe that, once a request is put on the network by the client to the server, 32bit or 64bit doesn't matter anymore. Someone please comment on this | 1 | 0 | 0 | 32bit pyodbc for 32bit python (3.6) works with microsoft's 64 bit odbc driver. Why? | 1 | python-3.x,odbc,driver,32bit-64bit,pyodbc | 0 | 2017-07-27T23:12:00.000 |
I have a web application made with Symfony2 (a PHP framework),
so there is a MySQL database handled by Doctrine2 PHP code.
Now I want to control this DB from a Python script.
Of course I can access the DB directly from Python.
However, that is complex and might break the Doctrine2 conventions.
Is there a good way to access database via php doctrine from python?? | 0 | 0 | 0 | 0 | false | 45,375,722 | 0 | 153 | 1 | 0 | 0 | 45,371,167 | You can try the Django ORM or SQL Alchemy but the configuration of the models have to be done very carefully. Maybe you can write a parser from Doctrine2 config files to Django models. If you do, open source it please. | 1 | 0 | 0 | How to access doctrine database made by php from python | 1 | php,python,symfony,frameworks | 1 | 2017-07-28T10:31:00.000 |
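A sketch of the SQLAlchemy route (connection string and table name are invented, and the reflect=True form is the pre-2.0 API): reflect the tables Doctrine already created instead of redeclaring them by hand.
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("mysql+pymysql://user:secret@localhost/symfony_db")
Base = automap_base()
Base.prepare(engine, reflect=True)   # builds mapped classes from the existing Doctrine tables
Article = Base.classes.article       # hypothetical table name
session = Session(engine)
for row in session.query(Article).limit(10):
    print(row.id)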
AFAIU and from docs, RealDictCursor is a specialized DictCursor that enables to access columns only from keys (aka columns name), whereas DictCursor enables to access data both from keys or index number.
I was wondering why RealDictCursor has been implemented if DictCursor offers more flexibility? Is it performance-wise (or memory-wise) so different (in favor of RealDictCursor I imagine...)?
In other words, what are RealDictCursor use cases vs DictCursor? | 15 | -1 | -0.099668 | 0 | false | 54,212,351 | 0 | 14,075 | 1 | 0 | 0 | 45,399,347 | class psycopg2.extras.RealDictCursor(*args, **kwargs)
A cursor that uses a real dict as the base type for rows. Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the generic DictCursor instead of RealDictCursor.
class psycopg2.extras.RealDictConnection
A connection that uses RealDictCursor automatically.
Note: not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=RealDictCursor) instead of RealDictConnection.
class psycopg2.extras.RealDictRow(cursor)
A dict subclass representing a
data record. | 1 | 0 | 1 | psycopg2: DictCursor vs RealDictCursor | 2 | python,python-3.x,postgresql,psycopg2 | 0 | 2017-07-30T11:32:00.000 |
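To make the difference concrete, a small sketch (DSN, table and column names are placeholders): DictCursor rows answer to both an index and a key, while RealDictCursor rows are plain dicts and answer to keys only.
import psycopg2
import psycopg2.extras

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
cur.execute("SELECT id, name FROM users LIMIT 1")
row = cur.fetchone()
print(row[0], row['name'])   # DictCursor: index and key both work

cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
cur.execute("SELECT id, name FROM users LIMIT 1")
row = cur.fetchone()
print(row['name'])           # RealDictCursor: keys only; row[0] would raise an error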
I have a project that :
fetches data from active directory
fetches data from different services based on active directory data
aggregates data
about 50,000 rows have to be added to the database every 15 minutes
I'm using PostgreSQL as the database and Django as the ORM tool, but I'm not sure that Django is the right tool for such a project. I have to drop and add 50,000 rows of data and I'm worried about performance.
Is there another way to do such process? | 0 | 0 | 1.2 | 0 | true | 45,404,605 | 1 | 81 | 1 | 0 | 0 | 45,404,241 | For sure there are other ways, if that's what you're asking. But Django ORM is quite flexible overall, and if you write your queries carefully there will be no significant overhead. 50000 rows in 15 minutes is not really big enough. I am using Django ORM with PostgreSQL to process millions of records a day. | 1 | 0 | 0 | Collecting Relational Data and Adding to a Database Periodically with Python | 3 | django,python-2.7,postgresql,orm | 0 | 2017-07-30T20:12:00.000 |
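As one illustration of "writing the queries carefully", a sketch using Django's bulk_create (the Measurement model and fetched_items variable are hypothetical): it inserts the rows in a handful of SQL statements instead of one INSERT per row.
from myapp.models import Measurement  # hypothetical model

# fetched_items: the aggregated data collected from Active Directory and the other services (assumed)
rows = [Measurement(source=item['source'], value=item['value']) for item in fetched_items]
Measurement.objects.bulk_create(rows, batch_size=1000)  # a few large INSERTs instead of 50,000 small ones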
I am trying to generate a report in excel using win32com. I can get the information into the correct cells. However, one of my columns contains an ID number, and excel is formatting it as a number (displaying it in scientific notation). I have tried formatting the cell as text using sheet.Range(cell).NumberFormat = '@', which works, but will only update after the cell has been selected in the actual excel file. The same thing happens whether I format the cell before or after entering the data. Is there a way to refresh the cell formatting using win32com? I want the ID numbers to display correctly as soon as the com instance is made visible. | 0 | 2 | 1.2 | 0 | true | 45,443,851 | 0 | 328 | 1 | 0 | 0 | 45,443,395 | Pass a single leading quote to Excel ahead of the number, for example "'5307245040001" instead of "5307245040001" | 1 | 0 | 0 | Formatting does not automatically update when using excel with win32com | 1 | python,excel,number-formatting,win32com | 0 | 2017-08-01T16:43:00.000 |
I would like to use xlwings with the OPTIMIZED_CONNECTION setting set to TRUE. I would like to modify the setting but somehow cannot find where to do it. I changed the _xlwings.conf sheet name in my workbook but this seems to have no effect.
I am using the 0.11.4 version of xlwings.
Sorry for this boring question and thanks in advance for any help. | 0 | 0 | 0 | 0 | false | 45,456,886 | 0 | 702 | 1 | 0 | 0 | 45,455,892 | The add-in replaces the need for the settings in VBA in newer versions.
One can debug the xlam module using "xlwings" as a password.
This enabled me to realize that the OPTIMIZED_CONNECTION parameter is now set through "USE UDF SERVER" keyword in the xlwings.conf sheet (which does work) | 1 | 0 | 0 | xlwings VBA function settings edit | 1 | python,windows,xlwings | 0 | 2017-08-02T08:47:00.000 |
So i am writing a python3 app with kivy and i want to have some data stored in a database using sqlite.
The user needs to have access to that data from the first time he opens the app
Is there a way to possibly make it so that when i launch the app, the user that downloads it, will already have the data i stored, like distribute the database along with the app? so that i don't have to create it for every user.
I have searched here and there but haven't found an answer yet
Thank you in advance | 1 | 1 | 1.2 | 0 | true | 45,489,681 | 0 | 310 | 1 | 0 | 0 | 45,483,128 | Just include the database file in the apk, as you would any other file. | 1 | 1 | 0 | Android-How can i attach SQLite database in python app | 1 | android,python-3.x,sqlite,kivy | 0 | 2017-08-03T11:39:00.000 |
So I was trying learn sqlite and how to use it from Ipython notebook, and I have a sqlite object named db.
I am executing this command:
sel = """SELECT * FROM candidates;"""
c = db.cursor().execute(sel)
and when I do this in the next cell:
c.fetchall()
it does print out all the rows, but when I run this same command again, i.e. I run
c.fetchall() again, it doesn't print out anything; it just displays two square brackets with nothing inside them. But when I run the above first command, i.e. c=db.cursor().execute(sel), and then run db.fetchall(), it again prints out the table.
This is very weird and I don't understand it, what does this mean? | 0 | 1 | 1.2 | 0 | true | 45,498,306 | 0 | 189 | 1 | 0 | 0 | 45,498,188 | That is because .fetchall() exhausts the cursor (c), leaving it positioned past the last row.
If you want to query your DB again, you should call .execute again.
Or, if you just want to use your fetched data again, you can store c.fetchall() into your variable. | 1 | 0 | 0 | Weird behavior by db.cursor.execute() | 2 | python,sqlite,ipython | 0 | 2017-08-04T04:16:00.000 |
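A tiny sketch of that last suggestion, reusing db and sel from the question: store the fetched rows in a variable and reuse the list as often as you like.
c = db.cursor().execute(sel)
rows = c.fetchall()   # consume the cursor once
print(rows)           # prints all rows
print(rows)           # still prints them; the stored list is not exhausted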
I am using django 1.10 and python 3.6.1
when executing
get_or_none(models.Character, pk=0), with SQL's get method, the query returns a hashmap i.e.: <Character: example>
How can I extract the value example?
I tried .values(), I tried iterating, I tried .Character
nothing seems to work, and I can't find a solution in the documentation.
Thank you, | 0 | 0 | 0 | 0 | false | 45,501,557 | 1 | 414 | 1 | 0 | 0 | 45,500,972 | @Daniel Roseman helped me understand the answer.
SOLVED:
What I was getting from the query was the model of character, so I couldn't have accessed it thru result.Character but thru result.Field_Inside_Of_Character | 1 | 0 | 0 | Django SQL get query returns a hashmap, how to access the value? | 3 | python,mysql,sql,django | 0 | 2017-08-04T07:43:00.000 |
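In other words, a short sketch (the name field is hypothetical): the query returns a Character instance, and the value is read from one of its model fields.
character = get_or_none(models.Character, pk=0)
if character is not None:
    print(character.name)   # prints 'example', assuming the model defines a `name` field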
I have to SSH into 120 machines and make a dump of a table in databases and export this back on to my local machine every day, (same database structure for all 120 databases).
There isn't a field in the database that I can extract the name from to be able to identify which one it comes from, it's vital that it can be identified, as it's for data analysis.
I'm using the Python tool Fabric to automate the process and export the CSV on to my machine..
fab -u PAI -H 10.0.0.35,10.0.0.XX,10.0.0.0.XX,10.0.0.XX -z 1 cmdrun:"cd /usr/local/mysql/bin && ./mysql -u root -p -e 'SELECT * FROM dfs_va2.artikel_trigger;' > /Users/admin/Documents/dbdump/dump.csv" download:"/Users/johnc/Documents/Imports/dump.csv"
Above is what I've got working so far but clearly, they'll all be named "dump.csv" is there any awesome people out there can give me a good idea on how to approach this? | 0 | 0 | 0 | 0 | false | 45,508,534 | 0 | 62 | 1 | 1 | 0 | 45,508,137 | You can try to modify your command as follow:
mysql -uroot -p{your_password} -e 'SELECT * FROM dfs_va2.artikel_trigger;' > /Users/admin/Documents/dbdump/$(hostname)_dump.csv" download:"/Users/johnc/Documents/Imports/$(hostname)_dump.csv"
hostname returns current machine name so all your files should be unique (of course if machines have unique names)
Also you don't need to navigate to /bin/mysql every time, you can use simply mysql or absolute path /usr/local/mysql/bin/mysql | 1 | 0 | 0 | Best way to automate file names of multiple databases | 1 | python,automation,fabric,devops | 0 | 2017-08-04T13:28:00.000 |
I am trying to include sqlite3 in an electron project I am getting my hands dirty with. I have never used electron, nor Node before, excuse my ignorance. I understand that to do this on Windows, I need Python installed, I need to download sqlite3, and I need to install it.
As per the NPM sqlite3 page, I am trying to install it using npm install --build-from-source
It always fails with
unpack_sqlite_dep
'python' is not recognized as an internal or external command,
operable program or batch file.
I have Python 2.7 installed and the path has been added to environment variable PATH. I can verify that if I type 'python' in cmd, I get the same response. BUT, if I type 'py', it works....
So, my question is: how can I make node-gyp use the 'py' command instead of 'python' when trying to unpack sqlite3?
If this is not possible, how can I make 'python' an acceptable command to use?
I am using Windows 10 if this helps. Also, please let me know if I can do this whole procedure in a different way.
Thanks for any help! | 0 | 0 | 0 | 0 | false | 45,533,423 | 0 | 259 | 1 | 0 | 0 | 45,527,497 | This has been resolved....
Uninstalled Python 2.7.13. Reinstalled, added path to PATH variable again, now command 'python' works just fine... | 1 | 0 | 0 | Failing to install sqlite3 plugin for electron project on windows | 1 | python-2.7,sqlite,electron,node-gyp | 0 | 2017-08-06T00:18:00.000 |
I have two database in mysql that have tables built from another program I have wrote to get data, etc. However I would like to use django and having trouble understanding the model/view after going through the tutorial and countless hours of googling. My problem is I just want to access the data and displaying the data. I tried to create routers and done inspectdb to create models. Only to get 1146 (table doesn't exist issues). I have a unique key.. Lets say (a, b) and have 6 other columns in the table. I just need to access those 6 columns row by row. I'm getting so many issues. If you need more details please let me know. Thank you. | 0 | 0 | 0 | 0 | false | 45,529,591 | 1 | 104 | 1 | 0 | 0 | 45,529,142 | inspectdb is far from being perfect. If you have an existing db with a bit of complexity you will end up probably changing a lot of code generated by this command. One you're done btw it should work fine. What's your exact issue? If you run inspectdb and it creates a model of your table you should be able to import it and use it like a normal model, can you share more details or errors you are getting while querying the table you're interested in? | 1 | 0 | 0 | Django and Premade mysql databases | 1 | python,mysql,django | 0 | 2017-08-06T06:14:00.000 |
I have a task which would really benefit from implementing partitioned tables, but I am torn because Postgres 10 will be coming out relatively soon.
If I just build normal tables and handle the logic with Python format strings to ensure that my data is loaded to the correct tables, can I turn this into a partition easily later?
Can I upgrade Postgres 9.6 to 10 right now? Or is that not advisable?
Should I install an extension like pg_partman?
My format string approach would just create separate tables (f{server}{source}{%Y%m}) and then I would union them together I suppose. Hopefully, I could eventually create a master table though without tearing anything down. | 1 | 1 | 1.2 | 0 | true | 45,537,959 | 0 | 130 | 1 | 0 | 0 | 45,535,616 | Pg 10 partitioning right now is functionally the same as 9.6, just with prettier notation. Pretty much anything you can do in Pg 10, you can also do in 9.6 with table-inheritance based partitioning, it's just not as convenient.
It looks like you may not have understood that table inheritance is used for partitioning in 9.6, since you refer to doing big UNIONs. This is unnecessary, PostgreSQL does it for you if you do inheritance-based partitioning. You can also have triggers that route inserts into the parent table into child tables, though it's more efficient for the application to route tuples like you suggest, by inserting directly into partitions. This will also work in PostgreSQL 10.
Pg's new built-in partitioning doesn't yet offer any new features you can't get with inheritance, like support for unique constraints across partitions, FKs referencing partitioned tables, etc. So there's really no reason to wait.
Just study up on how to do partitioning on 9.6.
I don't know if you can convert 9.6-style manual partitioning into PostgreSQL 10 native partitioning without copying the data. Ask on the mailing list or post a new specific question.
That said... often when people think they need partitioning, they don't. How sure are you that it's worth it? | 1 | 0 | 0 | Postgres partitioning options right now? | 1 | python,postgresql | 0 | 2017-08-06T19:09:00.000 |
I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script.
When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission denied error.
I tried to remove the read only mode on the folder I'm working on, I have all permissions on the computer, I clearly specified the save directory of my excel workbooks.
Any idea on what could be the issue? | 0 | 0 | 0 | 0 | false | 45,539,442 | 0 | 2,830 | 3 | 0 | 0 | 45,539,241 | make sure you have write permission in order to create a excel temporary lock file in said directory... | 1 | 0 | 0 | openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed | 4 | python,excel,openpyxl | 0 | 2017-08-07T04:01:00.000 |
I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script.
When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission denied error.
I tried to remove the read only mode on the folder I'm working on, I have all permissions on the computer, I clearly specified the save directory of my excel workbooks.
Any idea on what could be the issue? | 0 | 6 | 1 | 0 | false | 50,027,342 | 0 | 2,830 | 3 | 0 | 0 | 45,539,241 | Windows does not let you modify open Excel files in another program -- only Excel may modify open Excel files. You must close the file before modifying it with the script. (This is one nice thing about *nix systems.) | 1 | 0 | 0 | openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed | 4 | python,excel,openpyxl | 0 | 2017-08-07T04:01:00.000 |
I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script.
When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission denied error.
I tried to remove the read only mode on the folder I'm working on, I have all permissions on the computer, I clearly specified the save directory of my excel workbooks.
Any idea on what could be the issue? | 0 | 0 | 0 | 0 | false | 50,026,988 | 0 | 2,830 | 3 | 0 | 0 | 45,539,241 | I've had this issue with Excel files that are located in synced OneDrive folders. If I copy the file to a unsynced directory, openpyxl no longer has problems reading the .xlsx file while it is open in Excel. | 1 | 0 | 0 | openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed | 4 | python,excel,openpyxl | 0 | 2017-08-07T04:01:00.000 |
I would like to host a database on my raspberry pi to which I can access from any device. I would like to access the contents of the database using python.
What I've done so far:
I installed the necessary mysql packages, including apache 2.
I created my first database which I named test.
I wrote a simple PHP script that connects and displays all the contents of my simple database. The script is located on the Raspberry Pi at /var/www/html and is executed when I enter the following from my laptop: (192.168.3.14/select.php)
Now my goal is to be able to connect to the database using python from my laptop. But I seem to have an error connecting to it, this is what I wrote to connect to it.
db = MySQLdb.connect("192.168.3.14","root","12345","test" )
Any help or direction is appreciated. | 0 | 1 | 0.066568 | 0 | false | 45,593,914 | 0 | 104 | 1 | 0 | 0 | 45,593,608 | on the terminal of your raspi use the following command:
mysql -u <username> -p -h <hostname> --port <port>
where you switch out your hostname with your ip address. since currently you can only connect via local host | 1 | 0 | 0 | Raspberry Pi Database Server | 3 | php,python,mysql | 1 | 2017-08-09T14:31:00.000 |
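A sketch of the Python side once remote access is allowed (the MySQL server on the Pi must listen on the LAN address and the user must be granted access from remote hosts; the table name below is a placeholder):
import MySQLdb   # pymysql works with the same call signature if MySQLdb is unavailable

db = MySQLdb.connect(host="192.168.3.14", user="root",
                     passwd="12345", db="test", port=3306)
cur = db.cursor()
cur.execute("SELECT * FROM some_table")   # placeholder table name
for row in cur.fetchall():
    print(row)
db.close()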
I want to export data from Cassandra to Json file, because Pentaho didn't support my version of Cassandra 3.10 | 1 | 0 | 0 | 0 | false | 60,649,389 | 0 | 3,302 | 1 | 0 | 0 | 45,607,301 | You can use bash redirction to get json file.
cqlsh -e "select JSON * from ${keyspace}.${table}" | awk 'NR>3 {print $0}' | head -n -2 > table.json | 1 | 0 | 0 | How to export data from cassandra to Json file using Python or other language? | 4 | python,json,cassandra,cqlsh | 0 | 2017-08-10T07:35:00.000 |
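For the Python route asked about in the title, a minimal sketch with the DataStax cassandra-driver; the contact point, keyspace and table are placeholders:
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])                 # placeholder contact point
session = cluster.connect('my_keyspace')         # placeholder keyspace

# SELECT JSON returns one JSON text column per row
rows = session.execute('SELECT JSON * FROM my_table')
with open('table.json', 'w') as f:
    f.write('[%s]' % ',\n'.join(row[0] for row in rows))

cluster.shutdown()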
I have a task to import multiple Excel files in their respective sql server tables. The Excel files are of different schema and I need a mechanism to create a table dynamically; so that I don't have to write a Create Table query. I use SSIS, and I have seen some SSIS articles on the same. However, it looks I have to define the table anyhow. OpenRowSet doesn't work well in case of large excel files. | 0 | 0 | 0 | 0 | false | 45,614,658 | 0 | 59 | 1 | 0 | 0 | 45,610,737 | You can try using BiML, which dynamically creates packages based on meta data.
The only other possible solution is to write a script task. | 1 | 0 | 0 | Multiple Excel with different schema Upload in SQL | 1 | python,sql,sql-server,excel,ssis | 0 | 2017-08-10T10:09:00.000 |
I have python programs that use python's xlwings module to communicate with excel. They work great, but I would like to run them using a button from excel. I imported xlwings to VBA and use the RunPython command to do so. That also works great, however the code I use with RunPython is something like:
"from filename import function;function()"
which requires me to make the entire python program a function. This is annoying when I go back to make edits to the python program because every variable is local. Any tips on running the file from RunPython without having to create a function out of it? | 0 | 2 | 1.2 | 0 | true | 45,622,510 | 0 | 873 | 1 | 0 | 0 | 45,621,637 | RunPython basically just does what it says: run python code. So to run a module rather than a single function, you could do: RunPython("import filename"). | 1 | 0 | 1 | Running Python from VBA using xlwings without defining function | 1 | python,vba,excel,xlwings | 0 | 2017-08-10T19:01:00.000 |
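A small sketch of what the answer suggests: leave the script at module level and trigger it from VBA with RunPython("import myscript"); the module, sheet and range names are made up:
# myscript.py -- module-level code, executed when RunPython("import myscript") runs
import xlwings as xw

wb = xw.Book.caller()                    # the workbook whose button called RunPython
sheet = wb.sheets[0]
data = sheet.range('A1:A10').value       # placeholder input range
sheet.range('B1').value = 'processed %d cells' % len(data)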
I have a Python Scraper that I run periodically in my free tier AWS EC2 instance using Cron that outputs a csv file every day containing around 4-5000 rows with 8 columns. I have been ssh-ing into it from my home Ubuntu OS and adding the new data to a SQLite database which I can then use to extract the data I want.
Now I would like to try the free tier AWS MySQL database so I can have the database in the Cloud and pull data from it from my terminal on my home PC. I have searched around and found no direct tutorial on how this could be done. It would be great if anyone that has done this could give me a conceptual idea of the steps I would need to take. Ideally I would like to automate the updating of the database as soon as my EC2 instance updates with a new csv table. I can do all the de-duping once the table is in the aws MySQL database.
Any advice or link to tutorials on this most welcome. As I stated, I have searched quite a bit for guides but haven't found anything on this. Perhaps the concept is completely wrong and there is an entirely different way of doing it that I am not seeing? | 0 | 1 | 0.099668 | 0 | false | 45,643,778 | 1 | 158 | 1 | 0 | 0 | 45,630,562 | The problem is you don't have access to RDS filesystem, therefore cannot upload csv there (and import too).
Modify your Python Scraper to connect to DB directly and insert data there. | 1 | 0 | 0 | Exported scraped .csv file from AWS EC2 to AWS MYSQL database | 2 | python,mysql,database,database-design,amazon-ec2 | 0 | 2017-08-11T08:36:00.000 |
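A rough sketch of the direct-insert approach from the answer, assuming pymysql; the RDS endpoint, credentials, table and column names are placeholders:
import csv
import pymysql

conn = pymysql.connect(host='mydb.xxxxxxxx.rds.amazonaws.com',   # placeholder endpoint
                       user='scraper', password='secret', db='scraped')
with open('daily_scrape.csv') as f, conn.cursor() as cur:
    reader = csv.reader(f)
    next(reader)                       # skip the header row, if there is one
    cur.executemany(
        "INSERT IGNORE INTO listings (col1, col2, col3) VALUES (%s, %s, %s)",
        [row[:3] for row in reader])   # hypothetical 3-column subset of the 8 columns
conn.commit()
conn.close()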
I am looking for a solution to build an application with the following features:
A database compound of -potentially- millions of rows in a table, that might be related with a few small ones.
Fast single queries, such as "SELECT * FROM table WHERE field LIKE %value"
It will run on a Linux Server: Single node, but maybe multiple nodes in the future.
Do you think Python and Hadoop is a good choice?
Where could I find a quick example written in Python to add/retrieve information to Hadoop in order to see a proof of concept running with my one eyes and take a decision?
Thanks in advance! | 1 | 1 | 0.197375 | 0 | false | 45,631,639 | 0 | 43 | 1 | 0 | 0 | 45,631,450 | Not sure whether these questions are on topic here, but fortunately the answer is simple enough:
In these days a million rows is simply not that large anymore, even Excel can hold more than a million.
If you have a few million rows in a large table, and want to run quick small select statements, the answer is that you are probably better off without Hadoop.
Hadoop is great for sets of 100 million rows, but does not scale down too well (in performance and required maintenance).
Therefore, I would recommend you to try using a 'normal' database solution, like MySQL. At least untill your data starts growing significantly.
You can use python for advanced analytical processing, but for simple queries I would recommend using SQL. | 1 | 0 | 0 | is the choice of Python and Hadoop a good one for this scenario? | 1 | python,hadoop,hadoop-streaming | 0 | 2017-08-11T09:20:00.000 |
I'm working on a little Python 3 server and I want to download a SQLite database from this server. But when I tried that, I discovered that the downloaded file is larger than the original: the original file size is 108K, the downloaded file size is 247K. I've tried this many times, and each time I had the same result. I also checked the sums with sha256, which gave different results.
Here is my downloader.py file :
import cgi
import os
print('Content-Type: application/octet-stream')
print('Content-Disposition: attachment; filename="Library.db"\n')
db = os.path.realpath('..') + '/Library.db'
with open(db,'rb') as file:
    print(file.read())
Thanks in advance !
EDIT :
I tried that :
$ ./downloader > file
file's size is also 247K. | 0 | 0 | 0 | 0 | false | 45,646,512 | 0 | 445 | 1 | 0 | 0 | 45,646,249 | Well, I've finally found the solution. The problem (which I didn't see first) was that the server sent plain text to client. Here is one way to send binary data :
import cgi
import os
import shutil
import sys
print('Content-Type: application/octet-stream; file="Library.db"')
print('Content-Disposition: attachment; filename="Library.db"\n')
sys.stdout.flush()
db = os.path.realpath('..') + '/Library.db'
with open(db,'rb') as file:
    shutil.copyfileobj(file, sys.stdout.buffer)
But if someone has a better syntax, I would be glad to see it ! Thank you ! | 1 | 0 | 0 | File downloaded larger than original | 1 | python-3.x,cgi | 0 | 2017-08-12T03:33:00.000 |
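For context on why the original download grew: print(file.read()) writes the text repr of a bytes object (b'...' with escape sequences) plus a newline, not the raw bytes. An equivalent sketch of the answer above using a plain binary write:
import os
import sys

print('Content-Type: application/octet-stream')
print('Content-Disposition: attachment; filename="Library.db"\n')
sys.stdout.flush()                        # make sure the headers go out first

db = os.path.realpath('..') + '/Library.db'
with open(db, 'rb') as f:
    sys.stdout.buffer.write(f.read())     # raw bytes, no repr, no extra newline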
I'm working a lot with Excel xlsx files which I convert using Python 3 into Pandas dataframes, wrangle the data using Pandas and finally write the modified data into xlsx files again.
The files contain also text data which may be formatted. While most modifications (which I have done) have been pretty straight forward, I experience problems when it comes to partly formatted text within a single cell:
Example of cell content: "Medical device whith remote control and a Bluetooth module for communication"
The formatting in the example is bold and italic but may also be a color.
So, I have two questions:
Is there a way of preserving such formatting in xlsx files when importing the file into a Python environment?
Is there a way of creating/modifying such formatting using a specific python library?
So far I have been using Pandas, OpenPyxl, and XlsxWriter but have not succeeded yet. So I shall appreciate your help!
As pointed out below in a comment and the linked question OpenPyxl does not allow for this kind of formatting:
Any other ideas on how to tackle my task? | 0 | 0 | 0 | 1 | false | 45,689,273 | 0 | 190 | 1 | 0 | 0 | 45,688,168 | i have been recently working with openpyxl. Generally if one cell has the same style(font/color), you can get the style from cell.font: cell.font.bmeans bold andcell.font.i means italic, cell.font.color contains color object.
but if the style is different within one cell, this cannot help. only some minor indication on cell.value | 1 | 0 | 0 | Modifying and creating xlsx files with Python, specifically formatting single words of a e.g. sentence in a cell | 1 | python,excel,pandas,openpyxl,xlsxwriter | 0 | 2017-08-15T07:22:00.000 |
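A small sketch of the cell-level inspection described in the answer (it only helps when the whole cell shares one format; the filename and cell reference are placeholders):
from openpyxl import load_workbook

wb = load_workbook('report.xlsx')     # placeholder filename
cell = wb.active['B2']                # placeholder cell

print(cell.value)                     # plain text; partial (rich-text) formatting is lost
print(cell.font.b)                    # True if the whole cell is bold
print(cell.font.i)                    # True if the whole cell is italic
print(cell.font.color)                # a Color object, or None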
I've been pouring over everywhere I can to find an answer to this, but can't seem to find anything:
I've got a batch update to a MySQL database that happens every few minutes, with Python handling the ETL work (I'm pulling data from web API's into the MySQL system).
I'm trying to get a sense of what kinds of potential impact (be it positive or negative) I'd see by using either multithreading or multiprocessing to do multiple connections & inserts of the data simultaneously. Each worker (be it thread or process) would be updating a different table from any other worker.
At the moment I'm only updating a half-dozen tables with a few thousand records each, but this needs to be scalable to dozens of tables and hundreds of thousands of records each.
Every other resource I can find out there addresses doing multithreading/processing to the same table, not a distinct table per worker. I get the impression I would definitely want to use multithreading/processing, but it seems everyone's addressing the one-table use case.
Thoughts? | 0 | 0 | 0 | 0 | false | 45,702,416 | 0 | 325 | 1 | 0 | 0 | 45,702,192 | For one I wrote in C#, I decided the best work partitioning was each "source" having a thread for extraction, one for each transform "type", and one to load the transformed data to each target.
In my case, I found multiple threads per source just ended up saturating the source server too much; it became less responsive overall (to even non-ETL queries) and the extractions didn't really finish any faster since they ended up competing with each other on the source. Since retrieving the remote extract was more time consuming than the local (in memory) transform, I was able to pipeline the extract results from all sources through one transformer thread/queue (per transform "type"). Similarly, I only had a single target to load the data to, so having multiple threads there would have just monopolized the target.
(Some details omitted/simplified for brevity, and due to poor memory.)
...but I'd think we'd need more details about what your ETL process does. | 1 | 0 | 1 | Python Multithreading/processing gains for inserts to different tables in MySQL? | 2 | python,mysql,python-multiprocessing,python-multithreading | 0 | 2017-08-15T21:59:00.000 |
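A minimal sketch of the one-worker-per-table layout from the question, assuming pymysql; the table names, columns and the extraction stub are all hypothetical:
from concurrent.futures import ThreadPoolExecutor
import pymysql

TABLES = ['weather', 'checkins', 'prices']          # placeholder per-worker tables

def fetch_from_api(table):
    # stand-in for the real extraction/transform step
    return [('sample', 1), ('sample', 2)]

def load_table(table):
    rows = fetch_from_api(table)
    # one connection per worker: connections should not be shared across threads
    conn = pymysql.connect(host='dbhost', user='etl', password='secret', db='warehouse')
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO {} (col_a, col_b) VALUES (%s, %s)".format(table), rows)
        conn.commit()
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=len(TABLES)) as pool:
    for future in [pool.submit(load_table, t) for t in TABLES]:
        future.result()                              # re-raise any worker exception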
I'm trying to create app using the command python3 manage.py startapp webapp but i'm getting an error that says:
django.core.exceptions.ImproperlyConfigured: Error loading either
pysqlite2 or sqlite3 modules (tried in that order): No module named
'_sqlite3'
So I tried installing sqlite3 using pip install sqlite3 but I got this error:
Using cached sqlite3-99.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-build-dbz_f1ia/sqlite3/setup.py", line 2, in
raise RuntimeError("Package 'sqlite3' must not be downloaded from pypi")
RuntimeError: Package 'sqlite3' must not be downloaded from pypi
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-dbz_f1ia/sqlite3/
I tried running this command: sudo apt install sqlite3 but it says sudo is not a valid command, even apt isn't for some reason. I'm running Python3.6.2. I installed Python on my Godaddy hosting and i'm using SSH to install everything. I installed Python and setup a virtualenv. Afterwards, I installed Django and created a Django project. How can I fix these errors to successfully create a Django app? | 3 | 2 | 0.379949 | 0 | false | 45,706,624 | 1 | 5,897 | 1 | 0 | 0 | 45,704,177 | sqlite3 is part of the standard library. You don't have to install it.
If it's giving you an error, you probably need to install your distribution's python-dev packages, eg with sudo apt-get install python-dev. | 1 | 0 | 0 | Downloading sqlite3 in virtualenv | 1 | python,django,sqlite | 0 | 2017-08-16T02:25:00.000 |
Working with Python 2.7 and I'd like to add new sheets to a current Excel workbook indexed to a specific position. I know Openpyxl's create_sheet command will allow me to specify an index for a new sheet within an existing workbook, but there's a catch: Openpyxl will delete charts from an existing Excel workbook if opened & saved. And my workbook has charts that I don't wish to be deleted.
Is there another way I can open this workbook, create a a few blank sheets that are located precisely after the existing first sheet, all without deleting any of the workbook's charts? | 0 | 0 | 1.2 | 0 | true | 45,729,433 | 0 | 70 | 1 | 0 | 0 | 45,724,575 | openpyxl 2.5 includes read support for charts | 1 | 0 | 0 | Add indexed sheets to Excel workbook w/out Openpyxl? | 1 | excel,python-2.7,openpyxl | 0 | 2017-08-16T23:45:00.000 |
I have configured the server to use MySQL Cluster. The Cluster architecture is as follows:
One Cluster Manager(ip1)
Two Data Nodes (ip2,ip3)
Two SQL Nodes(ip4,ip5)
My Question: Which node should I use to connect from Python application? | 0 | 3 | 1.2 | 0 | true | 45,729,005 | 0 | 915 | 1 | 0 | 0 | 45,728,111 | You have to call SQL nodes from your application. Use comma separated ip addresses for this. In your code use
DB_HOST = "ip4, ip5" | 1 | 0 | 0 | Connecting to mysql cluster from python application | 1 | python,mysql,mysql-cluster | 0 | 2017-08-17T06:35:00.000 |
I would like to know how to insert into the same MongoDB collection from different Python scripts running at the same time using pymongo.
Any help or guidance would be much appreciated, because I couldn't find any clear documentation about this in pymongo or MongoDB yet.
thank in advance | 1 | 1 | 1.2 | 0 | true | 45,737,589 | 0 | 754 | 1 | 0 | 0 | 45,737,486 | You should be able to just insert into the collection in parallel without needing to do anything special. If you are updating documents then you might find there are issues with locking, and depending on the storage engine which your MongoDB is using there may be collection locking, but this should not affect how you write your python script. | 1 | 0 | 1 | Writing in parallel to MongoDb collection from python | 1 | python,mongodb,python-3.x,pymongo,pymongo-3.x | 0 | 2017-08-17T14:11:00.000 |
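A sketch of what each script can do independently; the server serializes concurrent writes to the same collection, so no extra coordination is needed (the URI, database and collection names are placeholders):
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017/')   # placeholder URI
coll = client.mydb.events                            # placeholder db/collection

coll.insert_one({'source': 'script_a', 'value': 42})
coll.insert_many([{'source': 'script_b', 'value': v} for v in range(3)])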
Let me explain the problem
We get real-time data, as much as 0.2 million records per day.
Some of these records are of special significance. The attributes that mark them as significant are pushed into a reference collection. Let us say each row in the Master Database has the following attributes:
a. ID b. Type c. Event 1 d. Event 2 e. Event 3 f. Event 4
For the special markers, we identify them as
Marker1 -- Event 1 -- Value1
Marker2 -- Event 3 -- Value1
Marker3 -- Event 1 -- Value2
and so on. We can add 10000 such markers.
Further, the attribute Type can be Image, Video, Text, Others. Hence the idea is to segregate Data based on Type, which means that we create 4 collections out of Master Collection. This is because we have to run search on collections based on Type and also run some processing.The marker data should show in a different tab on the search screen.
We shall also be running a search on Master Collection through a wild search.
We are running Crons to do these processes as
I. Dumping Data in Master Collection - Cron 1
II. Assigning Markers - Cron 2
III. Segregating Data based on Type - Cron 3
Which runs as a module. Cron 1 - Cron 2 - Cron 3.
But assigning targets and segregation takes a very long time. We are using Python as scripting language.
In fact, the crons don't seem to work at all. The cron works from the command prompt. But scheduling these in crontab does not work. We are giving absolute path to the files. The crons are scheduled at 3 minutes apart.
Can someone help? | 0 | 0 | 0 | 0 | false | 45,797,856 | 0 | 143 | 1 | 0 | 0 | 45,769,111 | Yes, I also faced this problem but then I tried by moving small chunks of the data. Sharding is not the better way as per my experience regarding this kind of problem. Same thing for the replica set. | 1 | 0 | 0 | How to segregate large real time data in MongoDB | 1 | mongodb,python-3.x,cron | 0 | 2017-08-19T07:51:00.000 |
I just want to set a password to my file "file.db" (SQLite3 database), if someone trying to open this DB it has to ask password for authentication.
is there any way to do this Using python.
Thanks in Advance. | 0 | 0 | 0 | 0 | false | 45,811,504 | 0 | 52 | 1 | 0 | 0 | 45,811,440 | Asking a password when opening a file doesn't make much sense, it will take another program to do that, watching the file and intercepting the request at os level..
What you need to do is protect the file using ACL, setting the proper access rights to only desired users&groups. | 1 | 0 | 0 | protecting DB using Python | 1 | python,sqlite | 0 | 2017-08-22T07:28:00.000 |
When selecting a data source for a graph in Excel, you can specify how the graph should treat empty cells in your data set (treat as zero, connect with next data point, leave gap).
The option to set this behavior is available in xlsxwriter with chart.show_blanks_as(), but I can't find it in openpyxl. If anyone knows where to find it or can confirm that it's not present, I'd appreciate it. | 0 | 0 | 0 | 1 | false | 45,868,428 | 0 | 101 | 1 | 0 | 0 | 45,825,401 | Asked the dev about it -
There is a dispBlanksAs property of the ChartContainer but this currently isn't accessible to client code.
I looked through the source some more using that answer to guide me. The option is definitely in there, but you'd have to modify source and build locally to get at it.
So no, it's not accessible at this time. | 1 | 0 | 0 | How to replicate the "Show empty cells as" functionality of Excel graphs | 1 | python,excel,openpyxl | 0 | 2017-08-22T19:20:00.000 |
I would like to store a "set" in a database (specifically PostgreSQL) efficiently, but I'm not sure how to do that efficiently.
There are a few options that pop to mind:
store as a list ({'first item', 2, 3.14}) in a text or binary column. This has the downside of requiring parsing when inserting into the database and pulling out. For sets of text strings only, this seems to work pretty well, and the parsing is minimal. For anything more complicated, parsing becomes difficult.
store as a pickle in a binary column. This seems like it should be quick, and it is complete (anything picklable works), but isn't portable across languages.
store as json (either as a binary object or a text stream). Larger problems than just plain text, but better defined parsing.
Are there any other options? Does anyone have any experience with these? | 0 | 2 | 0.197375 | 0 | false | 45,850,429 | 0 | 76 | 1 | 0 | 0 | 45,848,956 | What you want to do is store a one-to-many relationship between a row in your table and the members of the set.
None of your solutions allow the members of the set to be queried by SQL. You can't do something like select * from mytable where 'first item' in myset. Instead you have to retrieve the text/blob and use another programming language to decode or parse it. That means if you want to do a query on the elements of the set you have to do a full table scan every time.
I would be very reluctant to let you do something like that in one of my databases.
I think you should break out your set into a separate table. By which I mean (since that is clearly not as obvious as I thought), one row per set element, indexed over primary key of the table you are referring from or, if you want to enforce no duplicates at the cost of a little extra space, primary key of the table you are referring from + set element value.
Since your set elements appear to be of heterogeneous types I see no harm in storing them as strings, as long as you normalize the numbers somehow. | 1 | 0 | 1 | How to store a "set" (the python type) in a database efficiently? | 2 | python,sql,json,postgresql,pickle | 0 | 2017-08-23T20:42:00.000 |
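A compact sketch of the separate-table layout the answer recommends, shown with sqlite3 only to keep it self-contained; the same DDL idea applies to PostgreSQL, and the schema names are made up:
import sqlite3

conn = sqlite3.connect(':memory:')    # stand-in for the real PostgreSQL database
conn.executescript("""
    CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE mytable_set (
        mytable_id INTEGER REFERENCES mytable(id),
        element    TEXT,                      -- set members normalized to strings
        PRIMARY KEY (mytable_id, element)     -- enforces set semantics (no duplicates)
    );
""")
conn.execute("INSERT INTO mytable VALUES (1, 'example row')")
conn.executemany("INSERT INTO mytable_set VALUES (1, ?)",
                 [(str(e),) for e in {'first item', 2, 3.14}])

# Membership is now queryable in SQL:
print(conn.execute("SELECT 1 FROM mytable_set "
                   "WHERE mytable_id = 1 AND element = '2'").fetchone())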
I have an excel xlsx file that I want to edit using python script.
I know that openpyxl is not able to treat data-validation but I want just to edit the value of some cells containing data-validation and then save the workbook without editing those data-validation.
For now, when I try to do that, I get an error :
UserWarning: Data Validation extension is not supported and will be
removed
and then my saved file doesn't contain anymore the data-validation.
Is there a way to tell openpyxl not to remove the data-validation when saving a workbook even if it can't read it? | 2 | 1 | 0.197375 | 0 | false | 45,863,816 | 0 | 2,941 | 1 | 0 | 0 | 45,862,917 | To be clear: openpyxl does support data validation as covered by the original OOXML specification. However, since then Microsoft has extended the options for data validation and it these that are not supported. You might be able to adjust the data validation so that it is supported. | 1 | 0 | 0 | openpyxl : data-validation read/write without treatment | 1 | python,excel,openpyxl | 0 | 2017-08-24T13:26:00.000 |
I just switched from Django 1.3.7 to 1.4.22 (on my way to updating to a higher version of Django). I am using USE_TZ = True and TIME_ZONE = 'Europe/Bucharest'. The problem I am encountering is that a DateTimeField from the DB (Postgres) that holds the value 2015-01-08 10:02:03.076+02 (with timezone) is read by my Django as 2015-01-08 10:02:03.076000 (without timezone) even though USE_TZ is True.
Any idea why this might happen? I am using Python 2.7.12 AMD64.
Thanks,
Virgil | 1 | 0 | 0 | 0 | false | 45,879,886 | 1 | 32 | 1 | 0 | 0 | 45,878,039 | Seems I needed to logout once and log in again in the app for it to work. Thanks. | 1 | 0 | 0 | Django offset-naive date from DB | 1 | python,django | 0 | 2017-08-25T09:09:00.000 |
I am using 64-bit python anaconda v4.4 which runs python v3. I have MS Access 2016 32-bit version. I would like to use pyodbc to get python to talk to Access. Is it possible to use 64-bit pyodbc to talk to a MS Access 2016 32-bit database?
I already have a number of python applications running with the 64-bit python anaconda. It will be a chore to downgrade to 32-bit python. | 10 | 4 | 0.26052 | 0 | false | 45,929,130 | 0 | 15,479 | 1 | 0 | 0 | 45,928,987 | Unfortunately, you need 32-bit Python to talk to 32-bit MS Access. However, you should be able to install a 32-bit version of Python alongside 64-bit Python. Assuming you are using Windows, during a custom install you can pick the destination path. Then use a virtualenv. For example, if you install to C:\Python36-32:
virtualenv --python=C:\Python36-32\python.exe
Good luck! | 1 | 0 | 1 | Is it possible for 64-bit pyodbc to talk to 32-bit MS access database? | 3 | python,ms-access,odbc,32bit-64bit,pyodbc | 0 | 2017-08-29T00:20:00.000 |
Hope you have a great day. I have a table with 470 columns to be exact. I am working on Django unit testing and the tests won't execute giving the error when I run command python manage.py test:
Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In
current row format, BLOB prefix of 768 bytes is stored inline
To resolve this issue I am trying to increase the innodb_page_size in MySQL my.cnf file. When I restart MySQL server after changing value in my.cnf file, MySQL won't restart.
I have tried almost every available solution on stackoverflow but no success.
MYSQL version=5.5.57
Ubuntu version = 16.04
Any help would be greatly appreciated. Thank you | 0 | 0 | 0 | 0 | false | 46,018,456 | 1 | 978 | 1 | 0 | 0 | 45,964,972 | Since I have never seen anyone use the feature of having bigger block size, I have no experience with making it work. And I recommend you not be the first to try.
Instead I offer several likely workarounds.
Don't use VARCHAR(255) blindly; make the lengths realistic for the data involved.
Don't use utf8 (or utf8mb4) for columns that can only have ASCII. Examples: postcode, hex strings, UUIDs, country_code, etc. Use CHARACTER SET ascii.
Vertically partition the table. That is spit it into two tables with the same PRIMARY KEY.
Don't splay arrays across columns; use another table and have multiple rows in it. Example: phone1, phone2, phone3. | 1 | 0 | 0 | Changing innodb_page_size in my.cnf file does not restart mysql database | 1 | python,mysql,django,unit-testing,innodb | 0 | 2017-08-30T15:58:00.000 |
when I try to connect to my application deploy at Pythonanywhere database does not working, its seems that he can't reach to him.
when I am using my computer and run the app all seems to be perfect.
any one any ideas?
Thanks very much. | 0 | 1 | 0.197375 | 0 | false | 46,028,070 | 0 | 197 | 1 | 0 | 0 | 46,013,567 | Hey after checking out I found that pythonanywhere required paid plan in order to use mlab services, or others services. | 1 | 0 | 0 | pythonanywhere with mlab(mongoDB) | 1 | mongodb,pythonanywhere,mlab | 0 | 2017-09-02T12:03:00.000 |
I'm trying to connect to a PostgreSQL database on Google Cloud using SQLAlchemy. Making a connection to the database requires specifying a database URL of the form: dialect+driver://username:password@host:port/database
I know what the dialect + driver is (postgresql), I know my username and password, and I know the database name. But I don't know how to find the host and port on the Google Cloud console. I've tried using the instance connection name, but that doesn't seem to work. Anyone know where I can find this info on Google Cloud? | 3 | 3 | 0.197375 | 0 | false | 64,040,093 | 1 | 6,507 | 1 | 1 | 0 | 46,178,062 | Hostname is the Public IP address. | 1 | 0 | 0 | What is the hostname for a Google Cloud PostgreSQL instance? | 3 | python,postgresql,google-cloud-platform,google-cloud-storage,google-cloud-sql | 0 | 2017-09-12T13:42:00.000 |
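Putting that answer into the URL format from the question; the IP address, credentials and database name below are placeholders (Postgres listens on 5432 by default, and your client network has to be authorized on the instance first):
from sqlalchemy import create_engine

engine = create_engine(
    'postgresql+psycopg2://myuser:mypassword@203.0.113.10:5432/mydatabase')
with engine.connect() as conn:
    print(conn.execute('SELECT 1').scalar())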
I have two data files which is some weird format. Need to parse it to some descent format to use that for future purposes. after parsing i end up having two formats on which one has an id and respective information pertaining to that id will be from another file.
Ex :
From file 1 i get
Name, Position, PropertyID
from file 2
PropertyId, Property1,Property2
like this i have more columns from both the file.
What is the ideal way to store this information in a flat file to serve as a database? I don't want to use a database (MySQL, MSSQL) for some reason.
Initially I thought of using a single comma-separated file, but I'll end up with so many columns that it will create problems when I update this information.
I'll be using the parsed data in some other applications using Java and Python.
can anyone suggest better way to handle this
Thanks | 0 | 0 | 0 | 0 | false | 46,181,307 | 0 | 1,067 | 1 | 0 | 0 | 46,180,651 | Ensure that you normalize your data with an ID to avoid touching so many different data columns with even a single change. Like the file2 you mentioned above, you can reduce the columns to two by having just the propertyId and the property columns. Rather than having 1 propertyId associated with 2 property in a single row you'd have 1 propertyId associated with 1 property per your example above. You need another file to correlate your two main data table. Normalizing your data like this can make your updates to them very minimal when change occurs.
file1:
owner_id | name | position |
1 | Jack Ma | CEO |
file2:
property_id | property |
101 | Hollywood Mansion |
102 | Miami Beach House |
file3:
OwnerId | PropertyId |
1 | 101
1 | 102 | 1 | 0 | 1 | Storing data in flat files | 2 | java,python,flat-file | 0 | 2017-09-12T15:44:00.000 |
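A short sketch of writing the three normalized flat files from the answer with Python's csv module (the file and column names follow the answer's example):
import csv

owners     = [(1, 'Jack Ma', 'CEO')]
properties = [(101, 'Hollywood Mansion'), (102, 'Miami Beach House')]
links      = [(1, 101), (1, 102)]

for name, header, rows in [
        ('file1.csv', ['owner_id', 'name', 'position'], owners),
        ('file2.csv', ['property_id', 'property'], properties),
        ('file3.csv', ['owner_id', 'property_id'], links)]:
    with open(name, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)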
I am running my Python script in which I write excel files to put them into my EC2 instance. However, I have noticed that these excel files, although they are created, are only put into the server once the code stops.
I guess they are kept in cache but I would like them to be added to the server straight away. Is there a "commit()" to add to the code?
Many thanks | 1 | 1 | 0.197375 | 0 | false | 46,301,367 | 0 | 116 | 1 | 0 | 0 | 46,300,696 | I guess they are kept in cache but I would like them to be added to the server straight away. Is there a "commit()" to add to the code?
No. It isn't possible to stream or write a partial xlsx file like a CSV or Html file since the file format is a collection of XML files in a Zip container and it can't be generated until the file is closed. | 1 | 0 | 0 | xlswriter on a EC2 instance | 1 | python,excel,amazon-web-services,amazon-ec2,xlsxwriter | 0 | 2017-09-19T12:40:00.000 |
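A sketch illustrating the point: with XlsxWriter the file only appears on disk once close() runs, so close each workbook as soon as it is finished rather than at the end of the whole script (the filename is a placeholder):
import xlsxwriter

workbook = xlsxwriter.Workbook('daily_report.xlsx')   # nothing is on disk yet
worksheet = workbook.add_worksheet()
worksheet.write(0, 0, 'hello')
workbook.close()    # the .xlsx is actually written (and visible on the server) here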
Django-Storages provides an S3 file storage backend for Django. It lists
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as required settings. If I am using an AWS Instance Profile to provide S3 access instead of a key pair, how do I configure Django-Storages? | 3 | 1 | 0.099668 | 0 | false | 61,942,402 | 1 | 1,069 | 1 | 0 | 0 | 46,307,447 | The docs now explain this:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, boto3 internally looks up IAM credentials. | 1 | 0 | 0 | Use Django-Storages with IAM Instance Profiles | 2 | django,amazon-s3,boto3,python-django-storages | 0 | 2017-09-19T18:28:00.000 |
I am getting the below error while running the cqlsh in cassandra 2.2.10 ?? Can somebody help me to pass this hurdle:
[root@rac1 site-packages]# $CASSANDRA_PATH/bin/cqlsh
Python Cassandra driver not installed, or not on PYTHONPATH. You might
try “pip install cassandra-driver”.
Python: /usr/local/bin/python Module load path:
[‘/opt/cassandra/apache-cassandra-2.2.10/bin/../lib/six-1.7.3-py2.py3-none-any.zip’,
‘/opt/cassandra/apache-cassandra-2.2.10/bin/../lib/futures-2.1.6-py2.py3-none-any.zip’,
‘/opt/cassandra/apache-cassandra-2.2.10/bin/../lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456’,
‘/opt/cassandra/apache-cassandra-2.2.10/bin’,
‘/usr/local/lib/python2.7/site-packages’,
‘/usr/local/lib/python27.zip’, ‘/usr/local/lib/python2.7’,
‘/usr/local/lib/python2.7/plat-linux2’,
‘/usr/local/lib/python2.7/lib-tk’, ‘/usr/local/lib/python2.7/lib-old’,
‘/usr/local/lib/python2.7/lib-dynload’]
Error: can’t decompress data; zlib not available | 1 | 0 | 0 | 0 | false | 47,167,910 | 0 | 2,223 | 1 | 1 | 0 | 46,314,983 | Cassandra uses the python driver bundled in-tree in a zip file. If your Python runtime was not built with zlib support, it cannot use the zip archive in the PYTHONPATH. Either install the driver directly (pip install) as suggested, or put a correctly configured Python runtime in your path. | 1 | 0 | 0 | Python Cassandra driver not installed, or not on PYTHONPATH | 1 | python,linux,cassandra | 0 | 2017-09-20T06:42:00.000 |
My team uses .rst/sphinx for tech doc. We've decided to do tables in csv files, using the .. csv-table:: directive. We are beginning to using sphinx-intl module for translation. Everything seems to work fine, except that I don't see any our tables int he extracted .po files. Has anyone had this experience? What are best practices for doing csv tables and using sphinx-intl? | 0 | 1 | 0.197375 | 0 | false | 46,535,864 | 0 | 89 | 1 | 0 | 0 | 46,351,068 | We tested and verified that the csv content is automatically extracted into PO files, and building a localized version places the translated strings in MO files back into the table. | 1 | 0 | 0 | How do I use sphinx-intl if I am using the .. csv-table:: directives for my tables? | 1 | internationalization,python-sphinx,restructuredtext | 1 | 2017-09-21T18:40:00.000 |
Can I get some advice on how to make a mechanism for inserts that will check whether the value of the PK is already used?
If it is not used in the table, it will insert row with number. If it is used, it will increment value and check next value if it's used. So on... | 0 | 0 | 0 | 0 | false | 46,380,171 | 0 | 26 | 1 | 0 | 0 | 46,380,101 | This is too long for a comment.
You would need a trigger in the database to correctly implement this functionality. If you try to do it in the application layer, then you will be subject to race conditions in a multi-client environment.
Within Oracle, I would recommend just using an auto-generated column for the primary key. Don't try inserting it yourself. In Oracle 12C, you can define this directly using generated always as. In earlier versions, you need to use a sequence to define the numbers and a trigger to assign them. | 1 | 0 | 0 | cx_oracle PK autoincrementarion | 1 | python,oracle,cx-oracle | 0 | 2017-09-23T13:31:00.000 |
We've had a Flask application using pymssql running for 1.5 years under Python 2.7 and SQL Server 2012. We moved the application to a new set of servers and upgraded the Flask app to Python 3.6 and a new database server to SQL Server 2016. They're both Windows servers.
Since then, we've been getting intermittent 20017 errors:
pymssql.OperationalError(20017, b'DB-Lib error message 20017, severity 9:\nUnexpected EOF from the server (xx.xx.xx.xx:1433)\nDB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (xx.xx.xx.xx:1433)\n')
Only a small percentage of the calls return this, but enough to be causing problems. I can provide specific versions of everything we're running.
One solution proposed is to switch to pyodbc, but we have hundreds of queries and stored procedure calls, many with UUIDs, which pyodbc doesn't handle nearly as cleanly as pymssql.
I've installed pymssql via a precompiled wheel (pymssql-2.1.3-cp36-cp36m-win_amd64) because pip can't build it without an older version.
Any ideas on debugging or fixing this would be helpful. | 2 | 2 | 0.379949 | 0 | false | 46,436,613 | 0 | 933 | 1 | 0 | 0 | 46,410,009 | Well, our answer was to switch to pyodbc. A few utility functions made it more or less a cut-and-paste with a few gotchas here and there, but pymssql has been increasingly difficult to build, upgrade, and use for the last few years. | 1 | 0 | 0 | Pymssql Error 20017 after upgrading to Python 3.6 and SQL Server 2016 | 1 | sql-server,python-3.x,flask,pymssql | 0 | 2017-09-25T16:32:00.000 |
I'm using openpyxl for Python 2.7 to open and then modify a existing .xlsx file. This excel file has about 2500 columns and just 10 rows. The problem is openpyxl took to long to load the file (almost 1 Minute). Is there anyway to speed up the loading process of openpyxl. From other Threads I found some tips with read_only and write_only. But i have to read and write excel at the same time, so i can't apply this tips for me. Does anyone have any Suggestion. Thanks you very much | 0 | 0 | 0 | 0 | false | 55,336,278 | 0 | 1,358 | 1 | 0 | 0 | 46,428,168 | I had the same issue and found that while i was getting reasonable times initially (opening and closing was taking maybe 2-3 seconds), this suddenly increased to over a minute. I had introduced logging, so thought that may have been the cause, but after commenting this out, there was still a long delay
I copied the data from the Excel spreadsheet and just saved to a new excel spreadsheet which fixed it for me. Seems like it must have got corrupted somehow.
Note - saving the same filename as another filename didn't work, neither did saving the same filename on a local drive. | 1 | 0 | 0 | Openpyxl loading existing excel takes too long | 2 | python,excel,openpyxl | 0 | 2017-09-26T13:42:00.000 |
I've noticed that many SQLAlchemy tutorials would use relationship() in "connecting" multiple tables together, may their relationship be one-to-one, one-to-many, or many-to-many. However, when using raw SQL, you are not able to define the relationships between tables explicitly, as far as I know.
In what cases is relationship() required and not required? Why do we have to explicitly define the relationship between tables in SQLAlchemy? | 9 | 10 | 1.2 | 0 | true | 46,462,502 | 0 | 1,079 | 1 | 0 | 0 | 46,462,152 | In SQL, tables are related to each other via foreign keys. In an ORM, models are related to each other via relationships. You're not required to use relationships, just as you are not required to use models (i.e. the ORM). Mapped classes give you the ability to work with tables as if they are objects in memory; along the same lines, relationships give you the ability to work with foreign keys as if they are references in memory.
You want to set up relationships for the same purpose as wanting to set up models: convenience. For this reason, the two go hand-in-hand. It is uncommon to see models with raw foreign keys but no relationships. | 1 | 0 | 0 | Is it necessary to use `relationship()` in SQLAlchemy? | 1 | python,sqlalchemy | 0 | 2017-09-28T06:14:00.000 |
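A minimal declarative sketch showing the foreign key (what SQL needs) next to the optional relationship() (the ORM convenience); the model names are made up:
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child', back_populates='parent')   # optional convenience

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))        # the actual SQL link
    parent = relationship('Parent', back_populates='children')

# Without relationship(): session.query(Child).filter(Child.parent_id == p.id)
# With it: p.children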
I have a question about sqlite3. If I were to host a database online, how would I access it through python's sqlite3 module?
E.g. Assume I had a database hosted at "www.example.com/database.db". Would it be as simple as just forming a connection with sqlite3.connect ("www.example.com/database.db") or is there more I need to add so that the string is interpreted as a url and not a filename? | 4 | 3 | 0.291313 | 0 | false | 46,492,537 | 0 | 1,622 | 1 | 0 | 0 | 46,492,388 | SQLite3 is embedded-only database so it does not have network connection capabilities. You will need to somehow mount the remote filesystem.
With that being said, SQLite3 is not meant for this. Use PostgreSQL or MySQL (or anything else) for such purposes. | 1 | 0 | 0 | Connecting to an online database through python sqlite3 | 2 | python,database,sqlite | 0 | 2017-09-29T15:39:00.000 |
I have two databases in odoo DB1 and DB2. I made some changes to existing modules(say module1 and module2) in DB1 through GUI(web client). All those changes were stored to DB1 and were working correctly when I am logged in through DB1.
Now, I made some changes in few files(in same two modules module1 and module2). These modules need to be upgraded in order to load those changes. So, i logged in from DB2 and upgraded those modules. My changes in file loaded correctly and were working correctly when I am logged in through DB2.
But those file changes were loaded only for DB2 and not for DB1.
So, I wanted to know:
How upgrading of a module works?? Does it upgrades only for the database through which the user is logged in and upgraded the module?
And if it is so. Then, is there a way that I can Upgrade my module while retaining all the previous changes that i made through the GUI in that same module?
What are the things that are changed when a module is upgraded? | 4 | 3 | 0.291313 | 0 | false | 46,501,313 | 1 | 1,295 | 2 | 0 | 0 | 46,500,405 | you can restart the server and start the server by python odoo-bin -d database_name -u module_name
or -u all to update all module | 1 | 0 | 0 | How upgrading of a Odoo module works? | 2 | python,openerp,odoo-9,odoo-10 | 0 | 2017-09-30T07:04:00.000 |
I have two databases in odoo DB1 and DB2. I made some changes to existing modules(say module1 and module2) in DB1 through GUI(web client). All those changes were stored to DB1 and were working correctly when I am logged in through DB1.
Now, I made some changes in few files(in same two modules module1 and module2). These modules need to be upgraded in order to load those changes. So, i logged in from DB2 and upgraded those modules. My changes in file loaded correctly and were working correctly when I am logged in through DB2.
But those file changes were loaded only for DB2 and not for DB1.
So, I wanted to know:
How upgrading of a module works?? Does it upgrades only for the database through which the user is logged in and upgraded the module?
And if it is so. Then, is there a way that I can Upgrade my module while retaining all the previous changes that i made through the GUI in that same module?
What are the things that are changed when a module is upgraded? | 4 | 4 | 0.379949 | 0 | false | 46,513,745 | 1 | 1,295 | 2 | 0 | 0 | 46,500,405 | There is 2 step for upgrading an addons in Odoo,
First, restart the service; this will upgrade your .py files.
Second, click the Upgrade button in Apps > youraddonsname; this will upgrade your .xml files.
I created a script for upgrading the XML files; its name is upgrade.sh:
#!/bin/sh
for db in $(cat /opt/odoo/scripts/yourlistdbfiles);
do
odoo --addons-path=/opt/odoo/youraddonspath -d $db -u youraddonsname --no-xmlrpc > /opt/odoo/logs/yourlogfiles.log 2>&1 &
sleep 20s && exit &
done
so you just run sh /opt/odoo/script/upgrade.sh after editing your addons and no need to click the upgrade button anymore.
hope this help | 1 | 0 | 0 | How upgrading of a Odoo module works? | 2 | python,openerp,odoo-9,odoo-10 | 0 | 2017-09-30T07:04:00.000 |
I'm updating from an ancient language to Django. I want to keep the data from the old project into the new.
But old project is mySQL. And I'm currently using SQLite3 in dev mode. But read that postgreSQL is most capable. So first question is: Is it better to set up postgreSQL while in development. Or is it an easy transition to postgreSQL from SQLite3?
And for the data in the old project. I am bumping up the table structure from the old mySQL structure. Since it got many relation db's. And this is handled internally with foreignkey and manytomany in SQLite3 (same in postgreSQL I guess).
So I'm thinking about how to transfer the data. It's not really much data. Maybe 3-5.000 rows.
Problem is that I don't want to have same table structure. So a import would be a terrible idea. I want to have the sweet functionality provided by SQLite3/postgreSQL.
One idea I had was to join all the data and create a nested json for each post. And then define into what table so the relations are kept.
But this is just my guessing. So I'm asking you if there is a proper way to do this?
Thanks! | 0 | 0 | 0 | 0 | false | 46,544,581 | 1 | 31 | 1 | 0 | 0 | 46,544,518 | better create the postgres database. write down the python script which take the data from the mysql database and import in postgres database. | 1 | 0 | 0 | Importing data from multiple related tables in mySQL to SQLite3 or postgreSQL | 1 | python,django,database,postgresql,sqlite | 0 | 2017-10-03T12:22:00.000 |
I am trying to upload data from certain fields in a CSV file to an already existing table.
From my understanding, the way to do this is to create a new table and then append the relevant columns of the newly created table to the corresponding columns of the main table.
How exactly do I append certain columns of data from one table to another?
As in, what specific commands?
I am using the bigquery api and the python-client-library. | 1 | 0 | 1.2 | 1 | true | 46,546,554 | 0 | 4,039 | 1 | 0 | 0 | 46,546,388 | You can use pandas library for that.
import pandas as pd
data = pd.read_csv('input_data.csv')
useful_columns = [col1, col2, ... ] # List the columns you need data[useful_columns].to_csv('result_data.csv', index=False) # index=False is to prevent creating extra column | 1 | 0 | 0 | How to Skip Columns of CSV file | 1 | python,csv,google-api,google-bigquery,google-python-api | 0 | 2017-10-03T13:57:00.000 |
I’m building a web app (python/Django) where customers create an account, each customer creates/adds as many locations as they want and a separate server generates large amounts of data for each location several times a day.
For example:
User A -> [locationA, locationB]
User B -> [locationC, locationD, locationE]
Where each location is an object that includes name, address, etc.
Every 3 hours a separate server gathers data from various sources like weather, check-ins etc for each location and I need to store each item from each iteration so I can then perform per-user-per-location queries.
E.g. “all the checkins in the last week group by location for User A”
Right now I am using MongoDB and storing a collection of venues with a field of ownerId which is the ObjectID of the owning user.
What is the best strategy to store the records of data? The naïve approach seems to be a collection for checkins, a collection for weather records etc and each document would have a “location” field. But this seems to have both performance and security problems (all the access logic would be in web app code).
Would it be better to have a completely separate DB for each user? Are there better ways?
Is a different strategy better if we switch to Postgres/SQL database? | 1 | 1 | 0.197375 | 0 | false | 46,553,762 | 1 | 81 | 1 | 0 | 0 | 46,553,070 | [GENERAL ADVICE]: I always use Postgres or MySQL as the django ORM connection and then Mongo or DynamoDB for analytics. You can say that it creates unnecessary complexity because that is true, but for us that abstraction makes it easier to separate out teams too. You have your front end devs, backend/ full stacks, and true backend devs. Not all of them need to be Django experts.
[SPECIFIC ADVICE]: This sounds to me like you should just get started with mongo. Unless you are a B2B SaaS app selling to enterprise companies who won't like a multi-tenet data model then it shouldn't be tough to map this out in mongo. The main reason I say mongo is nice is because it sounds like you don't fully know the schema of what you'll collect ahead of time. Later you can refactor once you get a better handle of what data you collect. Expect to refactor and just get the thing working. | 1 | 0 | 0 | Right strategy for segmenting Mongo/Postgres database by customer? | 1 | python,sql,django,mongodb,postgresql | 0 | 2017-10-03T20:41:00.000 |
I have a large amount of data, around 50GB worth in a CSV, which I want to analyse for ML purposes. It is, however, way too large to fit in memory in Python. I ideally want to use MySQL because querying is easier. Can anyone offer a host of tips for me to look into? This can be anything from:
How to store it in the first place; I realise I probably can't load it in all at once, so would I do it iteratively? If so, what things can I look into for this? In addition, I've heard about indexing; would that really speed up queries on such a massive data set?
Are there better technologies out there to handle this amount of data and still be able to query and do feature engineering quickly? What I eventually feed into my algorithm should be workable in Python, but I need to query and do some feature engineering before I get a data set that is ready to be analysed.
I'd really appreciate any advice; this all needs to be done on a personal computer! Thanks!! | 0 | 0 | 0 | 0 | false | 46,607,645 | 0 | 589 | 1 | 0 | 0 | 46,574,694 | That depends on what you have. You can use Apache Spark and then use its SQL feature; Spark SQL gives you the possibility to write SQL queries over your dataset, but for best performance you need a distributed mode (you can use it on a local machine, but the result is limited) and high machine performance. You can use Python, Scala, or Java to write your code. | 1 | 0 | 0 | Storing and querying a large amount of data | 2 | python,mysql,bigdata,mysql-python | 0 | 2017-10-04T21:45:00.000
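A sketch of the iterative load hinted at in the question, reading the CSV in chunks and appending to MySQL through SQLAlchemy; the connection string, file and table names are placeholders:
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://user:password@localhost/bigdata')  # placeholder

for chunk in pd.read_csv('huge_file.csv', chunksize=100000):
    chunk.to_sql('measurements', engine, if_exists='append', index=False)

# Afterwards, add indexes on the columns you filter on, e.g.:
# engine.execute('CREATE INDEX idx_some_column ON measurements (some_column)')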
I want to create a program, which automates excel reporting including various graphs in colours. The program needs to be able to read an excel dataset. Based on this dataset, the program then has to create report pages and graphs and then export to an excel file as well as pdf file.
I have done some research and it seems this is possible using python with pandas - xlsxWriter or xlswings as well as Ruby gems - axlsx or win32ole.
Which is the user-friendlier and easy to learn alternative? What are the advantages and disadvantages? Are there other options I should consider (I would like to avoid VBA - as this is how the reports are currently produced)?
Any responses and comments are appreciated. Thank you! | 0 | 0 | 0 | 1 | false | 46,669,389 | 0 | 388 | 1 | 0 | 0 | 46,575,847 | If you already have VBA that works for your project, then translating it to Ruby + WIN32OLE is probably your quickest path to working code. Anything you can do in VBA is doable in Ruby (if you find something you can't do, post here to ask for help).
I prefer working with Excel via OLE since I know the file produced by Excel will work anywhere I open it. I haven't used axlsx but I'm sure it's a fine project; I just wouldn't trust that it would produce working Excel files every time. | 1 | 0 | 0 | Automating excel reporting and graphs - Python xlsxWriter/xlswings or Ruby axlsx/win32ole | 1 | python,ruby,excel,xlsxwriter,axlsx | 0 | 2017-10-04T23:52:00.000 |
So I have two table in a one-to-many relationship. When I make a new row of Table1, I want to populate Table2 with the related rows. However, this population actually involves computing the Table2 rows, using data in other related tables.
What's a good way to do that using the ORM layer? That is, assuming that that the Table1 mappings are created through the ORM, where/how should I call the code to populate Table2?
I thought about using the after_insert hook, but i want to have a session to pass to the population method.
Thanks. | 1 | 1 | 1.2 | 0 | true | 46,777,010 | 1 | 601 | 1 | 0 | 0 | 46,594,866 | After asking around in #sqlalchemy IRC, it was pointed out that this could be done using ORM-level relationships in an before_flush event listener.
It was explained that when you add a mapping through a relationship, the foreign key is automatically filled on flush, and the appropriate insert statement generated by the ORM. | 1 | 0 | 0 | Populating related table in SqlAlchemy ORM | 2 | python,sql,database,orm,sqlalchemy | 0 | 2017-10-05T21:11:00.000 |
I've built some tools that create front-end list boxes for users that reference dynamic Redshift tables. New items in the table, they appear automatically in the list.
I want to put the list in alphabetical order in the database so the dynamic list boxes will show the data in that order.
After downloading the list from an API, I attempt to sort the list alphabetically in a Pandas dataframe before uploading. This works perfectly:
df.sort_values(['name'], inplace=True, ascending=True, kind='heapsort')
But then when I try to upload to Redshift in that order, it loses the order while it uploads. The data appears in chunks of alphabetically ordered segments.
db_conn = create_engine('<redshift connection>')
obj.to_sql('table_name', db_conn, index = False, if_exists = 'replace')
Because of the way the third party tool (Alteryx) works, I need to have this data in alphabetical order in the database.
How can I modify to_sql to properly upload the data in order? | 0 | 0 | 1.2 | 1 | true | 46,610,485 | 0 | 725 | 1 | 0 | 0 | 46,608,223 | While ingesting data into redshift, data gets distributed between slices on each node in your redshift cluster.
My suggestion would be to create a sort key on a column which you need to be sorted. Once you have sort key on that column, you can run Vacuum command to get your data sorted.
Sorry! I cannot be of much help on Python/Pandas
If I’ve made a bad assumption please comment and I’ll refocus my answer. | 1 | 0 | 0 | Sorting and loading data from Pandas to Redshift using to_sql | 1 | python,sorting,amazon-redshift,pandas-to-sql | 0 | 2017-10-06T14:36:00.000 |
I'm working with a small company currently that stores all of their app data in an AWS Redshift cluster. I have been tasked with doing some data processing and machine learning on the data in that Redshift cluster.
The first task I need to do requires some basic transforming of existing data in that cluster into some new tables based on some fairly simple SQL logic. In an MSSQL environment, I would simply put all the logic into a parameterized stored procedure and schedule it via SQL Server Agent Jobs. However, sprocs don't appear to be a thing in Redshift. How would I go about creating a SQL job and scheduling it to run nightly (for example) in an AWS environment?
The other task I have involves developing a machine learning model (in Python) and scoring records in that Redshift database. What's the best way to host my python logic and do the data processing if the plan is to pull data from that Redshift cluster, score it, and then insert it into a new table on the same cluster? It seems like I could spin up an EC2 instance, host my python scripts on there, do the processing on there as well, and schedule the scripts to run via cron?
I see tons of AWS (and non-AWS) products that look like they might be relevant (AWS Glue/Data Pipeline/EMR), but there's so many that I'm a little overwhelmed. Thanks in advance for the assistance! | 1 | 1 | 0.066568 | 0 | false | 46,640,656 | 1 | 2,154 | 1 | 1 | 0 | 46,618,762 | The 2 options for running ETL on Redshift
Create some "create table as" type SQL, which will take your source
tables as input and generate your target (transformed table)
Do the transformation outside of the database using an ETL tool. For
example EMR or Glue.
Generally, in an MPP environment such as Redshift, the best practice is to push the ETL to the powerful database (i.e. option 1).
Only consider taking the ETL outside of Redshift (option 2) where SQL is not the ideal tool for the transformation, or the transformation is likely to take a huge amount of compute resource.
There is no inbuilt scheduling or orchestration tool. Apache Airflow is a good option if you need something more full featured than cron jobs. | 1 | 0 | 0 | AWS Redshift Data Processing | 3 | python,database,amazon-web-services,amazon-redshift | 0 | 2017-10-07T09:41:00.000 |
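As a hedged sketch of option 1 together with simple scheduling: a small Python script that runs a "CREATE TABLE AS" transform against Redshift and can be invoked nightly from cron (or wrapped in an Airflow task). The connection details, schema, and SQL below are placeholders, not from the original question.

# Hedged sketch: nightly CREATE TABLE AS transform on Redshift, scheduled
# externally (e.g. cron: 0 2 * * * /usr/bin/python /opt/etl/transform.py).
# All connection details and names below are placeholders.
import psycopg2

TRANSFORM_SQL = """
DROP TABLE IF EXISTS analytics.daily_summary;
CREATE TABLE analytics.daily_summary AS
SELECT user_id, COUNT(*) AS events, MAX(created_at) AS last_seen
FROM app.events
GROUP BY user_id;
"""

def run_transform():
    conn = psycopg2.connect(
        host="my-cluster.example.redshift.amazonaws.com",  # placeholder
        port=5439, dbname="analytics", user="etl_user", password="secret",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(TRANSFORM_SQL)   # psycopg2 accepts multiple statements here
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    run_transform()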
I successfully installed mod_wsgi via pip install mod_wsgi on Windows. However, when I copy the output of mod_wsgi-express module-config into my httpd.conf and try to start the httpd, I get the following error:
httpd.exe: Syntax error on line 185 of C:/path/to/httpd.conf: Cannot load c:/path/to/venv/Lib/site-packages/mod_wsgi/server/mod_wsgi.pyd into server
This is already after correcting the pasted output of module-config, as it was .../venv/lib/site-packages/mod_wsgi/server/mod_wsgiNone (note the "None"). I changed the "None" to ".pyd" as this is the correct path.
I already tried to install it outside the virtual env (Python being at C:\Python27), but it didn't make a difference -> same error.
I also tried to uninstall/re-install mod_wsgi. I had one failed install because Microsoft Visual C++ Compiler for Python 2.7 (Version 9.0.0.30729) was not present. After installing that compiler, mod_wsgi always installed OK.
The Apache build (Apache/2.4.27 (Win32)) comes from the XAMPP package and starts without issues when I remove the added wsgi lines.
I need to use Python 2.7 because of a third-party module. So going for 3.x is unfortunately not an option at the moment.
Exact Python version is 2.7.13 (32-bit).
For completeness, the output of module-config is:
LoadModule wsgi_module "c:/www/my_project/venv/lib/site-packages/mod_wsgi/server/mod_wsgiNone"
WSGIPythonHome "c:/www/my_project/venv"
Update: tried one more thing:
Uninstalled mod_wsgi (with pip)
set "MOD_WSGI_APACHE_ROOTDIR=C:/WWW/apache"
And pip install mod_wsgi again
Still the same error... | 0 | 0 | 1.2 | 0 | true | 46,645,404 | 1 | 1,202 | 1 | 0 | 0 | 46,622,112 | The issue was that the Apache build was compiled with VC14, while Python 2.7 is compiled with VC9. Installing an Apache build compiled with VC9 solved my issue. | 1 | 0 | 0 | Getting mod_wsgi to work with Python 2.7/Apache on Windows Server 2012; cannot load module | 1 | python,apache,mod-wsgi,windows-server-2008-r2 | 0 | 2017-10-07T15:46:00.000 |
I'm going to run a query that returns a huge table (about 700 MB) from Redshift and save it to CSV using SQLAlchemy and Python 2.7 on my local machine (Mac Pro).
I've never done this with such a huge query before, and obviously there could be some memory and other issues.
My question is: what should I take into account, and how should I use SQLAlchemy to make the process work?
Thanks,
Alex | 0 | 0 | 0 | 0 | false | 46,715,732 | 0 | 1,752 | 1 | 0 | 0 | 46,714,971 | If you don't run much else on that machine, then memory should not be an issue. Give it a try and monitor memory use during the execution. Also use "load" to see what pressure the system is under. | 1 | 0 | 0 | Python/SQLAlchemy: How to save huge redshift table to CSV? | 2 | python,sql,sqlalchemy,amazon-redshift | 0 | 2017-10-12T16:48:00.000 |
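Building on the answer above, a hedged sketch of one way to keep memory bounded while writing the result to CSV: stream rows with a server-side cursor instead of fetching the whole table at once. It assumes a psycopg2-backed SQLAlchemy engine; the connection string, query, and file name are placeholders.

# Hedged sketch: stream a large Redshift result set to CSV in batches.
# Connection string, query, and file name are placeholders.
import csv
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:password@redshift-host:5439/dbname")

with engine.connect() as conn:
    # stream_results=True asks the driver for a server-side cursor, so rows
    # are fetched in batches instead of being loaded into memory all at once.
    result = conn.execution_options(stream_results=True).execute(text("SELECT * FROM big_table"))
    with open("big_table.csv", "wb") as f:   # use "w", newline="" on Python 3
        writer = csv.writer(f)
        writer.writerow(result.keys())       # header row
        for row in result:
            writer.writerow(row)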
On the face of it, it seems that bindparam should generally be used to eliminate SQL injection. However, in what situations would it be necessary to use literal_column instead of bindparam, and what measures should be taken to prevent SQL injection? | 0 | 2 | 0.379949 | 0 | false | 46,736,027 | 0 | 918 | 1 | 0 | 0 | 46,719,568 | literal_column is intended to be used as, well, a literal name for a column, not as a parameter (which is a value), because column names cannot be parameterized (they are part of the query itself). You should generally not be using literal_column to put a value in a query, only column names. If you are accepting user input for column names, you should whitelist what those names are.
One exception is that sometimes you want to output some really complex expression not directly supported by SQLAlchemy, and literal_column basically allows you to put freeform text in a query. In these cases, you should ensure that user-supplied parts of the expression (i.e. values) are still passed via bind params. | 1 | 0 | 0 | Sqlalchemy When should literal_column or be used instead of bindparam? | 1 | python,python-2.7,sqlalchemy | 0 | 2017-10-12T22:02:00.000 |
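A hedged sketch of the pattern described above: whitelist any user-supplied column name before handing it to literal_column, and keep user-supplied values in bind parameters. The table, columns, and whitelist are hypothetical.

# Hedged sketch: whitelisted column names go to literal_column / order_by,
# user-supplied values always go through bind parameters. Names are hypothetical.
from sqlalchemy import bindparam, literal_column, select
from sqlalchemy.sql import column, table

users = table("users", column("id"), column("name"), column("email"))
ALLOWED_SORT_COLUMNS = {"id", "name", "email"}   # whitelist of permitted column names

def build_query(sort_col, name_value):
    if sort_col not in ALLOWED_SORT_COLUMNS:
        raise ValueError("unsupported sort column: %r" % sort_col)
    return (
        select([users])
        .where(users.c.name == bindparam("name", value=name_value))
        .order_by(literal_column(sort_col))
    )

query = build_query("email", "alice")   # the value travels only as a bind parameter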
Is it possible to query data from AspenTech InfoPlus 21 (IP21) using PHP?
I want to create a PHP application that can access tags and historical data from the AspenTech historian.
Is ODBC the answer? Even if it is, I am not quite sure how to proceed.
UPDATE:
I ended up using Python and pyodbc.
This worked like a charm!
Thank you all for your support. | 2 | 3 | 0.197375 | 0 | false | 46,762,657 | 0 | 7,310 | 2 | 0 | 0 | 46,730,944 | I am unaware of a method to access IP21 data directly via PHP; however, if you're happy to access data via a web service, there are both REST and SOAP options.
Both methods are extremely fast and responsive.
AFW Security still applies to clients accessing the Web Services. Clients will require SQL Plus read (at least) access.
SOAP
Requires the "Aspen SQL plus Web Server/Service and Health Monitor" component to be installed on IP21 server (Selected during install of IP21).
Recent versions of IP21 require a slight modification to the web.config file to allow remote access. If you cannot execute the web service remotely, try doing it locally (i.e. on the same machine as the IP21 server) and see if this is an issue.
Example: http://IP21ServerHostName/SQLPlusWebService/SQLplusWebService.asmx/ExecuteSQL?command=select%20*%20from%20compquerydef;
REST
My preference (over SOAP), as it is super easy to access using jQuery (JavaScript) - a couple of lines of code!
Unsure of exactly what IP21 component is required on install for this, but it appears to be on most of my IP21 servers already.
Arguments in the URL can control the number of rows returned (handy).
If using it within jQuery / JavaScript, the web page must be hosted on the AspenOneServerHostName machine, else you'll run into Cross-Origin Resource Sharing (CORS) issues.
Example:
http://AspenOneServerHostName/ProcessData/AtProcessDataREST.dll/SQL?%3CSQL%20c=%22DRIVER={AspenTech%20SQLplus};HOST=IP21ServerHostName;Port=10014;CHARINT=N;CHARFLOAT=N;CHARTIME=N;CONVERTERRORS=N%22%20m=%22DesiredMaxNumberOfRowsReturned%22%20s=%221%22%3E%3C![CDATA[select%20*%20from%20compquerydef]]%3E%3C/SQL%3E
Notes:
AspenOneServerHostName can be the same as IP21ServerHostName
AspenOneServerHostName must have ADSA configured to view IP21ServerHostName
Replace DesiredMaxNumberOfRowsReturned with a number | 1 | 0 | 0 | How to query data from an AspenTech IP21 Historian using PHP? | 3 | php,python,odbc,aspen | 0 | 2017-10-13T13:18:00.000 |
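Since the question's update mentions ending up in Python anyway, here is a hedged sketch of calling the REST endpoint described above with the requests library. The host names, port, driver string, and query mirror the example URL; everything is a placeholder to adapt to your environment.

# Hedged sketch: call the ProcessData REST endpoint shown above from Python.
# Host names and the query mirror the example URL and are placeholders.
import requests
try:
    from urllib.parse import quote   # Python 3
except ImportError:
    from urllib import quote         # Python 2

ASPEN_ONE_HOST = "AspenOneServerHostName"   # placeholder
IP21_HOST = "IP21ServerHostName"            # placeholder
sql = "select * from compquerydef"

payload = (
    '<SQL c="DRIVER={{AspenTech SQLplus}};HOST={host};Port=10014;'
    'CHARINT=N;CHARFLOAT=N;CHARTIME=N;CONVERTERRORS=N" m="100" s="1">'
    '<![CDATA[{sql}]]></SQL>'
).format(host=IP21_HOST, sql=sql)

url = "http://{0}/ProcessData/AtProcessDataREST.dll/SQL?{1}".format(
    ASPEN_ONE_HOST, quote(payload, safe=""))
response = requests.get(url)
response.raise_for_status()
print(response.text)   # raw response body containing the query results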
Is it possible to query data from AspenTech InfoPlus 21 (IP21) using PHP?
I want to create a PHP application that can access tags and historical data from the AspenTech historian.
Is ODBC the answer? Even if it is, I am not quite sure how to proceed.
UPDATE:
I ended up using Python and pyodbc.
This worked like a charm!
Thank you all for your support. | 2 | 2 | 0.132549 | 0 | false | 50,016,010 | 0 | 7,310 | 2 | 0 | 0 | 46,730,944 | Yes, the ODBC driver should be applicable to meet your requirement. We have already developed an application that inserts data into the IP21 historian using the same protocol. Similarly, some analytical tools (e.g. Seeq Corporation) also use ODBC to fetch data from the IP21 historian. Therefore it should be possible in your case as well. | 1 | 0 | 0 | How to query data from an AspenTech IP21 Historian using PHP? | 3 | php,python,odbc,aspen | 0 | 2017-10-13T13:18:00.000 |
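Since the question's update says pyODBC worked in the end, here is a hedged sketch of that route. The driver name, host, and port mirror the connection string embedded in the REST example above; adjust them to match your ODBC installation.

# Hedged sketch: query IP21 through the AspenTech SQLplus ODBC driver.
# Connection settings mirror the REST example above and are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={AspenTech SQLplus};HOST=IP21ServerHostName;PORT=10014;"
    "CHARINT=N;CHARFLOAT=N;CHARTIME=N;CONVERTERRORS=N"
)
cursor = conn.cursor()
cursor.execute("select * from compquerydef")   # same query as the REST example
for row in cursor.fetchall():
    print(row)
conn.close()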
I'm running into a performance issue with the Google Cloud Bigtable Python client. I'm working on a Flask API that writes to and reads from a GCP Bigtable instance. The API uses the Python client to communicate with Bigtable and was deployed to the GCP App Engine flexible environment.
Under low traffic, the API works fine. However, during a load test, the endpoints that read from and write to Bigtable suffer a huge performance decrease compared to a similar endpoint that doesn't communicate with Bigtable. Also, a large percentage of requests sent to those endpoints receive a 502 Bad Gateway, even when the health check was turned off in App Engine.
I'm aware that the client is currently in alpha. I wonder if the performance issue is known, or if anyone else has run into the same issue.
Update
I found documentation from Google stating:
There are issues with the network connection. Network issues can reduce throughput and cause reads and writes to take longer than usual. In particular, you'll see issues if your clients are not running in the same zone as your Cloud Bigtable cluster.
In my case, my client was in a different region; moving it to the same region gave a huge increase in performance. However, the performance issue still exists, and the recommendation from the documentation is to put the client in the same zone as Bigtable.
I also considered using Container Engine or Compute Engine, where it is easier to specify the zone, but I want to stay with App Engine for its autoscaling functionality and managed services. | 1 | 3 | 0.53705 | 0 | false | 47,776,406 | 1 | 503 | 1 | 1 | 0 | 46,740,127 | The Bigtable client takes somewhere between 3 ms and 20 ms to complete each request, and because Python is single-threaded, during that period it will just wait until the response comes back. The best solution we found was, for any writes, to publish the request to Pub/Sub and then use Dataflow to write to Bigtable. It is significantly faster because publishing a message in Python takes well below 1 ms to complete, and because Dataflow can be set to exactly the same region as Bigtable and is easy to parallelize, it can write much faster.
Though it doesn't solve the scenario where you need frequent reads, or where writes need to be instantaneous. | 1 | 0 | 0 | Google Cloud Bigtable Python Client Performance Issue | 1 | google-app-engine,google-cloud-platform,bigtable,google-cloud-bigtable,google-cloud-python | 0 | 2017-10-14T02:03:00.000 |
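A hedged sketch of the write path suggested in the answer above: the Flask endpoint publishes each write to a Pub/Sub topic and returns immediately, while a separate Dataflow pipeline (not shown) drains the topic into Bigtable. The project ID, topic name, and message layout are placeholders.

# Hedged sketch: publish writes to Pub/Sub instead of writing to Bigtable
# directly; a Dataflow pipeline (not shown) consumes the topic and writes to
# Bigtable. Project, topic, and message shape are placeholders.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "bigtable-writes")

def enqueue_write(row_key, values):
    # publish() is asynchronous and returns a future, so the request handler
    # is not blocked waiting on Bigtable.
    message = json.dumps({"row_key": row_key, "values": values}).encode("utf-8")
    return publisher.publish(topic_path, message)

enqueue_write("user#123", {"cf:clicks": "42"})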
I have a MySQL database into which I'm loading big files that insert more than 190,000 rows. I'm using a Python script which does some processing, then loads the data from a CSV file into MySQL, executes the query, and commits.
My question is: if I'm sending such a big file, is the database ready right after the commit command, or how can I trigger something once all the data has been inserted into the database? | 0 | 1 | 1.2 | 0 | true | 46,745,333 | 0 | 54 | 1 | 0 | 0 | 46,742,682 | The COMMIT does not actually return until the data has been... committed... so, yes, once you have committed any transaction, the work from that transaction is entirely done, as far as your application is concerned. | 1 | 0 | 0 | MySQL commit trigger done | 1 | python,mysql | 0 | 2017-10-14T08:53:00.000 |
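To make the commit semantics concrete, a hedged sketch of a load-and-commit flow with pymysql; the connection details, file path, table name, and LOAD DATA options are placeholders. Once conn.commit() returns, the inserted rows are durable and visible to other connections.

# Hedged sketch: bulk-load a CSV with pymysql and commit. When commit()
# returns, the rows are visible to any other session. Names are placeholders.
import pymysql

conn = pymysql.connect(
    host="localhost", user="loader", password="secret",
    database="mydb", local_infile=True,
)
try:
    with conn.cursor() as cur:
        cur.execute(
            "LOAD DATA LOCAL INFILE %s INTO TABLE big_table "
            "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' IGNORE 1 LINES",
            ("/path/to/big_file.csv",),
        )
    conn.commit()   # blocks until the transaction is committed
    print("load finished; the data is now queryable by other sessions")
finally:
    conn.close()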