[{"Question":"I am working on python flask with Dynamo DB ,is there any functionality to auto increment the primary key in AWS Dynamo DB (like SQL.AUTOINCREMENT), Any suggestion on this? how to handle the AUTOINCREMENT Feature in AWS DynamoDB.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":923,"Q_Id":48023512,"Users Score":1,"Answer":"Generally speaking, using an auto-increment primary key is not going to be best practice. If you really want to do it, you could use a lambda function and another dynamodb table that stores the last used value, but you would be much better off picking a better primary key so that you don't run into performance problems down the road.\nGenerally speaking, a GUID is a very easy to use alternative for a primary key where you don't have another obvious field to use.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-dynamodb","A_Id":48023965,"CreationDate":"2017-12-29T13:40:00.000","Title":"DynamoDB AutoIncrement Primary key Python flask","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have created a Glue job that copies data from S3 (csv file) to Redshift. It works and populates the desired table.\nHowever, I need to purge the table during this process as I am left with duplicate records after the process completes.\nI'm looking for a way to add this purge to the Glue process. Any advice would be appreciated.\nThanks.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4214,"Q_Id":48026111,"Users Score":0,"Answer":"You need to modify the auto generated code provided by Glue. Connect to redshift using spark jdbc connection and execute the purge query.\nTo spin up Glue containers in redshift VPC; specify the connection in glue job, to gain access for redshift cluster.\nHope this helps.","Q_Score":4,"Tags":"python,amazon-web-services,pyspark,amazon-redshift,aws-glue","A_Id":50180152,"CreationDate":"2017-12-29T17:21:00.000","Title":"AWS Glue Truncate Redshift Table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Glue job that copies data from S3 (csv file) to Redshift. It works and populates the desired table.\nHowever, I need to purge the table during this process as I am left with duplicate records after the process completes.\nI'm looking for a way to add this purge to the Glue process. 
Any advice would be appreciated.\nThanks.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":4214,"Q_Id":48026111,"Users Score":2,"Answer":"The link @frobinrobin provided is out of date, and I tried many times that the preactions statements will be skiped even you provide a wrong syntax, and came out with duplicated rows(insert action did executed!)\nTry this:\njust replace the syntax from\nglueContext.write_dynamic_frame.from_jdbc_conf() in the link above to glueContext.write_dynamic_frame_from_jdbc_conf() will works!\nAt least this help me out in my case(AWS Glue job just insert data into Redshift without executing Truncate table actions)","Q_Score":4,"Tags":"python,amazon-web-services,pyspark,amazon-redshift,aws-glue","A_Id":65486258,"CreationDate":"2017-12-29T17:21:00.000","Title":"AWS Glue Truncate Redshift Table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build an application with mongoDB and Python Flask. While running the application, I am getting below error:\n\nConfigurationError: Server at 127.0.0.1:27017 reports wire version 0,\n but this version of PyMongo requires at least 2 (MongoDB 2.6).\n\nCan any one help me in this?\nThanks,\nBalwinder","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":25842,"Q_Id":48060354,"Users Score":1,"Answer":"This works for me:\n\nsudo pip3 uninstall pymongo\nsudo apt-get install python3-pymongo\n\nI hope that works for someone else, regards.","Q_Score":10,"Tags":"python,mongodb,flask","A_Id":54701902,"CreationDate":"2018-01-02T11:43:00.000","Title":"ConfigurationError: Server at 127.0.0.1:27017 reports wire version 0, but this version of PyMongo requires at least 2 (MongoDB 2.6)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am leaning Python programming language. I have no problems with Python. I read Python official docs, and I can write small programs in Python. I want to familiarize myself with mysql database because it is useful in learning software development concepts. I've installed mysql database and Django on my computer. I have Ubuntu 14.04 and python 3.4 installed. I've configured Django settings to use mysql database. I tested Django connection to mysql db and all things work properly.\nI am a complete newbie with web development. I didn't create my own website and I didn't start developing any web application.\nMy purpose currently is to master creation of mysql database and tables, making changes\/migrations\/queries, using Django models and Python.\nIs it reasonable\/possible to use Django ORM for work with mysql database without simultaneous development of a web application\/local application? As I've said, I don't have my own website. I want just to try using mysql and Django together on my computer in order to get deeper knowledge as to Django and mysql in this respect. \nMy final purpose is development in Python, including work with mysql database. 
\nMysql without Python and Django is of no use for me.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":67,"Q_Id":48062763,"Users Score":1,"Answer":"If you want to learn and use MySQL, then start without anything above it - no Django, no ORM, not even a Python script. Learn to configure your mysql server (the server process I mean - doesn't have to be on a distinct computer), to work with the command-line mysql client (database creation, tables creations \/ modifications, adding\/updating\/deleting rows, and, most important, doing simple and complex queries).\nWhile you're at it, learn about proper relational data modeling (normalisations etc) so you fully understand how to design your schemas. \nOnce you're confortable with this, spend some time (should be quite fast at this point) learning to do the same things from python scripts with your python's version mysql connector.\nThen if you want to learn web development and Django, well go for it. The ORM is quite easy to use when you already have a good understanding of what happens underneath, so by that time you shouldn't have much problems with this part. What you'll still have to learn are the HTTP protocol (trying to do web programming without understanding the HTTP protocol is like trying to win a car race without knowing how to drive a car - it's not technically impossible but it might end up being very painful experience), then front-end stuff (html\/css\/javascript) and finally the views \/ templates parts of Django (which should be easy once you know HTTP and html). \nYou can of course jump right in into Django, but you will probably fight with way too many concepts at once and end up spending twice more time to figure out how everything works and why it works that way.","Q_Score":0,"Tags":"python,mysql,django,database","A_Id":48063653,"CreationDate":"2018-01-02T14:30:00.000","Title":"Django to create and change mysql database for learning purposes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two columns, one is a string field customer containing customer names and the other is a numeric field sales representing sales.\nWhat I want to do is to group data by customer and then sort sales within group.\nIn SQL or Pandas, this is normally achieved by something like order by customer, sales on the table. But I am just curious about this implementation. Instead first sorting on customer and then sorting on sales, why not first group customer and sort sales. I don't really care about the order of the different customers since I only care about records of same customers being grouped together.\nGrouping is essentially mapping and should run faster than sorting. \nWhy isn't there such implementation in SQL? Am I missing something? \nExample data\n\nname,sales\njohn,1\nAmy,1\njohn,2\nAmy,3\nAmy,4\n\nand I want it to group by name and then sort by sales:\n\nname,sales\njohn,1\njohn,2\nAmy,1\nAmy,3\nAmy,4\nIn SQL you probably would do select * from table order by name,sales\nThis would definitely do the job. But my confusion is since I don't care about the order of name, I should be able to do some kind of grouping (which should be cheaper than sorting) first and do sorting only on the numeric field. Am I able to do this? Why do a lot of examples from google simply uses sorting on the two fields? 
Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":48065753,"Users Score":0,"Answer":"Here is the answer to it-\nGrouping is done when you want to pull out the conclusion based on the entire group , like total of sales done,for each of the groups(in this case John and Amy) . It is used mostly with an aggregate function or sometimes to select distinct records only. What you wrote above is sorting the data in the order of name and sales , there is no grouping involved at all. Since the operation is sorting , its obvious that the command written for it would be sorting .","Q_Score":0,"Tags":"python,sql,pandas,sorting,group-by","A_Id":48071714,"CreationDate":"2018-01-02T18:09:00.000","Title":"Sort by two columns, why not do grouping first?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a workbook that has several tabs with pivot tables. I can put data on the tab that holds the data for each pivot. My problem is that I don't know how to refresh the pivot tables. I would assume that I would need to cycle through each sheet, check to see if there is a pivot table, and refresh it. I just can't find how to do that. All of the examples I find use win32 options, but I'm using a Mac and Linux.\nI would like to achieve with openpyxl if possible.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3004,"Q_Id":48088137,"Users Score":0,"Answer":"Worksheets(\"SheetName\").PivotTables(\"PivotTableName\").PivotCache().Refresh()","Q_Score":0,"Tags":"python,excel,pivot-table,openpyxl","A_Id":52813212,"CreationDate":"2018-01-04T03:10:00.000","Title":"Refresh Excel Pivot Tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm rather new to the whole ORM topic, and I've already searched forums and docs.\nThe question is about a flask application with SQLAlchemy as ORM for the PostgreSQL.\nThe __init__.py contains the following line:\ndb = SQLAlchemy()\nthe created object is referenced in the other files to access the DB.\nThere is a save function for the model:\ndef save(self):\n db.session.add(self)\n db.session.commit()\nand also an update function:\ndef update(self):\n for var_name in self.__dict__.keys():\n if var_name is not ('_sa_instance_state' or 'id' or 'foreign_id'):\n # Workaround for JSON update problem\n flag_modified(self, var_name)\n db.session.merge(self)\n db.session.commit()\nThe problem occurs when I'm trying to save a new object. 
The save function writes it to DB, it's visible when querying the DB directly (psql, etc.), but a following ORM query like:\nmodel_list = db.session.query(MyModel).filter(MyModel.foreign_id == this_id).all()\ngives an empty response.\nA call of the update function does work as expected, new data is visible when requesting with the ORM.\nI'm always using the same session object for example this:\n\nIf the application is restarted everything works fine until a new object was created and tried to get with the ORM.\nAn unhandsome workaround is using raw SQL like:\nmodel_list = db.session.execute('SELECT * FROM models_table WHERE\n foreign_id = ' + str(this_id))\nwhich gives a ResultProxy with latest data like this:\n\nI think my problem is a misunderstanding of the session. Can anyone help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":766,"Q_Id":48128705,"Users Score":0,"Answer":"It figured out that the problem has nothing to do with the session, but the filter() method:\n# Neccessary import for string input into filter() function\n from sqlalchemy import text\n # Solution or workaround\n model_list = db.session.query(MyModel).filter(text('foreign_key = ' + str(this_id))).all()\nI could not figure out the problem with:\nfilter(MyModel.foreign_id == this_id) but that's another problem.\nI think this way is better than executing raw SQL.","Q_Score":1,"Tags":"python-3.x,postgresql,flask-sqlalchemy","A_Id":48283838,"CreationDate":"2018-01-06T15:17:00.000","Title":"SQLAlchemy scoped_session is not getting latest data from DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use mostly SQLAlchemy core(v.1.0.8) expression language with flask(0.12) to create API calls. For a particular case where the table has 20 columns, I wish to select all except 1 particular column. How can this be done in the 'select' clause? Is there anything like 'except' that can be used instead of explicitly selecting the columns by names?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":3459,"Q_Id":48161673,"Users Score":-1,"Answer":"You could do some clever retrospective and do that but why not just select all and ignore the one you don't need?","Q_Score":4,"Tags":"python,postgresql,sqlalchemy","A_Id":48161699,"CreationDate":"2018-01-09T04:59:00.000","Title":"How to use SQLAlchemy core to select all table columns except 1 specific column in postgresql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sometime, we have many fields and large data set in DB (i am using mongoDB). One thing come in my mind regarding to save some bytes in DB by keeping shorten name in DB.\nLike \nyear : yr\nMonth : mn\nisSameCity : isSmCt\nSo, Is this approach good or bad. Or, that depends on case base.\nPlease mentor me on this.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":495,"Q_Id":48161770,"Users Score":1,"Answer":"Long named attributes (or, \"AbnormallyLongNameAttributes\") can be avoided while designing the data model. 
In my previous organisation we tested keeping short named attributes strategy, such as, organisation defined 4-5 letter encoded strings, eg:\n\nFirst Name = FSTNM,\nLast Name = LSTNM, \nMonthly Profit Loss Percentage = MTPCT,\nYear on Year Sales Projection = YOYSP, and so on..)\n\nWhile we observed an improvement in query performance, largely due to the reduction in size of data being transferred over the network, or (since we used JAVA with MongoDB) the reduction in length of \"keys\" in MongoDB document\/Java Map heap space, the overall improvement in performance was less than 15%.\nIn my personal opinion, this was a micro-optimzation that came at an additional cost (huge headache) of maintaining\/designing an additional system of managing Data Attribute Dictionary for each of the data models. This system was required to have an organisation wide transparency while debugging the application\/answering to client queries.\nIf you find yourself in a position where upto 20% increase in the performance with this strategy is lucrative to you, may be it is time to scale up your MongoDB servers\/choose some other data modelling\/querying strategy, or else to choose a different database altogether.","Q_Score":0,"Tags":"php,python,database,mongodb","A_Id":48162606,"CreationDate":"2018-01-09T05:11:00.000","Title":"is it a good idea to shorten attribute names in MongoDB database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a FlowFile and I want to insert the attributes into RDS. If this was a local machine, I'd create a DBCPConnectionPool, reference a JDBC driver, etc.\nWith RDS, what am I supposed to do? Something similar (how would I do this on AWS)? Or am I stuck using ExecuteScript? If it's the later, is there a Python example for how to do this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1019,"Q_Id":48162075,"Users Score":1,"Answer":"Question might not have been clear based on the feedback, but here is the answer to get a NiFi (running on an AWS EC2 instance) communicating with an Amazon RDS instance:\n\nOn the EC2 instance, download the latest JDBC driver (wget \"https:\/\/driver.jar\")\n(If needed) Move the JDBC driver into a safe folder.\nCreate the DBCPConnectionPool, referencing the fully-resolved file path to the driver.jar (helpful: use readlink -f driver.jar to get the path).\nDon't forget -- under your AWS Security Groups, add an inbound rule that allows your EC2 instance to access RDS (under Source, you should put the security group of your EC2 instance).","Q_Score":0,"Tags":"python,amazon-rds,apache-nifi","A_Id":48170299,"CreationDate":"2018-01-09T05:44:00.000","Title":"How best to interact with AWS RDS Postgres via NiFi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ok,\nI had a look at the UploadFile Class documentation of the Django framework. Didn't find exactly what I am looking for?\nI am creating a membership management system with Django. 
I need the staff to have the ability to upload excel files containing list of members (and their details) which I will then manipulate to map to the Model fields.\nIt's easy to do this with pandas framework for example, but I want to do it with Django if I can.\nAny suggestions.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":48236905,"Users Score":0,"Answer":"you can use xlrd to read excel files\nin client side you just submit a form with file input.\non server uploaded file stored on request.FILES\nread file and pass it to xlrd then process sheets and cells of each sheet","Q_Score":0,"Tags":"python,django,excel,django-models","A_Id":48237156,"CreationDate":"2018-01-13T04:26:00.000","Title":"How do I upload and manipulate excel file with Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to run an aggregation query for a large collection which has 200,000+ data records. And I want to run it with pymongo. I tried out the preferred method in the docs.\n\npipeline = [...]\ndb.command('aggregate', 'statCollection', pipeline=pipeline_aggregate)\n\nBut this returned an error saying pymongo.errors.OperationFailure: The 'cursor' option is required, except for aggregate with the explain argument.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1279,"Q_Id":48237302,"Users Score":0,"Answer":"I solved the problem using allowDiskUse option. So this is my answer.\n\npipeline_2 = [...]\ndb.command('aggregate', 'statCollection', pipeline=pipeline_2, allowDiskUse=True, cursor={})","Q_Score":2,"Tags":"python,mongodb,pymongo,data-analysis","A_Id":48237575,"CreationDate":"2018-01-13T05:51:00.000","Title":"How to run pymongo aggregation query for large(200,000+ records) collection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've just started to experiment with AWS SageMaker and would like to load data from an S3 bucket into a pandas dataframe in my SageMaker python jupyter notebook for analysis.\nI could use boto to grab the data from S3, but I'm wondering whether there is a more elegant method as part of the SageMaker framework to do this in my python code?\nThanks in advance for any advice.","AnswerCount":8,"Available Count":1,"Score":0.1243530018,"is_accepted":false,"ViewCount":73507,"Q_Id":48264656,"Users Score":5,"Answer":"Do make sure the Amazon SageMaker role has policy attached to it to have access to S3. It can be done in IAM.","Q_Score":53,"Tags":"python,amazon-web-services,amazon-s3,machine-learning,amazon-sagemaker","A_Id":48278872,"CreationDate":"2018-01-15T14:07:00.000","Title":"Load S3 Data into AWS SageMaker Notebook","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I am writing a program which writes data into an opened excel file. \nThe issue is that I need to run an infinite loop and the program is closed when it is killed. \nThe file isn't even created when I do this. workbook.close() is outside the infinite while loop. 
\nIs there a flush method within xlsxwriter so that I can save the data?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":962,"Q_Id":48269197,"Users Score":2,"Answer":"I was able to get around the problem by invoking the workbook.save() inside the loop. I have this long running program that keeps appending lines to the excel file and once the save method is invoked inside the loop, I can see new lines getting added as the program progresses.","Q_Score":2,"Tags":"python-2.7,io,xlsxwriter","A_Id":64650126,"CreationDate":"2018-01-15T19:00:00.000","Title":"Is there a flush method in the xlsxwriter module? [Python 2.7]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am writing a program which writes data into an opened excel file. \nThe issue is that I need to run an infinite loop and the program is closed when it is killed. \nThe file isn't even created when I do this. workbook.close() is outside the infinite while loop. \nIs there a flush method within xlsxwriter so that I can save the data?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":962,"Q_Id":48269197,"Users Score":0,"Answer":"Is there a flush method in the xlsxwriter module\n\nNo. You can only close()\/save a file once with XlsxWriter.","Q_Score":2,"Tags":"python-2.7,io,xlsxwriter","A_Id":64651008,"CreationDate":"2018-01-15T19:00:00.000","Title":"Is there a flush method in the xlsxwriter module? [Python 2.7]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scenario:\nI have a Source which maintains the transactions data. They have around 900 columns and based on the requirements of the new business, they add additional columns.\nWe are a BI team and we only extract around 200 columns which are required for our reporting. But when new business is launched \/ new analysis is required, sometimes users approach us and request us to pull extra columns from the source.\nCurrent Design:\nWe have created a table with extra columns for future columns as well.\nWe are maintaining a 400 column table with the future column names like str_01, str_02...., numer_01, numer_02... date_01, date_02... etc.\nWe have a mapping table which maps the columns in our table and columns in Source table. Using this mapping table, we extract the data from source.\nProblem:\nRecently, we have reached the 400 column limit of our table and we won't be able to onboard any new columns. One approach that we can implement is to modify the table to increase the columns to 500 (or 600) but I am looking for other solutions on how to implement ETL \/ design the table structure for these scenarios.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":131,"Q_Id":48272093,"Users Score":1,"Answer":"I suppose your additional columns are measures, not dimensions. So you can keep the dimensions in the individual columns and include them into sort key, and store measures in JSON, accessing them whenever you need. Also if you can distinguish between frequently used measures vs. occasional you can store the frequently used ones in columns and the occasional ones in JSON. 
Redshift has native support for extracting the value given the key, and you also have the ability to set up Python UDFs for more complex processing.","Q_Score":0,"Tags":"python,amazon-redshift,etl,emr,amazon-emr","A_Id":48276498,"CreationDate":"2018-01-15T23:26:00.000","Title":"ETL for a frequently changing Table structure","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a project using pandas-python to access data from Postgres using SQLAlchemy createngine function. While I pass the credentials and hostname:portname it throws error and asks me to add the machine IP to pg_conf.hba file on the Postgres server. Which will be cumbersome as I don't have a static IP for my machine and even this project need to be shared with other people and it doesn't make any sense to keep on adding new IPs or making requests with ** IPs as it has sensitive data.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":451,"Q_Id":48299705,"Users Score":0,"Answer":"Additional information on the topic revealed that actual issue being the\nlocal address he client is using for sending data when talking to the (database) server:\nYour client need to use the local VPN address assigned as source address.\nThis is achieved by adding in a socket.bind(_source address_) call before the call to socket.connect(_target address_).\nOr, more conveniently, just provide the source address parameter with the socket.create_connection(address[, timeout[, source_address]]) call that is setting up the connection to the server.","Q_Score":0,"Tags":"python,postgresql,vpn","A_Id":48399241,"CreationDate":"2018-01-17T11:11:00.000","Title":"Trying to access Postgres Data within the VPN without adding local machine ip to Postgres Server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on copying csv file content into postgresql database.\nWhile copying into the database, I get this error:\n\ninvalid input syntax for type numeric: \"inf\"\n\nMy question is:\nI think \"inf\" means \"infinitive\" value, is it right? what does \"inf\" correctly mean? 
If it is kinda error, is it possible to recover original value?\nAnd, Should I manually correct these values to copy it into the database?\nIs there any good solution to fix this problem without manually correcting or setting exceptions in copying codebase?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1132,"Q_Id":48333572,"Users Score":3,"Answer":"inf (meaning infinity) is a correct value for floating point values (real and double precision), but not for numeric.\nSo you will either have to use one of the former data types or fix the input data.","Q_Score":2,"Tags":"python,postgresql,csv","A_Id":48336589,"CreationDate":"2018-01-19T03:11:00.000","Title":"What mean \"Inf\" in csv?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed Anaconda 2.7 on my desktop and want to connect it to Postgresql server.\nI also installed psycopg2 through command prompt and it was successful. But when I import it using Jupyter notebook it shows me the following error.\n\nImportError Traceback (most recent call\n last) in ()\n ----> 1 import psycopg2\nC:\\Users\\amitdarak\\AppData\\Local\\Continuum\\anaconda2\\lib\\site-packages\\psycopg2-2.7.3.2-py2.7-win-amd64.egg\\psycopg2__init__.py\n in ()\n 48 # Import the DBAPI-2.0 stuff into top-level module.\n 49 \n ---> 50 from psycopg2._psycopg import ( # noqa\n 51 BINARY, NUMBER, STRING, DATETIME, ROWID,\n 52 \nImportError: DLL load failed: The specified module could not be found","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1406,"Q_Id":48338962,"Users Score":1,"Answer":"conda install -c anaconda postgresql worked fine for me on Windows 10.\nI know postgresql isn't the same module as psycopg2, but the easy installation of postgresql would trump any advantages psycopg2 might have for me.","Q_Score":0,"Tags":"python,postgresql,jupyter-notebook","A_Id":51459116,"CreationDate":"2018-01-19T10:29:00.000","Title":"Unable to import psycopg2 in python using anaconda","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to schedule a python script loading data to Bigquery without having to copy the authentication code generated from a google account link for each run. 
\nI am currently using the windows task scheduler to achieve this.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":853,"Q_Id":48343024,"Users Score":0,"Answer":"You can create the credentials by following this link cloud.google.com\/storage\/docs\/authentication#service_accounts.\nIn the python script, you can pass the json file path directly to the function you are using to read\/write from\/to BQ with the private_key argument.\npandas_gbq.read_gbq(query, project_id= myprojectid, ..., private_key= 'jsonfilepath', dialect=\u2019legacy\u2019) \npandas.to_gbq(dataframe, destination_table, project_id, chunksize=10000, ..., private_key='jsonfilepath') \nThen you schedule the task to run the python script as you'll normally do with the windows task scheduler.","Q_Score":1,"Tags":"python,windows,oauth,google-bigquery","A_Id":48620815,"CreationDate":"2018-01-19T14:13:00.000","Title":"Schedule a python script loading data to BigQuery under windows 10","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script which hits dozens of API endpoints every 10s to write climate data to a database. Lets say on average I insert 1,500 rows every 10 seconds, from 10 different threads. \nI am thinking of making a batch system whereby the insert queries aren't written to the db as they come in, but added to a waiting list and this list is inserted in batch when it reaches a certain size, and the list of course emptied.\n\nIs this justified due to the overhead with frequently writing small numbers of rows to the db?\nIf so, would a list be wise? I am worried about if my program terminates unexpectedly, perhaps a form of serialized data would be better?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":48356271,"Users Score":0,"Answer":"150 inserts per second can be a load on a database and can affect performance. There are pros and cons to changing the approach that you have. Here are some things to consider:\n\nDatabases implement ACID, so inserts are secure. This is harder to achieve with buffering schemes.\nHow important is up-to-date information for queries?\nWhat is the query load?\ninsert is pretty simple. Alternative mechanisms may require re-inventing the wheel.\nDo you have other requirements on the inserts, such as ensuring they are in particular order?\n\nNo doubt, there are other considerations.\nHere are some possible alternative approaches:\n\nIf recent data is not a concern, snapshot the database for querying purposes -- say once per day or once per hour.\nBatch inserts in the application threads. A single insert can insert multiple rows.\nInvest in larger hardware. An insert load that slows down a single processor may have little effect on a a larger machine.\nInvest in better hardware. 
More memory and faster disk (particularly solid state) and have a big impact.\n\nNo doubt, there are other approaches as well.","Q_Score":0,"Tags":"python,sql,postgresql,insert","A_Id":48356325,"CreationDate":"2018-01-20T12:42:00.000","Title":"Overhead on an SQL insert significant?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a crawler using Python to grab some data on company internal web.but when I posted all the data,it showed PLS-00306 wrong number or type of arguments in call to PM_USER_LOGIN_SP \nORA-066550:line 1, column 7\nPL\/SQL: Statement ignored\nI checked my Firefox inspector again and again, and all my request data were right, even I removed some of my request data or changed it, it returned another error code.\nIs there someone help me out what's the problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":48357407,"Users Score":0,"Answer":"Oracle procedure PM_USER_LOGIN_SP has one or more parameters, each of them having its own data type. When calling that procedure, you must match number and data type of each of them.\nFor example, if it expects 3 parameters, you can't pass only 2 (nor 4) of them (because of wrong number of arguments (parameters)).\nIf parameter #1 is DATE, you can't pass letter A to it (because of a wrong type). Note that DATEs are kind of \"special\", because something that looks like a date to us, humans (such as 20.01.2018, which is today) passed to Oracle procedure's DATE data type parameter must really be a date. '20.01.2018' is a string, so either pass date literal, such as DATE '2018-01-20' or use appropriate function with a format mask, TO_DATE('20.01.2018', 'dd.mm.yyyy').\nTherefore, have a look at the procedure first, pay attention to what it expects. Then check what you pass to it.","Q_Score":0,"Tags":"python,html,sql,asp.net","A_Id":48359205,"CreationDate":"2018-01-20T14:41:00.000","Title":"Return PLS-00306 During login in with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a desktop app that is built on top of Django framework and frozen to .exe using PyInstaller. The idea behind it, that an application should connect to remote database(PostgreSQL) on VPS. That VPS is serving static files for this application too. So here is the question - is that option secure? Can potential hackers connect to my database and make a mess in it or replace original DB with the fake one? If they can, how should I fix that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":48467301,"Users Score":1,"Answer":"It is not safe to connect to a remote database in a scenario that you are describing.\nFor a potential hacker its a piece of cake to figure out the credentials of the remote database that you are using.\nAnd to answer your question it will be difficult for the hacker to replace the DB with a fake one. But it wont stop him from getting all the data from your DB and modifying it.\nWhat you should do is to have a rest-api endpoint or a grapghql endpoint to interact with the database. 
and you can hit that endpoint from the client app.","Q_Score":1,"Tags":"python,sql,django,security","A_Id":48467379,"CreationDate":"2018-01-26T18:20:00.000","Title":"Let desktop app based on Django, connect to remote DB is secure?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating a Python 3 spider that scrapes Tor hidden services for useful data. I'm storing this data in a PostgreSQL database using the psycopg2 library. Currently, the spider script and the database are hosted on the same network, so they have no trouble communicating. However, I plan to migrate the database to a remote server on a VPS so that I can have a team of users running the spider script from a number of remote locations, all contributing to the same database. For example, I could be running the script at my house, my friend could run it from his VPS, and my professor could run the script from a few different systems in the lab at the university, and all of these individual systems could synchronize with the PostgreSQL server runnning on my remote VPS.\nThis would be easy enough if I simply opened the database VPS to accept connections from anywhere, making the database public. However, I do not want to do this, for security reasons. I know I could tunnel the connection through SSH, but that would require giving each person a username and password that would grant them access to the server itself. I don't wish to do this. I'd prefer simply giving them access to the database without granting access to a shell account.\nI'd prefer to limit connections to the local system 127.0.0.1 and create a Tor hidden service .onion address for the database, so that my remote spider clients can connect to the database .onion through Tor.\nThe problem is, I don't know how to connect to a remote database through a proxy using psycopg2. I can connect to remote databases, but I don't see any option for connecting through a proxy.\nDoes anyone know how this might be done?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":367,"Q_Id":48480183,"Users Score":0,"Answer":"This would be easy enough if I simply opened the database VPS to accept connections from anywhere\n\nHere lies your issue. Just simply lock down your VPS using fail2ban and ufw. Create a ufw role to only allow connection to your Postgres port from the IP address you want to give access from to that VPS ip address. \nThis way, you don't open your Postgres port to anyone (from *) but only to a specific other server or servers that you control. This is how you do it. Don't run an onion service to connect Postgres content because that will only complicate things and slow down the reads to your Postgres database that I am assuming an API will be consuming eventually to get to the \"useful data\" you will be scraping.\nI hope that at least points you in the right direction. Your question was pretty general, so I am keeping my answer along the same vein.","Q_Score":0,"Tags":"python,python-3.x,postgresql,psycopg2,tor","A_Id":48602297,"CreationDate":"2018-01-27T20:24:00.000","Title":"Connect to remote PostgreSQL server over Tor? 
[python] [Tor]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using fedora 27 and python 2.7 and I installed the last version of GCC but when I tried to install mysqlpython connector by typing 'pip install mysqlpython' it show me an error.\n\ncommand 'gcc' failed with exit status 1","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":48566024,"Users Score":0,"Answer":"Probably you're missing the python-devel package:\nsudo dnf install python-devel -y\nIt should be written in the last lines before the last error you've posted.","Q_Score":1,"Tags":"python-2.7,fedora,fedora-27","A_Id":48856425,"CreationDate":"2018-02-01T15:12:00.000","Title":"How to install Mysqlpython connector on fedora 27","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"MongoDB uses self-signed certificate. I want to setup service on EVE to work with it. I searched documentation and SO but found only information how to use self-signed cert to access EVE itself. What should I do to connect to MongoDB from EVE with self-signed certificate?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":228,"Q_Id":48572258,"Users Score":0,"Answer":"Resolved by passing required parameters (ssl, ssl_ca_certs, etc) to MongoClient via MONGO_OPTIONS setting.","Q_Score":0,"Tags":"python,mongodb,ssl,eve","A_Id":48727022,"CreationDate":"2018-02-01T21:46:00.000","Title":"Connect to MongoDB with self-signed certificate from EVE","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some experience with SQL and python.\nIn one of my SQL stored procedures I want to use some python code block and some functions of python numpy.\nWhat is the best way to do it.SQL server version is 2014.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":662,"Q_Id":48595683,"Users Score":0,"Answer":"First install the MySQL connector for python from the MySQL website and then import the mysql.connector module and initialize a variable to a mysql.connector.connect object and use cursors to modify and query data. Look at the documentation for more help.\nIf you don't have problem with no networking capabilities and less concurrency use sqlite, it is better than MySQL in these cases.","Q_Score":1,"Tags":"python,sql,sql-server","A_Id":48687137,"CreationDate":"2018-02-03T09:01:00.000","Title":"how to use python inside SQL server 2014","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am receiving the error: ImportError: No module named MySQLdb whenever I try to run my local dev server and it is driving me crazy. 
I have tried everything I could find online:\n\nbrew install mysql\npip install mysqldb\npip install mysql\npip install mysql-python\npip install MySQL-python\neasy_install mysql-python\neasy_install MySQL-python\npip install mysqlclient\n\nI am running out of options and can't figure out why I continue to receive this error. I am attempting to run my local dev server from using Google App Engine on a macOS Sierra system and I am using python version 2.7. I am also running: source env\/bin\/activate at the directory my project files are and am installing all dependencies there as well. My path looks like this: \n\/usr\/local\/bin\/python:\/usr\/local\/mysql\/bin:\/usr\/local\/opt\/node@6\/bin:\/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin:\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/bin\nDoes anyone have further ideas I can attempt to resolve this issue?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":865,"Q_Id":48617779,"Users Score":0,"Answer":"Turns out I had the wrong python being pointed to in my virtualenv. It comes preinstalled with its own default python version and so, I created a new virtualenv and used the -p to set the python path to my own local python path.","Q_Score":2,"Tags":"python,mysql,django,pip,mysql-python","A_Id":48712972,"CreationDate":"2018-02-05T07:39:00.000","Title":"VirtualEnv ImportError: No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am receiving the error: ImportError: No module named MySQLdb whenever I try to run my local dev server and it is driving me crazy. I have tried everything I could find online:\n\nbrew install mysql\npip install mysqldb\npip install mysql\npip install mysql-python\npip install MySQL-python\neasy_install mysql-python\neasy_install MySQL-python\npip install mysqlclient\n\nI am running out of options and can't figure out why I continue to receive this error. I am attempting to run my local dev server from using Google App Engine on a macOS Sierra system and I am using python version 2.7. I am also running: source env\/bin\/activate at the directory my project files are and am installing all dependencies there as well. 
My path looks like this: \n\/usr\/local\/bin\/python:\/usr\/local\/mysql\/bin:\/usr\/local\/opt\/node@6\/bin:\/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin:\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/bin\nDoes anyone have further ideas I can attempt to resolve this issue?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":865,"Q_Id":48617779,"Users Score":0,"Answer":"In my project, with virtualenv, I just did\n\npip install mysqlclient\n\nand like magic everything is ok","Q_Score":2,"Tags":"python,mysql,django,pip,mysql-python","A_Id":71918271,"CreationDate":"2018-02-05T07:39:00.000","Title":"VirtualEnv ImportError: No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can anyone explain me the difference between mongoengine and django-mongo-engine and pymongo.\nI am trying to connect to mongodb Database in Django2.0 and python3.6","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1047,"Q_Id":48648373,"Users Score":1,"Answer":"If you want multiple DBs (in Mongo itself) and need to switch DB for long time, then don't go for mongoengine.\nFor simple interaction mongoengine is a nice option.","Q_Score":0,"Tags":"python,django,mongodb,django-models,python-3.6","A_Id":54759944,"CreationDate":"2018-02-06T17:10:00.000","Title":"Difference between mongoengine and django-mongo-engine and pymongo","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Am implementing a sign up using python & mysql. \nAm getting the error no module named flask.ext.mysql and research implies that i should install flask first. They say it's very simple, you simply type\npip install flask-mysql\nbut where do i type this? In mysql command line for my database or in the python app?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":702,"Q_Id":48671331,"Users Score":-1,"Answer":"You should be able to type it in the command line for your operating system (ie. CMD\/bash\/terminal) as long as you have pip installed and the executable location is in your PATH.","Q_Score":0,"Tags":"python,mysql,pip,installation","A_Id":48671377,"CreationDate":"2018-02-07T18:56:00.000","Title":"Where do i enter pip install flask-mysql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm a little new to this, and figuring it out as I go. So far so good, however I have having trouble importing a field that has a foreign key. I have about 10,000 rows in a csv file that I want to add to the database. As you can imagine, entering 10,000 items at a time is too labour intensive. When I try for an import I get this error: ValueError: invalid literal for int() with base 10\nThis is because (i think) it is trying to match the related model with the id. However the database is empty, so there is no id, and furthermore, the \"author\" field in my csv (the one with the foreign key) doesn't have an id yet. ( i assume this is created when the record is). 
Any suggestions?\nSorry in advance for the newbie question.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":423,"Q_Id":48687566,"Users Score":0,"Answer":"You state the the database is blank, if so you're going to have to have at least 1 record in the parent table, where the primary key is defined. After that you can use the parent.primary-key value in your CSV file as a dummy value for your foreign key. That should at least allow you to populate the table. Later on, as your parent table grows you're going to have to write some sort of script to add the correct parent.primary-key value to each record in the CSV file and then import it.","Q_Score":0,"Tags":"python,django,csv","A_Id":48688068,"CreationDate":"2018-02-08T14:13:00.000","Title":"CSV import into empty Django database with foreign key field","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I really am at a loss here. I am using Pycharm 5.0.4 and running a virtual env with Python3 and Django 2.0.1. \nI am trying to get my database up and running for my project, but no matter what I do I cannot seem to get anything to show up in the database tool window drop down in Pycharm. I have 'ENGINE': 'django.db.backends.sqlite3' set in my settings.py, and in Pycharm i am going to:\n Create New -> Data Source -> SQlite(Xerial). \nI then makemigrations and migrate but nothing shows up in the database. I can even go to the project website and succesfully add\/create models in the admin site. But I cannot figure out where they are or see them... \nIt was working at one point but I deleted my database because I was getting some errors and now I am trying to recreate it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":331,"Q_Id":48877570,"Users Score":0,"Answer":"you do not need to go to Create New -> Data Source -> SQlite(Xerial). \nif your setting.py database config is as is ('ENGINE': 'django.db.backends.sqlite3') the database is autogenerated when you run makemigrations then migrate.\nto recreate the database (you said you deleted):\n\nremove previous migrations (delete all files in your app's migration folder except the init.py)\nCtrl+Alt+R (or tools -> run manage.py)\nthen in the manage.py terminal run makemigrations andmigrate`\n\na new database will be created and migrations applied......... 
you dont have to worry about seeing the entries in the database directly.\nIf you're able to create superuser and login to the admin site and manipulate model data, then you're up and running","Q_Score":0,"Tags":"python,django,pycharm","A_Id":48877702,"CreationDate":"2018-02-20T03:50:00.000","Title":"unable to see my anything in database pycharm","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I got something like this:\n\ncur.execute(\"INSERT INTO name VALUES(HERE_IS_VARIABLE,'string',int,'string')\")\n\nStuff with %s (like in python 2.*) not working.\nI got errors, which tells me that im trying to use \"column name\" in place where i put my variable.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":376,"Q_Id":48914074,"Users Score":0,"Answer":"You can try using f-Strings and separating out the statement from the execution:\nstatement = f\"INSERT INTO name VALUES({VARIABLE_NAME},'string',int,'string')\" \ncur.execute(statement)\nYou might also want to try with '' around {VARIABLE_NAME}: '{VARIABLE_NAME}'\nIn f-strings, the expressions in {} get evaluated and their values inserted into the string. \nBy separating out the statement you can print it and see if the string is what you were expecting. \nNote, the f-string can be used within the cur.execute function, however I find it more readable to separate out. \nIn python3.6+ this is a better way of formatting strings then with %s. \nIf this does not solve the problem, more information will help debug:\nwhat is the name table's schema?\nwhat variable \/ value are you trying to insert?\nwhat is the exact error you are given?","Q_Score":0,"Tags":"postgresql,python-3.6","A_Id":53563271,"CreationDate":"2018-02-21T19:35:00.000","Title":"Python3.6 + Postgresql how to put VARIABLES to SQL query?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an xlsx with 2 worksheet. \nIn the first sheet I have an Excel pivot, in the second one the data source for the pivot table.\nI would like to modify the data source via python keeping the pivot structure in the other sheet.\nOpening the woorkbook with openpyxl I lost the pivot table, does anyone know if there is an option to avoid this behaviour?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":607,"Q_Id":49065126,"Users Score":0,"Answer":"Use the openpyxl package version 2.5 above, the function to keep the pivot table format is only available from the 2.5 release onward.","Q_Score":2,"Tags":"python,excel,openpyxl","A_Id":52547698,"CreationDate":"2018-03-02T08:14:00.000","Title":"Keeping excel-like pivot with openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for high-level insight here, as someone coming from the PHP ecosystem. 
What's the common way to deploy updates to a live Flask application thats running on a single server (no load balancing nodes), served by some WSGI like Gunicorn behind Nginx?\nSpecifically, when you pull updates from a git repository or rsync files to the server, I'm assuming this leaves a small window where a request can come through to the application while its files are changing.\nI've mostly deployed Laravel applications for production, so to prevent this is use php artisan down to throw up a maintenance page while files copy, and php artisan up to bring the site back up when its all done.\nWhat's the equivalent with Flask, or is there some other way of handling this (Nginx config)?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":360,"Q_Id":49071433,"Users Score":0,"Answer":"Looks like Docker might be my best bet:\n\nHave Nginx running on the host, and the application running in container A with Gunicorn. Nginx directs traffic to container A.\nBefore starting the file sync, tear down container A and start up container B, which listens on the same local port. Container B can be a maintenance page or a copy of the application.\nStart file sync and wait for it to finish. When done, tear down container B, and start container A again.","Q_Score":4,"Tags":"python,flask,web-deployment","A_Id":49099488,"CreationDate":"2018-03-02T14:43:00.000","Title":"Deploying updates to a Flask application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I got started using AWS Glue for my data ETL. I've pulled in my data sources into my AWS data catalog, and am about to create a job for the data from one particular Postgres database I have for testing. I have read online that when authoring your own job, you can use a Zeppelin notebook. I haven't used Zeppelin at all, but have used Jupyter notebook heavily as I'm a python developer, and was using it a lot for data analytics, and machine learning self learnings. I haven't been able to find it anywhere online, so my question is this \"Is there a way to use Jupyter notebook in place of a Zeppelin notebook when authoring your own AWS Glue jobs?\"","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4551,"Q_Id":49090954,"Users Score":1,"Answer":"I think it should be possible, if you can setup a Jupyter notebook locally, and enable SSH tunneling to the AWS Glue. 
I do see some reference sites for setting up local Jupyter notebook, enable SSH tunneling, etc, though not AWS Glue specific.","Q_Score":3,"Tags":"python,jupyter,aws-glue","A_Id":49095523,"CreationDate":"2018-03-04T01:12:00.000","Title":"Is it possible to use Jupyter Notebook for AWS Glue instead of Zeppelin","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have written a python program that creates a mongoDB database.\nI have written function that is supposed to update this databse.\nI've searched through many many forum posts for similar problems, but none of them seem to address this exact one.\nBasically, My objects are very simple and like so.\nobject{\n 'foo' : 'bar'\n 'baz' : [foobar,foobarbaz,]\n}\nI basically create these objects, then, if they are repeated I update them with a function like this:\ndb[\"collection\"].update({u'foo' : bar},{'$push':{u'baz' : foobaz}})\nI am trying to append a string to the list which is the value for the field name 'baz'.\nHowever, I keep getting this object in return:\n{'updatedExisting': False, u'nModified': 0, u'ok': 1.0, u'n': 0}\nI've tried replacing update with update_one.\nI am using python 2.7, Ubuntu 16.04, pymongo 2.7.2,\nmongodb 3.6.3\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":610,"Q_Id":49149442,"Users Score":0,"Answer":"Ok I figured this one out.\nBecause I was getting the proper format returned, that means that I was trying to communicate with db, but something was amiss with the python\/javascript communication.\nI'm pretty sure that python does not understand javascript.\nThat is why everything must be explictly stated as a string.\nFor example, with:\ndb[\"collection\"].update({u'foo' : bar},{'$push':{u'baz' : foobaz}}).\nNotice how I must use 'foo' instead of foo.\nThe same goes for the values.\nI am updating the values by a variable passed into a function.\nIn order for pymongo to correctly convert this into javascript to communicate with my mongoDB database, i must also explicitly convert it into a string with str(bar) and str(foobaz).\nNone of this would be possible with python unless I was using pymongo.\nSo in order for pymongo to do it's magic, I must give it the proper format.\nThis doesn't seem to apply to all versions of python and pymongo, as i've searched this problem and none of the people using pymongo had to use this specific conversion. This may also have something to do with the fact that i'm using python 2.7. I know that the typechecking\/conversion is a little different in python3 in some instances.\nIf someone can explain this in a more lucid way (using the correct python and javascript terminology) that would be great.\nUpdate: This also seems to hold true for any type.\neg. in order to increment using the $inc operator with 1, I must also use int(1) to convert my int into an int explicitly. 
(Even though this is kind of weird, because int should be primitive, but maybe it's an object in python).","Q_Score":1,"Tags":"python,mongodb,pymongo","A_Id":49155384,"CreationDate":"2018-03-07T10:20:00.000","Title":"MongoDB \/ PyMongo \/ Python update not working","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to set up a cronjob that executes a python (3.6) script every day at a given time that connects to an oracle 12g database with a 32 bit client (utilizing the cx_Oracle and sqlalchemy libs). The code itself was developed on a win64 bit machine.\nHowever, when trying to deploy the script onto an Ubuntu 16.04 server, I run into a dilemma when it comes to 32 vs 64 bit architectures.\n\nThe server is based on a 64 bit architecture\nThe oracle db is accessible via a 32 bit client\nmy current python version on ubuntu is based on 64 bit and I spent about an hour of how to get a 32 bit version running on a 64 bit linux machine without much success. \n\nThe error I receive at this moment when trying to run the python script refers to the absence of an oracle client (DPI-1047). However, I already encountered a similar problem in windows when it was necessary to switch the python version to the 32 bit version and to install a 32 bit oracle client.\nIs this also necessary in the ubuntu case or are there similar measurements needed to be taken? and if so, how do I get ubuntu to install and run python3.6 in 32 bit as well as the oracle client in 32 bit?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":698,"Q_Id":49171782,"Users Score":4,"Answer":"I am a bit confused about your question but this should give some clarification:\n\nA 32-bit client can connect to a 64-bit Oracle database server - and vice versa\nYou can install and run 32-bit applications on a 64-bit machine - this is at least valid for Windows, I don't know how it works on Linux.\nYour application (the python in your case) must have the same \"bitness\" as installed Oracle Client.","Q_Score":0,"Tags":"python,oracle,ubuntu,32bit-64bit,cx-oracle","A_Id":49172856,"CreationDate":"2018-03-08T11:15:00.000","Title":"Running a Python Script in 32 Bit on 64 linux machine to connect to oracle DB with 32 bit client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can't seem to get this to work with pymongo it was working before I added the $or option. 
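For the PyMongo update record above (the $push onto the 'baz' array that returned nModified: 0), a minimal sketch of the same operation with explicit str() casts, assuming PyMongo 3.x where update_one and its result counters are available. Database and connection details are placeholders.

```python
# Sketch of the $push update discussed above; URI and database name are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

def append_value(bar, foobaz):
    # Match on 'foo' and push onto the 'baz' array; cast to str explicitly,
    # as the answer recommends.
    result = db["collection"].update_one(
        {"foo": str(bar)},
        {"$push": {"baz": str(foobaz)}},
    )
    # matched_count shows whether the filter found a document at all,
    # modified_count whether the push actually changed it.
    return result.matched_count, result.modified_count
```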
Am I missing something obvious with this\ndataout = releasescollection.find( { $or: [{\"l_title\":{\"$regex\": \"i walk the line\", \"$options\": \"-i\"}}, {\"artistJoins.0.artist_name\":{\"$regex\": \"Johnny Cash\", \"$options\": \"-i\"}}]}).sort('id', pymongo.ASCENDING).limit(25)\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"discogs.py\", line 51\n dataout = releasescollection.find( { $or: [{\"l_title\":{\"$regex\": \"i walk the line\", \"$options\": \"-i\"}}, {\"artistJoins.0.artist_name\":{\"$regex\": \"Johnny Cash\", \"$options\": \"-i\"}}]})\n ^\nSyntaxError: invalid syntax\nRunning the below directly in mongo works but I'm missing something in the switchover to python\ndb.releases.find( { $or: [{\"l_title\":{\"$regex\": \"i walk the line\", \"$options\": \"-i\"}}, {\"artistJoins.0.artist_name\":{\"$regex\": \"Johnny Cash\", \"$options\": \"-i\"}}]}).sort({'id':1}).limit(25)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1427,"Q_Id":49204190,"Users Score":8,"Answer":"Should've relized this far sooner, once I added the $or it needs to be in quotes. So this works: \ndataout = releasescollection.find( { \"$or\": [{\"l_title\":{\"$regex\": \"i walk the line\", \"$options\": \"-i\"}}, {\"artistJoins.0.artist_name\":{\"$regex\": \"Johnny Cash\", \"$options\": \"-i\"}}]}).sort('id', pymongo.ASCENDING).limit(25)","Q_Score":1,"Tags":"python,python-2.7,mongodb-query,pymongo","A_Id":49204372,"CreationDate":"2018-03-10T00:42:00.000","Title":"PYTHON - PYMONGO - Invalid Syntax with $or","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to select data from mssql table using python(I'm using pycharm).\nOne of the fields contains arabic letters, but the result of the select is '???????' Instead of the arabic letters. How do I get the arabic words correctly?\nIm using pymssql. Im creating a connection and a cursor, and than running:\n\"cursor.execute(command)\".\nThe command is:\n\"Select * from Table where Field = XXX\"\nIt returns result, just not in the rigth encoding. Btw, in the table the arabic words are written correctly.\nI tried printing the data to the console and writing it to a file, both failed(returned '????').\nI've also added \"# -- coding: utf-8 --\" at the beginning of the file, so it can handle the non-ascii letters.\nAny idea? 
Thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":49217448,"Users Score":0,"Answer":"I needed to change the way I'm connecting to the database - instead of pymssql use pypyodbc.","Q_Score":0,"Tags":"python,sql-server,python-2.7,pycharm,pymssql","A_Id":49221473,"CreationDate":"2018-03-11T06:49:00.000","Title":"Selecting non-ascii words from mssql table using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed MySQL connector for python 3.6 in centos 7\nIf I search for installed modules with below command \n\nit's showing as below\n pip3.6 freeze\n mysql-connector==2.1.6\n mysql-connector-python==2.1.7\n pymongo==3.6.1\npip3.6 search mysql-connector\n mysql-connector-python (8.0.6) -MYSQL driver written in Python\n INSTALLED: 2.1.7\n LATEST: 8.0.6\n mysql-connector (2.1.6) - MySQL driver written in Python\n INSTALLED: 2.1.6 (latest)\n\nMySQL connector installed.But when trying to run the program using MySQL connector then its showing error no module installed MySQL connector.I am using MariaDB 10.0\n\npython3.6 mysql1.py\n Traceback (most recent call last):\n File \"mysql1.py\", line 2, in \n import mysql.connector as mariadb\n File \"\/root\/Python_environment\/my_Scripts\/mysql.py\", line 2, in \n import mysql.connector\n ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package\n\ncan any one know how to resolve","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":455,"Q_Id":49248489,"Users Score":1,"Answer":"You must not name your script mysql.py \u2014 in that case Python tries to import mysql from the script \u2014 and fails.\nRename your script \/root\/Python_environment\/my_Scripts\/mysql.py to something else.","Q_Score":0,"Tags":"python,pip,mariadb,centos7","A_Id":49254109,"CreationDate":"2018-03-13T04:47:00.000","Title":"Mysql Connector issue in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed MySQL connector for python 3.6 in centos 7\nIf I search for installed modules with below command \n\nit's showing as below\n pip3.6 freeze\n mysql-connector==2.1.6\n mysql-connector-python==2.1.7\n pymongo==3.6.1\npip3.6 search mysql-connector\n mysql-connector-python (8.0.6) -MYSQL driver written in Python\n INSTALLED: 2.1.7\n LATEST: 8.0.6\n mysql-connector (2.1.6) - MySQL driver written in Python\n INSTALLED: 2.1.6 (latest)\n\nMySQL connector installed.But when trying to run the program using MySQL connector then its showing error no module installed MySQL connector.I am using MariaDB 10.0\n\npython3.6 mysql1.py\n Traceback (most recent call last):\n File \"mysql1.py\", line 2, in \n import mysql.connector as mariadb\n File \"\/root\/Python_environment\/my_Scripts\/mysql.py\", line 2, in \n import mysql.connector\n ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package\n\ncan any one know how to resolve","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":455,"Q_Id":49248489,"Users Score":0,"Answer":"This is the problem I faced in Environment created by python.Outside the python environment i am able to run the script.Its 
running succefully.In python environment i am not able run script i am working on it.if any body know can give suggestion on this","Q_Score":0,"Tags":"python,pip,mariadb,centos7","A_Id":49376529,"CreationDate":"2018-03-13T04:47:00.000","Title":"Mysql Connector issue in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to do some querys on my data base, but the file is to large and is spending to much time, exists some way to upload one zip file with those querys to my table?\nps: the file have between 350Mb to 500Mb","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":198,"Q_Id":49262646,"Users Score":0,"Answer":"Store the zip file somewhere else on your server and simply store the name or the file location string in your dB.\nMysql really isn't intended to store large files or better yet zip folders. \nThen when you go to retrieve it just unload the file location into an tag it will link to it.","Q_Score":0,"Tags":"python,mysql,mysql-connector","A_Id":49262686,"CreationDate":"2018-03-13T17:47:00.000","Title":"Do mysql querys in python with zip file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have just inherited an extreemly legacy application (built on windows 95 - Magic7 for the connoisseurs) now backed against a recentish mssql db (2012). That's not the db system it was first designed on, and it thus comes with some seriously odd design for tables.\nI'm looking for a python ORM to help me talk to this thing easily. Namely, I'm after an ORM that can easily, for instance, merge 2 tables as if they were one.\nFor instance I may have tables BILLS and BILLS_HISTORY, with different column names, and perhaps even different column types, so different strictly speaking, but sementically containing the same information (same number of columns, sementically identical values).\nI'm looking for an ORM that lets me define only one Bill object, that maps to both tables, and that gives me the right hooks to decide where things go, and how to write them when tweaks are needed.\nAnother Example : say I have an object called a good. If a good is finished, it goes in the GOODS table, if it is not finished, it goes in the GOODS_UNFINISHED table. I'm looking for a goods object that can read both tables, and give me a finished property set to the right value depending which table it comes from (and with the hooks to change it from one table to the other if the property is set in some way).\nI'm fine with python, but I have not done much such db work before so my knowledge is limited there. I could, and might end up writing my own tailor made ORM, but that seems like a waste of time for something that will be thrown away in 6 months when the full transition is done to something new. Does anyone know of an ORM with such capabilities ? I'm planning to study ponyORM and SQLAlchemy, but I have a feeling it will take me a few days to come to a conclusion wether they are suitable for my use case. 
So I thought I'd ask the community too ...\nCheers","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":235,"Q_Id":49325336,"Users Score":0,"Answer":"For the record I've ended up with a hybrid approach using sqlalchemy.\nSqlalchemy was not flexible enough to do everything I wanted out of the box in a non verbose fashion, but had the required functionality to get a fair bit of the way along if one took the pain of writing explicitely everything needed. So I wrote a program that generates about 6000 lines of sqlalchemy code in order to have a 1 to 1 mapping between sqlalchemy objects to tables in the way required (basically defining everything explicitely for sqla). Sqlalchemy has a lot of hooks during autoload, but I have found it hard\/impossible to leverage different hooks and set fine grained behaviour at each hook at the same time, that's why I went the automated explicit way.\nOn top of these sqlalchemy objects, I've written objects that wraps them to hide the \"which table\" traffic control things. A bit of a kludge and I think that I could have done something with type heritance and sqlachemy objects, but time was passing and I only needed very little functionality or maintainability in that layer, so just charged ahead.","Q_Score":0,"Tags":"python,python-3.x,orm,sqlalchemy,ponyorm","A_Id":51031086,"CreationDate":"2018-03-16T16:19:00.000","Title":"Looking for a particular python ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want assign the data which is retrieve from database (sqlit3) particular column for a variable and call that variable for word tokenize.\nplease help with this\nI know tokenize part but I want to know how to assign the db value to a variable in python.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":49340877,"Users Score":0,"Answer":"This is worked but when using c.fetchall it didn't work.shows error saying TypeError: expected string or buffer\nimport sqlite3\nimport nltk\nfrom nltk.tokenize import sent_tokenize, word_tokenize\nconn = sqlite3.connect('ACE.db')\nc = conn.cursor()\nc.execute('SELECT can_answer FROM Can_Answer')\nrows = c.fetchone()\nfor row in rows:\n print(row)\nprint(word_tokenize(row))","Q_Score":0,"Tags":"python,sqlite","A_Id":49361558,"CreationDate":"2018-03-17T19:12:00.000","Title":"how to assign db column value to variable and call it to tokenize in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The default database of the Django is sqlite, however I want to use MYSQL instead.Since the MYSQLdb module is not supported in python3.x, the official doc of django recommend using mysqlclient and MySQL Connector\/Pythoninstead.Here is the original doc:\n\nMySQL has a couple drivers that implement the Python Database API described in PEP 249:\n \u2022 mysqlclient is a native driver. 
It\u2019s the recommended choice.\n \u2022 MySQL Connector\/Python is a pure Python driver from Oracle that does not require the MySQL client library\n or any Python modules outside the standard library.\n These drivers are thread-safe and provide connection pooling.\n In addition to a DB API driver, Django needs an adapter to access the database drivers from its ORM. Django provides\n an adapter for mysqlclient while MySQL Connector\/Python includes its own.\n\nI've got the latest version of mysql-client and mysql-connector-python, but as I execute themigratecommand, error occurs.Here is part of the message:\n\nUnhandled exception in thread started by .wrapper at 0x7f2112e99d90>\n Traceback (most recent call last):\n File \"\/home\/lothakim\/anaconda3\/envs\/py36\/lib\/python3.6\/site-packages\/django\/db\/backends\/mysql\/base.py\", line 15, in \n import MySQLdb as Database\n ModuleNotFoundError: No module named'MySQLdb'django..........core.exceptions.ImproperlyConfigured: Error loading MySQLdb module.Did you install mysqlclient?\n\nIt seems to be the problem of the database connection.But I followed every step of the official tutorial.How can I fix this problem?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":761,"Q_Id":49345950,"Users Score":1,"Answer":"It's a silly mistake...\nI confuse the mysql-client with mysqlclient.The former is part of the MYSQL application, while the latter is a python module.I didn't install the latter.Also note you should sudo apt-get install libmysqlclient-devbefore pip install mysqlclient.","Q_Score":3,"Tags":"python,mysql,django","A_Id":49346089,"CreationDate":"2018-03-18T08:33:00.000","Title":"How do I use MYSQL as the database in a Django project?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to write a migration which grants readonly permissions to each schema in my multi-tenant postgres DB. \nThe migrations run once per schema, so what I would like to do would be capture the name of the schema for which it is running, and then use that schema_name in my SQL statement to grant permissions for that schema. \nIn django, I can create a migration operation called 'RunPython', and from within that python code I can determine for which schema the migrations are currently running (schema_editor.connection.connection_name). \nWhat I want to do is pass that information to the next migration operation, namely \"RunSQL\", so that the SQL I run can be:\n\"GRANT SELECT ON ALL TABLES IN SCHEMA {schema_name_from_python_code} TO readaccess;\"\nIf anyone can shed any light on this issue it would be greatly appreciated. Cheers!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":668,"Q_Id":49389756,"Users Score":0,"Answer":"I was able to figure this out by getting rid of the migrations.runSQL. I just have migrations.RunPython. 
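For the Django + MySQL record above (mysqlclient vs the mysql-client system package), a sketch of the matching DATABASES entry in settings.py, assuming mysqlclient has been installed after the libmysqlclient-dev system package; the credentials are placeholders.

```python
# settings.py sketch for the Django + MySQL question above; credentials are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydatabase",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "127.0.0.1",
        "PORT": "3306",
    }
}
```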
From within that python forward_func I am able to access the DB and write sql there (with the necessary string interpolation) \n:)","Q_Score":0,"Tags":"django,python-3.x,postgresql,multi-tenant,django-migrations","A_Id":49412917,"CreationDate":"2018-03-20T16:41:00.000","Title":"How to combine migrations.RunPython and migrations.RunSQL in django migration operation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are trying our luck with robot framework for tests. Automation. I am stuck at database connection at this point.\nA DB connection using cx_Oracle is displaying an error saying \u201c No keyword withy the name cx_Oracle\u2019 . If you have any idea please help . It will be helpful if you could put out an example of the Oracle dB connection sample.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4004,"Q_Id":49398152,"Users Score":0,"Answer":"It was indeed an installation issue. We had to use Anaconda3 and had to installed the library under its site-packages . I had this one under default Python folder.The issue is now resolved .","Q_Score":0,"Tags":"python,oracle,robotframework","A_Id":49469043,"CreationDate":"2018-03-21T04:22:00.000","Title":"Robot Framework : how to connect to Oracle database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to build a whole sheet from scratch, and stay efficient while doing it.\nFor that purpose, I am trying to rely on bulk operations.\nI can build a massive list of rows and add them easily using add_rows().\nHowever, I need some rows to be children of other rows, and neither row.indent nor row.parent_id seem possible to set on new rows (since the fresh rows don't have an id yet).\nI could possibly: create the parent row > add_rows() > get_sheet() > find the row id in sheet > create the child row > add_rows() but I'm losing the benefits of bulk operations.\nIs there any way at all so set child\/parent relationships in python before ever communicating with the smartsheet server?\n[Edit] Alternatively, a way to export an excel file via the SDK (or other) would also work, as I'm able to create my table with xlsxwrite and upload it manually to smartsheet at the moment. (Which is not an option, as we're trying to generate dozens of sheets, multiple times a day, got to automate it.)\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":466,"Q_Id":49416108,"Users Score":0,"Answer":"You cannot create a sheet with hierarchy in a single call. All rows in a single POST or PUT must have the same location specifier.\nYou can either:\n(1) Add all rows as a flat list, then indent each contiguous group of child rows. 
Repeat down the hierarchy.\n(2) Add top level rows, then add each contiguous group of indented rows","Q_Score":1,"Tags":"python-2.7,smartsheet-api","A_Id":49416890,"CreationDate":"2018-03-21T20:41:00.000","Title":"Building whole sheet programmatically with Python SDK","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am utilizing Mysql.Connector in my python code to do multiple inserts\/updates to a DB. After performing all of the inserts\/updates + other processing, I determine if it was successful or not and then perform either a db.commit() or a db.rollback(). I'm concerned about what would happen in a couple different situations. If the process is unexpectedly terminated. e.g kill -9 program.py or if the host|container that the program is running in is shutdown.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":160,"Q_Id":49430879,"Users Score":3,"Answer":"The MySQL server will roll back uncommitted transactions if the connection is terminated.","Q_Score":0,"Tags":"mysql,python-3.x","A_Id":49430986,"CreationDate":"2018-03-22T14:12:00.000","Title":"What happens to my transaction when Mysql.Connector for Python is unexpectedly terminated?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if there was a function in xlsxwriter that lets you sort the contents in the column from greatest to least or least to greatest? thanks!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":847,"Q_Id":49501501,"Users Score":2,"Answer":"Sorting isn't a feature of the xlsx file format. It is something Excel does at runtime.\nSo it isn't something XlsxWriter can replicate. A workaround would be to to sort your data using Python before you write it.","Q_Score":3,"Tags":"python,xlsxwriter","A_Id":49507909,"CreationDate":"2018-03-26T22:15:00.000","Title":"Is there a function in xlsxwriter that lets you sort a column?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python3, I mainly use float or np.float32\/64 and when it comes to store it into a database, even if SQL type is Numeric\/Decimal we end up with 0.400000000000021 or something like that instead of 0.4\nIt may be a problem if such data is accessed from another application.\nWorking only with decimal.Decimal in python isn't an answer for us, since we heavely make use of pandas, and Decimal is not supported.\nA solution would be to cast float to Decimal just before inserting into SQL (SQL Server in our case but it's a detail). And then back from Decimal to float after SELECT.\nDo you have another (and nicer) way to handle such issue ?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":4457,"Q_Id":49522441,"Users Score":6,"Answer":"The problem is that the value of your float isn't 0.4, because there is no value in either float32 or float64 (or Python native float, which is usually the same as float64) that's 0.4. 
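For the XlsxWriter sorting record above (sort the data in Python before writing, since sorting is a runtime Excel feature), a minimal sketch; the file and data are arbitrary.

```python
# Sketch for the xlsxwriter sorting answer above: sort in Python, then write.
import xlsxwriter

data = [42, 7, 19, 3]

workbook = xlsxwriter.Workbook("sorted.xlsx")
worksheet = workbook.add_worksheet()

# Greatest to least; use sorted(data) for least to greatest.
worksheet.write_column(0, 0, sorted(data, reverse=True))
workbook.close()
```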
The closest float64 to 0.4 that is 0.400000000000021, which is exactly what you've got.\nSince that's the closest float value there is to 0.4, if you ask Python to convert it to a string (e.g., print(f)), it'll be friendly and give you the string 0.4.\nBut when you pass it to a database\u2026 Well, it actually depends on which database interface library you're using. With some, it will call repr, which would give you '0.4' (at least in Python 3.x), so you're asking the database to store the float value of the string '0.4'. But with others, it will pass the float value directly as a C double, so you're asking the database to store the float value 0.400000000000021.\n\nSo, what should you do?\n\nDo you want to use this database with other code that will be reading the values as strings and then converting them to something like Decimal or float80 or decimal64 or some other type? Then you almost certainly want to set a SQL data type like DECIMAL(12, 6) that matches your actual precision, and let the database take care of it. (After all, there is no difference between 0.4 rounded to 6 decimal places and 0.400000000000021 rounded to 6 decimal places.)\nDo you want to do math inside the database itself? Same as above.\nOtherwise? Do nothing.\n\nSeriously, if the other code that's going to use this database is just going to read the values as float64, or read them as strings and convert them to float64 (or float32), they are going to end up with 0.400000000000021 no matter what you do, so don't do anything.\nAlso, consider this: if the difference between 0.4 and 0.400000000000021 is going to make any difference for any of your code, then your code is already broken by using float64, before you even get to the database.","Q_Score":2,"Tags":"python,sql,floating-point,decimal,precision","A_Id":49523107,"CreationDate":"2018-03-27T21:09:00.000","Title":"how to store python float in SQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python3, I mainly use float or np.float32\/64 and when it comes to store it into a database, even if SQL type is Numeric\/Decimal we end up with 0.400000000000021 or something like that instead of 0.4\nIt may be a problem if such data is accessed from another application.\nWorking only with decimal.Decimal in python isn't an answer for us, since we heavely make use of pandas, and Decimal is not supported.\nA solution would be to cast float to Decimal just before inserting into SQL (SQL Server in our case but it's a detail). 
And then back from Decimal to float after SELECT.\nDo you have another (and nicer) way to handle such issue ?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4457,"Q_Id":49522441,"Users Score":0,"Answer":"If you don't wan't\/need the precision, you can use np.round(array,roundto)","Q_Score":2,"Tags":"python,sql,floating-point,decimal,precision","A_Id":49522460,"CreationDate":"2018-03-27T21:09:00.000","Title":"how to store python float in SQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python3, I mainly use float or np.float32\/64 and when it comes to store it into a database, even if SQL type is Numeric\/Decimal we end up with 0.400000000000021 or something like that instead of 0.4\nIt may be a problem if such data is accessed from another application.\nWorking only with decimal.Decimal in python isn't an answer for us, since we heavely make use of pandas, and Decimal is not supported.\nA solution would be to cast float to Decimal just before inserting into SQL (SQL Server in our case but it's a detail). And then back from Decimal to float after SELECT.\nDo you have another (and nicer) way to handle such issue ?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4457,"Q_Id":49522441,"Users Score":0,"Answer":"You have to define decimal places in SQL, e.g.: decimal(8,2)","Q_Score":2,"Tags":"python,sql,floating-point,decimal,precision","A_Id":49522639,"CreationDate":"2018-03-27T21:09:00.000","Title":"how to store python float in SQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Goal: how to convert (111, 222, 333) to ('111', '222', '333') for an sql query in Python?\nWhat I have done so far: \nI am calling a csv file to a df:\ndataset = pd.read_csv('simple.csv')\nprint(dataset)\nLIST\n0 111\n1 222\n2 333\nList11 = dataset.LIST.apply(str)\nprint(List1)\n0 111\n1 222\n2 333\nName: OPERATION, dtype: object\nmyString = \",\".join(List1)\nprint(myString)\n111,222,333\nsql = \"SELECT * FROM database WHERE list IN (%s)\" %myString\nThis does not work. Could you please help?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1305,"Q_Id":49541070,"Users Score":0,"Answer":"Please try this and verify if it helps-\nsql = \"SELECT * FROM database WHERE list IN (%s)\" % \",\".join(map(myString,List1))","Q_Score":2,"Tags":"python,sql,arrays,string,quote","A_Id":49542244,"CreationDate":"2018-03-28T17:49:00.000","Title":"(Sql + Python) df array to string with single quotes?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Django application that is successfully being hosted on a remote server using Nginx. The production DB is PostgreSQL. \nI have a development server where I'd like to change the code for the Django application. When I use python manage.py runserver for testing, I'd ideally prefer to avoid touching the production DB at all. \nThis is my first time crossing this bridge. 
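Tying together the float-storage answers above, here is a small sketch of rounding or quantizing values to the column's declared precision (for example DECIMAL(12, 6)) just before the insert; the precision and values are illustrative only.

```python
# Sketch for the float -> SQL precision discussion above.
from decimal import Decimal

import numpy as np

values = np.array([0.4, 1.23456789], dtype=np.float64)

# Option 1: round in NumPy and let the driver send the rounded floats.
rounded = np.round(values, 6)

# Option 2: convert via str() so the text form ("0.4"), not the binary
# expansion, is what gets quantized to the column precision.
decimals = [Decimal(str(v)).quantize(Decimal("0.000001")) for v in values]

print(rounded, decimals)
```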
Can someone shed some light on the best practice for 'stubbing' the entire database for development? Can you do some if\/else statement in settings.py to use SQLite? Or is there a better solution?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":49544913,"Users Score":1,"Answer":"You can definitely use if\/else statements in settings.py, or any Python code really.\nCommon practice is to put values which differ, especially secrets like database passwords, in environment variables. You set these to different values in production or locally and access them in Python using os.environ.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":49544986,"CreationDate":"2018-03-28T22:03:00.000","Title":"Best way to modify Django application without affecting production database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Now i am developing a website in django. But i used MySQLdb to connect with the database instead of django ORM since django ORM doesn't support multiple keys.\nI will explain my question with a example, consider i am writing a dictionary to database having type longtext. Hence i have used json.dumps() method to write database.\nI am reading those field using another url, hence while coding view function for reading i have used json.loads() method to get the dictionary back and here is my question arise. Whether i need to handle the exception when database field hold a non json string. If database field hold a non json string json.loads() will produce ValueError.\nWhether i need to catch those type of error since chance of having database with non json string is very little.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":49550077,"Users Score":0,"Answer":"It's down to preference but personally I try to catch all potential errors, you never know what other issues they might expose.\nAnd ofcourse, there's the \"Zen of Python\": \"Errors should never pass silently.\nUnless explicitly silenced\"","Q_Score":0,"Tags":"python,json,django,django-testing","A_Id":49550213,"CreationDate":"2018-03-29T07:14:00.000","Title":"How to code a django site efficiently while accessing database using MySQLdb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to connect to MSSQL database from AWS Lambda (using python) and really struggling to proceed further.\nI tried many options with pyodbc, pypyodbc, pymssql they work on local development machine (Windows 7), however AWS Lambda is unable to find the required packages when deployed on AWS. 
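For the development-vs-production database record above (branching settings.py on environment variables), a sketch of one way to do it; DJANGO_ENV and the DB_* variable names are made up for illustration.

```python
# settings.py sketch for the answer above; the environment variable names are assumptions.
import os

if os.environ.get("DJANGO_ENV") == "production":
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ["DB_NAME"],
            "USER": os.environ["DB_USER"],
            "PASSWORD": os.environ["DB_PASSWORD"],
            "HOST": os.environ.get("DB_HOST", "localhost"),
        }
    }
else:
    # Local development: throwaway SQLite file, the production DB is never touched.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": "dev.sqlite3",
        }
    }
```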
I use ZAPPA for deployment of Lambda package.\nI searched through many forums but unable to see the anything moving ahead, any help on this would be highly appreciated.\nMany thanks,\nAkshay","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2801,"Q_Id":49556382,"Users Score":1,"Answer":"Try to do import cython together with pymssql in your code.","Q_Score":5,"Tags":"python,sql-server,amazon-web-services,aws-lambda","A_Id":70156360,"CreationDate":"2018-03-29T12:42:00.000","Title":"Use MSSQL with AWS Lambda","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently submitting my U-SQL jobs via the Python library and I want to add additional code in a C# or Python code-behind file. Are code-behind files supported, either in python or in a CLI-based method that I could easily automate? \nIdeally I'd like to use the Azure CLI or the Python library so this can run on both Linux and Windows (i.e. not relying on Visual Studio). I've check the documentation for both PowerShell and Python, but I don't see any instructions on how to submit jobs with code-behind logic.\nHere is my python code:\n\nfrom azure.mgmt.datalake.analytics.job import DataLakeAnalyticsJobManagementClient\n\nadlaJobClient = get_client_from_cli_profile(\n DataLakeAnalyticsJobManagementClient,\n adla_job_dns_suffix='azuredatalakeanalytics.net')\n\ndef submit_usql_job(script):\n job_id = str(uuid.uuid4())\n job_result = adlaJobClient.job.create(\n ADLA_ACCOUNT_NAME,\n job_id,\n JobInformation(\n name='Sample Job',\n type='USql',\n properties=USqlJobProperties(script=script)\n )\n )\n print(\"Submitted job ID '{}'\".format(job_id))\n return job_id","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":312,"Q_Id":49568032,"Users Score":1,"Answer":"Likely you're going to have to manage creating and registering the assembly yourself as an additional step in your job. Then reference the assembly as you normally would. If you need an example of what this might look like, submit a job from Visual Studio, for a query that has an accompanying code-behind file, and look at the script that it generates for you. You'll see that it is adding the above steps for you, transparently. Now, you can try applying this same approach\/pattern in your own code. \nEither that or move your code-behind logic to a dedicated library which you can upload and register separately, one-time, then reference it to your heart's content from your python-submitted jobs.","Q_Score":0,"Tags":"python,azure,azure-data-lake,u-sql","A_Id":49577265,"CreationDate":"2018-03-30T03:35:00.000","Title":"Programmatically submit a U-SQL job with code-behind","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to superset.\nGoing to Sources > Databases for a new connection to my athena.\nI have downloaded JDBC driver and writing following connection line:\n\nawsathena+jdbc:\/\/AKIAJ2PKWTZYAPBYKRMQ:xxxxxxxxxxxxxxx@athena.us-east-1.amazonaws.com:443\/default?s3_staging_dir='s3:\/\/aws-athena-query-results-831083831535-us-east-1\/' as SQLAlchemy URI. 
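For the U-SQL code-behind record above (register the helper assembly once, then reference it from Python-submitted jobs), a rough sketch that reuses the submit_usql_job() helper from that question. The database and assembly names are hypothetical, and the assembly is assumed to have been created and registered separately beforehand.

```python
# Sketch of the "register once, reference from Python" approach described above.
def submit_job_with_codebehind(script_body):
    script = "\n".join([
        "REFERENCE ASSEMBLY master.[MyCodeBehindHelpers];",  # assumed database/assembly name
        script_body,
    ])
    return submit_usql_job(script)  # helper defined in the question above
```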
First parameter being access key and 2nd being secret key(Modified a bit for privacy)\n\nI am getting error as:\nERROR: {\"error\": \"Connection failed!\\n\\nThe error message returned was:\\nCan't load plugin: sqlalchemy.dialects:awsathena.jdbc\"}\nWould be very thankful for support as I really wish to explore the open source visualisation using superset on my databases.\nThanks,\nRavi","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2417,"Q_Id":49578829,"Users Score":1,"Answer":"If you are sure you have done pip install \"PyAthenaJDBC>1.0.9\" in the same python environment as you start your superset. Try restarting Superset in the same environment.","Q_Score":2,"Tags":"python,sqlalchemy,amazon-athena,superset,apache-superset","A_Id":51992746,"CreationDate":"2018-03-30T17:36:00.000","Title":"Cant connect to superset using athena database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In django, is there a way to find which rows were changed during a transaction?\nThere are CDC frameworks out there, however I would like to find the changes of a specific transaction with some sort of an ID. I would also want this to be synchronous with the rest of the runtime code.\nCheers","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":49601575,"Users Score":0,"Answer":"You can try django debug toolbar. There are many option to track requests.\nPretty sure you will find something.","Q_Score":0,"Tags":"python,django,database,transactions","A_Id":49601633,"CreationDate":"2018-04-01T18:57:00.000","Title":"Is there a way to track transaction changes in Django ORM?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Happy Easter to y'all!\nSo my problem is essentially using SQL timestamp is causing me some issues. I have a Travel Booking website with a database made in PHPmyadmin and as mentioned two timestamp columns (one for departure time and one for arrival.) If there are times currently there for the journey they will be displayed, if not a tick box to set the current time as the timestamp, this i'm fine with. \nI don't know what html form element to use to display the entirety of the SQL timestamp, both the date and time section in the html form (or how to validate any of it xD) I have tried splitting the timestamp and displaying it in both a date and time field but had no luck and was told to stick to the timestamp by my group members. Cheers","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":49612023,"Users Score":1,"Answer":"Solved the problem with some troubleshooting. The formatting difference between datetime-local and timestamp can be solved with some simple regex. \nVariableName = re.sub(r\"\\s+\", 'T', VariableName) \nSwaps any whitespace characters with a T. \nThis is because datetime-local likes to concatenates the date and time together using a capital T. 
If we simulate this using the regex above we can convert the timestamp into a readable format.","Q_Score":0,"Tags":"python,html,sql","A_Id":49677529,"CreationDate":"2018-04-02T13:14:00.000","Title":"Exchanging Information between HTML Form and SQL Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"df.to_sql(name='hourly', con=engine, if_exists='append', index=False)\n\nIt inserts data not only to table 'hourly', but also to table 'margin' - I execute this particular line only.\nIt's Postgresql 10.\nWhile Creating table 'hourly', I inherited column names and dtypes from table 'margin'.\nIs it something wrong with the db itself or is it Python code?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":588,"Q_Id":49651442,"Users Score":0,"Answer":"Removing\n\nINHERITS (tablename);\n\non the slave table (creating it again without INHERITS)\nseems to have done the trick.\nOnly a matter of curiosity:\nWhy did it matter? I thought inheritance only gets columns and dtypes not the actual data.","Q_Score":1,"Tags":"python-3.x,postgresql,pandas","A_Id":49651837,"CreationDate":"2018-04-04T12:47:00.000","Title":"Pandas Dataframe.to_sql wrongly inserting into more than one table (postgresql)","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a excel file with large text data. 2 columns have lot of text data. Like descriptions, job duties. \nWhen i import my file in python df=pd.read_excel(\"form1.xlsx\"). It shows the columns with text data as NaN. \nHow do I import all the text in the columns ?\nI want to do analysis on job title , description and job duties. Descriptions and Job Title are long text. I have over 150 rows.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2378,"Q_Id":49652693,"Users Score":1,"Answer":"Try converting the file from .xlsx to .CSV \nI had the same problem with text columns so i tried converting to CSV (Comma Delimited) and it worked. Not very helpful, but worth a try.","Q_Score":0,"Tags":"excel,python-3.x,pandas,import","A_Id":49656081,"CreationDate":"2018-04-04T13:46:00.000","Title":"how to read text from excel file in python pandas?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem with multilanguage and multi character encoded text. \nProject use OpenGraph and it will save in mysql database some information from websites. But database have problem with character encoding. I tryed encoding them to byte. That is problem, becouse in admin panel text show us bute and it is not readable. \nPlease help me. 
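For the SQL timestamp / HTML form record above, a small sketch of the regex round trip between the SQL "YYYY-MM-DD HH:MM:SS" layout and the "T"-joined value a datetime-local input expects; the sample timestamp is arbitrary.

```python
# Sketch of the whitespace <-> 'T' conversion described in the answer above.
import re

sql_timestamp = "2018-04-02 13:14:00"

# Whitespace -> 'T' for the datetime-local form field:
html_value = re.sub(r"\s+", "T", sql_timestamp)   # "2018-04-02T13:14:00"

# And back again when the form is submitted:
back_to_sql = html_value.replace("T", " ")

print(html_value, back_to_sql)
```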
How can i save multilanguage text in database and if i need encode to byte them how can i correctly decode them in admin panel and in views","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":49672291,"Users Score":0,"Answer":"You should encode all data as UTF-8 which is unicode.","Q_Score":0,"Tags":"python,django","A_Id":49672440,"CreationDate":"2018-04-05T12:22:00.000","Title":"Django multilanguage text and saving it on mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We're experiencing some slowdown, and frustrating database lockups with our current solution, which essentially consists of calling stored procedures on an MSSQL server to manipulate data. If two or more users try to hit the same table simultaneously, one is locked out and their request fails.\nThe proposed solution to this problem was to bring the data into python using sqlalchemy, and perform any manipulations \/ calculations on it in dataframes. This worked but was incredibly slow because of the network calls to the DB.\nIs there a better solution which can support multiple concurrent users, without causing too much of a slowdown?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":193,"Q_Id":49675751,"Users Score":1,"Answer":"You can use nolock keyword in stored procedure to remove this problem\nin your stored procedure where you specify table name in front of that write nolock keyword i hope it will be work for you\neg.\nselect * from tablename1 t1\njoin nolock tablename2 t2 on t2.id=t1.id","Q_Score":0,"Tags":"python,sql-server,sqlalchemy","A_Id":49684338,"CreationDate":"2018-04-05T15:08:00.000","Title":"Replacing MSSQL Server Stored Procedures to Prevent DB Locks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to download the last 24hours of new files added to an S3 bucket - however, the S3 bucket contains a large number of files.\nFrom my understanding s3 buckets use a flat structure where files are stored alphabetically based on the key name. \nI've written a script to pull all the data stored on the bucket using threading. However, now I have all the files on my local system I want to update the database every 24hours with any new files that have been uploaded to S3. \nMost forums recommend using 'last modified' to search for the correct files and then download the files that match the data specified.\nFirstly, does downloading a file from the s3 bucket change the 'last modified'? Seems like this could cause problems.\nSecondly, this seems like a really in-efficient process - searching through the entire bucket for files with the correct 'last modified' each time, then downloading... especially since the bucket contains a huge number of files. Is there a better way to achieve this?\nFinally, does the pre-fix filter make this process any more efficient? or does this also require searching through all files.\nThanks in advance!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1708,"Q_Id":49710898,"Users Score":1,"Answer":"I'm going to go a different direction with this answer... You're right, that process is inefficient. 
I'm not sure the quantities and size of data you're dealing with but you're basically talking that you need a batch job to download new files. Searching a large number of keys is the wrong way to do it and is kind of an anti-pattern in AWS. At the root you need to keep track of new files as they come in.\nThe best way to solve this is using a Lambda Function (python since you're already familiar) that is triggered when a new object is deposited in your S3 bucket. What does that function do when a new file comes in?\nIf I had to solve this I would do one of the following:\n\nAdd the key of the new file to a DynamoDB table along with the timestamp. Throughout the day that table will grow whenever a new file comes in. When you're running your batch job read the contents of that table and download all the keys referenced, remove the row from the DynamoDB table. If you wanted to get fancy you could query based on the timestamp column and never clear rows from the table.\nCopy the file to a second \"pickup\" bucket. When your batch job runs you just read all the files out of this pickup bucket and delete them. You have to be careful with this one. It's really easy but you have to consider the size\/quantity of the files you're depositing so you don't run into the Lambda 5min execution limit.\n\nI can't really recommend one over the other because I'm not familiar with your scale, cost appetite, etc. For a typical use case I would probably go with the DynamoDB table solution. I think you'll be surprised how easy DynamoDB is to interact with in Python3.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,amazon-s3,boto3","A_Id":49711565,"CreationDate":"2018-04-07T18:57:00.000","Title":"Efficiently downloading files from S3 periodically using python boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to download the last 24hours of new files added to an S3 bucket - however, the S3 bucket contains a large number of files.\nFrom my understanding s3 buckets use a flat structure where files are stored alphabetically based on the key name. \nI've written a script to pull all the data stored on the bucket using threading. However, now I have all the files on my local system I want to update the database every 24hours with any new files that have been uploaded to S3. \nMost forums recommend using 'last modified' to search for the correct files and then download the files that match the data specified.\nFirstly, does downloading a file from the s3 bucket change the 'last modified'? Seems like this could cause problems.\nSecondly, this seems like a really in-efficient process - searching through the entire bucket for files with the correct 'last modified' each time, then downloading... especially since the bucket contains a huge number of files. Is there a better way to achieve this?\nFinally, does the pre-fix filter make this process any more efficient? or does this also require searching through all files.\nThanks in advance!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1708,"Q_Id":49710898,"Users Score":1,"Answer":"Another solution to add here..\nYou could enable inventory on S3 which gives you a daily report of all files in the bucket, including meta data such as date in CSV format. 
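For the S3 batch-download record above, here is a minimal sketch of the Lambda + DynamoDB option: the function is assumed to be wired to the bucket's "object created" event, and "new-s3-files" is a made-up DynamoDB table with "s3_key" as its partition key.

```python
# Sketch of the Lambda + DynamoDB tracking idea described above.
import datetime

import boto3

table = boto3.resource("dynamodb").Table("new-s3-files")  # assumed table name

def lambda_handler(event, context):
    # Record every newly created object's key with an upload timestamp.
    for record in event["Records"]:
        table.put_item(Item={
            "s3_key": record["s3"]["object"]["key"],
            "uploaded_at": datetime.datetime.utcnow().isoformat(),
        })
    # The nightly batch job then reads this table, downloads each key,
    # and deletes the rows it has processed.
```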
\nWhen the CSV is generated (first one can take 48hours) you are able to generate a list of new files that you can download accordingly. The dynamo lambda option mentioned before will definitely give you a more real-time solution. \nAlso, I think modified date is only affected by PUT and POST actions","Q_Score":0,"Tags":"python-3.x,amazon-web-services,amazon-s3,boto3","A_Id":49712138,"CreationDate":"2018-04-07T18:57:00.000","Title":"Efficiently downloading files from S3 periodically using python boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i have build an app in django to extract data from an mssql server and display the results on a table on a template.\nwhat i want to do now is to export the same sql query results to an excel file. I have used pymssql driver to connect to the db and pysqlalchemy.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":712,"Q_Id":49848167,"Users Score":0,"Answer":"my code actually worked. I thought it was going to save the excel file to 'C:\\excel' folder so i was looking for the file in the folder but i couldn't find the excel file. The excel file was actually exported to my django project folder instead. \nHow to i allow the end user to be able to download the file to their desktop instead of exporting it to the server itself","Q_Score":1,"Tags":"python,sql-server,django,sqlalchemy,django-views","A_Id":49850520,"CreationDate":"2018-04-16T00:36:00.000","Title":"python and Django export mssql query to excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using gremlinpython version 3.3.2 and AWS NEPTUNE.\nI try to drop all the edges (tried the vertices too) and it fails everytime.\ng.E().drop().iterate()\nGives me: \n\ngremlin_python.driver.protocol.GremlinServerError: 597: Exception\n processing a script on request [RequestMessage{,\n requestId=ae49cbb7-e034-4e56-ac76-b62310f753c2, op='bytecode',\n processor='traversal', args={gremlin=[[], [V(), drop()]],\n aliases={g=g}}}].\n\nDid anyone already successfuly remove all vertices\/edges of a graph in AWS Neptune?\nEDIT:\nDropping a specific ID works:\ng.E(id).drop().iterate()\nEDIT2:\nHere is a backtrace done using gremlin console: \n\ngremlin> g.E().count().next() \n==>740839\n gremlin> g.E().drop().iterate()\n A timeout occurred within the script during evaluation of [RequestMessage{, requestId=24c3d14c-c8be-4ed9-a297-3fd2b38ace9a, op='eval', > processor='', args={gremlin=g.E().drop().iterate(), bindings={}, batchSize=64}}] - consider increasing the timeout\n Type ':help' or ':h' for help.\n Display stack trace? 
[yN]y\n org.apache.tinkerpop.gremlin.jsr223.console.RemoteException: A timeout occurred within the script during evaluation of [RequestMessage{, > requestId=24c3d14c-c8be-4ed9-a297-3fd2b38ace9a, op='eval', processor='', args={gremlin=g.E().drop().iterate(), bindings={}, > batchSize=64}}] - consider increasing the timeout\n at org.apache.tinkerpop.gremlin.console.jsr223.DriverRemoteAcceptor.submit(DriverRemoteAcceptor.java:178)\n at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:99)\n at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:122)\n at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:95)\n at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:498)\n at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:98)\n at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)\n at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1225)\n at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:145)\n at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:165)\n at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:130)\n at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:59)\n at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:498)\n at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:98)\n at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)\n at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1225)\n at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:145)\n at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:165)\n at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:89)\n at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:236)\n at org.apache.tinkerpop.gremlin.console.Console.(Console.groovy:146)\n at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:236)\n at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:453)\n gremlin>\n\nI would say it's a timeout problem, right?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1739,"Q_Id":49877928,"Users Score":2,"Answer":"OK so after some exchange with @stephen mallette in comment of the question and AWS support I finally found where the problem lies.\nAs it still a preview NEPTUNE still suffer some smalls issues and drop() is one.\nA workaround, given by the support is to perform drop() in batches via parallel connections:\n\ng.V().limit(1000).drop() \n\nSo dropping table is hitting a timeout right now, even with a 5 minutes timeout and 700.000 edges.\nI will update 
this answer on NEPTUNE's release.","Q_Score":2,"Tags":"python,gremlin,amazon-neptune","A_Id":50076688,"CreationDate":"2018-04-17T12:17:00.000","Title":"Drop all edges on AWS Neptune using pythongremlin","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am loading the data using COPY command. \nMy Dates are in the following format. \n\nD\/MM\/YYYY eg. 1\/12\/2016\nDD\/MM\/YYYY eg. 23\/12\/2016\n\nMy target table data type is DATE. I am getting the following error \"Invalid Date Format - length must be 10 or more\"","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1414,"Q_Id":49926250,"Users Score":2,"Answer":"As per the AWS Redshift documentation, \n\nThe default date format is YYYY-MM-DD. The default time stamp without\n time zone (TIMESTAMP) format is YYYY-MM-DD HH:MI:SS.\n\nSo, as your date is not in the same format and of different length, you are getting this error. Append the following at the end of your COPY command and it should work.\n[[COPY command as you are using right now]] + DATEFORMAT 'DD\/MM\/YYYY'\nNot sure about the single digit case though. You might want to pad the incoming values with a 0 in the beginning to match the format length.","Q_Score":2,"Tags":"python,amazon-redshift","A_Id":49926489,"CreationDate":"2018-04-19T16:46:00.000","Title":"Redshift COPY Statement Date load error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create table in odoo 10 with the following columns: quantity_in_the_first_day_of_month,input_quantity,output_quantity,quantity_in_the_last_day_of_the_month.\nbut i don't know how to get the quantity of the specified date","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":49965402,"Users Score":0,"Answer":"You can join the sale order and sale order line to get specified date.\nselect \n sum(sol.product_uom_qty)\nfrom \n sale_order s,sale_order_line sol \nwhere \n sol.order_id=s.id and\n DATE(s.date_order) = '2018-01-01'","Q_Score":0,"Tags":"python,python-3.x,python-2.7,odoo,odoo-10","A_Id":50134759,"CreationDate":"2018-04-22T11:28:00.000","Title":"How to get the quantity of products in specified date in odoo 10","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In order for Python to talk to MariaDB I need to install mariadb-devel and python34-mysql-debug packages before using pip to install mysqlclient. I have done this with Python and MariaDB on a single server. Now, I'm installing this in an environment with two servers: AppServer which runs Python code and DBServer which is running MariaDB. 
So, do maradb-devel and python34-mysql-debug need to be installed on AppServer or on DBServer?\n\nBoth servers are running RHEL 7.4.\nMariaDB is version 10.1\nPython is version 3.4\n\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":130,"Q_Id":49987101,"Users Score":0,"Answer":"mariadb-devel and python34-mysql-debug packages need to be installed on the application server where Python is running. I tested this and it is working - I was able to run a simple Python script and connect to the database on the other server.","Q_Score":0,"Tags":"python,mysql,mariadb","A_Id":50026461,"CreationDate":"2018-04-23T17:48:00.000","Title":"Where to install mariadb-devel and python34-mysql-debug packages for python in a split app server \/ db server environment","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got this error ModuleNotFoundError: No module named 'django.db.migrations.migration' after i tried these below steps\n\npython3 manage.py migrate --fake resources zero (resources is my\napp name)\nfind . -path \"*\/migrations\/*.py\" -not -name \"__init__.py\" -delete\nfind . -path \"*\/migrations\/*.pyc\" -delete\npython3 manage.py showmigrations\n\nNote: used PostgreSQL\nHow to resolve this issue?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5056,"Q_Id":50001428,"Users Score":6,"Answer":"By running those commands you might have accidentally deleted the migrations module. Try reinstalling Django via pip.\npip uninstall django\npip install django\nTake note of the version of Django you are using. In case you aren't using the latest version for your python environment install using the following command\npip install django==\nEdit:-\nDrop the existing database schema. Delete the migrations folder and recreate an empty one in its place.","Q_Score":3,"Tags":"django,python-3.x,django-models,wagtail","A_Id":50009168,"CreationDate":"2018-04-24T12:15:00.000","Title":"Python-ModuleNotFoundError: No module named 'django.db.migrations.migration'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am learning to implement Flask application.\nAnd using mysql as a database.\nI tried MySQLdb, flask_mysql & flask_sqlalchemy.\nBut still getting this error, when i try to perform any action on database :\nsqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (2059, \"Authentication plugin 'caching_sha2_password' cannot be loaded: The specified module could not be found.\\r\\n\")\nI also tried :\nALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';\nPlease help.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2259,"Q_Id":50015205,"Users Score":0,"Answer":"You probably need to install the mysql client. On Debian based systems you can use sudo apt install mysql-client -y. 
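If switching drivers is an option, another route is to point SQLAlchemy at mysql-connector-python, which understands the caching_sha2_password plugin. A hedged sketch with placeholder credentials and database name:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# requires: pip install mysql-connector-python
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+mysqlconnector://user:secret@localhost/mydb'
db = SQLAlchemy(app)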
I ran into this while using the Python 3.6 Docker image.","Q_Score":1,"Tags":"python-3.x,flask,flask-sqlalchemy,mysql-python","A_Id":50295359,"CreationDate":"2018-04-25T06:15:00.000","Title":"sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (2059, \"Authentication plugin 'caching_sha2_password'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"today I come to you for inspiration or maybe ideas how to solve a task not killing my laptop with massive and repetitive code.\nI have a CSV file with around 10k records. I also have a database with respective records in it. I have four fields inside both of these structures: destination, countryCode,prefix and cost\nEvery time I update a database with this .csv file I have to check if the record with given destination, countryCode and prefix exist and if so, I have to update the cost. That is pretty easy and it works fine.\nBut here comes the tricky part: there is a possibility that the destination may be deleted from one .csv file to another and I need to be aware of that and delete that unused record from the database. What is the most efficient way of handling that kind of situation?\nI really wouldn't want to check every record from the database with every row in a .csv file: that sounds like a very bad idea.\nI was thinking about some time_stamp or just a bool variable which will tell me if the record was modified during the last update of the DB BUT: there is also a chance that neither of params within the record change, thus: no need to touch that record and mark it as modified.\nFor that task, I use Python 3 and mysql.connector lib. \nAny ideas and advice will be appreciated :)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":425,"Q_Id":50016819,"Users Score":0,"Answer":"If you're keeping a time stamp why do you care if it's updated even if nothing was changed in the record? If the reason is that you want to save the date of the latest update you can add another column saving a time stamp of the last time the record appeared in the csv and afterwords delete all the records that the value of this column in them is smaller than the date of the last csv.","Q_Score":0,"Tags":"mysql,database,python-3.x,csv,database-design","A_Id":50020530,"CreationDate":"2018-04-25T07:52:00.000","Title":"Check if a record from database exist in a csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently started working on influxDB, can't find how to add new measurements or make a table of data from separate measurements, like in SQL we have to join table or so.\nThe influxdb docs aren't that clear. I'm currently using the terminal for everything and wouldn't mind switching to python but most of it is about HTTP post schemes in the docs, is there any other alternative?\nI would prefer influxDB in python if the community support is good","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":176,"Q_Id":50022212,"Users Score":1,"Answer":"The InfluxDB query language does not support joins across measurements. \nIt instead needs to be done client side after querying data. 
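For instance, with the influxdb Python client you could pull several measurements in one query and do the "join" in pandas (the measurement names cpu and mem are made up for the example):

from influxdb import InfluxDBClient
import pandas as pd

client = InfluxDBClient(host='localhost', port=8086, database='mydb')
result = client.query('SELECT * FROM cpu, mem')          # one query, two measurements

cpu = pd.DataFrame(result.get_points(measurement='cpu'))
mem = pd.DataFrame(result.get_points(measurement='mem'))
combined = cpu.merge(mem, on='time', suffixes=('_cpu', '_mem'))   # client-side join on the timestamp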
Querying, without join, data from multiple measurements can be done with one query.","Q_Score":1,"Tags":"influxdb,influxdb-python","A_Id":50022881,"CreationDate":"2018-04-25T12:20:00.000","Title":"queires and advanced operations in influxdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is a cell like: x1, x2, x3.\nMy program can find the specific word x2, and I want to mark this word(x2) only.\nHow can I use openpyxl to mark the character only x2?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":484,"Q_Id":50036503,"Users Score":0,"Answer":"Although excel supports formatting partial cells, openpyxl only supports formatting whole cells and not partial cells.","Q_Score":0,"Tags":"python-3.x,openpyxl","A_Id":50044660,"CreationDate":"2018-04-26T06:47:00.000","Title":"openpyxl - how can I style on character partial cell","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install ibm_db driver in Linux for Python. I test the installation using import ibm_db. The installation was successful. When I test using 'import ibm_db' I get the following error: \nImportError: \/usr\/lib\/python2.7\/site-packages\/ibm_db-2.0.3-py2.7-linux-x86_64.egg\/ibm_db.so: undefined symbol: PyUnicodeUCS2_FromObject\nPlease help me to resolve this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":461,"Q_Id":50062872,"Users Score":0,"Answer":"Please try installing the current version of the ibm_db driver (which at time of writing is 2.0.8a), which you can do via:\npip install \"ibm_db==2.0.8a\"\nYou need the version of pip that matches your python version.","Q_Score":0,"Tags":"python-2.7,db2","A_Id":50063192,"CreationDate":"2018-04-27T12:55:00.000","Title":"Error installing Python ibm-db driver on Linux","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have built a web app in Python and Flask and am having trouble pulling the date and time from my SQLite database.\nI enter the date into the DB with the following line-\norder.order_placed = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\nWhich with my current example enters the following into the DB -\n2018-05-01 12:08:49\nBut when I call order.order_placed I get datetime.date(2018, 5, 1)\nEven if I call str(order.order_placed) I get '2018-05-01'\nCan someone help me get the full date and time out of the database? Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":252,"Q_Id":50115624,"Users Score":1,"Answer":"It's possible that you're using DateField when in actuality you want to use DateTimeField.\nFurthermore, you don't need to call strftime before storing the data. 
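A minimal sketch of that (the model and database names are invented here, not taken from the original code):

import datetime
from peewee import Model, DateTimeField, SqliteDatabase

db = SqliteDatabase('orders.db')

class Order(Model):
    # DateTimeField keeps date and time; DateField silently drops the time part
    order_placed = DateTimeField(default=datetime.datetime.now)

    class Meta:
        database = db

db.connect()
db.create_tables([Order])
order = Order.create()          # no strftime needed, a datetime object is stored as-is
print(order.order_placed)       # e.g. 2018-05-01 12:08:49.123456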
Peewee works nicely with Python datetime objects.","Q_Score":1,"Tags":"python,sqlite,flask,peewee","A_Id":50118318,"CreationDate":"2018-05-01T11:23:00.000","Title":"Peewee and SQLite returning incorrect date format","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an excel table that I am building using python logic via xlwings. Once calculated, I would like to copy that table (ie its range) and save it as an image (similar format to select range -> copy -> right click -> paste as image ). End goal is to use pptx to include the table in a powerpoint presentation\nIs this possible?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1679,"Q_Id":50136831,"Users Score":0,"Answer":"I found that the best solution for this is to embed the excel range into the powerpoint presentation. \nCopy your excel range, go to the ribbon and click on the triangle under 'Paste', 'Paste Special', 'Paste Link'\nThis will automatically reflect the changes in the presentation","Q_Score":2,"Tags":"xlwings,python-pptx","A_Id":50138205,"CreationDate":"2018-05-02T14:14:00.000","Title":"xlwings: copy range and save as an image","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a keras model (tensorflow) and want to store it in my MS SQL Server database. What is the best way to do that? pyodbc.Binary(model) throws an error. I would prefer a way without storing the model in the file system first.\nThanks for any help","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3587,"Q_Id":50174189,"Users Score":3,"Answer":"It seems that there is no clean solution to directly store a model incl. weights into the database. I decided to store the model as h5 file in the filesystem and upload it from there into the database as a backup. For predictions I load anyway the model from the filesystem as it is much faster than getting it from the database for each prediction.","Q_Score":2,"Tags":"sql-server,database,python-3.x,tensorflow,keras","A_Id":50363845,"CreationDate":"2018-05-04T11:46:00.000","Title":"Save keras model to database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a keras model (tensorflow) and want to store it in my MS SQL Server database. What is the best way to do that? pyodbc.Binary(model) throws an error. I would prefer a way without storing the model in the file system first.\nThanks for any help","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3587,"Q_Id":50174189,"Users Score":1,"Answer":"The best approach would be to save it as a file in the system and just save the path in the database. 
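Roughly like this, assuming pyodbc and an existing keras_models table (the names, driver string and credentials below are placeholders):

import pyodbc
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(1, input_shape=(4,))])   # stand-in for the trained model
model.save('my_model.h5')                          # weights and architecture go to disk, not the DB

cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=myhost;DATABASE=mydb;UID=user;PWD=secret')
cursor = cnxn.cursor()
cursor.execute("INSERT INTO keras_models (name, file_path) VALUES (?, ?)",
               ('my_model', 'my_model.h5'))
cnxn.commit()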
This technique is usually used to store large files like images since databases usually struggle with them.","Q_Score":2,"Tags":"sql-server,database,python-3.x,tensorflow,keras","A_Id":57933840,"CreationDate":"2018-05-04T11:46:00.000","Title":"Save keras model to database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an interesting problem I'd like to find a solution to.\nI have a MySQL server running on Ubuntu (16.04). I also have a number of servers that store system information like CPU temperature, network traffic data & CPU loads in a database on the MySQL server. To accomplish this I have a couple of Python programs running on each server that harvest the data and push it to the database.\nOn those servers I've also got a number of scripts that periodically query the database for historical data that they then graph and present on a webpage.\nThe way I've set this up right now results in all the servers tending to query the database at about the same time. This causes a high load on the MySQL server followed by a long period of virtually no load.\nWhat options are there for me (preferably client-side Python) that can help me spread the load on the SQL server more evenly?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":491,"Q_Id":50187126,"Users Score":1,"Answer":"This is generally known as a stampeding or thundering herd problem. All of a sudden a ton of clients want something, and then activity goes back to nothing. There are different ways of coping with that. If you have some intermediary between the client and the server, e.g. a load balancer, you can use that to spread the load around and perhaps even use it to spin up new server instances as needed.\nIn the case of a direct client-MySQL connection that typically isn't an option. Perhaps you can switch to read-only replicated slaves, which can more easily absorb the impact (i.e. scale horizontally). Or you get a bigger server which has a better peak-load performance. Of course, if 99% of the time there's no load whatsoever, these solutions aren't very cost-effective.\nThe cheap solution is to avoid all clients stampeding towards the server at the same time. Either offset each client individually (e.g. schedule their cron job for 0 * * * *, 5 * * * *, 10 * * * * etc.), or simply delay each client by a random amount each time (sleep(randint(0, 360)) in Python, sleep $((RANDOM % 360)) && ... in bash).","Q_Score":0,"Tags":"python,mysql,python-3.x,debian-based","A_Id":50211918,"CreationDate":"2018-05-05T07:58:00.000","Title":"Client-side (Python) load-balancing a MySQL server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed Python 3.6.5 on Windows 10. I see that there is a sqlite3 folder in ...\\Python\\Python36\\Lib directory. I added Python PATH to environment variable. However, I can't run the command \"sqlite3\" from Powershell nor Git Bash. It would say \"command not found\". 
What did I do wrong?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":8283,"Q_Id":50205804,"Users Score":0,"Answer":"I had an issue with installing brotab, and I always got the message that sqlite3 was not found. I had just installed python beforehand, and in list of modules, I saw sqlite3. When I started py.exe console, I also could import that module. It turned out that I had a entry in my Path to c:\\Program Files\\LibreOffice\\program, which contained an older version of python.exe.\nI ran python.exe -m pydoc -b inside that program folder to see the Index of modules, and there was no mention of sqlite3. I removed that entry from my Path, and kept the newly installed python.exe, and it fixed the issue","Q_Score":5,"Tags":"python,python-3.x,sqlite","A_Id":69244735,"CreationDate":"2018-05-07T00:43:00.000","Title":"sqlite3: command not found Python 3 on Windows 10","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed Python 3.6.5 on Windows 10. I see that there is a sqlite3 folder in ...\\Python\\Python36\\Lib directory. I added Python PATH to environment variable. However, I can't run the command \"sqlite3\" from Powershell nor Git Bash. It would say \"command not found\". What did I do wrong?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":8283,"Q_Id":50205804,"Users Score":0,"Answer":"So, apparently Sqlite3's CLI does not come installed with Python (Python 3.6). What comes pre-installed is Python's Sqlite3 library.So, you can access the Sqlite3 DB either through the library, or by manually installing Sqlite3 CLI.","Q_Score":5,"Tags":"python,python-3.x,sqlite","A_Id":53717912,"CreationDate":"2018-05-07T00:43:00.000","Title":"sqlite3: command not found Python 3 on Windows 10","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello I am using my second computer to gather some data and insert it into the SQL database. I set up everything when it comes to reading and writing the database remotely, and I can insert new rows just by using the normal SQL.\nWith pyodbc I can read tables, but when I insert new data, nothing happens. No error message, but also no new rows in the table. \nI wonder if anyone has faced this issue before and knows what the solution is.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":124,"Q_Id":50223255,"Users Score":0,"Answer":"The cursor.execute() method only prepares the SQL statement. Then, since this is an INSERT statement, you must use the cursor.commit() method for the records to actually populate your table. 
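In code, the pattern looks roughly like this (DSN, table and columns are placeholders):

import pyodbc

cnxn = pyodbc.connect('DSN=mydsn;UID=user;PWD=secret')
cursor = cnxn.cursor()

cursor.execute("INSERT INTO readings (sensor, value) VALUES (?, ?)", ('temp1', 21.5))
cursor.commit()                  # or cnxn.commit(); without it the INSERT never reaches the table

cursor.execute("SELECT * FROM readings")
rows = cursor.fetchall()         # a fetch call is what actually returns the query results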
Likewise for a DELETE statement, you need to commit, as well.\nWithout more perspective here, I can only assume that you are not committing the insert.\nNotice, similarly, that when you run cursor.execute(\"\"\"select * from yourTable\"\"\"), you need to run cursor.fetchall() or another fetch statement to actually retrieve and view your query.","Q_Score":1,"Tags":"python,pyodbc","A_Id":50583744,"CreationDate":"2018-05-07T22:17:00.000","Title":"Inserting into SQL with pyodbc from remote computer","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I try to use Flask extensions, my application is raising ModuleNotFound errors on lines like from flask.ext.sqlalchemy import SQLAlchemy or from flask.exthook import ExtDeprecationWarning. I have Flask and the extension installed. Why do I get this error?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5032,"Q_Id":50261135,"Users Score":21,"Answer":"Something in your code, or in code you're using, is trying to import flask.ext or flask.exthook, which no longer exists in Flask 1.0. They were completely removed after being visibly deprecated for at least a year, and implicitly deprecated for many years before that. Anything that still depends on it must be upgraded.\nAny use of from flask.ext import ... should be replaced with a direct import of the extension. For example flask.ext.sqlalchemy becomes flask_sqlalchemy.\nThe only reason to import from flask.exthook import ExtDeprecationWarning is to silence the previous deprecation warnings. Since it no longer exists, there is no warning to silence, and that code can be removed.","Q_Score":14,"Tags":"python,flask","A_Id":50261872,"CreationDate":"2018-05-09T19:58:00.000","Title":"Importing flask.ext raises ModuleNotFoundError","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm going to write a python script that copies some of the tables in a MySQL database from one remote machine(remote1) to another database on another remote machine(remote2) everyday.\nWhat's the most convenient way to do this?\nBecause the two tables are on different machines, the following MySQL command doesn't work.\n\nCREATE TABLE newtable LIKE oldtable; \nINSERT newtable SELECT * FROM oldtable;\n\nCurrently, my idea is:\n\nRead the table schema of srcTable on remote1. Create the table on remote2.\nRun SELECT * FROM theTable; on remote1. 
Save the result into a variable.\nRun INSERT INTO table_name (column1, column2, column3, ...)\nVALUES (value1, value2, value3, ...); on remote2 using the values in the variable.\n\nIs there any better solution which is simpler?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":50267306,"Users Score":0,"Answer":"single command for Export & Import:\n\/opt\/lampp\/bin\/mysqldump -u root -ppass bd_name table_name_optional | ssh root@192.168.3.252 \/opt\/lampp\/bin\/mysql -u root -ppass db_name\nTwo step: first export by below command:\n\/opt\/lampp\/bin\/mysqldump -u user-ppass db_name [table name separated by space -optional] > \/opt\/db_backup\/db_name.sql\n\nsecond command for import:\n\/opt\/lampp\/bin\/mysql -u user-ppass db_name < \/opt\/db_backup\/db_name.sql\nHope help you.","Q_Score":0,"Tags":"python,mysql","A_Id":50267725,"CreationDate":"2018-05-10T06:59:00.000","Title":"Best way to copy mysql tables from a remote machine to another","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I issue this statement: db = cx_Oracle.connect(\"user\/pass@IP\/BKTDW\")\nand I get this error: \nTraceback (most recent call last):\n File \"\", line 1, in \ncx_Oracle.DatabaseError: Error while trying to retrieve text for error ORA-01804\nIt seems that connect method doen't work at all. I have installed the Oracle Client and I am connecting normally via Toad or Sql Developer. \nPlease Help!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2921,"Q_Id":50295008,"Users Score":0,"Answer":"I had to set the ORACLE_HOME variable in the system variables and also add the bin directory in the PATH system variable. THNX","Q_Score":0,"Tags":"python,oracle,cx-oracle","A_Id":50345205,"CreationDate":"2018-05-11T14:53:00.000","Title":"Can't connect with cx_Oracle of Python to oracle remote database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I issue this statement: db = cx_Oracle.connect(\"user\/pass@IP\/BKTDW\")\nand I get this error: \nTraceback (most recent call last):\n File \"\", line 1, in \ncx_Oracle.DatabaseError: Error while trying to retrieve text for error ORA-01804\nIt seems that connect method doen't work at all. I have installed the Oracle Client and I am connecting normally via Toad or Sql Developer. \nPlease Help!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2921,"Q_Id":50295008,"Users Score":0,"Answer":"Ensure that sqlplus is working from cmd line. It could be if on 64bit windows in has in PATH there is a target to non-64 bin version of oracle bin folder. In our case we ensured that 64bit location is placed in PATH . 
For instance place c:\\Oracle\\Ora11g_r2_x64\\bin\\ and remove c:\\Oracle\\ora11g_2\\bin\\, it was not related with ORACLE_HOME .","Q_Score":0,"Tags":"python,oracle,cx-oracle","A_Id":62171520,"CreationDate":"2018-05-11T14:53:00.000","Title":"Can't connect with cx_Oracle of Python to oracle remote database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to transfer files from one S3 bucket to another S3 bucket using AWS Glue through a Python Script?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1360,"Q_Id":50348880,"Users Score":0,"Answer":"Create a Crawler for your bucket - it will discover the schema of your data and add it as table to Glue Catalog\nUse Job wizard and select your table as source, and new table as target\nGlue will generate the code for you where you have to select the destination of your data, specify format etc.","Q_Score":0,"Tags":"python-3.x,amazon-s3,aws-glue","A_Id":50351216,"CreationDate":"2018-05-15T11:18:00.000","Title":"Transfer files within S3 buckets using AWS Glue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write an application which is portable.\nWith \"portable\" I mean that it can be used to access these storages:\n\namazon s3\ngoogle cloud storage\nEucalyptus Storage\n\nThe software should be developed using Python.\nI am unsure how to start, since I could not find a library which supports all three storages.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":225,"Q_Id":50364766,"Users Score":3,"Answer":"You can use boto3 for accessing any services of Amazon.","Q_Score":7,"Tags":"python,amazon-s3,google-cloud-storage,portability","A_Id":50364799,"CreationDate":"2018-05-16T07:28:00.000","Title":"Portable application: s3 and Google cloud storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is it possible to create a hyperlink for a specific sheet of an excel file?\nI want to open a sheet on the click of a cell which is on another sheet of the same excel file. For example, if someone clicks on 'A1' cell which is in the sheet2 the sheet1 will be opened and both the sheets are in the abc.xlsx file.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":7157,"Q_Id":50369352,"Users Score":1,"Answer":"I was unable to get this to work using the \"write_url(A1, \"internal:'sheet name'!A2\")\" form. 
Can someone provide some guidance on this?\nI was able to successfully add hyperlinks to internal cells using the form: \nwrite('A1', '=HYPERLINK(CELL(\"address\", 'sheet name'!A2), \"Friendly Name\")\nNOTE: the word 'address' is literal\/not a generic reference, and, the quotes need to be specified as shown (i.e., single quotes for a multi-word sheet name, and double quotes for the word 'address' and the 'Friendly Name'...","Q_Score":5,"Tags":"python-3.x,xlsxwriter","A_Id":58612395,"CreationDate":"2018-05-16T11:11:00.000","Title":"Creating a hyperlink for a excel sheet: xlsxwriter","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am importing some data from file and every time I get different format of date. So I want to get date format by any function of postgres from database field which is char.\nFor example,\nmy_date\n-----------\n2018-01-30\nor\nmy_date\n-----------\n30.01.2018","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":998,"Q_Id":50384184,"Users Score":0,"Answer":"The solution is very simple\nto_char(mydate, 'DD-MM-YYYY')\nother types of date formats :\n'DD\/MON\/YYYY'\n'DD\/MON\/YY'\n'MON\/DD\/YYYY'\nu can switch \/ with a - or .","Q_Score":0,"Tags":"python,postgresql","A_Id":50391086,"CreationDate":"2018-05-17T05:51:00.000","Title":"postgres - How to get date format from string date","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi all I am tried to install the superset on OSX using the Python3. After the installation finished when I tried to add the Database using the mysql:\/\/ it said error No Module name MySQLDb. I tried to explore how to solved this, one of tutorial said try to install mysqlclient using pip3 install mysqlclient failed to install with error code mysql.h not found.\nThan I following another tutorial used the mysql-connector. After I installed it, finally I can connect to mysql DB and insert table to the system. But when I tried to run the analysis from superset it said no data. Also I tried using SQL Lab and got the error args.\nUpdated: on my superset currently I am used the mysql+mysql-connector as URI Database connected properly, but when I tested to run a query it said execute() got an unexpected keyword argument 'args'. How to solve this?\nAnyone have experience with this problem?\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1327,"Q_Id":50389982,"Users Score":0,"Answer":"Finally I got it working now. \nWhat I am doing is reinstall the superset, run the brew install mysql-connector-c than run pip install mysqlclient","Q_Score":0,"Tags":"mysql,python-3.x,mysql-connector,apache-superset","A_Id":50411232,"CreationDate":"2018-05-17T11:10:00.000","Title":"Apache superset on Mac osx","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I get a db record as an sqlalchemy object and I need to consult the original values during some calculation process, so I need the original record till the end. 
However, the current code modifies the object as it goes and I don't want to refactor it too much at the moment. \nHow can I make a copy of the original data? The deepcopy seems to create a problem, as expected. I definitely prefer not to copy all the fields manually, as someone will forget to update this code when modifying the db object.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1099,"Q_Id":50396458,"Users Score":0,"Answer":"You can have many options here to copy your object.Two of them which I can think of are :\n\nUsing __dict__ it will give the dictionary of the original sqlalchemy object and you can iterate through all the attributes using .keys() function which will give all the attributes.\nYou can also use inspect module and getmembers() to get all the attributes defined and set the required attributes using setattr() method.","Q_Score":1,"Tags":"python,python-2.7,sqlalchemy,copy","A_Id":50397021,"CreationDate":"2018-05-17T16:34:00.000","Title":"how to make a copy of an sqlalchemy object (data only)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a django application and a mysql database table having around 30,000 entries. \nI have to process each entry one by one, do some computation and store the result in database.\nWhen I start processing, the time taken to process 100 entries is around 40 seconds. But this time keeps on growing. So after processing 1000 entries, the time goes to 1 minute, then after processing 2000 entries, the time to compute 100 entries move to 1 minute 30 seconds. \nIf I stop the server, and start computing again from say 2000th entry, then the time taken to process 100 entries becomes 40 seconds again, but keeps on increasing as more entries are processed.\nDoes anyone know why is this happening?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":50429609,"Users Score":0,"Answer":"Sounds like you need an index.\nDon't process one row at a time; use SQL to process all rows 'simultaneously'. Describe the processing; we may be able to get more specific.","Q_Score":0,"Tags":"python,mysql,django,performance","A_Id":50498291,"CreationDate":"2018-05-19T21:23:00.000","Title":"Django queries gets slower over time","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an array of integers(nodes or destinations) i.e array[2,3,4,5,6,8] that need to be visited in the given sequence.\nWhat I want is, to get the shortest distance using pgr_dijkstra. 
But the pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and adding all distances to get the total distance.\nThe pairs will be like\n2,3\n3,4 \n4,5 \n5,6 \n6,8.\nIs there any way to define a function that takes this array and finds the shortest path using pgr_dijkstra.\nQuery is:\nfor 1st pair(2,3)\nSELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',2,3, false);\nfor 2nd pair(3,4)\nSELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads'***,3,4,*** false)\nfor 3rd pair(4,5)\nSELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads'***,4,5,*** false);\nNOTE: The array size is not fixed, it can be different.\nIs there any way to automate this in postgres sql may be using a loop etc?\nPlease let me know how to do it.\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":708,"Q_Id":50429760,"Users Score":0,"Answer":"If you want all pairs distance then use\nselect * from pgr_apspJohnson ('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads)","Q_Score":1,"Tags":"mysql,sql,postgresql,mysql-python,pgrouting","A_Id":51161622,"CreationDate":"2018-05-19T21:46:00.000","Title":"how to get the distance of sequence of nodes in pgr_dijkstra pgrouting?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm testing Apache superset Dashboards, It s a great tool.\nI added an external Database source (Oracle), and I created nice Dashboards very easily.\nI would like to see my Dashboards updated regularly and automatically (3 times a day) in superset.\nBut my Dashboards are not updated.\nI mean when a row is inserted into the Oracle Tables, if I refresh the Dashboard, I cannot view the new data in the Dashboard.\nWhat is the best way to do it ?\n=> Is there a solution \/ an option to force the Datasource to be automatically updated regularly ? in a frequency ? What is the parameter \/ option ? \n=> is there a solution to import in batch csv files (for instance in python), then this operation will update the Dashboard ?\n=> other way ?\nIf you have examples to share... 
:-)\nMy environment: \nSuperset is Installed on ubuntu 16.04 and Python 2.7.12.\nOracle is installed on another Linux server.\nI connect from google chrome to Superset.\nMany thanks for your help","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1509,"Q_Id":50472282,"Users Score":0,"Answer":"You could set the auto-refresh interval for a dashboard if you click on the arrow next to the Edit dashboard-button.","Q_Score":1,"Tags":"python,oracle,superset","A_Id":59359358,"CreationDate":"2018-05-22T16:16:00.000","Title":"superset dashboards - dynamic updates","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm testing Apache superset Dashboards, It s a great tool.\nI added an external Database source (Oracle), and I created nice Dashboards very easily.\nI would like to see my Dashboards updated regularly and automatically (3 times a day) in superset.\nBut my Dashboards are not updated.\nI mean when a row is inserted into the Oracle Tables, if I refresh the Dashboard, I cannot view the new data in the Dashboard.\nWhat is the best way to do it ?\n=> Is there a solution \/ an option to force the Datasource to be automatically updated regularly ? in a frequency ? What is the parameter \/ option ? \n=> is there a solution to import in batch csv files (for instance in python), then this operation will update the Dashboard ?\n=> other way ?\nIf you have examples to share... :-)\nMy environment: \nSuperset is Installed on ubuntu 16.04 and Python 2.7.12.\nOracle is installed on another Linux server.\nI connect from google chrome to Superset.\nMany thanks for your help","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1509,"Q_Id":50472282,"Users Score":1,"Answer":"I just found the origin of my error... :-)\nIn fact I added records in the future (tomorow, the day after, ...)... \nAnd My dashboard was only showing all Records to the today date...\nI inserted a record before, I refreshed and It appeared.\nThanks to having read me...","Q_Score":1,"Tags":"python,oracle,superset","A_Id":50473011,"CreationDate":"2018-05-22T16:16:00.000","Title":"superset dashboards - dynamic updates","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm programming a bit of server code and the MQTT side of it runs in it's own thread using the threading module which works great and no issues but now I'm wondering how to proceed. \nI have two MariaDB databases, one of them is local and the other is remote (There is a good and niche reason for this.) and I'm writing a class which handles the databases. This class will start new threads of classes that submits the data to their respected databases. If conditions are true, then it tells the data to start a new thread to push data to one database, if they are false, the data will go to the other database. The MQTT thread has a instance of the \"Database handler\" class and passes data to it through different calling functions within the class.\nWill this work to allow a thread to concentrate on MQTT tasks while another does the database work? 
There are other threads as well, I've just never combined databases and threads before so I'd like an opinion or any information that would help me out from more seasoned programmers.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":50497250,"Users Score":0,"Answer":"Writing code that is \"thread safe\" can be tricky. I doubt if the Python connector to MySQL is thread safe; there is very little need for it.\nMySQL is quite happy to have multiple connections to it from clients. But they must be separate connections, not the same connection running in separate threads.\nVery few projects need multi-threaded access to the database. Do you have a particular need? If so let's hear about it, and discuss the 'right' way to do it.\nFor now, each of your threads that needs to talk to the database should create its own connection. Generally, such a connection can be created soon after starting the thread (or process) and kept open until close to the end of the thread. That is, normally you should have only one connection per thread.","Q_Score":0,"Tags":"python,database,multithreading,mariadb","A_Id":50634340,"CreationDate":"2018-05-23T20:49:00.000","Title":"Calling database handler class in a python thread","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello im new in programming with openerp ODOO , well my issue is where i can find the functions of inserting into odoo database , well i created a new field and i want to insert the data of this field into the db","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":513,"Q_Id":50511857,"Users Score":0,"Answer":"If you want to store field value in database then add store=True within your field in python file. Then Your value store into database.","Q_Score":0,"Tags":"python,postgresql,odoo","A_Id":50522026,"CreationDate":"2018-05-24T14:27:00.000","Title":"From where i can insert field input to odoo database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've got a Python script talking to a MySQL database.\nThis script has been working fine for months.\nAll of a sudden it isn't actually adding anything to the tables it's supposed to modify.\nThe script has a lot of print statements and error handlers and it still runs exactly as if it was working, but nothing shows up in the database.\nIt even prints out \"rows affected: 108\" or whatever, but when I go look at the database in phpMyAdmin it says there are zero rows in the table.\nThe only thing it will do is truncate the tables. There's a section at the beginning that truncates the relevant tables so the script can start filling them up again. If I manually create a new row in a table through phpMyAdmin, that row will disappear when the script runs, like it's properly truncating the tables. But nothing after that does anything. 
It still runs without errors, but it doesn't actually modify the database.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":50547017,"Users Score":0,"Answer":"Thanks, yeah for some reason the script was no longer autocommitting by default.\nI added \"cnx.autocommit(True)\" and it's working again.","Q_Score":0,"Tags":"python,mysql,rows","A_Id":50548886,"CreationDate":"2018-05-26T21:08:00.000","Title":"mysql rows affected yes but table is still empty","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Python 3.6.5.\nClass A, below for me represents a database table, using SQLAlchemy.\nI'm defining a @staticmethod method that returns a row, but if there's no result, it would return None.\nSince it returns an instance of class A, then the notation normally goes:\n-> A: \nat the end of the def signature, but because A is not yet defined, as it's on class A itself, you are supposed to quote it as:\n-> 'A':\nIs the -> 'A': sufficient?\nOr is there some sort of OR syntax?\nThanks in advance for your advice.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":50547498,"Users Score":1,"Answer":"You can use Optional[A], this means that it can return A or None\nTo make a \"or\" between classes A and B, use Union[A, B]\nNote that you should import Optional and Union from typing","Q_Score":1,"Tags":"python-3.6,type-hinting","A_Id":50547527,"CreationDate":"2018-05-26T22:27:00.000","Title":"Python PEP 484 Type Hints -> return type either class name or None?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have django 1.11 with latest django-storages, setup with S3 backend.\nI am trying to programatically instantiate an ImageFile, using the AWS image link as a starting point. I cannot figure out how to do this looking at the source \/ documentation. \nI assume I need to create a file, and give it the path derived from the url without the domain, but I can't find exactly how.\nThe final aim of this is to programatically create wagtail Image objects, that point to S3 images (So pass the new ImageFile to the Imagefield of the image). I own the S3 bucket the images are stored in it.\nUploading images works correctly, so the system is setup correctly.\nUpdate\nTo clarify, I need to do the reverse of the normal process. Normally a physical image is given to the system, which then creates a ImageFile, the file is then uploaded to S3, and a URL is assigned to the File.url. 
I have the File.url and need an ImageFile object.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2104,"Q_Id":50609686,"Users Score":6,"Answer":"It turns out, in several models that expect files, when using DjangoStorages, all I had to do is instead of passing a File on the file field, pass the AWS S3 object key (so not a URL, just the object key).\nWhen model.save() is called, a boto call is made to S3 to verify an object with the provided key is there, and the item is saved.","Q_Score":4,"Tags":"django,boto3,wagtail,python-django-storages","A_Id":50804853,"CreationDate":"2018-05-30T16:38:00.000","Title":"Django storages S3 - Store existing file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Everytime I try to run a django app in terminal using a python manage.py runserver command I get the following error:\nReferenced from: \/Users\/myname\/anaconda\/lib\/python2.7\/site-packages\/_mysql.so\n Reason: image not found.\nDid you install mysqlclient or MySQL-python?\n\nTo fix it, I just paste\nexport DYLD_LIBRARY_PATH=\/usr\/local\/mysql\/lib\/\n\ninto my terminal. How can I rearrange my files so that I don't have to paste this in every time? I am working on a Mac.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":254,"Q_Id":50751008,"Users Score":2,"Answer":"Assuming you're just using the default terminal, you could put the command in your .bash_profile\/.bashrc by running something like\necho \"export DYLD_LIBRARY_PATH=\/usr\/local\/mysql\/lib\/\" >> ~\/.bash_profile\nSwitching .bash_profile with your equivalent.Then when you start a new terminal it should already be applied. To apply it immediately just run\nsource ~\/.bash_profile","Q_Score":0,"Tags":"python,mysql,django,macos","A_Id":50751138,"CreationDate":"2018-06-07T22:31:00.000","Title":"MySQL and Python - \"image not found\" -- permanent fix?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Sorry if this question is too stupid but I can't find an answer to it.\nI'm a beginner in terms of databases so I'm taking a course in Udacity. In the course, they tell us to install Vagrant and VirtualBox in order to run an ubuntu virtual machine to make the exercises of the course. The problem is that my pc is not working properly with that virtual machine running, so I decided not to virtualize and do the stuff in my \"normal\" programming environment (in the course we use flask, sqlite and sqlalchemy in order to create a website using a database, and in the next lesson they teach to build a web server that uses our database). Somewhere on the internet I read about virtual machines being useful to work in your computer without messing our computer's configuration up. My question is, can this happen? Or what does it mean to \"mess the configuration up\"? Is it possible to make an important mistake that will make me wish I had virtualized?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":50892275,"Users Score":0,"Answer":"It shouldn\u2019t mess any configuration in your PC. 
The whole point of virtualizing your programming environment is for security reasons, or because developing is easier on a Linux machine. If you\u2019re going to do sql exercises, the worst you can possibly do is mess the database.","Q_Score":0,"Tags":"python,database,sqlite,sqlalchemy,virtualization","A_Id":50892290,"CreationDate":"2018-06-16T22:29:00.000","Title":"Can something go wrong if I choose not to virtualize even though I was told to do it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sorry if this question is too stupid but I can't find an answer to it.\nI'm a beginner in terms of databases so I'm taking a course in Udacity. In the course, they tell us to install Vagrant and VirtualBox in order to run an ubuntu virtual machine to make the exercises of the course. The problem is that my pc is not working properly with that virtual machine running, so I decided not to virtualize and do the stuff in my \"normal\" programming environment (in the course we use flask, sqlite and sqlalchemy in order to create a website using a database, and in the next lesson they teach to build a web server that uses our database). Somewhere on the internet I read about virtual machines being useful to work in your computer without messing our computer's configuration up. My question is, can this happen? Or what does it mean to \"mess the configuration up\"? Is it possible to make an important mistake that will make me wish I had virtualized?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":50892275,"Users Score":0,"Answer":"I don\u2019t think so. \nVM is useful here in two ways: first, it teaches you to work with DB as with remote server, like in real world. \nSecond: it prevents your main OS from junking up. It\u2019s not a big problem now, but windows can slow down due to many applications leaving junk in registry or whatever... I think it was mostly dealt with in win7, but was on Mac by that time.\nYou are probably fine, just don\u2019t use shutil.rmtree() on C:\\ ;)","Q_Score":0,"Tags":"python,database,sqlite,sqlalchemy,virtualization","A_Id":50892318,"CreationDate":"2018-06-16T22:29:00.000","Title":"Can something go wrong if I choose not to virtualize even though I was told to do it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create an Amazon RDS database from a snapshot but I'm getting this error:\n\nbotocore.errorfactory.KMSKeyNotAccessibleFault: An error occurred (KMSKeyNotAccessibleFault) when calling the RestoreDBInstanceFromDBSnapshot operation: The specified KMS key [arn:aws:kms:ap-southeast-2:ddddddddd] does not exist, is not enabled or you do not have permissions to access it.\n\nI am not sure which permissions are needed for my Jenkins job to run this task. Is it just a read-only IAM policy?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2132,"Q_Id":50905214,"Users Score":3,"Answer":"Go to IAM Console, choose Encryption Keys menu on the left side bar\nChoose the region from the drop down menu (just below the \"Create Key\" button)\nSearch for the the mentioned key and see if exists. 
\nIf it does not exist, you can never recover back the RDS instance from the snapshot.\nIf exists, \n\nSee whether the status is Enabled for the Key. If not, select the checkbox and Choose \"Actions -> Enable\"\nClick on it. Under \"Key Policy\" -> \"Key Users\", add your IAM user\nand you will be able to restore RDS instance from Snapshot.","Q_Score":2,"Tags":"python-3.x,amazon-web-services,amazon-rds,boto3","A_Id":50906398,"CreationDate":"2018-06-18T08:14:00.000","Title":"Python boto Amazon RDS error: KMSKeyNotAccessibleFault","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am running a process that updates some entries in my mongo db using Pymongo. I have another process who does polling on these entries (using 'find' evrey minutes) to see if the other process is done. \nI noticed that after about 30-40 minutes I get an empty cursor even though these entries are still in the database. \nAt first I thought it happens due to changing these entries but then I run a process that just use the same query once every minute and I saw the same phenomena: After 30-40 minutes I get no results.\nI noticed that if I wait 2-3 minutes I get the results I am requesting.\nI tried to use the explain function but couldn't find anything helpful there.\nDid you ever see something similar? If so what can I do?\nIs there a way to tell that the cursor is empty? Is the rate limit configurable?\nthank you in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":149,"Q_Id":50944128,"Users Score":0,"Answer":"Apparently it was due to high CPU in mongo. \nThe database was synced with another one once every hour and during that time the queries returned empty results. \nWhen we scheduled the sync to happen only once a day we stopped seeing this problem (we also added a retry mechanism to avoid error on the sync time. However, this retry will be helpful only when you know for sure that the query should not return an empty cursor).","Q_Score":1,"Tags":"python,mongodb,pymongo","A_Id":51122090,"CreationDate":"2018-06-20T08:54:00.000","Title":"MongoDB query returns no results after a while","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the openpyxl library in Python and I'm trying to read in the value of a cell. The cells value is a date in the format MM\/DD\/YYYY. I would like for the value to be read into my script simply as a string (i.e. \"8\/6\/2014\"), but instead Python is somehow automatically reading it as a date object (Result is \"2014-08-06 00:00:00\") I don't know if this is something I need to fix in Excel or Python, but how do I get the string I'm looking for?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2412,"Q_Id":50959226,"Users Score":2,"Answer":"I would suggest changing it in your Excel if you want to preserve what is being read in by openpyxl. 
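For context on the RDS/KMS answer above, here is a rough boto3 sketch of the call that raises KMSKeyNotAccessibleFault when the snapshot's KMS key is missing, disabled, or not shared with the caller; all identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-2")

# Fails with KMSKeyNotAccessibleFault if the encrypted snapshot's KMS key is
# disabled, deleted, or the calling IAM user/role is not listed as a key user.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="restored-db",        # hypothetical
    DBSnapshotIdentifier="my-encrypted-snap",  # hypothetical
)
```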
That said, when a cell has been formatted to a date in Excel, it becomes altered to fit a specified format so you've lost the initial string format in either case.\nFor example, let's say that the user enters the date 1\/1\/2018 into a cell that is formatted MM\/DD\/YYYY, Excel will change the data to 01\/01\/2018 and you will lose the original string that was entered.\nIf you only care to see data of the form MM\/DD\/YYYY, an alternate solution would be to cast the date with date_cell.strftime(\"%m\/%d\/%Y\")","Q_Score":1,"Tags":"python,excel,parsing,openpyxl","A_Id":50959375,"CreationDate":"2018-06-21T01:45:00.000","Title":"Python: Reading Excel and automatically turning a string into a Date object?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to test celery task using pytest in Django. \nIf I use celery_worker parameter in test function then I get an error:\n\n{OperationalError}database table is locked (sqllite).\n\nIf I run worker before tests and do not use celery_worker parameter then task run seccesfully but I can't locate changes in the test database (pytestmark = pytest.mark.django_db) as all updates links to the original database.\nI try to run the test in Docker (postgresql db) but with parameter celery_worker I get error:\n\npsycopg2.InterfaceError: connection already closed","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":347,"Q_Id":50986520,"Users Score":1,"Answer":"Need to use celery_worker parameter in pair with pytest mark @pytest.mark.django_db(transaction=True).","Q_Score":1,"Tags":"python,django,postgresql,celery,pytest","A_Id":50987884,"CreationDate":"2018-06-22T11:03:00.000","Title":"Pytest celery: cleaned up database after task call","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have my sqlite3 database in linux machine. And I would like to pull data from this database to my windows machine in a GUI fashion.\nI haven't finalized on the design yet (so no code to provide).\nI am contemplating on using Flask for windows which will talk to HTTP server in linux machine. This server makes connection to the database and provides data to respective client.\nI am also rather new to GUI. Is there any loophole in this approach? Am not planning on anything exhuberant. Any help is much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1543,"Q_Id":51000711,"Users Score":0,"Answer":"SQLite is not a server-based database, but rather it is a file-based database, i.e., it is not designed to be accessed over the server, but only to save the data locally.\nI would change the design and use another server-based database (MySQL, PostGreSQL, etc.).\nIf you really cannot use anything else, and want to access it Sqlite database over network, I would expose the SQLite database file over the network (for example share it on the network using Samba if you are using linux). 
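A small sketch of the strftime() suggestion in the openpyxl answer above: openpyxl hands back a datetime for date-formatted cells, so convert it back to text explicitly. The workbook name and cell reference are hypothetical.

```python
from openpyxl import load_workbook

wb = load_workbook("dates.xlsx")          # hypothetical file
ws = wb.active
value = ws["A1"].value                    # e.g. datetime.datetime(2014, 8, 6, 0, 0)
as_text = value.strftime("%m/%d/%Y")      # "08/06/2014"
print(as_text)
```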
Another example to access SQLite file is to access the server remotely using SSH.\nAnother better approach is to program a web service on the server where the SQLite file exists, and use the API of that web service to CRUD the data from the database.\nAFAIK, this is approach i the only possible way.","Q_Score":0,"Tags":"python-3.x,http,flask","A_Id":51000756,"CreationDate":"2018-06-23T11:28:00.000","Title":"Need to pull data from remote sqlite3 database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Please suggest a way to execute SQL statement and pandas dataframe .to_sql() in one transaction\nI have the dataframe and want to delete some rows on the database side before insertion\nSo basically I need to delete and then insert in one transaction using .to_sql of dataframe\nI use sqlalchemy engine with pandas.df.to_sql()","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1173,"Q_Id":51023642,"Users Score":0,"Answer":"After further investigation I realized that it is possible to do only with sqllite3, because to_sql supports both sqlalchemy engine and plain connection object as conn parameter, but as a connection it is supported only for sqllite3 database\nIn other words you have no influence on connection which will be created by to_sql function of dataframe","Q_Score":1,"Tags":"python,pandas,dataframe,transactions,sqlalchemy","A_Id":51035810,"CreationDate":"2018-06-25T12:36:00.000","Title":"Python pandas dataframe transaction","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a week old in gremlin and graph databases. My question is: Is there a way to add nodes to the graph database using gremlin-python ? \nAny help is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":149,"Q_Id":51026783,"Users Score":2,"Answer":"The gremlin-python library gives you full access to the Gremlin language which includes mutating steps like addV() for adding vertices and addE() for adding edges...so, sure, just use Gremlin to add vertices\/edges in python as you would with any other language that Gremlin supports.","Q_Score":0,"Tags":"python,graph-databases,gremlin","A_Id":51027498,"CreationDate":"2018-06-25T15:15:00.000","Title":"How to add nodes continuously in graph databases using gremlin-python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a school-related app for school using Python and the PeeWee module. Everything is working fine, but the problem I am facing is as follows:\nWhen creating user accounts, a student account can ONLY be created if there is at least 1 teacher account in the database (every student gets a teacher assigned). How do I check if my teacher table has any instances? 
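To illustrate the gremlin-python answer above (addV() and addE() are ordinary Gremlin steps available from Python), a short sketch; it assumes a Gremlin Server reachable on the usual websocket endpoint.

```python
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

g = Graph().traversal().withRemote(
    DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))

alice = g.addV("person").property("name", "alice").next()  # add a vertex
bob = g.addV("person").property("name", "bob").next()
g.V(alice).addE("knows").to(bob).iterate()                 # add an edge between them
```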
I want to check this before a student user account can be made.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":385,"Q_Id":51055956,"Users Score":0,"Answer":"If I understand your problem clearly then it seems you need foreign key in student table. If you have column like teacher_id in student table then do the below steps\n\nCreate primary key for id in teacher table\nCreate foreign key for teacher_id in student table and teacher_id is not null\n\nOnce you have foreign key in student table then database will automatically will check teacher exists or not. if teacher not exists then it will throw error.","Q_Score":0,"Tags":"python,database,peewee","A_Id":51061550,"CreationDate":"2018-06-27T06:29:00.000","Title":"Checking if a table has records - Peewee module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am developing a data collection application that involves constant insertion into a MySQL database. I am using Python and PyMySQL to accomplish this. I need to insert about 100 rows into the main table a second. The python process is constantly running and maintains a constant connection to the MySQL database which resides on a remote server.\nI know that, in general, it is best to insert data as buffers (multiple rows at once) rather than making individual insertions. Would making commits (connection.commit() in PyMySQL) every 100 or so insertions achieve some of the same overhead reductions as inserting large amounts of data at once would?\nFor syntactical reasons, it is easier to separate the row insertions into individual operations.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":249,"Q_Id":51117782,"Users Score":0,"Answer":"You should find that the biggest overhead with a distant server is not processing time, but rather the round-trip time for each query to be sent to the server and return a response... if the server is more than ~10ms distant, inserting 100 rows\/sec individually is impossible because too much time is wasted on the wire, waiting.\nThere are internal reasons that make bulk inserts or infrequent commits perform better, but those become less and less relevant when the server is more distant. Individual inserts will be always slower from your perspective than bulk inserts, simply because of the number of round trips.\nAt the server, itself, bulk inserts convey a small advantage... and in a transaction, committing after every n inserts will convey a small advantage... 
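A sketch of the foreign-key arrangement the Peewee answer above describes, plus the one-line existence check the question asks about; the model names are hypothetical, and SQLite needs the foreign_keys pragma for the constraint to be enforced.

```python
from peewee import Model, CharField, ForeignKeyField, SqliteDatabase

db = SqliteDatabase("school.db", pragmas={"foreign_keys": 1})

class Teacher(Model):
    name = CharField()
    class Meta:
        database = db

class Student(Model):
    name = CharField()
    # Every Student row must point at an existing Teacher row.
    teacher = ForeignKeyField(Teacher, backref="students")
    class Meta:
        database = db

db.create_tables([Teacher, Student])

# Cheap check for "is there at least one teacher?" before creating a student account:
has_teachers = Teacher.select().exists()
```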
but again any difference in performance from using these strategies will disappear into the noise over a distant connection.\nIn short, inserting multiple rows in a single query rather than multiple queries is the only meaningful improvement you can make, because server performance is not your primary issue -- it's distance.\nOf course, if there is some reason why individual insert queries are more desirable, then using multiple threads in your program and multiple connections to the database is a possible strategy to improve the performance, since n connections can execute n queries in parallel, thus reducing the net practical impact of round trip time t down to something near t \/ n.","Q_Score":0,"Tags":"python,mysql,pymysql","A_Id":51119697,"CreationDate":"2018-06-30T19:06:00.000","Title":"Is buffering commits in MySQL\/PyMySQL a viable alternative to inserting multiple rows at once?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple computers running python applications, each using the same MySQL server. Each of the applications contains a tkinter GUI that allows editing of a set of data (corresponding to data in a table in the MySQL server). Whenever the data is updated one machine (and in turn updated on the MySQL server), I would like the other machines to be prompted to update there displayed data by pulling from the server. I know I could simply have the applications self-update after a given interval, but I would prefer to only update when there is new data to pull.\nHow should I go about this?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":51119912,"Users Score":2,"Answer":"This isn't something you can do with MySQL. \nThere is no provision in the client\/server protocol for the server to spontaneously emit messages to a client, so there is no mechanism in MySQL that allows connected clients to be notified of events via a push notification.","Q_Score":0,"Tags":"mysql,python-3.x,server","A_Id":51120353,"CreationDate":"2018-07-01T02:16:00.000","Title":"How can I communicate with other client connections on a MySQL server using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple computers running python applications, each using the same MySQL server. Each of the applications contains a tkinter GUI that allows editing of a set of data (corresponding to data in a table in the MySQL server). Whenever the data is updated one machine (and in turn updated on the MySQL server), I would like the other machines to be prompted to update there displayed data by pulling from the server. 
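A short PyMySQL sketch of the "multiple rows per query" point made in the answer above; the host, credentials, and table are hypothetical.

```python
import pymysql

conn = pymysql.connect(host="db.example.com", user="collector",
                       password="secret", database="metrics", autocommit=False)
rows = [("sensor-1", 20.5), ("sensor-2", 21.1), ("sensor-3", 19.8)]

with conn.cursor() as cur:
    # executemany lets PyMySQL batch these inserts into far fewer round trips
    # than calling execute() once per row.
    cur.executemany("INSERT INTO readings (sensor, value) VALUES (%s, %s)", rows)
conn.commit()
```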
I know I could simply have the applications self-update after a given interval, but I would prefer to only update when there is new data to pull.\nHow should I go about this?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":37,"Q_Id":51119912,"Users Score":2,"Answer":"I would suggest that your other client to do a long polling to your database and return a response if there are any feedback.","Q_Score":0,"Tags":"mysql,python-3.x,server","A_Id":51120370,"CreationDate":"2018-07-01T02:16:00.000","Title":"How can I communicate with other client connections on a MySQL server using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install MySQLdb for Python on Mac OS.\nWhen I digit pip install MySQL-python shell returns to this:\n Collecting MySQL-python\n Using cached https:\/\/files.pythonhosted.org\/packages\/a5\/e9\/51b544da85a36a68debe7a7091f068d802fc515a3a202652828c73453cad\/MySQL-python-1.2.5.zip\n Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"\/private\/var\/folders\/9h\/2lp9kx993ygbrfk1lxr0sz500000gq\/T\/pip-install-7xyyBe\/MySQL-python\/setup.py\", line 17, in \n metadata, options = get_config()\n File \"setup_posix.py\", line 53, in get_config\n libraries = [ dequote(i[2:]) for i in libs if i.startswith(compiler_flag(\"l\")) ]\n File \"setup_posix.py\", line 8, in dequote\n if s[0] in \"\\\"'\" and s[0] == s[-1]:\n IndexError: string index out of range\n\n ----------------------------------------\nCommand \"python setup.py egg_info\" failed with error code 1 in \/private\/var\/folders\/9h\/2lp9kx993ygbrfk1lxr0sz500000gq\/T\/pip-install-7xyyBe\/MySQL-python\/\n\nWhat can I do? I searched everywhere but I couldn't find an answer.\n(I had installed Python 2.7)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3629,"Q_Id":51123044,"Users Score":7,"Answer":"I fixed the error. 
If anyone have this error just follow these steps:\n\nFirst of all install mysql connector\n\nbrew install mysql-connector-c\n\n\nYou have to modify lines in mysql_config (this is an alias)\n\n\nvim \/usr\/local\/bin\/mysql_config \n\n(I sincerely consider to open mysql_config with a file editor, you can find the \n exact folder here)\n\n\/usr\/local\/Cellar\/mysql-connector-c\/6.1.11\/bin\/ \n\n\nReplace these lines.\n\n\n\n # Create options \n libs=\"-L$pkglibdir\"\n libs=\"$libs -l \"\n\nshould be:\n \n # Create options \n libs=\"-L$pkglibdir\"\n libs=\"$libs -lmysqlclient -lssl -lcrypto\"\n\n\n\nSet environment variable\n\n\nbrew info openssl\n\nit would tell what\u2019s needed\n\n For compilers to find this software you may need to set:\n LDFLAGS: -L\/usr\/local\/opt\/openssl\/lib\n CPPFLAGS: -I\/usr\/local\/opt\/openssl\/include\n For pkg-config to find this software you may need to set:\n PKG_CONFIG_PATH: \/usr\/local\/opt\/openssl\/lib\/pkgconfig\n\n\n\nThen you can install MySQL\n\n\npip install MySQL-python\n\n\nYou can test if MySQL is installed with this:\n\n\npython -c \"import MySQLdb\" \n\nHope this works also for you!","Q_Score":2,"Tags":"python,macos,mysql-python","A_Id":51132141,"CreationDate":"2018-07-01T11:53:00.000","Title":"pip install MySQL-python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a game where I've (foolishly) made the db key equal to the users login email. I did this several years ago so I've got quite a few users now. Some users have asked to change their email login for my game. Is there a simple way to change the key? As far as I can tell I'd need to make a new entry with the new email and copy all the data across, then delete the old db entry. This is the user model but then I've got other models, like one for each game they are involved in, that store the user key so I'd have to loop though all of them as well and swap out for the new key. \nBefore I embark on this I wanted to see if anyone else had a better plan. There could be several models storing that old user key so I'm also worried about the process timing out. \nIt does keep it efficient to pull a db entry as I know the key from their email without doing a search, but it's pretty inflexible in hindsight","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":191,"Q_Id":51127822,"Users Score":0,"Answer":"I ended up adding a new property to my user model and running a crawler to copy the string key (the email) to that new property. I changed my code search for that property rather then the key string to get a user item. Most of my users still have keys that equal their email, but I can safely ignore them as if the string is meaningless. I can now change a users email easily without making a new recored and my other models that have pointers to these user keys can remain unchanged.","Q_Score":1,"Tags":"google-app-engine,app-engine-ndb,google-app-engine-python","A_Id":51229386,"CreationDate":"2018-07-01T23:56:00.000","Title":"Change appengine ndb key","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have 6 dimension tables, all in the form of csv files. I have to form a star schema using Python. 
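The App Engine answer above (add a property holding the old string key, backfill it, and query on it instead of the key) might look roughly like this with ndb on the Python 2.7 runtime; the model and property names are hypothetical.

```python
from google.appengine.ext import ndb

class GameUser(ndb.Model):
    # New property duplicating the old email-as-key, so a user's email can
    # change without re-keying the entity or touching referencing models.
    email = ndb.StringProperty()

def get_user_by_email(email):
    return GameUser.query(GameUser.email == email).get()

def backfill_emails():
    # One-off crawl: copy each entity's string key into the new property.
    for user in GameUser.query():
        if not user.email:
            user.email = user.key.string_id()
            user.put()
```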
I'm not sure how to create the fact table using Python. The fact table (theoretically) has at least one column that is common with a dimension table. \nHow can I create the fact table, keeping in mind that quantities from multiple dimension tables should correspond correctly in the fact table?\nI am not allowed to reveal the code or exact data, but I'll add a small example. File 1 contains the following columns: student_id, student_name. File 2 contains : student_id, department_id, department_name, sem_id. Lastly File 3 contains student_id, subject_code, subject_score. The 3 dimension tables are in the form of csv files. I now need the fact table to contain: student_id, student_name, department_id, subject_code. How can I form the fact table in that form? Thank you for your help.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1539,"Q_Id":51151263,"Users Score":0,"Answer":"Reading certain blogs look like it is not a good way to handle such cases in python in memory but still if the below post make sense you cn use it\nFact Loading\nThe first step in DW loading is dimensional conformance. With a little cleverness the above processing can all be done in parallel, hogging a lot of CPU time. To do this in parallel, each conformance algorithm forms part of a large OS-level pipeline. The source file must be reformatted to leave empty columns for each dimension's FK reference. Each conformance process reads in the source file and writes out the same format file with one dimension FK filled in. If all of these conformance algorithms form a simple OS pipe, they all run in parallel. It looks something like this.\nsrc2cvs source | conform1 | conform2 | conform3 | load\nAt the end, you use the RDBMS's bulk loader (or write your own in Python, it's easy) to pick the actual fact values and the dimension FK's out of the source records that are fully populated with all dimension FK's and load these into the fact table.","Q_Score":0,"Tags":"python,csv,star-schema","A_Id":51152822,"CreationDate":"2018-07-03T09:37:00.000","Title":"Creating star schema from csv files using Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working in a project that retrieves some data from a rest service. One of the retrieved fields is a department number that need to be searched in a set of data in order to get the department name. That set of data has been given to me in a csv file (at least is not excel) with 1200 records.\nThe dataset is fixed and will not be updated (let's assume that's true) and the project doesn't have a database.\nSo I'm looking for the best alternative for storing this set: could be a hard coded dictionary or sqlite, what do you think? is there a better alternative?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":51178233,"Users Score":2,"Answer":"For a REST service, storing the data in sqlite is better because loading 1200 records from a hard-coded python dict creates overhead in both memory and loading time every time your REST service is called. 
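For the star-schema question above, a simple in-memory alternative to the OS-pipeline approach the answer describes: build the fact table with pandas merges on student_id. The CSV file names follow the question's "File 1/2/3" example and are otherwise hypothetical.

```python
import pandas as pd

students = pd.read_csv("file1.csv")      # student_id, student_name
departments = pd.read_csv("file2.csv")   # student_id, department_id, department_name, sem_id
scores = pd.read_csv("file3.csv")        # student_id, subject_code, subject_score

fact = (students
        .merge(departments[["student_id", "department_id"]], on="student_id")
        .merge(scores[["student_id", "subject_code"]], on="student_id"))

fact = fact[["student_id", "student_name", "department_id", "subject_code"]]
fact.to_csv("fact.csv", index=False)
```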
sqlite is fully indexed on the filesystem so all data retrieval will be on an as-needed basis, which will create only a minimal overhead to each of your REST service call.","Q_Score":0,"Tags":"python,django","A_Id":51178409,"CreationDate":"2018-07-04T16:58:00.000","Title":"Store fixed data in Django: dictionary or sqlite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Python 2.6 in centos. I'm trying to connect with MySQL server. I've tried pip install mysql-connector, mysql-connector-python and mysql-connector-python-rf yet I can't mysql_connector.so not found in pip libs. I get error: module not found MySQL.connector!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":693,"Q_Id":51223180,"Users Score":0,"Answer":"Perhaps try below:\n pip install mysql-connector=2.1.4","Q_Score":0,"Tags":"mysql-python,python-2.6","A_Id":51223206,"CreationDate":"2018-07-07T12:21:00.000","Title":"How to use mysql.connector in Python 2.6","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an application which uses a hash algorithm (currently MD5) to generate a unique ID in a database table. The hash is calculated based on some fields of a row, but nothing checks that calculation, for when those fields are changed later on, the ID of that row doesn't change.\nNow I want to change the code to add some new features, while generating a purely random number for the ID could simplify my work a lot (it's a long story to tell why it is much easier for me to generate that ID before I'm able to get all the necessary fields' content for the hash algorithm)\nI know that usually the programming language's own random generator generates pseudo random number, but I'm using Python's random.SystemRandom(), which uses operating system's cryptography level 'true' random generator, so I believe it should be the same collision probability comparing with generating the ID with hash algorithm.\nIs my understanding correct? If not, why?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":406,"Q_Id":51237701,"Users Score":2,"Answer":"Generating X number of bytes of random data gives exactly the same collision probability as using the hash function on some ID's... \nASSUMING...\n\nThe columns you're using the hash function on are themselves unique.\nYou haven't made mistakes doing #1\n\nI would recommend using the system's cryptographic random number provider. Because you've probably made mistakes. Here's an easy one:\nYour system: Concatenate column 1 and column 2, and hash the result. You can guarantee you'll never ever do this on those values of column 1 and column 2 ever again. NEVER.\nWhat about when:\n\nColumn 1 = \"abc\"\nColumn 2 = \"def\"\n\nvs\n\nColumn 1 = \"ab\"\nColumn 2 = \"cdef\"\n\nThose would create the same hash function.\nSo who would you trust more to give you random data? Yourself? Or a team of operating system developers including cryptography experts and decades of research and experience? 
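A tiny sketch of the recommendation in the random-ID answer above: take the identifier straight from the operating system's cryptographic RNG. secrets needs Python 3.6+; on older versions, random.SystemRandom() (as in the question) or uuid.uuid4() serve the same purpose.

```python
import uuid
import secrets  # Python 3.6+; backed by the OS CSPRNG

key_hex = secrets.token_hex(16)     # 128 random bits, hex-encoded
key_uuid = str(uuid.uuid4())        # random (version 4) UUID
print(key_hex, key_uuid)
```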
:)\nGo with the system's cryptographic random function.","Q_Score":3,"Tags":"python,random,md5,uuid","A_Id":51237819,"CreationDate":"2018-07-09T02:54:00.000","Title":"Can hash algorithm such as MD5\/SHA-1 generate an ID with less probability of collision than pure random number?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In our company some database administrators are still querying MS SQL Server DB with Visual Foxpro.\nEventhough I'm not a DB administrator, I reckon it's time to migrate those queries to a more recent DB management system.\nQuestion 1:\nWhat are good substitutions\/alternatives for Visual Foxpro?\nWould Python be able to carry out all the task Visual Foxpro can?\nQuestion 2:\nThe administators still defend the use of this language, eventhough support stopped in 2007. Is it still justifiable to keep using Visual Foxpro in 2018?\nThank you for your help!","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":2486,"Q_Id":51244350,"Users Score":1,"Answer":"This question was 2 years ago, but I would like to contribute something.\nA2Q-1. MSSQL Server, MySQL, Postgre and other latest database applications are now more reliable and more secure. You should try them and then choose which is comfortable for you.\nA2Q-2. It is still justifiable to use VFP. I know a large company who is still using it and are not having a hard time maintaining it. They've been using it for almost 30 years now. They are now the experts, they no longer need any support from the original creators of VFP.","Q_Score":0,"Tags":"python,database,foxpro","A_Id":64473383,"CreationDate":"2018-07-09T11:21:00.000","Title":"Visual FoxPro in 2018","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In our company some database administrators are still querying MS SQL Server DB with Visual Foxpro.\nEventhough I'm not a DB administrator, I reckon it's time to migrate those queries to a more recent DB management system.\nQuestion 1:\nWhat are good substitutions\/alternatives for Visual Foxpro?\nWould Python be able to carry out all the task Visual Foxpro can?\nQuestion 2:\nThe administators still defend the use of this language, eventhough support stopped in 2007. Is it still justifiable to keep using Visual Foxpro in 2018?\nThank you for your help!","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":2486,"Q_Id":51244350,"Users Score":6,"Answer":"In our company some database administrators are still querying SQL Server database with Visual Foxpro. \n\nYou need to find out if they are just manually querying the SQL Server data for reports or are they actually using a VFP Application (which itself queries the SQL Server data). \nIf they are running a VFP Application, then to change, you (or they) will have to totally redesign and redevelop the application in the replacement language - depending on the complexity of the application - most often not an insignificant task. 
\nIf you are considering changing to another language rather than just following someone's advice or their 'gut' feeling such as \n\nI reckon it's time to migrate those queries to a more recent database management system\n\nYou need to do a business analysis. At the very least you should ask the following questions. \n\nIs the current operation Business Critical to the operations? \nWhat is the REAL reason for changing? \nWhat advantages will be gained by changing? \nWhat will the timeline and budget look like to make the change? \nIs this software to be run in-house only or will it be run across the web? \n\nIn regards to Question 2 \nSure VFP is a 'dated language', and its support from Microsoft is no longer available, but there is a VERY Active community of VFP developers who are available in various web forums who can offer far superior support to VFP questions than MS ever did. \nThose VFP developers are still using the language and plan to continue to do so for quite a while. So 'language support', by itself, seems like a moot issue. \nIn regards to question 1 \nThere are a number of languages to change to. Some are rather simplistic and others are more full-featured. \nAgain, if this is a VFP Application which is to be changed rather than just a few queries, then plan for an extensive effort no matter what language you change to. \nAlso your answers to the Business Analysis questions may guide you towards one language over another. \nI do find it odd that you say \n\nWe also observe a lot of time going to problem solving rather than improving the services\n\nI have developed FP\/VFP applications for 30+ years now (currently in addition to Android and VB.ASP) and have found that once developed and implemented these are very stable applications. Sure there can be Network issues and data change issues which are problematic, but it is not the VFP software that is 'mis-behaving' - instead it is 'external' things which are causing the stable VFP application to no longer behave as expected. \nAlthough I will say that a poorly designed application (VFP or other) will be frequently problematic. \nWhichever way you go - good luck.","Q_Score":0,"Tags":"python,database,foxpro","A_Id":51250704,"CreationDate":"2018-07-09T11:21:00.000","Title":"Visual FoxPro in 2018","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In our company some database administrators are still querying MS SQL Server DB with Visual Foxpro.\nEventhough I'm not a DB administrator, I reckon it's time to migrate those queries to a more recent DB management system.\nQuestion 1:\nWhat are good substitutions\/alternatives for Visual Foxpro?\nWould Python be able to carry out all the task Visual Foxpro can?\nQuestion 2:\nThe administators still defend the use of this language, eventhough support stopped in 2007. Is it still justifiable to keep using Visual Foxpro in 2018?\nThank you for your help!","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2486,"Q_Id":51244350,"Users Score":0,"Answer":"Question 1: What are good substitutions\/alternatives for Visual\n Foxpro? Would Python be able to carry out all the task Visual Foxpro\n can? \n\nI think you might want to look into Visual C# or Visual Basic using the .NET framework.\nPython might be able to replace Visual FoxPro from what I hear. 
\n\nQuestion 2: The administators still defend the use of this language, eventhough support stopped in 2007. Is it still justifiable to keep using Visual Foxpro in 2018? \n\nIt might be justifiable but you might want to use the latest .NET frameworks to keep up with security patches or new features.","Q_Score":0,"Tags":"python,database,foxpro","A_Id":51244694,"CreationDate":"2018-07-09T11:21:00.000","Title":"Visual FoxPro in 2018","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In our company some database administrators are still querying MS SQL Server DB with Visual Foxpro.\nEventhough I'm not a DB administrator, I reckon it's time to migrate those queries to a more recent DB management system.\nQuestion 1:\nWhat are good substitutions\/alternatives for Visual Foxpro?\nWould Python be able to carry out all the task Visual Foxpro can?\nQuestion 2:\nThe administators still defend the use of this language, eventhough support stopped in 2007. Is it still justifiable to keep using Visual Foxpro in 2018?\nThank you for your help!","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":2486,"Q_Id":51244350,"Users Score":1,"Answer":"I will say that, for our purposes VFP is the quickest way we have right now to convert data. That said, we run it a lot in server 2012 and it's really buggy. Requiring a lot of time clearing problems and restarting programs. It's days are numbered for us, I would continue to use it if we could get it updated a bit so it works on better on modern operating systems.","Q_Score":0,"Tags":"python,database,foxpro","A_Id":51245400,"CreationDate":"2018-07-09T11:21:00.000","Title":"Visual FoxPro in 2018","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to work with sql server in orange through anaconda.\nWhen installing pymssql I am getting the attached error.\nAfter hours of googling I could not find a solution for anaconda.\nPlease help!\nThank you\nMichael\n_mssql.c(266): fatal error C1083: Cannot open include file: 'sqlfront.h': No such file or directory\n error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.14.26428\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1519,"Q_Id":51251584,"Users Score":1,"Answer":"I faced a similar issue before.\nI would recommend downloading v2 of pymssql: \n pip install pymssql==2.1.3","Q_Score":2,"Tags":"python,sql-server,anaconda,pymssql","A_Id":51251644,"CreationDate":"2018-07-09T18:06:00.000","Title":"pip install pymssql fails in anaconda windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can i copy one column from one collection to another collection in mongoDB using Python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":51258258,"Users Score":0,"Answer":"document:{\n _id: Object_ID\n username: \"john\"\n email: \"john@email.com\"\n age: 24\n}\nNow, you want to 
copy username column from this document of some collection1 to a document of collection2, for that,\nlet say you have build connection with MongoDB, and we use db object to further data manipulation.\n\nwe need to get that intended column data with column name\ndata = db.collection1.find_one(\n {\"email\": \"john@email.com\"},\n {\"username\": 1}\n)\n\/\/ now data is a python dictionary contains value of username on key username\nnow we are going to create a new document in another collection(or we can update that data based on some condition if you wish)\ndb.users.collection2.insert_one(data) \/\/hence we got our new document with that data in collection2\n\nNow here I use find_one to exactly collect a single data, but if you need to collect a list of data then, just iterate through the list while inserting those data into a collection.","Q_Score":0,"Tags":"python,mongodb","A_Id":51259901,"CreationDate":"2018-07-10T06:10:00.000","Title":"Adding column in different collection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a needs to do calculation like average of selected data grouped by time rage collections.\nExample:\nTable which is storing data has several main columns which are:\n | time_stamp | external_id | value |\nNow i want to calculate average for 20 (or more) groups of date ranges:\n1) 2000-01-01 00-00-00 -> 2000-01-04 00-00-00\n2) 2000-01-04 00-00-00 -> 2000-01-15 00-00-00\n...\nThe important thing is that there are no gaps and intersections between groups so it means that first date and last date are covering full time range.\nThe other important thing is that in set of \"date_from\" to \"date_to\" there can be rows for outside of the collection (unneeded external_id's).\nI have tried 2 approaches:\n1) Execute query for each \"time range\" step with average function in SQL query (but i don't like that - it's consuming too much time for all queries, plus executing multiple queries sounds like not good approach)\n2) I have selected all required rows (at one SQL request) and then i made loop over the results. The problem is that i have to check on each step to which \"data group\" current datetime belongs. This seams like a better approach (from SQL perspective) but right now i have not too good performance because of loop in the loop. I need to figure out how to avoid executing loop (checking to which group current timestamp belongs) in the main loop.\nAny suggestions would be much helpful.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":57,"Q_Id":51333360,"Users Score":1,"Answer":"Actually both approaches are nice, and both could benefit on the index on the time_stamp column in your database, if you have it. I will try to provide advice on them:\n\nMultiple queries are not such a bad idea, your data looks to be pretty static, and you can run 20 select avg(value) from data where time_stamp between date_from and date_to-like queries in 20 different connections to speed up the total operation. You'll eliminate need of transferring a lot of data to your client from DB as well. The downside would be that you need to include an additional where condition to exclude rows with unneeded external_id values. 
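A sketch of the second approach discussed in the averaging answer above: fetch the rows ordered by time_stamp and advance through the contiguous date ranges with a forward-only pointer, so the per-row work is a comparison rather than a scan over all ranges. The connection string, external_id list, and the ranges themselves are hypothetical; table and column names follow the question.

```python
from datetime import datetime
import psycopg2

ranges = [
    (datetime(2000, 1, 1), datetime(2000, 1, 4)),
    (datetime(2000, 1, 4), datetime(2000, 1, 15)),
    # ... roughly 20 contiguous ranges in total
]
sums = [0.0] * len(ranges)
counts = [0] * len(ranges)

conn = psycopg2.connect("dbname=mydb user=me")        # hypothetical DSN
cur = conn.cursor()
cur.execute(
    "SELECT time_stamp, value FROM data"
    " WHERE external_id = ANY(%s) ORDER BY time_stamp",
    ([101, 102, 103],),                               # hypothetical wanted ids
)

i = 0
for ts, value in cur:
    # Ranges are contiguous and sorted, so the pointer only ever moves forward.
    while i < len(ranges) - 1 and ts >= ranges[i][1]:
        i += 1
    sums[i] += value
    counts[i] += 1

averages = [s / c if c else None for s, c in zip(sums, counts)]
```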
This complicates the query and can slow the processing down a little if there are a lot of these values.\nHere you could sort the data on server by time_stamp index before sending and then just checking if your current item is from a new data range (because of sorting you will be sure later items will be from later dates). This would reduce the inner loop to an if statement. I am unsure this is the bottleneck here, though. Maybe you'd like to look into streaming the results instead of waiting them all to be fetched.","Q_Score":1,"Tags":"python,postgresql","A_Id":51333655,"CreationDate":"2018-07-13T22:04:00.000","Title":"Calculating average of multiple sets of data (performance issue)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using tesseract-ocr and get the output in hOCR format. I need to store this hOCR output into the database (PostgreSQL in my case).\nSince I may need every piece of information (80% of it) from this hOCR individually, which would be the right approach? Should it be stored as XML datatype or parsed to JSON and stored? And in case of JSON, how to parse this hOCR to JSON with Python. Other related suggestions are also appreciated.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":3041,"Q_Id":51421283,"Users Score":3,"Answer":"hOCR appears to be a dialect of XML, so you should be able to use the xml.etree module from the stdlib to parse the hOCR code into a Python-navigable tree. Then navigate that tree to compose an object or nested dict, and then finally using the stdlib's json module to convert that dict to JSON.","Q_Score":1,"Tags":"python,postgresql,parsing,python-tesseract,hocr","A_Id":51426758,"CreationDate":"2018-07-19T11:16:00.000","Title":"Parsing hOCR to JSON with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have macros set up to automate all my reports. However, some of my macros can take up to 5-10 minutes due to the size of my data. \nI have been moving away from Excel\/VBA to Python\/pandas for data analysis and manipulation. I still use excel for data visualization (i.e., pivot tables). \nI would like to know how other people use python to automate their reports? What do you guys do? Any tips on how I can start the process? \nMajority of my macros do the following actions - \n\nImport text file(s)\nPaste the raw data into a table that's linked to pivot tables \/ charts.\nRefresh workbook \nSave as new","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":606,"Q_Id":51503471,"Users Score":0,"Answer":"When using python to automate reports I fully converted the report from Excel to Pandas. I use pd.read_csv or pd.read_excel to read in the data, and export the fully formatted pivot tables into excel for viewing. 
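A rough sketch of the route the hOCR answer above suggests: parse the hOCR (an XML/XHTML dialect) with xml.etree, collect what you need into plain dicts, then serialize with the json module. The file name is hypothetical and only word spans are extracted here.

```python
import json
import xml.etree.ElementTree as ET

tree = ET.parse("page.hocr")   # hypothetical hOCR file produced by tesseract
words = []
for elem in tree.iter():
    if elem.get("class") == "ocrx_word":
        words.append({
            "text": "".join(elem.itertext()).strip(),
            "title": elem.get("title"),  # hOCR packs bbox/confidence into the title attribute
        })

print(json.dumps(words, indent=2))
```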
doing the 'paste into a table and refresh' is not handled well by python in my experience, and will likely still need macros to handle properly ie, export a csv with the formatted data from python then run a short macro to copy and paste.\nif you have any more specific questions please ask, i have done a decent bit of this","Q_Score":0,"Tags":"python,excel,vba,pandas,reporting","A_Id":51503697,"CreationDate":"2018-07-24T16:25:00.000","Title":"Python - pandas \/ openpyxl: Tips on Automating Reports (Moving Away from VBA).","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a server with multiple gunicorn workers and want to let them all have access to a specific variable. I'm using Redis to do this(it's in RAM, so it's fast, right?) but every GET or SET request adds another client. I'm performing maybe ~150 requests per second, so it quickly reaches the 25 connection limit that Heroku has. To access the database, I'm using db = redis.from_url(os.environ.get(\"REDIS_URL\")) and then db.set() and db.get(). Is there a way to lower that number? For instance, by using the same connection over and over again for each worker? But how would I do that? The 3 gunicorn workers I have are performing around 50 queries each per second.\nIf using redis is a bad idea(which it probably is), it would be great if you could suggest alternatives, but also please include a way to fix my current problem as most of my code is based off of it and I don't have enough time to rewrite the whole thing yet.\nNote: The three pieces of code are the only times redis and db are called. I didn't do any configuration or anything. Maybe that info will help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":51519333,"Users Score":0,"Answer":"Most likely, your script creates a new connection for each request.\nBut each worker should create it once and use forever.\nWhich framework are you using?\nIt should have some documentation about how to configure Redis for your webapp.\nP.S. Redis is a good choice to handle that :)","Q_Score":1,"Tags":"python,heroku,redis,gunicorn","A_Id":51519644,"CreationDate":"2018-07-25T12:50:00.000","Title":"Python Redis on Heroku reached max clients","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My Python script updates Oracle passwords regularly using the command\n alter user my_user identified by \"new_password\" replace \"old_password\"\nNow I need to update these passwords in the SQL Developer connection definitions. I have looked all over my Windows 7 machine but nowhere can I find Connections.xml, nor IDEConnections.xml. And if so, the passwords would be encrypted.\nCan anybody automate password updates for SQL Developer?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":449,"Q_Id":51524365,"Users Score":0,"Answer":"Robertus post pointed me in the right direction as to the location of the relevant configuration files; however, the password encryption is not solved yet. 
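To make the Redis answer above concrete: create the client once per worker process (at module import), so redis-py's connection pool is reused across requests instead of a new connection being opened for every GET/SET. This is a sketch; the handler function is hypothetical.

```python
import os
import redis

# Built once when the gunicorn worker imports the module; redis.from_url returns
# a client backed by a connection pool that is reused for every command.
db = redis.from_url(os.environ.get("REDIS_URL"))

def handle_request(key, value):        # hypothetical request handler
    db.set(key, value)
    return db.get(key)
```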
Re-opening a new question.","Q_Score":0,"Tags":"python,oracle,passwords,oracle-sqldeveloper","A_Id":51544618,"CreationDate":"2018-07-25T17:11:00.000","Title":"Can Python update Passwords for Oracle SQL Developer Connections","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an Excel workbook with close to 90 columns. I would like to create column alias for each column in the workbook, so that it will be easier for me to use the respective columns in formulas.\nNormally, I would select each column in the workbook and type in my alias for the column into the Cell Reference Bar at the top\nIs there a way to do this automatically, because i have a lot of columns? Especially in Python ?\nI tried the pandas.Series.to_excel function which has the header attribute. However all it does is change the column names to the string specified and does not modify the alias for all the cells in the column.\nThanks a lot for your help","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":632,"Q_Id":51554503,"Users Score":0,"Answer":"In Excel, is your data in a table, or named range? \nI'm kind of assuming a table (which could make this a snap) because, as a column heading in a (named) range, the 'header' (or alias, if I understand) isn't \"connected\" to the underlying data, as it would be in a table...\nCan you provide an example of how you would (or expect to) use the 'column alias' in a formula?","Q_Score":0,"Tags":"python,excel","A_Id":51567400,"CreationDate":"2018-07-27T09:05:00.000","Title":"How to create common alias for all cells in a column in Excel automatically through scripting","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an Excel workbook with close to 90 columns. I would like to create column alias for each column in the workbook, so that it will be easier for me to use the respective columns in formulas.\nNormally, I would select each column in the workbook and type in my alias for the column into the Cell Reference Bar at the top\nIs there a way to do this automatically, because i have a lot of columns? Especially in Python ?\nI tried the pandas.Series.to_excel function which has the header attribute. However all it does is change the column names to the string specified and does not modify the alias for all the cells in the column.\nThanks a lot for your help","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":632,"Q_Id":51554503,"Users Score":0,"Answer":"If I understand you correctly...\nTo name each of the columns something slightly different you can use a for-loop which contains an incrementing number that's added to the column name.\nThere are loads of examples of this available online, here's a really rough illustrative example:\nnum = 0\nfor header in column:\n num +=1\n header = header+str(num)\nI don't think you need to program this for just 90 columns in one book though tbh. 
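A runnable rendering of the rough loop in the answer above, using openpyxl to write numbered labels across the first row (note this sets header cell values, not Excel defined names); the file name and label pattern are hypothetical.

```python
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")        # hypothetical workbook
ws = wb.active
for col in range(1, 91):                 # 90 columns
    ws.cell(row=1, column=col, value="col{}".format(col))
wb.save("report.xlsx")
```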
\nYou could name the first 3 columns, select the three named cells, and then drag right when you see the + symbol in the bottom right corner of the most rightern cell of the three selected.\nDragging across 90 cells should only take one second.\nOnce you've named the 90 columns, you can always select row#1 and do some ctrl+h on it to change the header names later.","Q_Score":0,"Tags":"python,excel","A_Id":51556893,"CreationDate":"2018-07-27T09:05:00.000","Title":"How to create common alias for all cells in a column in Excel automatically through scripting","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hellow. It seems to me that I just don't understand something quite obvios in databases. \nSo, we have an author that write books and have books themselves. One author can write many books as well as one book could be written by many authors. \nThus, we have two tables 'Books' and 'Authors'.\nIn 'Authors' I have an 'ID'(Primary key) and 'Name', for example:\n1 - L.Carrol\n2 - D.Brown \nIn 'Books' - 'ID' (pr.key), 'Name' and 'Authors' (and this column is foreign key to the 'Authors' table ID)\n1 - Some_name - 2 (L.Carol)\n2 - Another_name - 2,1 (D.Brown, L.Carol)\nAnd here is my stumbling block, cause i don't understand how to provide the possibility to choose several values from 'Authors' table to one column in 'Books' table.But this must be so simple, isn't it?\nI've red about many-to-many relationship, saw many examples with added extra table to implement that, but still don't understand how to store multiple values from one table in the other's table column. Please, explain the logic, how should I do something like that ? I use SQLiteStudio but clear sql is appropriate too. Help ^(","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":343,"Q_Id":51572979,"Users Score":3,"Answer":"You should have third intermediate table which will have following columns:\n\nid (primary)\nauthor id (from Authors table)\nbook id (from Books table)\n\nThis way you will be able to create a record which will map 1 author to 1 book. So you can have following records:\n\n1 ... Author1ID ... Book1ID\n2 ... Author1ID ... Book2ID\n3 ... Author2ID ... Book2ID\n\nAuthorXID and BookXID - foreign keys from corresponding tables.\nSo Book2 has 2 authors, Author1 has 2 books.\nAlso separate tables for Books and Authors don't need to contain any info about anything except itself.\nAuthors .. 1---Many .. BOOKSFORAUTHORS .. Many---1 .. 
Books","Q_Score":0,"Tags":"python,sql,database,many-to-many,sqlitestudio","A_Id":51573120,"CreationDate":"2018-07-28T15:56:00.000","Title":"Many to many relationship SQLite (studio or sql)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I don't know what title should be, I just got stuck and need to ask.\nI have a model called shift\nand imagine the db_table like this:\n\n#table shift\n+---------------+---------------+---------------+---------------+------------+------------+\n| start | end | off_start | off_end | time | user_id |\n+---------------+---------------+---------------+---------------+------------+------------+\n| 2018-01-01 | 2018-01-05 | 2018-01-06 | 2018-01-07 | 07:00 | 1 |\n| 2018-01-08 | 2018-01-14 | 2018-01-15 | Null | 12:00 | 1 |\n| 2018-01-16 | 2018-01-20 | 2018-01-21 | 2018-01-22 | 18:00 | 1 |\n| 2018-01-23 | 2018-01-27 | 2018-01-28 | 2018-01-31 | 24:00 | 1 |\n| .... | .... | .... | .... | .... | .... |\n+---------------+---------------+---------------+---------------+------------+------------+\n\nif I use queryset with filter like start=2018-01-01 result will 07:00\nbut how to get result 12:00 if I Input 2018-01-10 ?...\nthank you!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":113,"Q_Id":51579732,"Users Score":1,"Answer":"Question isnt too clear, but maybe you're after something like \nstart__lte=2018-01-10, end__gte=2018-01-10?","Q_Score":0,"Tags":"python,django","A_Id":51579769,"CreationDate":"2018-07-29T11:14:00.000","Title":"Django Queryset find data between date","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I create many records in my DB as follows:\nSubproducts.create(mf_id=mf_id,\n co=co,\n mf_binary=mf_binary_data.getbuffer())\nmf_binary type is io.BytesIO() - it is binary representation of binary files collected into zipfile\nI've successfully created many records using this approach, however I have an issue with one particular dataset. \nIt is a bigger than other and it takes ~1,2GB. \nWhen I try to save it in DB following error occurs.\npeewee.InterfaceError: Error binding parameter 2 - probably unsupported type.\nField of mf_binary in my model is peewee.BlobField(default=b'0')\nHow can I store this kind of data in peewee Database?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":597,"Q_Id":51600405,"Users Score":0,"Answer":"Calling getbuffer() returns a memoryview object which is apparently not supported by the underlying database driver (which one is it, by the way?). 
The InterfaceError is raised by your database driver as opposed to Peewee, which indicates the problem comes from your driver not understanding how to handle memoryview objects.\nYour best bet is to use mf_binary_data.getvalue(), which should return a bytes object.","Q_Score":1,"Tags":"python,database,orm,zip,peewee","A_Id":51601055,"CreationDate":"2018-07-30T18:47:00.000","Title":"peewee.InterfaceError: Error binding parameter while saving big amout of data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"XLSXWRITER works on my Mac computer when I run my kivy app and successfully creates an XLSX file.\nUnfortunately once I compile the apk using buildozer the \"Export\" button I made doesn't create the XLSX. \nNo crash occurs, I just can't find the XLSX that should have been created. My theory is that the spreadsheet is created but 'lives' within the APK Package.\nPlease help me to create XLSX using Kivy on android!\n* Python 2.7.15\n* Kivy 1.9.1\n* Android Phone","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":51603822,"Users Score":0,"Answer":"With an aid from @\"John Anderson\", the solution lies in the AndroidManifest.xml file, where one can simply \"comment out\" using the html comments \"\". At the bottom one can find the list of permissions that the app may or may not require, and can thereby comment out unnecessary permissions like \"WRITE_EXTERNAL_STORAGE\".","Q_Score":0,"Tags":"python,apk,kivy,xlsxwriter","A_Id":59485426,"CreationDate":"2018-07-31T00:12:00.000","Title":"Why doesn't XLSXWRITER create a file once my Kivy app is compliled into an APK?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an s3 bucket which has a large no of zip files having size in GBs. I need to calculate all zip files data length. I go through boto3 but didn't get it.\nI am not sure if it can directly read zip file or not but I have a process-\n\nConnect with the bucket.\nRead zip files from the bucket folder (Let's say folder is Mydata).\nExtract zip files to another folder named Extracteddata.\nRead Extracteddata folder and do action on files.\n\nNote: Nothing shouldn't download on local storage. All process goes on S3 to S3.\nAny suggestions are appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8072,"Q_Id":51604689,"Users Score":0,"Answer":"This is not possible.\nYou can upload files to Amazon S3 and you can download files. You can query the list of objects and obtain metadata about the objects. However, Amazon S3 does not provide compute, such as zip compression\/decompression.\nYou would need to write a program that:\n\nDownloads the zip file\nExtracts the files\nDoes actions on the files\n\nThis is probably best done on an Amazon EC2 instance, which would have low-latency access to Amazon S3. 
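A minimal sketch of the getvalue() fix suggested in the peewee BlobField answer above; the model, field names, and the in-memory SQLite database are assumptions made only for illustration.

import io
import peewee

db = peewee.SqliteDatabase(":memory:")  # stand-in for the real database

class Subproducts(peewee.Model):
    mf_id = peewee.IntegerField()
    co = peewee.CharField()
    mf_binary = peewee.BlobField(default=b"0")

    class Meta:
        database = db

db.create_tables([Subproducts])

mf_binary_data = io.BytesIO(b"...zip archive bytes...")  # placeholder content
# getvalue() returns a bytes object the driver can bind;
# getbuffer() returns a memoryview, which some drivers refuse.
Subproducts.create(mf_id=1, co="ACME", mf_binary=mf_binary_data.getvalue())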
You could do it with an AWS Lambda function, but it has a limit of 500MB disk storage and 5 minutes of execution, which doesn't seem applicable to your situation.\nIf you are particularly clever, you might be able to download part of each zip file ('ranged get') and interpret the zipfile header to obtain a listing of the files and their sizes, thus avoiding having to download the whole file.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,boto3","A_Id":51604927,"CreationDate":"2018-07-31T02:36:00.000","Title":"Read zip files from amazon s3 using boto3 and python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm doing some work on the data in an excel sheet using python pandas. When I write and save the data it seems that pandas only saves and cares about the raw data on the import. Meaning a lot of stuff I really want to keep such as cell colouring, font size, borders, etc get lost. Does anyone know of a way to make pandas save such things?\nFrom what I've read so far it doesn't appear to be possible. The best solution I've found so far is to use the xlsxwriter to format the file in my code before exporting. This seems like a very tedious task that will involve a lot of testing to figure out how to achieve the various formats and aesthetic changes I need. I haven't found anything but would said writer happen to in any way be able to save the sheet format upon import? \nAlternatively, what would you suggest I do to solve the problem that I have described?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":468,"Q_Id":51626502,"Users Score":0,"Answer":"Separate data from formatting. Have a sheet that contains only the data \u2013 that's the one you will be reading\/writing to \u2013 and another that has formatting and reads the data from the first sheet.","Q_Score":1,"Tags":"python,excel,pandas,xlsxwriter","A_Id":51628570,"CreationDate":"2018-08-01T06:16:00.000","Title":"Any way to save format when importing an excel file in Python?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the pandas documentation, it says that the optional dependencies for Excel I\/O are:\n\nxlrd\/xlwt: Excel reading (xlrd) and writing (xlwt)\nopenpyxl: openpyxl > version 2.4.0 for writing .xlsx files (xlrd >= 0.9.0)\nXlsxWriter: Alternative Excel writer\n\nI can't install any external modules. Is there any way to create an .xlsx file with just a pandas installation?\nEdit: My question is - is there any built-in pandas functionality to create Excel workbooks, or is one of these optional dependencies required to create any Excel workbook at all?\nI thought that openpyxl was part of a pandas install, but turns out I had XlsxWriter installed.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":831,"Q_Id":51635233,"Users Score":3,"Answer":"The pandas codebase does not duplicate Excel reading or writing functionality provided by the external libraries you listed. 
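A minimal sketch of the download-and-inspect workflow from the S3/zip answer above, meant to run on an EC2 instance (or similar) rather than inside S3 itself; the bucket and key names are placeholders. It pulls the object into memory and lists member sizes with the standard zipfile module.

import io
import zipfile
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="Mydata/archive.zip")  # placeholder names
buf = io.BytesIO(obj["Body"].read())  # held in memory, nothing written to local disk

with zipfile.ZipFile(buf) as zf:
    total_uncompressed = sum(info.file_size for info in zf.infolist())
    for info in zf.infolist():
        print(info.filename, info.file_size)
print("total uncompressed bytes:", total_uncompressed)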
\nUnlike the csv format, which Python itself provides native support for, if you don't have any of those libraries installed, you cannot read or write Excel spreadsheets.","Q_Score":1,"Tags":"python,pandas,pandas.excelwriter","A_Id":51635610,"CreationDate":"2018-08-01T13:58:00.000","Title":"Can I create Excel workbooks with only Pandas (Python)?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Python and I apologize in advance for asking for help on this trivial matter. I installed Python 2.14.7 on Cygwin running on Win 10, and would like to install MySQL (for python) and the MySQLdb library to play around. I searched the net for the exact installation steps but did not find a conclusive answer. Can anyone please point me to any resources (download links & installation Steps) that might help me in this endeavor ?\nThanks in advance..","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":415,"Q_Id":51664719,"Users Score":0,"Answer":"So, MySql is a database. You don't install a database for Python. It's a standalone software. You can install any version of it that compatible with cygwin. What you do need is a Python database connector, which can be downloaded at Oracle.com. (sorry I am answering with my phone maybe you need to Google it yourself.) \nYou should Google it and register an account on Oracle.com. Then it's good to download the lib.","Q_Score":0,"Tags":"python,cygwin,mysql-python","A_Id":51664790,"CreationDate":"2018-08-03T03:00:00.000","Title":"Installing MySQL for Python on Cygwin","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using apache web hosting I launched my first web app today and I noticed that if I put examply.com\/example.py in my web browser I can see my python source code with my sql password in it. \nHow do I prevent this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":182,"Q_Id":51690304,"Users Score":0,"Answer":"Chown your python file with 640 will give a FORBIDDEN in the browser but the server will be able to execute it in PHP for example:\n$output = shell_exec(\"python python-test.py\");\necho ($output);","Q_Score":0,"Tags":"python,apache","A_Id":61774431,"CreationDate":"2018-08-04T23:29:00.000","Title":"How to hide serverside python file from web browser?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I run a flask server with sqlalchemy connecting to an sqlite database.\nTo update my database tables I exported my data to a file, updated the necessary tables and then imported the data again. This data in the database is all correct and in the right place.\nThe problem is in the passwords, they are stored as a blob (largebinary in sqlalchemy).\nWhen creating a new account with a password, the account works immediately without any problems.\nHowever all the old passwords that I have imported from the old database do not work anymore and throw an error. 
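A short illustration of the point made in the pandas answer above: CSV writing works with a bare pandas install because the csv module ships with Python, while Excel writing needs one of the optional engines. The file names are arbitrary.

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Works with pandas alone: csv support comes from the standard library.
df.to_csv("data.csv", index=False)

# Needs an optional dependency (openpyxl or XlsxWriter for .xlsx);
# without one installed, pandas raises an ImportError about the missing engine.
try:
    df.to_excel("data.xlsx", index=False)
except ImportError as exc:
    print("No Excel engine available:", exc)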
The error that gets thrown is the following: \nTypeError: string argument without an encoding\nThe type of the new passwords in the database is what it should be:\n\nWith the migrated passwords, I can not even check what type it is, as it throws the TypeError during retrieval of this field.\nThe data is stored as a blob: password BLOB NOT NULL\nA not working password: $2b$12$CC6OVZTOy3Bc9bsxAeALpuJPc.iZmVwXFB\/Cj6.xRlgF2dRdTh11y\nA working password:\n$2b$12$NL8reAO7rx1NC5DwgeWVt.ojV0I6czlOKcXAOF87L5NoVsdmOulle","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":403,"Q_Id":51709917,"Users Score":0,"Answer":"As it turns out the tool I was using did not correctly export the data. The binary data then got lost as sqlite does not have a hard type enforcing.\nI fixed this by exporting the data again and manually converting all the strings to their character representation and importing it again.","Q_Score":0,"Tags":"python,sqlite,sqlalchemy","A_Id":51722623,"CreationDate":"2018-08-06T14:27:00.000","Title":"Sqlalchemy largebinary does not have an encoding after migrating data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a project that is using Psycopg2 to connect to a Postgre database in python, and in the project there is a place where all the modifications to the database are being committed after performing a certain amount of insertions.\nSo i have a basic question:\nIs there any benefit to committing in smaller chunks, or is it the same as waiting to commit until the end?\nfor example, say im going to insert 7,000 rows of data, should i commit after inserting 1,000 or just wait until all the data is added?\nIf there is problems with large commits what are they? could i possibly crash the database or something? or cause some sort of overflow?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":261,"Q_Id":51716121,"Users Score":1,"Answer":"Unlike some other database systems, there is no problem with modifying arbitrarily many rows in a single transaction.\nUsually it is better to do everything in a single transaction, so that the whole thing succeeds or fails, but there are two considerations:\n\nIf the transaction takes a lot of time, it will keep VACUUM from doing its job on the rest of the database for the duration of the transaction. 
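A hedged sketch of the kind of conversion the accepted answer above describes: making sure the value bound to a binary column is bytes rather than str before re-importing, since SQLite will not enforce the column type for you. The table layout and the truncated hash are placeholders.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, password BLOB NOT NULL)")

exported_password = "$2b$12$CC6OVZTOy3Bc9bsxAeALpu..."  # came back from the export as a str

# SQLite has no hard type enforcement, so a str would silently be stored as TEXT;
# encode it back to bytes so the column really holds binary data again.
conn.execute("INSERT INTO users (password) VALUES (?)",
             (exported_password.encode("utf-8"),))

row = conn.execute("SELECT password FROM users").fetchone()
print(type(row[0]))  # <class 'bytes'>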
That may cause table bloat if there is a lot of concurrent write activity.\nIt may be useful to do the operation in batches if you expect many failures and don't want to restart from the beginning every time.","Q_Score":2,"Tags":"python,postgresql,psycopg2","A_Id":51720738,"CreationDate":"2018-08-06T21:51:00.000","Title":"pyscopg2, should I commit in chunks?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The max_row function returns a value higher than it should be (the largest row that has a value in it is row 7, but max_row returns 10), and if I try iterating through a column to find the first row that has nothing in it I get the same value as max_row.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1845,"Q_Id":51747535,"Users Score":0,"Answer":"This would be easier to understand if you work with excel on java.\nExcel cell have properties which define them as active or inactive. If you enter a value to a cell then delete the value, the cell still remains active.\nmax_row returns the row number of the last active cell, hence you get 10 rather than 7 even if the sheet now have data only till row 7 it may once have data till 10.\nManually you can clear the cell (Editing->Clear->Clear All) for the cell in excel making it inactive again. Not sure how to do the same via code in python.","Q_Score":0,"Tags":"python,excel,openpyxl","A_Id":56765286,"CreationDate":"2018-08-08T13:05:00.000","Title":"How do I find the max row in Openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I dont have much knowledge in dbs, but wanted to know if there is any technique by which when i update or insert a specific entry in a table, it should notify my python application to which i can then listen whats updated and then update that particular row, in the data stored in session or some temporary storage.\nI need to send data filter and sort calls again n again, so i dont want to fetch whole data from sql, so i decided to keep it local, nd process it from there. 
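A minimal sketch of the batching option mentioned in the psycopg2 answer above, committing every N rows instead of holding one transaction open for all 7,000; the DSN, table, and batch size are placeholder assumptions.

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder DSN
rows = [(i, "payload-%d" % i) for i in range(7000)]

BATCH = 1000
with conn.cursor() as cur:
    for start in range(0, len(rows), BATCH):
        batch = rows[start:start + BATCH]
        cur.executemany("INSERT INTO items (id, payload) VALUES (%s, %s)", batch)
        conn.commit()  # a failure later only forces a restart from the current batch
conn.close()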
But i was worried if in the mean time the db updates, and i could have been passing the same old data to filter requests.\nAny suggestions?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":51759304,"Users Score":1,"Answer":"rdbs only will be updated by your program's method or function sort of things.\nyou can just print console or log inside of yours.\nif you want to track what updated modified deleted things, \nyou have to build a another program to able to track the logs for rdbs\n\nthanks.","Q_Score":2,"Tags":"python,sql,sql-server,database,connection","A_Id":51759339,"CreationDate":"2018-08-09T05:17:00.000","Title":"Is there any way mssql can notify my python application when any table or row has been updated?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to run a python flask app that pulls from a sql database, while also running a python script that updates the sql database every few seconds?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":51767855,"Users Score":1,"Answer":"Yes, databases are designed to handle this type of concurrent access. If the database is in the middle of an update, it will wait until the update is complete before handling the Flask app's query, and it will complete the query before starting the next incoming update.","Q_Score":0,"Tags":"python,sql,multithreading,sqlalchemy,flask-sqlalchemy","A_Id":51767955,"CreationDate":"2018-08-09T13:04:00.000","Title":"Pulling from database and updating database at the same time","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to store a picture in sqlite3 table. I'm using python and sqlite3.\nPlease let me know if you have a sample code or how to save a picture to a sqlite3 table.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2248,"Q_Id":51779568,"Users Score":1,"Answer":"Using blob type for image data is good.The data stored\nusing the sqlite.Binary type.","Q_Score":0,"Tags":"python,database,python-3.x,sqlite,sql-insert","A_Id":51782100,"CreationDate":"2018-08-10T05:31:00.000","Title":"How do I store a picture in a table with python-sqlite3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently using the following statement:\nREPLACE INTO TableName (Col1, Col2, Col3, Col4) SELECT * FROM AnotherTable WHERE Col2 = 'Something' AND Col3 <> 'word' \nThe issue with this is that TableName contains additional columns (i.e. Col0, Col5, Col6) that are not present in AnotherTable and whose data I do not want wiped.\nI noticed that there is a statement called INSERT INTO ... ON DUPLICATE KEY UPDATE but I am unsure how to merge this with the SELECT * FROM statement that I am using to copy specific data from another table. 
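A minimal sketch of the BLOB approach from the sqlite3 picture answer above, wrapping the raw file bytes with sqlite3.Binary; the file name and table are illustrative.

import sqlite3

conn = sqlite3.connect("images.db")
conn.execute("CREATE TABLE IF NOT EXISTS pictures (id INTEGER PRIMARY KEY, img BLOB)")

with open("photo.png", "rb") as f:  # placeholder file name
    data = f.read()

# sqlite3.Binary marks the payload as binary data for the BLOB column
conn.execute("INSERT INTO pictures (img) VALUES (?)", (sqlite3.Binary(data),))
conn.commit()
conn.close()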
How is this done, if possible?\nSupplementary Python2.7 code is possible if necessary.\nServer version: 10.1.23-MariaDB-9+deb9u1 Raspbian 9.0","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":810,"Q_Id":51846102,"Users Score":2,"Answer":"INSERT INTO TableName (Col1, Col2, Col3, Col4) SELECT * FROM AnotherTable WHERE Col2 = 'Something' AND Col3 <> 'word' ON DUPLICATE KEY UPDATE Col1=AnotherTable.Col1, Col2=AnotherTable.Col2, Col3=AnotherTable.Col3, Col4=AnotherTable.Col4\nNotice the ability to use fully qualified references via the format Table.Column as an alternative to a static\/specified value.\nAs danblack reminds, the REPLACE functionality is taken over by the ON DUPLICATE KEY UPDATE that is introduced, which is dependant on UNIQUE indexes or PRIMARY KEYs.","Q_Score":0,"Tags":"python,mysql,python-2.7,mariadb","A_Id":51848118,"CreationDate":"2018-08-14T16:38:00.000","Title":"MySQL 'replace into select from' Alternative","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a single gunicorn worker process running to read an enormous excel file which takes up to 5 minutes and uses 4GB of RAM. But after the request was finished processing I noticed at system monitor that it stills allocating 4GB of RAM forever. Any ideas on what to do to release the memory?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":3631,"Q_Id":51866904,"Users Score":-1,"Answer":"--max-requests 1 worked for me for a similar case.","Q_Score":6,"Tags":"python,gunicorn","A_Id":63113091,"CreationDate":"2018-08-15T21:47:00.000","Title":"Gunicorn worker doesn't deflate memory after request","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I read the data from oracle database to panda dataframe, then, there are some columns with type 'object', then I write the dataframe to hive table, these 'object' types are converted to 'binary' type, does any one know how to solve the problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":51869125,"Users Score":0,"Answer":"When you read data from oracle to dataframe it's created columns with object datatypes.\nYou can ask pandas dataframe try to infer better datatypes (before saving to Hive) if it can:\ndataframe.infer_objects()","Q_Score":0,"Tags":"python-2.7,hive","A_Id":51877123,"CreationDate":"2018-08-16T03:16:00.000","Title":"Why there is binary type after writing to hive table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the below error when trying to pip install mysql:\n\nbuilding '_mysql' extension\n creating build\/temp.macosx-10.6-intel-2.7\n \/usr\/bin\/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I\/usr\/local\/Cellar\/mysql\/8.0.12\/include\/mysql -I\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/include\/python2.7 -c _mysql.c -o build\/temp.macosx-10.6-intel-2.7\/_mysql.o\n _mysql.c:44:10: 
fatal error: 'my_config.h' file not found\n #include \"my_config.h\"\n ^~~~~~~~~~~~~\n 1 error generated.\n error: command '\/usr\/bin\/clang' failed with exit status 1\n\nAny ideas?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":378,"Q_Id":52002400,"Users Score":2,"Answer":"you don't have my_config file. It is because you have not installed mysql on your computer. Try to install after setup mysql","Q_Score":1,"Tags":"python,mysql,macos,pip","A_Id":52054252,"CreationDate":"2018-08-24T10:21:00.000","Title":"pip install mysql error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a dynamoDB table where I store sensor events.\n\nHASH KEY: sensor id\nRANGE KEY: timestamp\nsensor info\n\nI now need a query for the latest event of every sensor.\nThe only solution I can come up with is to query the latest event for each sensor id. But that would be a lot of queries with 2000+ sensors. \nI don't want to scan the whole table to sort it out afterwards either since the table can grow quite fast.\nAny ideas?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4111,"Q_Id":52007936,"Users Score":2,"Answer":"You have to decide what is important to you and design your table(s) to match your use cases.\nYou say you want to query the last value for every sensor and that there are 2000+ sensors. What will you do with these 2000+ values? How often do you need these values and can the values be slightly out of date?\nOne solution would be to have two tables: one where you append historical values (time series data) and another table where you always update the most recent reading for each sensor. When you need the most recent sensor data, just scan this second table to get all your sensors\u2019 most recent values. It's as efficient as it gets for reads. For writes, it means you have to write twice for each sensor update.\nThe other potential solution would be to write your time series data partitioned by time, as opposed to the sensor ids. Assuming all sensors are updated at each time point, with a single query you can get the value of all sensors. This works but only if you update the vales of all sensors every time, and only if you do it with regular cadence.\nHowever, if you update all sensors at once, then further optimizations may be had by combining multiple sensor readings into a single item, therefore requiring less writes to update all 2000 of them.","Q_Score":3,"Tags":"python,aws-lambda,amazon-dynamodb,dynamodb-queries","A_Id":52014186,"CreationDate":"2018-08-24T15:52:00.000","Title":"query dynamoDB for the latest entry of every hash key","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to add alembic to an existing ,sqlalchemy using, project, with a working production db. I fail to find what's the standard way to do a \"zero\" migration == the migration setting up the db as it is now (For new developers setting up their environment) \nCurrently I've added import the declarative base class and all the models using it to the env.py , but first time alembic -c alembic.dev.ini revision --autogenerate does create the existing tables. 
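A hedged sketch of the two-table pattern recommended in the DynamoDB answer above: append each reading to a history table and overwrite a small "latest" table keyed only by sensor id. Table names and attribute names are assumptions.

import boto3

dynamodb = boto3.resource("dynamodb")
history = dynamodb.Table("SensorEvents")  # hash key: sensor_id, range key: timestamp
latest = dynamodb.Table("SensorLatest")   # hash key: sensor_id only

def record_event(sensor_id, timestamp, value):
    # value should be a str or decimal.Decimal; boto3 does not accept float
    item = {"sensor_id": sensor_id, "timestamp": timestamp, "value": value}
    history.put_item(Item=item)  # time-series append (one row per reading)
    latest.put_item(Item=item)   # overwrite the single most recent reading

# most recent value of every sensor: a scan of the small 'latest' table
recent = latest.scan()["Items"]

The trade-off is exactly the one the answer states: every sensor update costs two writes, but the "give me the latest reading of all 2000+ sensors" query becomes a cheap scan of a 2000-item table.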
\nAnd I need to \"fake\" the migration on existing installations - using code. For django ORM I know how to make this work, but I fail to find what's the right way to do this with sqlalchemy\/alembic","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":9086,"Q_Id":52121596,"Users Score":34,"Answer":"alembic revision --autogenerate inspects the state of the connected database and the state of the target metadata and then creates a migration that brings the database in line with metadata.\nIf you are introducing alembic\/sqlalchemy to an existing database, and you want a migration file that given an empty, fresh database would reproduce the current state- follow these steps.\n\nEnsure that your metadata is truly in line with your current database(i.e. ensure that running alembic revision --autogenerate creates a migration with zero operations).\n\nCreate a new temp_db that is empty and point your sqlalchemy.url in alembic.ini to this new temp_db.\n\nRun alembic revision --autogenerate. This will create your desired bulk migration that brings a fresh db in line with the current one.\n\nRemove temp_db and re-point sqlalchemy.url to your existing database.\n\nRun alembic stamp head. This tells sqlalchemy that the current migration represents the state of the database- so next time you run alembic upgrade head it will begin from this migration.","Q_Score":23,"Tags":"python,sqlalchemy,alembic","A_Id":56651578,"CreationDate":"2018-08-31T19:23:00.000","Title":"Creating \"zero state\" migration for existing db with sqlalchemy\/alembic and \"faking\" zero migration for that existing db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have multiple Excel spreedsheets in given folder and it's sub folder. All have same file name string with suffix as date and time. How to merge them all into one single file while making worksheet name and titles as index for appending data frames. Typically there would be small chunks of 200 KB each file of ~100 files in subfolders or 20 MB of ~10 files in subfolders","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2067,"Q_Id":52159865,"Users Score":0,"Answer":"I have tried operating using static file name definitions, would be good if it makes consolation by column header from dynamic file list pick, whichever starts with .xls* (xls \/ xlsx \/ xlsb \/ xlsm) and .csv and .txt\nimport pandas as pd\ndb = pd.read_excel(\"\/data\/Sites\/Cluster1 0815.xlsx\")\ndb1 = pd.read_excel(\"\/data\/Sites\/Cluster2 0815.xlsx\")\ndb2 = read_excel(\"\/data\/Sites\/Cluster3 0815.xlsx\")\nsdb = db.append(db1)\nsdb = sdb.append(db2)\nsdb.to_csv(\"\/data\/Sites\/sites db.csv\", index = False, na_rep = \"NA\", header=None)","Q_Score":0,"Tags":"python","A_Id":52160816,"CreationDate":"2018-09-04T06:14:00.000","Title":"How to merge multiple Excel files from a folder and it's sub folders using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to speed up the home page of a website that queries the database for a random URL to use as the background image. 
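The same zero-migration steps from the alembic answer above can also be driven from Python rather than the CLI; a hedged sketch, assuming the usual alembic.ini layout and placeholder database URLs.

from alembic import command
from alembic.config import Config

cfg = Config("alembic.ini")

# 1. point sqlalchemy.url at an empty temp database and autogenerate the bulk migration
cfg.set_main_option("sqlalchemy.url", "postgresql://localhost/temp_db")  # placeholder URL
command.revision(cfg, message="zero state", autogenerate=True)

# 2. point back at the real, existing database and mark that migration as already applied
cfg.set_main_option("sqlalchemy.url", "postgresql://localhost/real_db")  # placeholder URL
command.stamp(cfg, "head")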
One thing I have tried is to add a function to the Python code that caches the result of that database query for 60 minutes, and when I run the server locally I see that it seems to work correctly: reloading the page shows the same image as the previous time, instead of a new random image.\nHowever when I deployed this code to a Digital Ocean droplet running an Apache server, it didn't seem to work: reloading the page would show a different image. I suspect that what is happening is that different workers are handling my request each time, and each of these workers has its own cached result from the database.\nIs there any way to cache these database queries across workers or achieve some similar result? Note: the obvious solution of hard-coding the background image is not an option, as the person for whom I am working wants the background image to vary.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":52176392,"Users Score":0,"Answer":"Apache has shared memory between workers, but I'm not aware of anyway for python (say uwsgi) to access it. Same with nginx.\nAlternative would be to use an algorithm to determine what to display rather then being truly random. For example, all queries with hour == 1 -> picture_1, hour == 2 -> picture_2, etc.","Q_Score":0,"Tags":"python,apache,web.py","A_Id":52237262,"CreationDate":"2018-09-05T02:54:00.000","Title":"Can a Python server cache db queries across Apache workers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to speed up the home page of a website that queries the database for a random URL to use as the background image. One thing I have tried is to add a function to the Python code that caches the result of that database query for 60 minutes, and when I run the server locally I see that it seems to work correctly: reloading the page shows the same image as the previous time, instead of a new random image.\nHowever when I deployed this code to a Digital Ocean droplet running an Apache server, it didn't seem to work: reloading the page would show a different image. I suspect that what is happening is that different workers are handling my request each time, and each of these workers has its own cached result from the database.\nIs there any way to cache these database queries across workers or achieve some similar result? 
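A minimal sketch of a dynamic version of the Excel-merging answer above: pick up every workbook under a folder (and its subfolders) with glob and concatenate on matching column headers. The folder path is a placeholder, and an Excel reader engine (openpyxl/xlrd) must be installed for read_excel.

import glob
import os
import pandas as pd

frames = []
for path in glob.glob("/data/Sites/**/*.xls*", recursive=True):  # placeholder folder
    df = pd.read_excel(path)
    df["source_file"] = os.path.basename(path)  # keep track of where each row came from
    frames.append(df)

merged = pd.concat(frames, ignore_index=True, sort=False)  # aligns columns by header name
merged.to_csv("/data/Sites/sites_db.csv", index=False, na_rep="NA")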
Note: the obvious solution of hard-coding the background image is not an option, as the person for whom I am working wants the background image to vary.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":52176392,"Users Score":0,"Answer":"After doing some more reading it seems the standard solution to this problem is to use a db query caching system like Memcached or Redis.","Q_Score":0,"Tags":"python,apache,web.py","A_Id":52240149,"CreationDate":"2018-09-05T02:54:00.000","Title":"Can a Python server cache db queries across Apache workers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a large number of records to a postgres database, using psycopg2.extras.execute_values(cursor, query, data, page_size=100)\nI get what the page_size parameter does, but don't really know what would be a sensible value to set it to. (Above uses the default value of 100.) What are the downsides of simply setting this to something ridiculously large?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2195,"Q_Id":52204311,"Users Score":1,"Answer":"Based on my understanding, the page_size gives the size of input values per sql statement. Give larger number means longer sql statement, and hence more memory usage for the query. If you do not need the query to return any values, it would be safe to use a smaller value such as 100 by default.\nHowever, if you would like to insert\/update certain table with returning statement, you may like to increate page_size to at least the same length as your data. You may set it at length(data) (your data should be a list of lists or a list of tuples), and the downside is that you have to introduce some limit to the number of data values per call. Postgresql allows very long sql, so if you have enough memory, millions of records should be acceptable.","Q_Score":6,"Tags":"python,psycopg2","A_Id":52874618,"CreationDate":"2018-09-06T12:37:00.000","Title":"psycopg2: What page_size to use","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We encountered a strange issue when altering a table. We use Cassandra python driver sync_table() method to sync from our model (defined in a py file) to Cassandra. The cluster is a 20 node being stressed decently (all nodes in range of 50-70% max usage). \nWhen the schema is synced using the Cassandra python driver, internally it is executing the \"ALTER TABLE ADD \" commands. In a particular table, when we added seven new columns, we noticed this strange behavior\n\nDESCRIBE TABLE command shows 3 or 4 out of the new 7 columns created. Once, it showed all 7 columns in the DESCRIBE TABLE output.\n\nBut in the select * output, the new columns are not shown. \n\n\nThe behavior is inconsistent. We dropped the columns manually and then resynced the schema. Every time the issue appears with select command not showing few of the 7 columns.\nAny pointers to debug this issue? 
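A hedged sketch of the Memcached/Redis approach from the accepted caching answer above, using redis-py with a 60-minute TTL so every Apache worker reads the same cached value; the key name and the fetch function are assumptions.

import redis

cache = redis.Redis(host="localhost", port=6379)

def get_background_url(fetch_from_db):
    """fetch_from_db is whatever function currently runs the random-URL query."""
    cached = cache.get("background_url")
    if cached is not None:
        return cached.decode("utf-8")
    url = fetch_from_db()
    cache.setex("background_url", 3600, url)  # shared across workers, expires in 60 minutes
    return url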
Is it due to stress on Cassandra nodes?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":52207766,"Users Score":2,"Answer":"The most probable issue is that you hit the schema agreement problem because of execution of many schema change commands. \nTypically, you need to send the schema change commands only to one host, and get confirmation about schema agreement. The first thing is usually done by creating a session that uses white-list policy where list consists only of one node (as opposite to token-aware or round robin policies). The second thing is easy - you either check corresponding flag of the result set returned after execution of the command, or by checking corresponding field\/method of the cluster's metadata.","Q_Score":1,"Tags":"cassandra,cassandra-python-driver","A_Id":52210324,"CreationDate":"2018-09-06T15:40:00.000","Title":"Cassandra schema issue in 2.1.14","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose that we have two tables, table A and table B, and suppose that A and B are both very large: table A consists of 500000 rows and 20 columns and table B consist of 1000000 rows and and 20 columns. Suppose furthermore that there is no unique index available for the rows.\nQuestion: What is the fastest way to check the overlap between the two tables? Should I use some form of hashing? Would it be doable to compare the tables within a couple of minutes and if not; how long would it take? I guess that just comparing each row of A with each row of B would take a lot of computing time?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":67,"Q_Id":52208787,"Users Score":1,"Answer":"I suspect the fastest solution would be to create an index on one of the tables on some field that is well distributed, i.e. where there would be few cases of two records having the same value in that field. Then you could do a fast search with a join on that field.\nCreating the index and then running the comparison will almost certainly be faster than running a comparison without an index.\nExactly how long it will take will depend on the size of the fields, how fast your server is, etc. But with a decent index, \"a few minutes\" is not an unreasonable expectation.\nIf there's some reason why you don't want an index, then delete it when you're done.","Q_Score":0,"Tags":"python,mysql,database","A_Id":52208990,"CreationDate":"2018-09-06T16:45:00.000","Title":"Comparing large databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019ve build a web based data dashboard that shows 4 graphs - each containing a large amount of data points.\nWhen the URL endpoint is visited Flask calls my python script that grabs the data from a sql server and then starts manipulating it and finally outputs the bokeh graphs. \nHowever, as these graphs get larger\/there becomes more graphs on the screen the website takes long to load - since the entire function has to run before something is displayed.\nHow would I go about lazy loading these? I.e. 
it loads the first (most important graph) and displays it while running the function for the other graphs, showing them as and when they finish running (showing a sort of loading bar where each of the graphs are or something).\nWould love some advice on how to implement this or similar.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":14269,"Q_Id":52238200,"Users Score":3,"Answer":"I had the same problem as you. The problem with any kind of flask render is that all data is processed and passed to the page (i.e. client) simultaneously, often at large time cost. Not only that, but the the server web process is quite heavily loaded.\nThe solution I was forced to implement as the comment suggested was to load the page with blank charts and then upon mounting them access a flask api (via JS ajax) that returns chart json data to the client. This permits lazy loading of charts, as well as allowing the data manipulation to possibly be performed on a worker and not web server.","Q_Score":9,"Tags":"python,flask,bokeh","A_Id":52238332,"CreationDate":"2018-09-08T18:25:00.000","Title":"Lazy loading with python and flask","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I created an excel sheet with python xlsxwriter. When i resaved it with libreoffice calc the size decreased by more than 60 percent. why is it ? Is there any way i can mimic this in python to reduce my file size?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":52288610,"Users Score":0,"Answer":"As far as I know that also happens with files produced by Excel so it isn\u2019t an issue with XlsxWriter files. You can verify this by saving the file in Excel and then LibreOffice to test the change. \nXlsxWriter files should be the same size (with small percentage differences) as Excel files since the XML part is almost exactly the same. The zip compression may differ though.","Q_Score":1,"Tags":"python,xlsx,xlsxwriter,libreoffice-calc","A_Id":52289221,"CreationDate":"2018-09-12T06:18:00.000","Title":"Excel sheet size reduced on resaving","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created an excel sheet with python xlsxwriter. When i resaved it with libreoffice calc the size decreased by more than 60 percent. why is it ? Is there any way i can mimic this in python to reduce my file size?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":52288610,"Users Score":0,"Answer":"The size sometimes reduces if you have empty cells on the list. Similar behavior is when you move in Excel to cell A1 and then save, the file size decreases. 
But you generated the file, so this behaviour may not necessarily apply.","Q_Score":1,"Tags":"python,xlsx,xlsxwriter,libreoffice-calc","A_Id":52289339,"CreationDate":"2018-09-12T06:18:00.000","Title":"Excel sheet size reduced on resaving","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Import pymysql is giving error as AttributeError: module 'pymysql.constants.ER' has no attribute 'CONSTRAINT_FAILED'\n\nUsing pymysql in Python 3.5 \nOperating System is Windows 10","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":248,"Q_Id":52289683,"Users Score":0,"Answer":"I usually think about attributes as nouns that belong to an object.An attribute in Python means some property that is associated with a particular type of object. In other words, the attributes of a given object are the data and abilities that each object type inherently possesses. An object in Python is a simply an enclosed collection of these abilities and data, and is said to be of a specific type.\nAttribute errors in Python are generally raised when the Python interpreter cannot find a specified data or method attribute on an object that allows for attribute references, it will raise an \"AttributeError\" exception. When you get an attribute error in Python, it means you tried to access the attribute value of, or assign an attribute value to, a Python object or class instance in which that attribute simply does not exist.","Q_Score":0,"Tags":"python,mysql,python-3.x,python-import,mysql-python","A_Id":52289834,"CreationDate":"2018-09-12T07:27:00.000","Title":"AttributeError: module 'pymysql.constants.ER' has no attribute 'CONSTRAINT_FAILED' in pymysql Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For many years, my company has used the win32com module and ADO to connect to databases via ODBC in Python scripts. I do not like ADO because it is ancient and because COM is inherently slow and because it tends to throw one particular exception for which there is no workaround I've ever found. We use ODBC because we cannot assume that our customers have any particular database system (although most of them use PostgreSQL). We have a class that wraps ADO and provides access to most (maybe all) of the functionality in ADO. I am at a point where I could recommend a complete changeover to pyodbc. Before I do that, I'm curious: are there advantages to ADO via win32com? Does it have more capability than pyodbc?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":909,"Q_Id":52298387,"Users Score":0,"Answer":"are there advantages to ADO via win32com? Does it have more capability than pyodbc?\n\nPractically speaking, and specifically with regard to ODBC, not really. ADODB would have the advantage of being able to use an OLEDB provider for a database that had an OLEDB provider but not an ODBC driver, but that would be a rare occurrence. 
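A tiny, generic illustration of the AttributeError behaviour described in the answer above; nothing here is specific to pymysql.

class Sensor:
    def __init__(self):
        self.value = 42

s = Sensor()
print(s.value)             # attribute exists -> 42
print(hasattr(s, "unit"))  # False: this attribute was never defined
try:
    s.unit                 # accessing it raises the error the answer describes
except AttributeError as exc:
    print(exc)             # 'Sensor' object has no attribute 'unit'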
(The only such database I can recall is \"SQL Server Compact Edition\", which was discontinued long ago.)\nAs mentioned in the comments to the question, pyodbc would have the advantage of avoiding extra layers of middleware when communicating with the database, i.e.,\nyour\u00a0Python\u00a0app\u00a0\u2194 pyodbc\u00a0\u2194 ODBC\u00a0Driver\u00a0Manager\u00a0\u2194 ODBC\u00a0Driver\u00a0\u2194 database\nvs.\nyour\u00a0Python\u00a0app\u00a0\u2194 win32com\u00a0\u2194 ADODB\u00a0\u2194 OLEDB\u00a0provider\u00a0for\u00a0ODBC\u00a0\u2194 ODBC\u00a0Driver\u00a0Manager\u00a0\u2194 ODBC\u00a0Driver\u00a0\u2194 database\nAs also mentioned, win32com\/ADODB is a Windows-only technology, whereas a pyodbc solution could also be deployed on Linux or Mac if the appropriate ODBC drivers were available for those platforms.","Q_Score":0,"Tags":"python,ado,pyodbc","A_Id":52345221,"CreationDate":"2018-09-12T15:09:00.000","Title":"pyodbc vs ADO via COM for Python scripts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I make a relationship between two APIs, where an application is in node.js with the mongodb and another in python with mysql, I need to make a relation between them of 1: N between client:sales, in one I have the register client (name, cpf) and the other part is a sale, but for me to carry out the sales I need customer data.\nIn a traditional application this relationship would be through a foreign key, as we are talking about two separate applications how could this relationship be made ??","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":189,"Q_Id":52327099,"Users Score":0,"Answer":"You maintain relation via you services. All you have with you is two services, Client Service and sale service. \nClient service ideally should have no information about sales service, it should not know even if the sale service or any other service even exist.\nHowever for you to record sale you need to do it against client, and in that case sales service can keep client id and can pull information from client when needed.\nIt is upto sale service to decide what relation ship is, it can allow for one client to have multiple sales record or one. Service can decide that, This might not exactly be required to be embedded into database by means of FK.\nSales service can keep all the information related to sales alongside an id (Just like you would have in traditional database). Based on database choices you can put constraint against this id if you have to.","Q_Score":0,"Tags":"python,node.js,api,microservices","A_Id":52327376,"CreationDate":"2018-09-14T07:35:00.000","Title":"Relationship between two microservices with different database by means of api rest","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"python 3.6 64 bit,Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production,cx_oracle 6.4.1\nAble to connect DB thru sqlplus and sql developer manually.\nwhen trying to connect through python:\ncx_Oracle.DatabaseError: DPI-1047: 64-bit Oracle Client library cannot be loaded: \"C:\\Oracle\\product\\11.2.0\\client_1\\bin\\oci.dll is not the correct architecture\" .... 
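A minimal pyodbc sketch of the shorter middleware path in the comparison above; the driver name, server, database, and credentials are placeholders to be replaced with whatever ODBC driver the customer has installed.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"  # placeholder driver name
    "SERVER=dbhost;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone()[0])
conn.close()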
\nmanually verified that \n\"C:\\Oracle\\product\\11.2.0\\client_1\\BIN\\\" has the oci.dll\nPlease help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":426,"Q_Id":52330386,"Users Score":0,"Answer":"The DLL at C:\\Oracle\\product\\11.2\\client_1\\bin\\oci.dll is not a 64-bit DLL. You will need to download and extract a 64-bit Oracle Instant Client in order to resolve this issue. Note that the fact that the server is 64-bit is not relevant in this case. The client must be 64-bit!","Q_Score":0,"Tags":"python,cx-oracle","A_Id":52341121,"CreationDate":"2018-09-14T10:45:00.000","Title":"Trying to connect a DB : Win7 Enterprise 64bit- Python cx_Oracle - oci.dll is not found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In sqlite python, one has to run commit() to make sure a SQL statement is executed if auto-commit is not enabled. Auto-commit is enabled by doing something like this sqlite3.connect('sqlitedb.db', isolation_level=None) \nIs it a good practice to enable auto-commit all the time? THis is to avoid bugs that can happen when one forgets to run commit().\nWhat are some situations, if any, that auto-commit is better to be disabled?\nI am using sqlite3 and python v3.6","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2295,"Q_Id":52333800,"Users Score":3,"Answer":"Autocommit should be disabled if multiple operations belong together logically to ensure that not only some of them are executed (atomicity).\nAdditionally it should be disabled if multiple operations are done consecutively in a short period of time for performance reasons.\nFor databases with concurrent access from different threads\/processes additional consistency considerations apply but this usage is unlikely for Sqlite.","Q_Score":2,"Tags":"python,python-3.x,sqlite","A_Id":52334104,"CreationDate":"2018-09-14T14:13:00.000","Title":"Is it a good practice to enable auto-commit in sqlite python all the time?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a predictive curve using 12 different datasets of empirical data. Essentially I want to write a function that passes 2 variables (Number of Applications, Days) and generates a predictive curve based on the 12 datasets that i have. The datasets all have 60 days and have Number of Applications from 500 to 100,000.\nI'm not really sure of what the best approach would be, I was thinking maybe taking the average percentage of total applications at each day (ex: at day 1 on average there are 3% of total applications issued, day 10 on average there are 10%, etc)would be a good place to start but i'm not sure if that's the best approach.\nI have python, SQL, and excel at my disposal but I'm not necessarily looking for a specific solution as much as just a general suggestion on approach. Any help would be much appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":52334490,"Users Score":0,"Answer":"It sounds like you want to break it all out into (60*12) rows with 3 columns: one recording the application number, another recording the time, and another recording the location. 
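A short sketch contrasting the two modes discussed in the sqlite autocommit answer above; both connections are throwaway in-memory databases.

import sqlite3

# Autocommit: every statement is committed on its own
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")  # already durable, no commit() needed

# Default mode: statements are grouped into a transaction until commit()/rollback()
tx = sqlite3.connect(":memory:")
tx.execute("CREATE TABLE t (x INTEGER)")
tx.execute("INSERT INTO t VALUES (1)")
tx.execute("INSERT INTO t VALUES (2)")
tx.commit()  # both inserts succeed or fail together (atomicity)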
Then a model could dummy out each location as a predictor, and you could generate 12 simulated predictions, with uncertainty. Then, to get your one overall prediction, average those predictions instead - bootstrap and then pool the predictions if you're fancy. Model time however you want - autoregression, Kalman filter, nearest-neighbor (probably not enough data for that one though). Just don't dummy out each time point individually or you'll have a perfect-fitting model.\nBut be aware of the possible universe of interactions between the locations that you could model here. Dummying them all out assumes no interactions between them, or at least one you care about, or that relate to anything you care about. It just accounts for fixed effects, i.e. you're assuming that the time dynamic within each location is the same, it's just that some locations tend overall and on average to have higher application numbers than others. You could derive tons of predictors pertaining to any given location based on the application number(s) in other location(s) - current number, past number, etc. All depends on what you consider to be possible and informative to account for.","Q_Score":0,"Tags":"python,sql,excel,statistics","A_Id":52335074,"CreationDate":"2018-09-14T14:55:00.000","Title":"Python\/SQL\/Excel I have 12 datasets and I want to combine them to one representative set","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on Glue since january, and have worked multiple POC, production data lakes using AWS Glue \/ Databricks \/ EMR, etc. I have used AWS Glue to read data from S3 and perform ETL before loading to Redshift, Aurora, etc.\nI have a need now to read data from a source table which is on SQL SERVER, and fetch data, write to a S3 bucket in a custom (user defined) CSV file, say employee.csv.\nAm looking for some pointers, to do this please.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1571,"Q_Id":52336996,"Users Score":0,"Answer":"This task fits AWS DMS (Data Migration Service) use case. DMS is designed to either migrate data from one data storage to another or keep them in sync. It can certainly keep in sync as well as transform your source (i.e., MSSQL) to your target (i.e., S3).\nThere is one non-negligible constraint in your case thought. Ongoing sync with MSSQL source only works if your license is the Enterprise or Developer Edition and for versions 2016-2019.","Q_Score":3,"Tags":"python,python-2.7,amazon-web-services,amazon-s3,aws-glue","A_Id":66705862,"CreationDate":"2018-09-14T17:50:00.000","Title":"AWS Glue - read from a sql server table and write to S3 as a custom CSV file","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had been working with pyodbcfor database connection in windows envirnment and it is working fine but now I want to switch to pymssql so that it is easier to be deployed to Linux machine as well. 
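A hedged sketch of the reshaping step suggested in the answer above: stack the 12 datasets into one long table of (day, applications, location) rows and dummy-encode the location. The column names and the two tiny example frames are assumptions standing in for the real data.

import pandas as pd

# hypothetical dict mapping a dataset/location name to a 60-row DataFrame
datasets = {
    "site_1": pd.DataFrame({"day": range(1, 61), "applications": range(60)}),
    "site_2": pd.DataFrame({"day": range(1, 61), "applications": range(0, 120, 2)}),
}

frames = []
for location, df in datasets.items():
    df = df.copy()
    df["location"] = location
    frames.append(df)

long_df = pd.concat(frames, ignore_index=True)             # 60 * n_datasets rows, 3 columns
model_df = pd.get_dummies(long_df, columns=["location"])   # one indicator column per dataset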
But I am getting this error:\n\n(20009, b'DB-Lib error message 20009, severity 9:\\nUnable to connect: Adaptive Server is unavailable or does not exist (localhost:1433)\\nNet-Lib error during Unknown error (10060)\\n')\n\nMy connection code for using both pyodbc and pymssql is:\n\n import pyodbc\n import pymssql\n\n def connectODSDB_1():\n conn_str = (\n r\"Driver={SQL Server};\"\n r\"Server=(local);\"\n r\"Database=populatedSandbox;\"\n r\"Trusted_Connection=yes;\"\n )\n return pyodbc.connect(conn_str)\n\n def connectODSDB_2():\n server = '(local)'\n database = 'populatedSandbox'\n conn = pymssql.connect(server=server, database=database)\n return conn\n\n\nWhat could be the problem? And solution?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4195,"Q_Id":52401252,"Users Score":5,"Answer":"Well after browsing internet for a while, it seems pymssql needs TCP\/IP be enabled for communication. \n\nOpen Sql Server Configuration Manager\nExpand SQL Server Network Configuration\nClick on Protocols for instance_name\nEnable TCP\/IP","Q_Score":4,"Tags":"python,database-connection,pymssql","A_Id":52401546,"CreationDate":"2018-09-19T08:17:00.000","Title":"Database connection failed for local MSSQL server with pymssql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have multiple mongodbs like mongodb_1, mongodb_2, mongodb_3 with same kind of data like employee details of different organizations.\nWhen user triggers GET request to get employee details from all the above 3 mongodbs whose designation is \"TechnicalLead\". then first we need to connect to mongodb_1 and search and then disconnect with mongodb_1 and connect to mongodb_2 and search and repeat the same for all dbs.\nCan any one suggest how can we achieve above using python EVE Rest api framework.\nBest Regards,\nNarendra","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":52418721,"Users Score":0,"Answer":"First of all, it is not a recommended way to run multiple instances (especially when the servers might be running at the same time) as it will lead to usage of the same config parameters like for example logpath and pidfilepath which in most cases is not what you want.\nSecondly for getting the data from multiple mongodb instances you have to create separate get requests for fetching the data. 
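Once TCP/IP is enabled as described in the accepted answer above, one hedged way to confirm the connection really goes over TCP is to name the host and port explicitly; the credentials below are placeholders for whatever login the server accepts.

import pymssql

conn = pymssql.connect(server="localhost", port="1433",
                       database="populatedSandbox",
                       user="sa", password="placeholder")
cur = conn.cursor()
cur.execute("SELECT @@VERSION")
print(cur.fetchone()[0])
conn.close()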
There are two methods of view for the model that can be used:\n\nquery individual databases for data, then assemble the results for viewing on the screen.\nQuery a central database that the two other databases continously update.","Q_Score":0,"Tags":"python,mongodb,eve","A_Id":52872010,"CreationDate":"2018-09-20T06:14:00.000","Title":"How to search for all existing mongodbs for single GET request","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to convert following dates in excel in float values by Python.\nProblem Statement: I have a dates data in which dates are entered but users and users used different cell format to enter dates.For example some used cell format asdd-yy-yyyy and some used mm-dd-yyyy in excel cell while entering the data ( in fact user used different excel format for dates in their files).When I am trying to consolidate dates then find that dates are in different cell formats randomly. In excel I can copy dates column and paste all in a separate column as a value only,It gives me integer value which is same regardless and format used in cell. And later applied a single format to all value and gets all my dates in same format.\nBut , I want to make a script in Python in which first: all different cell formats for dates are converted to float value (like i do in excel) then I will convert all dates back to standard format i.e dd\/mm\/yyyy.\nFormet Dates format Date in numeric value Reformatted in excle as dd-mm-yyyy\nformat 1 30-08-2018 dd-mm-yyyy 43342.51551 30-08-2018\nformat 2 08-30-2018 mm-dd-yyyy 43342.51551 30-08-2018","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":831,"Q_Id":52423873,"Users Score":0,"Answer":"No script can tell you, if 03-05-2014 means 3rd May or 5th March. It's not God and can't do, what you're unable to do.\nOnly clues:\n\nIf some value consists of four digits, it's the year.\nIf one value is higher than twelve, you can say that this value must be the day (or the year)\nIf there are several date values by the same user, you may presume that he or she kept consistlently one format and deduce the format from another field.","Q_Score":0,"Tags":"python","A_Id":52424438,"CreationDate":"2018-09-20T11:15:00.000","Title":"Python date Time for Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python pandas dataframe on my local machine, and have access to a remote mongodb server that has additional data that I can query via pymongo.\nIf my local dataframe is large, say 40k rows with 3 columns in each row, what's the most efficient way to check for the intersection of my local dataframe's features and a remote collection containing millions of documents?\nI'm looking for general advice here. I thought I could just take a distinct list of values from each of the 3 features, and use each of these in an $or find statement, but if I have 90k distinct values for one of the 3 features it seems like a bad idea.\nSo any opinion would be very welcome. 
I don't have access to insert my local dataframe into the remote server, I only have select\/find access.\nthanks very much!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":112,"Q_Id":52460327,"Users Score":1,"Answer":"As you already explained that you won't be able to insert data. So only thing is possible is first take the unique values to a list.df['column_name'].unique(). Then you can use the $in operator in .find() method and pass your list as a parameter. If it takes time or it is too much. Then break your list in equal chunks, I mean list of list [[id1, id2, id3], [id4, id5, id6] ... ] and do a for loop for sub-list in list: db.xyz.find({'key':{'$in': sublist}}, {'_id': 1}) and use the sub list as parameter in $in operator. Then for each iteration if the value exist in the db it will return the _id and we can easily store that in a empty list and append it and we will be able to get all the id's in such cases where the value exist in the collection.\nSo it's just the way I would do. Not necessarily the best possible.","Q_Score":0,"Tags":"python,mongodb,pandas,pymongo","A_Id":52462006,"CreationDate":"2018-09-22T19:51:00.000","Title":"Efficient Intersection of pandas dataframe with remote mongodb?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"From long time i am having migration issues. Each time i am droping my postgres database and creating a new When i add new 3\/4 table or a relation or Circular migration happens. Or most of the time some unwanted issue comes in migration. \nBut it's okay as long i am in development phase. But very soon when it will be on production i can't do that. Removing database each time.\nI have heard a lot about django-south. But the issue is it's not updated from long time i think last time it was updated December,14(according to it's bitbucket repo). \nNow is it a good choice for a project of 2018 ? Or any other 3rd party i can use. I just don't want to take rick writing raw sql each time in production as i am not too good on it as well. So i want to depend on django 100% in migration. \nPlease share you ideas on the migration issue\nThanks in advance :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":52465655,"Users Score":3,"Answer":"South was the predecessor of django migrate. It became a part of Django core, so no need to install it.\nIf you are having migration issues, you should learn how to fix them, instead of just re-installing. You can edit every single migration file. They are just regular .py files with regular Django functions that do the necessary changes to your tables.\nRead the error message, try to understand what went wrong, and fix the migration file that caused the error. 
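A minimal sketch of the chunked `$in` lookup described in the pandas/MongoDB intersection answer above. The connection string, database/collection names, the field name `key` and the chunk size are placeholders; only the chunk-and-`$in` pattern itself comes from the answer.

```python
import pandas as pd
from pymongo import MongoClient

df = pd.DataFrame({"column_name": ["a", "b", "c", "d"]})   # stand-in for the local frame
unique_vals = df["column_name"].unique().tolist()

client = MongoClient("mongodb://remote-host:27017")        # hypothetical remote server
coll = client["mydb"]["xyz"]

chunk_size = 1000                                          # arbitrary; tune as needed
matched_ids = []
for i in range(0, len(unique_vals), chunk_size):
    sublist = unique_vals[i:i + chunk_size]
    # Only _id is projected back, as the answer suggests.
    for doc in coll.find({"key": {"$in": sublist}}, {"_id": 1}):
        matched_ids.append(doc["_id"])
```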
They are numbered and located in projectfolder\/appfolder\/migrations\/.\nI am using migrate all the time and never had an issue that wasn't fixable.","Q_Score":1,"Tags":"python,django,django-models,django-south,django-migrations","A_Id":52465934,"CreationDate":"2018-09-23T11:29:00.000","Title":"Is it safe to use Django south for handling migration on big project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm doing some excel sheet Python automation using openpyxl and I'm having an issue when I try to insert columns or rows into my sheet. \nI'm modifying an existing excel sheet which has basic formula in it (i.e. =F2-G2) however when I insert a row or column before these cells, the formula do not adjust accordingly like they would if you would perform that action in excel. \nFor example, inserting a column before column F should change the formula to =G2-H2 but instead it stays at =F2-G2...\nIs there any way to work around this issue? I can't really iterate through all the cells and fix the formula because the file contains many columns with formula in them.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1105,"Q_Id":52472436,"Users Score":0,"Answer":"openpyxl is a file format library and not an application like Excel and does not attempt to provide the same functionality. Translating formulae in cells that are moved should be possible with the library's tokeniser but this ignores any formulae that refer to the cells being moved on the same worksheet or in the same workbook.","Q_Score":2,"Tags":"python,excel,openpyxl","A_Id":52475659,"CreationDate":"2018-09-24T03:47:00.000","Title":"Maintaining Formulae when Adding Rows\/Columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two django REST API projects which I have decoupled them into micro services architecture, one of the services is an (SSO) that handles authentication (I'm using JWT token based authentication) and manage Users info and the other is a payroll service.\nThe problem is the user has a relation to some model in payroll service. To be specific I have an Employee class in payroll service which has a user_id field. This is where I will add a user UUID which I will get from querying the SSO service.\nHow do I share database across micro-services taking into consideration each service has its own database.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1538,"Q_Id":52475303,"Users Score":4,"Answer":"Sharing database across bounded contexts is not advisable as each microservice should have the capability to make changes on how it persist data. \nAllowing multiple microservices to manage databases would lead you to a death star pitfall pattern\nHowever, you might want to send a copy\/updates of user data on the authentication context towards your payroll service. In this way, you can\nhave independent data persistence strategies. 
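Regarding the openpyxl answer above about formulas not shifting when rows or columns are inserted: one partial workaround is openpyxl's formula translator, which rewrites a single formula for a new anchor cell. This is only a sketch of that helper; as the answer notes, it does not fix references coming from other cells or sheets, and the cell addresses here are made up.

```python
from openpyxl import Workbook
from openpyxl.formula.translate import Translator

wb = Workbook()
ws = wb.active
ws["H2"] = "=F2-G2"

# Insert one column before F; existing cells shift, but formulas are not rewritten.
ws.insert_cols(6)

# Rewrite the formula for its new position: "=F2-G2" at H2 becomes "=G2-H2" at I2.
ws["I2"] = Translator("=F2-G2", origin="H2").translate_formula("I2")
wb.save("translated.xlsx")
```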
One way to do this is to implement event emission strategy on your authentication context,\nthis event emission strategy would be in-charge of broadcasting data changes made on the authentication context that subscribers residing from\nanother bounded context can listen so that they can store a copy of your user data on their own persistence layers.","Q_Score":4,"Tags":"python,django,database,django-rest-framework,microservices","A_Id":52490138,"CreationDate":"2018-09-24T08:28:00.000","Title":"Sharing database relations across micro-services with Django rest framework","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using \"django-pyodbc-azure\" 3rd party library for making connection and django1.11 and python version 3.5.2\n\n*** Error in `\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python': double free or corruption (!prev): 0x00007f7f08078a90 ***\n======= Backtrace: =========\n\/lib\/x86_64-linux-gnu\/libc.so.6(+0x777e5)[0x7f7f2a7a17e5]\n\/lib\/x86_64-linux-gnu\/libc.so.6(+0x8037a)[0x7f7f2a7aa37a]\n\/lib\/x86_64-linux-gnu\/libc.so.6(cfree+0x4c)[0x7f7f2a7ae53c]\n\/usr\/lib\/x86_64-linux-gnu\/libodbc.so.2(SQLDriverConnectW+0x9a0)[0x7f7f24055100]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/lib\/python3.5\/site-packages\/pyodbc.cpython-35m-x86_64-linux-gnu.so(_Z14Connection_NewP7_objectbblbS0_R6Object+0x2c3)[0x7f7f242a52a3]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/lib\/python3.5\/site-packages\/pyodbc.cpython-35m-x86_64-linux-gnu.so(+0x111de)[0x7f7f242a21de]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyCFunction_Call+0x77)[0x4e9ba7]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x59f5)[0x53c6d5]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4b04)[0x53b7e4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4b04)[0x53b7e4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4b04)[0x53b7e4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x540199]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x50b2)[0x53bd92]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4b04)[0x53b7e4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x5434af]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyCFunction_Call+0x4f)[0x4e9b7f]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x614)[0x5372f4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalCodeEx+0x13b)[0x540f9b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4ebd23]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4fb9ce]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_CallFunctionObjArgs+0x128)[0x5c1988]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x20bc)[0x538d9c]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4b04)[0x53b7e4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalCodeEx+0x13b)[0x540f9b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\
/python[0x4ebd98]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_CallFunctionObjArgs+0x128)[0x5c1988]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(_PyObject_GenericGetAttrWithDict+0x1bd)[0x593b9d]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x44d)[0x53712d]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x540199]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x50b2)[0x53bd92]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x540199]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x50b2)[0x53bd92]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4ed3f5]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x5b7994]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x5b7fbc]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x57f03c]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4ec6)[0x53bba6]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4b04)[0x53b7e4]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalCodeEx+0x13b)[0x540f9b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4ebd23]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4fb9ce]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x584716]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x5761aa]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x54320c]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x4ce6)[0x53b9c6]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalCodeEx+0x13b)[0x540f9b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4ebe37]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x252b)[0x53920b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x5406df]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x54f0)[0x53c1d0]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalCodeEx+0x13b)[0x540f9b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4ebe37]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyObject_Call+0x47)[0x5c1797]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalFrameEx+0x252b)[0x53920b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python(PyEval_EvalCodeEx+0x13b)[0x540f9b]\n\/home\/ubuntu\/GET-Services_mssql\/env_mssql\/bin\/python[0x4ebe37]","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":52511704,"Users Score":0,"Answer":"My comment to the question apparently solved the problem:\n\nPython 3.5.2 and unixODBC 2.3.1 sounds like Ubuntu 16.04. Some packages in the Ubuntu repositories can be very old, e.g., unixODBC 2.3.1 is almost 7 years old now. 
Since it is unixODBC that is crashing (in SQLDriverConnectW) you might want to try upgrading it to the latest version (currently 2.3.7).","Q_Score":1,"Tags":"python,sql-server,django,ubuntu,pyodbc","A_Id":52669412,"CreationDate":"2018-09-26T06:56:00.000","Title":"Any Solution to SQLDriverConnect Error in Django Occurred Below using pyodbc","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am connecting the mongodb database via pymongo and achieved the expected result of fetching it outside the db in json format . but my task is that i need to create a hive table via pyspark , I found that mongodb provided json (RF719) which spark is not supporting .when i tried to load the data in pyspark (dataframe) it is showing as corrupted record. . and if any possible ways of converting the json format in python is also fine ..Please suggest a response","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":208,"Q_Id":52559131,"Users Score":0,"Answer":"mport json with open('D:\/json\/aaa.json') as f: d = f.read() da = ''.join(d.split()) print(type(da)) print(da) daa=da.replace('u'','') daaa= json.loads(daa) print(daaa)\nsatisfied with the answer. Hence closing this question","Q_Score":1,"Tags":"python,mongodb,hive,pymongo,pyspark-sql","A_Id":63618206,"CreationDate":"2018-09-28T16:10:00.000","Title":"unable to read the mongodb data (json) in pyspark","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an application for a venue that will have large-scale competitions. In order to effectively manage those competitions, multiple employees need to engage with and modify a set of data in real-time, from multiple machines in the gym. I have created a Python application which accomplishes this by communicating with a MySQL server (which allows as many instances of the application as necessary to communicate with it). \nIs there a nice way to get MySQL server installed on a client machine along with this Python application (It only necessarily needs to end up on one machine)? Perhaps is there a way to wrap the installers together? Am I asking the right question? 
I have no experience with application distribution, and I'm open to all suggestions.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":260,"Q_Id":52629295,"Users Score":1,"Answer":"The 'normal' way to do it is to have a network setup (ethernet and\/or wireless) to connect many Clients (with your Python app) to a single Server (with MySQL installed).\nTo have the \"Server\" distributed among multiple machines becomes much messier.\nPlan A: One Master replicating to many Slaves -- good for scaling reads, but not writes.\nPlan B: Galera Cluster -- good for writing to multiple instances; repairs itself in some situations.\nBut if you plan on having the many clients go down a lot, you are better off having a single server that you try to keep up all the time and have a reliable network so that the clients can get to that on server.","Q_Score":0,"Tags":"python,mysql,windows-installer,software-distribution","A_Id":52727076,"CreationDate":"2018-10-03T14:25:00.000","Title":"Is it possible to install MySQL Server along with standalone client-side software?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a worksheet with VBA code (on Excel, right-click on the sheet name and View code) that I would like to copy on the same workbook.\nWhen using workbook.copy_worksheet() , the VBA code contained in the worksheet is lost. \nI've had a look at the worksheet.vba_code property but it seems to only contain some sheets properties, not the VBA code.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":662,"Q_Id":52647650,"Users Score":1,"Answer":"I think the problem will be that worksheets themselves do not contain any VBA code. This is stored as a blob in the XLSX package and may well contain hard-coded references to particular worksheets. Unfortunately the VBA blobs are not covered by the OOXML specification so there is no way to know. You might be okay if you copy the vba_code property manually but there is no guarantee and it's just as likely that Excel will complain about the file.","Q_Score":1,"Tags":"python,excel,vba,openpyxl","A_Id":52647821,"CreationDate":"2018-10-04T13:12:00.000","Title":"How to keep VBA code when copying worksheet with openpyxl?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that reads data from an OPCDA server and then push it to InfluxDB.\nSo basically it connects to the OPCDA using the OpenOPC library and to InfluxDB using the InfluxDB Python client and then starts an infinite while loop that runs every 5 seconds to read and push data to the database.\nI have installed the script as a Service using NSSM. What is the best practice to ensure that the script is running 24\/7 ? How to avoid crashes ?\nShould i daemonize the script ?\nThank you in advance,\nBnjroos","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":534,"Q_Id":52713401,"Users Score":0,"Answer":"I suggest at least to add logging at the script level. You could also use custom Exit Codes from python so NSSM knows to report failure. Your failure would probably be when connecting to your services so, i.e. 
netowrk down or something so you could write custom exceptions for NSSM to restart. If it's running every 5 seconds you would probably know very soon.\nEnsuring availability and avoiding crashes is about your code more than infrastructure, hence the above recommendations. \nI believe using NSSM (for scheduling and such) is better than daemonizing, since you're basically adding functionality of NSSM in your script and potentially adding more code that may fail.","Q_Score":1,"Tags":"python,windows,loops,service","A_Id":52713941,"CreationDate":"2018-10-09T04:28:00.000","Title":"Best practice for an infinite loop Python script that runs on Windows as a Service","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a excelfile with around 50 worksheets. All information is in the summary in the first worksheet, but for detailed information people can check the source in the worksheet.\nI thought it would be nice to have an internal link to the worksheet (people want to know why the sales were down in July 2016 worksheet etc). \nBut while I seem to be able to create hyperlinks to websites, I just want to make it work in this excel file.\nIs this possible at all?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1585,"Q_Id":52721944,"Users Score":0,"Answer":"Actually, you can add local hyperlinks but have to control the location. The specification says this of the location attribute: \n\nLocation within target. If target is a workbook (or this workbook)\n this shall refer to a sheet and cell or a defined name. Can also be an\n HTML anchor if target is HTML file.\n\nI think this works by setting target to None and ref to the cell reference.","Q_Score":2,"Tags":"python,excel,python-3.x,openpyxl","A_Id":52724598,"CreationDate":"2018-10-09T13:12:00.000","Title":"Create internal links within excelsheet with openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to python and am trying to set up Flask and Python. I need to set my DATABASE_URL to run flask, but I can't find out my database url, because when I click on Heroku Postgres in my dashboard over view I get the error message:\n{\"error\":{\"id\":\"unauthorized\",\"message\":\"Invalid credentials provided.\"}}\nAny ideas? Feel like I'm trapped in a weird circle.\nThanks in advance\nVicky","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1134,"Q_Id":52763624,"Users Score":0,"Answer":"thanks, seemed to have sorted it, not quite sure how. Heroku was showing details I required today, so maybe it was something to do with their site yesterday.","Q_Score":1,"Tags":"python,heroku","A_Id":52793606,"CreationDate":"2018-10-11T15:18:00.000","Title":"This error message {\"error\":{\"id\":\"unauthorized\",\"message\":\"Invalid credentials provided.\"}} when I click on Heroku Postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wasn't able to find this question asked before and this is driving me crazy. 
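A minimal sketch of the logging plus custom-exit-code pattern suggested in the Windows-service (NSSM) answer above. The function body, log file name and exit code are placeholders; the OpenOPC read and InfluxDB write from that question are not shown.

```python
import logging
import sys
import time

logging.basicConfig(filename="opc_to_influx.log", level=logging.INFO)

def read_and_push():
    """Placeholder for: read tags via OpenOPC, write points to InfluxDB."""
    pass

if __name__ == "__main__":
    while True:
        try:
            read_and_push()
        except Exception:
            logging.exception("OPC/InfluxDB connection failed")
            sys.exit(3)   # non-zero exit code so NSSM records the failure and can restart
        time.sleep(5)
```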
If this question is a duplicate, I'd appreciate it if someone could point me to an answer.\nI tried to install mysql (using pip install mysql) in the PyCharm terminal using Python 3.6. This is the error I get. I already updated pip to the latest version.\n\nCommand \"C:\\Users\\David\\PycharmProjects\\cryptocurrency2\\cryptocurrencytrading\\Scripts\\python.exe -u -c \"import setuptools, tokenize;file='C:\\Users\\David\\AppData\\Local\\Temp\\pip-install-alqyprfu\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(f\n ile);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, file, 'exec'))\" install --record C:\\Users\\David\\AppData\\Local\\Temp\\pip-record-gcaxj074\\install-record.txt --single-version-externally-managed --compile --install-headers C:\\Users\\David\\PycharmProjects\\cryptocurrency2\\cryptocurrencytrading\\include\\site\\python3.6\\mysqlclient\" failed with error code 1 in C:\\Users\\David\\AppData\\Local\\Temp\\pip-install-alqyprfu\\mysqlclient\\\n You are using pip version 10.0.1, however version 18.1 is available.\n You should consider upgrading via the 'python -m pip install --upgrade pip' command.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2631,"Q_Id":52779570,"Users Score":-1,"Answer":"try to use python -m pip install --upgrade pip\nur pip version is too low","Q_Score":0,"Tags":"python,pip,pycharm,mysql-python","A_Id":52779656,"CreationDate":"2018-10-12T12:28:00.000","Title":"Installing mysql in PyCharm community edition","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm struggling with a question.\nSo I created a Neural Network, but now I want to put values from my database in it. It's important that the data from the database is collected by a php script (already created), and has been send to my python Neural Network script. How do I transfer multiple variables and even multiple rows from my database to a python script?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":52797713,"Users Score":0,"Answer":"Best way to share data is via API. Prepare array containing multiple data you want to send then convert the data to JSON. On python, decode the JSON then take data out of it one by one. Hope this answers your question.","Q_Score":0,"Tags":"php,python,sql","A_Id":52797757,"CreationDate":"2018-10-13T22:00:00.000","Title":"How to send SQL data gain from a php script to a python script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to access a folder object inside S3 bucket. How can I access a folder inside S3 bucket using python boto3.\nCode is working for a folder in S3 bucket but to for folders inside S3 bucket","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":4249,"Q_Id":52805052,"Users Score":2,"Answer":"If I understand you correctly.. I had this issue in my python3 script. Basically you need to pass the to the boto3 function an s3 bucket and the file name. Make this file name include the folder, with the forward slash separating them. Instead of passing just the file name and trying to pass the folder as a separate parameter. 
\nSo if you have MyS3Bucket and you want to upload file.txt to MyFolder inside MyS3Bucket, then pass the file_name=\u201cMyFolder\u201d+\u201d\/\u201c+\u201dfile.txt\u201d as a parameter to the upload function.\nLet me know if you need a code snippet. \nEven if you don\u2019t have the folder in the S3 bucket, boto3 will create it for you on the fly. This is cool because you can grant access in s3 based on a folder, not just the whole bucket at once. \nGood luck!","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,boto3","A_Id":55203529,"CreationDate":"2018-10-14T17:03:00.000","Title":"How to upload file to folder in aws S3 bucket using python boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Need to extract table schema (using describe\\list columns) into .txt or .csv file and later want to convert those files into .avsc(avro schema) file using python.\nsample.txt:\nCOLUMN_NAME |TYPE_NAME|DEC&|NUM&|COLUM&|COLUMN_DEF|CHAR_OCTE&|IS_NULL&\nAIRLINE |CHAR |NULL|NULL|2 |NULL |4 |NO\nAIRLINE_FULL |VARCHAR |NULL|NULL|24 |NULL |48 |YES\nNeed to convert sample.txt into sample.avsc","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":298,"Q_Id":52828409,"Users Score":0,"Answer":"Read CSV into python dict\nGet header part to other dict(header_dict)\ncreate empty dict(final_dict) and append type,namespace reletad thing and pass rows from header_dict to this final_dict.\nDump final_dict to file, which will be your avsc","Q_Score":0,"Tags":"python-2.7,avro,avsc","A_Id":55391303,"CreationDate":"2018-10-16T05:22:00.000","Title":"Python - How to convert .txt\/.csv file holding table schema to .avsc file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to import an Excel file into a new table in a Teradata database, using SQLAlchemy and pandas.\nI am using the pandas to_sql function. I load the Excel file with pandas and save it as a dataframe named df. I then use df.to_sql and load it into the Teradata database.\nWhen using the code:\ndf.to_sql('rt_test4', con=td_engine, schema='db_sandbox')\nI am prompted with the error:\nDatabaseError: (teradata.api.DatabaseError) (3534, '[42S11] [Teradata][ODBC Teradata Driver][Teradata Database] Another index already exists, using the same columns and the same ordering. ') [SQL: 'CREATE INDEX ix_db_sandbox_rt_test4_index (\"index\") ON db_sandbox.rt_test4']\nWhen I try this and use Teradata SQL Assistant to see if the table exists, I am prompted with selecting txt or unicode for each column name, and to pick a folder directory. A prompt titled LOB information pops open and I have to select if it's UTF or unicode, and a file directory. Then it loads and all the column titles populate, but they are left as empty fields. Looking for some direction here, I feel I've been spinning my wheels on this.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":780,"Q_Id":52847985,"Users Score":1,"Answer":"I solved it! 
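A sketch of the boto3 upload described in the S3 answer just above, with the "folder" expressed as a key prefix; the bucket, folder and file names are the ones that answer uses as examples, and valid AWS credentials are assumed.

```python
import boto3

s3 = boto3.client("s3")

# The "folder" is just part of the object key; S3 creates the prefix on the fly.
s3.upload_file("file.txt", "MyS3Bucket", "MyFolder/file.txt")
```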
Although I do not know why, I'm hoping someone can explain:\ntf.to_sql('rt_test4', con=td_engine, schema='db_sandbox', index = False, dtype= {'A': CHAR, 'B':Integer})","Q_Score":1,"Tags":"python,pandas,sqlalchemy,teradata","A_Id":52848240,"CreationDate":"2018-10-17T05:46:00.000","Title":"Importing a pandas dataframe into Teradata database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Assuming I have an excel sheet already open, make some changes in the file and use pd.read_excel to create a dataframe based on that sheet, I understand that the dataframe will only reflect the data in the last saved version of the excel file. I would have to save the sheet first in order for pandas dataframe to take into account the change.\nIs there anyway for pandas or other python packages to read an opened excel file and be able to refresh its data real time (without saving or closing the file)?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1261,"Q_Id":52862768,"Users Score":1,"Answer":"There is no way to do this. The table is not saved to disk, so pandas can not read it from disk.","Q_Score":10,"Tags":"python,excel,pandas","A_Id":72310873,"CreationDate":"2018-10-17T20:03:00.000","Title":"How do I use python pandas to read an already opened excel sheet","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a gazillion CSV files and in column 2 is the x-data and column 3 is the y-data. Each CSV file is a different time stamp. The x-data is slightly different in each file, but the number of rows is constant. I'm happy to assume the x-data is in fact identical.\nI am persuaded that Tableau is a good interface for me to do some visualization and happily installed tabpy and \"voila\", I can call python from Tableau... except... to return an array I will need to return a string with comma separated values for each time stamp, and then one of those strings per x-axis and then.... Hmm, that doesnt sound right.\nI tried telling Tableau just open them all and I'd join them later, but gave up after 30 mins of it crunching.\nSo what do you reckon? I am completely agnostic. Install an SQL server and create a database? Create a big CSV file that has a time-stamp for each column? Google? JSON?\nOr maybe there is some clever way in Tableau to loop through the CSV files?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1928,"Q_Id":52863882,"Users Score":1,"Answer":"I would suggest doing any data prep outside of Tableau. Since you seem to be familiar with Python, try Pandas to combine all the csv files into one dataframe then output to a database or a single csv. Then connect to that single source.","Q_Score":0,"Tags":"python,csv,tableau-api","A_Id":52864018,"CreationDate":"2018-10-17T21:28:00.000","Title":"Load thousands of CSV files into tableau","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a gazillion CSV files and in column 2 is the x-data and column 3 is the y-data. 
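For the Tableau question above, a sketch of the pandas route suggested in the first answer: combine every CSV into one frame, keep each file's identity (since each file is a separate time stamp), then write a single source for Tableau. The directory name and the extra `source_file` column are assumptions.

```python
import glob
import pandas as pd

frames = []
for path in glob.glob("csv_dir/*.csv"):       # hypothetical folder holding the CSV files
    part = pd.read_csv(path)
    part["source_file"] = path                # preserve the per-file time stamp identity
    frames.append(part)

combined = pd.concat(frames, ignore_index=True)
combined.to_csv("combined.csv", index=False)  # one file for Tableau to connect to
```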
Each CSV file is a different time stamp. The x-data is slightly different in each file, but the number of rows is constant. I'm happy to assume the x-data is in fact identical.\nI am persuaded that Tableau is a good interface for me to do some visualization and happily installed tabpy and \"voila\", I can call python from Tableau... except... to return an array I will need to return a string with comma separated values for each time stamp, and then one of those strings per x-axis and then.... Hmm, that doesnt sound right.\nI tried telling Tableau just open them all and I'd join them later, but gave up after 30 mins of it crunching.\nSo what do you reckon? I am completely agnostic. Install an SQL server and create a database? Create a big CSV file that has a time-stamp for each column? Google? JSON?\nOr maybe there is some clever way in Tableau to loop through the CSV files?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1928,"Q_Id":52863882,"Users Score":0,"Answer":"If you are using Windows, you can combine all the csv files into a single csv, then import that into Tableau. This of course assumes that all of your csv files have the same data structure.\n\nOpen the command prompt\nNavigate to the directory where the csv files are (using the cd command)\nUse the command copy *.csv combined-file.csv. The combined-file.csv can be whatever name you want.","Q_Score":0,"Tags":"python,csv,tableau-api","A_Id":52864473,"CreationDate":"2018-10-17T21:28:00.000","Title":"Load thousands of CSV files into tableau","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working extensively with peewee and postgresql for months. Suddenly this started happening. If I run any query command and get an error, then all subsequent commands start returning peewee.InternalError: current transaction is aborted, commands ignored until end of transaction block .\nI thought this behavior started when I upgraded peewee from 3.5.2 to 3.7.2, but I have since downgraded and the behavior continues. This has definitely not always happened.\nIn the simplest case, I have a database table with exactly one record. I try to create a new record with the same id and I get an IntegrityError as expected. If I then try to run any other query commands on that database, I get the InternalError as above.\nThis does not happen with an sqlite database.\nI have reinstalled peewee and psycopg2, to no avail.\nWhat am I missing?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1611,"Q_Id":52899051,"Users Score":1,"Answer":"While it's fine to use autorollback, it's much better to explicitly manage your transactions so that where an integrity error might occur you are catching the error and explicitly rolling back. 
For instance, if you have a user signup page and there's a unique constraint on the username, you might wrap it in a try\/except and rollback upon failure.","Q_Score":4,"Tags":"python,postgresql,peewee","A_Id":52916016,"CreationDate":"2018-10-19T19:51:00.000","Title":"Why am I getting peewee.InternalError for all subsequent commands after one failed command, using peewee ORM with posgresql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to connect MongoDB from Atlas.\nMy mongo uri is: mongodb+srv:\/\/abc:123@something.something.com\/admin?retryWrites=True\nMy pymongo version is 3.6.1\nI have installed dnspython and done import dns\nBut i still get this error:\n\ndnspython module must be installed to use mongodb+srv:\/\/ URI","AnswerCount":10,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":69143,"Q_Id":52930341,"Users Score":19,"Answer":"I solved this problem with:\n$ python -m pip install pymongo[srv]","Q_Score":70,"Tags":"python,mongodb,pymongo","A_Id":57184519,"CreationDate":"2018-10-22T13:15:00.000","Title":"pymongo - \"dnspython\" module must be installed to use mongodb+srv:\/\/ URIs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to connect MongoDB from Atlas.\nMy mongo uri is: mongodb+srv:\/\/abc:123@something.something.com\/admin?retryWrites=True\nMy pymongo version is 3.6.1\nI have installed dnspython and done import dns\nBut i still get this error:\n\ndnspython module must be installed to use mongodb+srv:\/\/ URI","AnswerCount":10,"Available Count":5,"Score":0.0798297691,"is_accepted":false,"ViewCount":69143,"Q_Id":52930341,"Users Score":4,"Answer":"you can use mongo:\/\/ instead of mongodb+srv:\/\/","Q_Score":70,"Tags":"python,mongodb,pymongo","A_Id":58155698,"CreationDate":"2018-10-22T13:15:00.000","Title":"pymongo - \"dnspython\" module must be installed to use mongodb+srv:\/\/ URIs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to connect MongoDB from Atlas.\nMy mongo uri is: mongodb+srv:\/\/abc:123@something.something.com\/admin?retryWrites=True\nMy pymongo version is 3.6.1\nI have installed dnspython and done import dns\nBut i still get this error:\n\ndnspython module must be installed to use mongodb+srv:\/\/ URI","AnswerCount":10,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":69143,"Q_Id":52930341,"Users Score":10,"Answer":"I got stuck with the same problem and tried\npip install dnspython==2.0.0\nThis is the latest version from https:\/\/pypi.org\/project\/dnspython\/\nIt worked :D","Q_Score":70,"Tags":"python,mongodb,pymongo","A_Id":63568124,"CreationDate":"2018-10-22T13:15:00.000","Title":"pymongo - \"dnspython\" module must be installed to use mongodb+srv:\/\/ URIs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to connect MongoDB from Atlas.\nMy mongo uri is: 
mongodb+srv:\/\/abc:123@something.something.com\/admin?retryWrites=True\nMy pymongo version is 3.6.1\nI have installed dnspython and done import dns\nBut i still get this error:\n\ndnspython module must be installed to use mongodb+srv:\/\/ URI","AnswerCount":10,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":69143,"Q_Id":52930341,"Users Score":0,"Answer":"May be the protocol, your URI should start with:\nmongo+srv instead of mongo+src\nIf it still not working please put a pip list with the versions of PyMongo and dnspython (and version of python that you are using)","Q_Score":70,"Tags":"python,mongodb,pymongo","A_Id":53311044,"CreationDate":"2018-10-22T13:15:00.000","Title":"pymongo - \"dnspython\" module must be installed to use mongodb+srv:\/\/ URIs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to connect MongoDB from Atlas.\nMy mongo uri is: mongodb+srv:\/\/abc:123@something.something.com\/admin?retryWrites=True\nMy pymongo version is 3.6.1\nI have installed dnspython and done import dns\nBut i still get this error:\n\ndnspython module must be installed to use mongodb+srv:\/\/ URI","AnswerCount":10,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":69143,"Q_Id":52930341,"Users Score":18,"Answer":"I would like to answer my own questions here. As I mentioned in the comment, the kernel of the jupyter notebook has to be restarted in order for the pymongo to take effect of the loaded dnspython.","Q_Score":70,"Tags":"python,mongodb,pymongo","A_Id":53644925,"CreationDate":"2018-10-22T13:15:00.000","Title":"pymongo - \"dnspython\" module must be installed to use mongodb+srv:\/\/ URIs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Im trying to use django with oracle nosql. I know django supports oracleDB but I don't know if oracle regular driver can be used by oracle nosql too. is there any driver for Nosql to support code first?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":122,"Q_Id":52948200,"Users Score":1,"Answer":"Django supports Oracle Database Server versions 12.1 and higher. Version 5.2 or higher of the cx_Oracle Python driver is required.","Q_Score":0,"Tags":"python,django,oracle,oracle-nosql","A_Id":52948367,"CreationDate":"2018-10-23T11:43:00.000","Title":"Does Django support oracleDB nosql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Please could someone help this newbie. I've installed the following on my Android phone:\nQPython v2.4.2\nMysql Connector v1.0.8 (via QPYPI)\nMariaDBServer v10.3.8\nI can connect to the server using a separate Android SQL client app, so know the server is running OK.\nWhen I open the terminal and try to import mysql.connector, I get the below error message. I'd really appreciate some help in solving this. \nThanks in advance. 
\n\n\n\nimport mysql.connector\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"\/data\/user\/0\/org.qpython.qpy\/files\/lib\/python3.6\/site-packages\/mysql\/connector\/init.py\", line 33, in \n from mysql.connector.connection import MySQLConnection\n File \"\/data\/user\/0\/org.qpython.qpy\/files\/lib\/python3.6\/site-packages\/mysql\/connector\/connection.py\", line 123\n except Exception, err:\n ^\n SyntaxError: invalid syntax","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":212,"Q_Id":52955195,"Users Score":0,"Answer":"I managed to resolve this myself by uninstalling Mysql Connector v1.0.8 (via QPYPI) and installing it via pip install.","Q_Score":0,"Tags":"mysql-python,qpython","A_Id":53021517,"CreationDate":"2018-10-23T18:01:00.000","Title":"Cannot connect to Mariadb from Qpython terminal using Mysql Connector","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using xlwings to expose python functions as user defined functions within Excel. It works perfectly if the excel file is in the same directory as the .py file which contains my UDF functions.\nI would like to save my Excel file anywhere and just update my xlwings.conf file to have the location of the python module which contains the udf definitions.\nIf I set the conf file to have\n\"UDF MODULES\",\"C:\\src\\xlwings_wrapper\\xlwings_udfs\"\nI get the following error ModuleNotFound: No module named 'C:\\src\\xlwings_wrapper\\xlwings_udfs'. How ever I have checked and the xlwings_udfs.py file is in that location.\nDoes anyone know if setting an absolute path for the UDF Modules is supported by xlwings?\nThanks\nDavid","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":580,"Q_Id":52975054,"Users Score":0,"Answer":"I made the same mistake. Should have read the manual\nFor completeness, when using either the xlwings.conf sheet or the UDF module box on the ribbon you need to adopt this setting configuration.","Q_Score":0,"Tags":"python,xlwings","A_Id":62959116,"CreationDate":"2018-10-24T17:41:00.000","Title":"Can I specify an absolute location for UDF_Modules within xlwings?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you change the file metadata on a cloud database using Apache Beam? From what I understand, Beam is used to set up dataflow pipelines for Google Dataflow. But is it possible to use Beam to change the metadata if you have the necessary changes in a CSV file without setting up and running an entire new pipeline? If it is possible, how do you do it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":294,"Q_Id":52978251,"Users Score":0,"Answer":"You could code Cloud Dataflow to handle this but I would not. A simple GCE instance would be easier to develop and run the job. 
An even better choice might be UDF (see below).\nThere are some guidelines for when Cloud Dataflow is appropriate:\n\nYour data is not tabular and you can not use SQL to do the analysis.\nLarge portions of the job are parallel -- in other words, you can process different subsets of the data on different machines.\nYour logic involves custom functions, iterations, etc...\nThe distribution of the work varies across your data subsets.\n\nSince your task involves modifying a database, I am assuming a SQL database, it would be much easier and faster to write a UDF to process and modify the database.","Q_Score":1,"Tags":"java,python,google-cloud-platform,apache-beam,database-metadata","A_Id":52980803,"CreationDate":"2018-10-24T21:37:00.000","Title":"Change file metadata using Apache Beam on a cloud database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using postgres 10.5, python 3, flask and sqlalchemy. I'm trying to create a column in a users table with the following command\nid = db.Column(db.Integer, db.Sequence('user_id_seq'), primary_key=True, autoincrement=True)\n\nHowever, when I run this code and create a user, I get the error: \nerror creating user (psycopg2.ProgrammingError) relation \"user_id_seq\" does not exist\nHow can I create the sequence 'user_id_seq' programmatically? Is there some way to check if it exists and create it if it does not using sqlalchemy","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":550,"Q_Id":52982931,"Users Score":0,"Answer":"Interestingly, this just ended up being a problem with the way my database table was previously defined. I dropped the table, ran the code and the sequence existed correctly.","Q_Score":1,"Tags":"python,postgresql,sqlalchemy,psycopg2","A_Id":52999643,"CreationDate":"2018-10-25T06:43:00.000","Title":"Creating a db.Sequence with sqalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I develop an application that will be used for running simulation and optimization over graphs (for instance Travelling salesman problem or various other problems).\nCurrently I use 2d numpy array as graph representation and always store list of lists and after every load\/dump from\/into DB I use function np.fromlist, np.tolist() functions respectively.\nIs there supported way how could I store numpy ndarray into psql? Unfortunately, np arrays are not JSON-serializable by default.\nI also thought to convert numpy array into scipy.sparse matrix, but they are not json serializable either","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":975,"Q_Id":52993954,"Users Score":2,"Answer":"json.dumps(np_array.tolist()) is the way to convert a numpy array to json. 
np_array.fromlist(json.loads(json.dumps(np_array.tolist()))) is how you get it back.","Q_Score":1,"Tags":"python,numpy,psql","A_Id":52994178,"CreationDate":"2018-10-25T16:21:00.000","Title":"How to store np arrays into psql database and django","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have table called EMP that has a column deptno with the values 10,20,30,40.\nso requirement is first create list\/diction which deptno number we need data for?\nsay we need data only for Deptno 10 & 20 and each of these deptno should go into their own respective flat files say empdep10.csv and empdept20.csv\nI was hoping to get some help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":53073235,"Users Score":0,"Answer":"You can use the format function to make queries more dynamic. For example:\n\"\"\"\nSELECT *\nFROM sdfkjshdf\nWHERE sdkfags LIKE '%{}%'\n\"\"\".format(my_variable)\nThen use the same function when exporting the result set to a file so you can call the file something based on the parameters of the where function.","Q_Score":0,"Tags":"python-3.x,jupyter-notebook","A_Id":53073273,"CreationDate":"2018-10-30T21:35:00.000","Title":"based on Oracle Query where clause create dynamic flat files using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are getting an extra column 'int64_field_0' while loading data from CSV to BigTable in GCP. Is there any way to avoid this first column. We are using the method load_table_from_file and setting option AutoDetect Schema as True. Any suggestions please. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":182,"Q_Id":53077155,"Users Score":0,"Answer":"Per the comments, using Pandas Data Frame's pd.to_csv(filename, index=false) resolved the issue.","Q_Score":0,"Tags":"python,csv","A_Id":55940953,"CreationDate":"2018-10-31T05:57:00.000","Title":"Google Cloud Platform int64_field_0","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working on something where data is produced, with a fair amount of processing, within SQL server, that I ultimately need to manipulate within Python to complete the task. It seems to me I have a couple different options:\n(1) Run the SQL code from within Python, manipulate output\n(2) Create an SP in SSMS, run the SP from within Python, manipulate output\n(3) ?\nThe second seems cleanest, but I wonder if there's a better way to achieve my objective without needing to create a stored procedure every time I need SQL data in Python. Copying the entirety of the SQL code into Python seems similarly kludgy, particularly for larger or complex queries. \nFor those with more experience working between the two: can you offer any suggestions on workflow?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":53124221,"Users Score":0,"Answer":"There is no silver bullet.\nIt really depends on the specifics of what you're doing. What amount of data are we talking? 
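A round-trip sketch for the NumPy/JSON storage answer above. Note that NumPy arrays have no `fromlist` method, so the reverse step here uses `np.array(...)` instead; everything else follows the answer's `json.dumps(arr.tolist())` idea.

```python
import json
import numpy as np

arr = np.arange(6).reshape(2, 3)

payload = json.dumps(arr.tolist())         # string suitable for a text/json column
restored = np.array(json.loads(payload))   # rebuild the 2-D array on the way back

assert (restored == arr).all()
```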
Is it even feasible to stream it all over the network, through Python, and back? How much more load can the database server handle? How complex are the manipulations you consider doing in Python? How proficient are you and your team in SQL, and in Python?\nI've seen both approaches in production, and one slight advantage that sometimes gets overlooked is that when you have all the SQL nicely formatted inside your Python program, it's automatically under some Version Control, and you can check who edited what last and is thus to blame for the latest SNAFU ;-)","Q_Score":0,"Tags":"python,sql,sql-server,python-3.x","A_Id":53124483,"CreationDate":"2018-11-02T18:43:00.000","Title":"Suggestions on workflow between SQL Server and Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I hope the title is pretty self explanatory. I set up a database and web-server on the same machine using Amazon RDS and EC2 instance. I am running a Python script in the machine's cgi folder, and am having trouble connecting to the database. The errors are on the order of: _mysql_exceptions.OperationalError: (2003, \"Can't connect to MySQL server on '127.0.0.1' (111)\") \nI have tried this with mySQLdb and _mysql without success. What I can't understand is that I am able to successfully connect to the mysql client via the command line with mysql -u username -p -h edutechfinal.cqk0lckbky4e.us-east-2.rds.amazonaws.com but not inside the script. \nThis is what I have tried in the Python script\ndb = _mysql.connect(\"127.0.0.1\",\"st4rgut25\",\"pwd\",\"st4rgut25\")\nand \ndb = MySQLdb.connect(host=\"127.0.0.1\",user=\"st4rgut25\",passwd=\"pwd\",db=\"st4rgut25\")","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":175,"Q_Id":53215921,"Users Score":1,"Answer":"The RDS instance is not running on the EC2 instance, they're separate \"machines\". 
From the EC2 instance, instead of using the loopback address 127.0.0.1, which would assume MySQL is running on the local EC2 instance, just use the host name edutechfinal.cqk0lckbky4e.us-east-2.rds.amazonaws.com as you're doing from the MySQL client.","Q_Score":0,"Tags":"python,mysql,amazon-ec2,amazon-rds","A_Id":53215979,"CreationDate":"2018-11-08T20:50:00.000","Title":"Can Connect to Database via Command Line but not in Python Script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My remote MySQL database and local MySQL database have the same table structure, and the remote and local MySQL database is utf-8charset.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":254,"Q_Id":53221361,"Users Score":0,"Answer":"You'd better merge value and sql template string and print it , make sure the sql is correct.","Q_Score":0,"Tags":"python,mysql,sql,pymysql","A_Id":53221802,"CreationDate":"2018-11-09T07:20:00.000","Title":"how do I insert some rows that I select from remote MySQL database to my local MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Airflow version 1.8\nPython version 3.6\ni am getting No module named 'MySQLdb' error\nwhen i configure the Airflow with LocalExecutor and Mysql as metadata database.\ni am not able to install the MySQLdb package due to version issue.\nanyone having idea how to solve this issue?\nThanks\nKalanidhi","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1053,"Q_Id":53225462,"Users Score":0,"Answer":"after configuring airflow.cfg like sql_alchemy_conn = mysql+pymysql:\/\/airflowuser:mysql@localhost:3306\/airflowdb its started working. Actually here i am using pymysql package instead of MySQLdb package(@joeb: - it seems MySQLdb package not supporting python 3+ version)","Q_Score":0,"Tags":"python-3.6,mysql-python,airflow","A_Id":53239536,"CreationDate":"2018-11-09T12:09:00.000","Title":"Apache Airflow + Python 3.6 + Local Executor + Mysql as a metadata database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am planning to run scripts to copy S3 files from one bucket to other bucket in same region( same account and different account - both cases are there). I am using Python scripts and running on EC2 instance. \n1) Will the performance depend on EC2 server type?\n2) What is the best way to improve performance when copying S3 files from one account to another ( and also one bucket to another in same account, same region) . Given they are in same region and different regions. File sizes are around 1 GB each with total size of 5TB\nThanks\ntom\nLet me know if you need any other information.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":290,"Q_Id":53231038,"Users Score":2,"Answer":"No, in this instance the type of EC2 will not matter because you are using the AWS network to transfer data from 1 bucket to another. 
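A minimal sketch of the fix described in the RDS answer above: point the client at the RDS endpoint instead of the loopback address. The host, user, and database names are taken from the question; the password is a placeholder.

import MySQLdb

# MySQL is not running on the EC2 instance itself, so use the RDS endpoint
# as the host instead of 127.0.0.1.
db = MySQLdb.connect(
    host="edutechfinal.cqk0lckbky4e.us-east-2.rds.amazonaws.com",
    user="st4rgut25",
    passwd="pwd",          # placeholder password
    db="st4rgut25",
)
cur = db.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
db.close()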
If you wanted to spin off parallel processing of the data (Run multiple s3 cp at the same time) then you would choose a specific instance, but in your case a T2 Small would do just fine.","Q_Score":0,"Tags":"python-3.x,amazon-s3,amazon-ec2","A_Id":53232124,"CreationDate":"2018-11-09T17:57:00.000","Title":"aws s3 to s3 copy using python script on EC2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a telegram bot with python.\nI ask users to input their phone numbers. The problem is, if they enter Persian numbers (like \u06f0\u06f6\u06f0\u06f7\u06f5\u06f0), their data doesn't set in my database, and after updating database its field is empty!\nBut if they enter English digits, it saves in database?!\n\npython 3.7 \ndatabase: MySQL\nOS: win 10","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":237,"Q_Id":53257063,"Users Score":4,"Answer":"Convert to the string and save it later in the database\nUse the number to convert the number to the number\ndatabase----> '\u06f0\u06f6\u06f0\u06f7\u06f5\u06f0'\nusing ---> get database-----> int('\u06f0\u06f6\u06f0\u06f7\u06f5\u06f0')","Q_Score":1,"Tags":"python,mysql,python-3.x","A_Id":53257581,"CreationDate":"2018-11-12T06:43:00.000","Title":"how can I save persian numbers in database (mysql)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using datetime.strptime(11\/12\/18 02:20 PM, '%m\/%d\/%y %I:%M %p') I enter the date and time into sql server using a stored procedure and I get no errors, all seems fine. But the actual value in the database when checked is 2018-11-12 00:00:00.000. This is the value coming out of strptime 2018-11-12 14:20:00. Why am I not getting the time value? I have checked both the table design and Stored Procedure to make sure that datetime is being used throughout.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":53267376,"Users Score":0,"Answer":"Ok, Figured it out, since I was doing this in 'Ignition' and using a python library, I went into the library file. It was using 'system.db.createSPProcCall' and 'registerInParam' where you must include a value and a type. Since 'system.db.type' does not have a datetime, they had used DATE, wrong! I changed it to use TimeStamp. It worked fine after that. A shout out to @Jeffrey Van Laethem for setting me on the right path.","Q_Score":0,"Tags":"python,sql-server,datetime","A_Id":53284251,"CreationDate":"2018-11-12T17:39:00.000","Title":"Python date not showing Time in sql server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using openpyxl, I'm able to read 2 numbers on a sheet, and also able to read their sum by loading the sheet with data_only=True. \nHowever, when I alter the 2 numbers using openpyxl and then try to read the answer using data_only=True, it returns no output. 
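The Persian-digit answer above is terse; one common approach (an assumption, not necessarily what the answerer meant) is to translate the Persian digits to ASCII digits before the INSERT, for example:

# Map Persian (Eastern Arabic) digits to ASCII digits before saving to MySQL.
PERSIAN_TO_ASCII = str.maketrans(
    "\u06f0\u06f1\u06f2\u06f3\u06f4\u06f5\u06f6\u06f7\u06f8\u06f9", "0123456789"
)

def normalize_phone(text):
    """Return the phone number with Persian digits replaced by 0-9."""
    return text.translate(PERSIAN_TO_ASCII)

print(normalize_phone("\u06f0\u06f6\u06f0\u06f7\u06f5\u06f0"))  # -> "060750"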
How do I do this?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1946,"Q_Id":53271690,"Users Score":1,"Answer":"You can have either the value or the formula in openpyxl. It is precisely to avoid the confusion that this kind of edit could introduce that the library works like this. To evaluate the changed formulae you'll need to load the file in an app like MS Excel or LibreOffice that can evaluate the formulae and store the results.","Q_Score":2,"Tags":"python,excel,openpyxl","A_Id":53276640,"CreationDate":"2018-11-12T23:39:00.000","Title":"openpyxl how to read formula result after editing input data on the sheet? data_only=True gives me a \"None\" result","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to get flask with virtual environment and wsgi configured to work but Apache keeps giving me this error: \n\n[Tue Nov 13 13:23:55.179153 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] app.session_interface = self._get_interface(app)\n[Tue Nov 13 13:23:55.179160 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/var\/www\/html\/project\/python\/lib\/python3.6\/site-packages\/flask_session\/init.py\",\n line 93, in _get_interface\n[Tue Nov 13 13:23:55.179163 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] config['SESSION_USE_SIGNER'], config['SESSION_PERMANENT'])\n[Tue Nov 13 13:23:55.179169 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/var\/www\/html\/project\/python\/lib\/python3.6\/site-packages\/flask_session\/sessions.py\",\n line 314, in init\n[Tue Nov 13 13:23:55.179172 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] self.cache = FileSystemCache(cache_dir, threshold=threshold, mode=mode)\n[Tue Nov 13 13:23:55.179177 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/var\/www\/html\/project\/python\/lib\/python3.6\/site-packages\/werkzeug\/contrib\/cache.py\",\n line 717, in init\n[Tue Nov 13 13:23:55.179180 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] os.makedirs(self._path)\n[Tue Nov 13 13:23:55.179185 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/lib64\/python3.6\/os.py\", line 220, in makedirs\n[Tue Nov 13 13:23:55.179188 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] mkdir(name, mode)\n[Tue Nov 13 13:23:55.179215 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] PermissionError: [Errno 13] Permission denied: '\/flask_session'\n\nI tried giving the project different permissions but nothing worked","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1157,"Q_Id":53282275,"Users Score":0,"Answer":"You should specify which type of session interface to use.\nTry to set the SESSION-TYPE to \"null\".","Q_Score":2,"Tags":"python,apache,flask,wsgi,flask-session","A_Id":59171814,"CreationDate":"2018-11-13T13:38:00.000","Title":"Apache showing Permission denied: flask_session error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Im trying to get flask with virtual environment and wsgi configured to work but Apache keeps giving me this error: \n\n[Tue Nov 13 13:23:55.179153 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] app.session_interface = self._get_interface(app)\n[Tue Nov 13 13:23:55.179160 2018] [wsgi:error] [pid 11819] 
[x.x.x.x:xxxx] File \"\/var\/www\/html\/project\/python\/lib\/python3.6\/site-packages\/flask_session\/init.py\",\n line 93, in _get_interface\n[Tue Nov 13 13:23:55.179163 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] config['SESSION_USE_SIGNER'], config['SESSION_PERMANENT'])\n[Tue Nov 13 13:23:55.179169 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/var\/www\/html\/project\/python\/lib\/python3.6\/site-packages\/flask_session\/sessions.py\",\n line 314, in init\n[Tue Nov 13 13:23:55.179172 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] self.cache = FileSystemCache(cache_dir, threshold=threshold, mode=mode)\n[Tue Nov 13 13:23:55.179177 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/var\/www\/html\/project\/python\/lib\/python3.6\/site-packages\/werkzeug\/contrib\/cache.py\",\n line 717, in init\n[Tue Nov 13 13:23:55.179180 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] os.makedirs(self._path)\n[Tue Nov 13 13:23:55.179185 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] File \"\/lib64\/python3.6\/os.py\", line 220, in makedirs\n[Tue Nov 13 13:23:55.179188 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] mkdir(name, mode)\n[Tue Nov 13 13:23:55.179215 2018] [wsgi:error] [pid 11819] [x.x.x.x:xxxx] PermissionError: [Errno 13] Permission denied: '\/flask_session'\n\nI tried giving the project different permissions but nothing worked","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1157,"Q_Id":53282275,"Users Score":0,"Answer":"The directory is failing to be created is on behalf of flask-session, which has a SESSION_FILE_DIR setting that'll let you override its default. Point that to some place with the appropriate permissions, and you'll likely be fine.","Q_Score":2,"Tags":"python,apache,flask,wsgi,flask-session","A_Id":53286630,"CreationDate":"2018-11-13T13:38:00.000","Title":"Apache showing Permission denied: flask_session error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building an app that involves storing statistics of professional baseball players, and am using MongoDB. If I am concerned about lookup time, does it make more sense to have an individual collection for each player, with the dictionary data to be organized like {statistic_name : statistic}, or each statistic to be a collection with the dictionary data organized like {player_name : statistic} ? \nThere will be significantly more players than there will be categories of statistics","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":53287467,"Users Score":0,"Answer":"In my opinion, you should make the first method : {statistic_name : statistic}.\nIt will be easier to select one of them. I think, i'm not 100% sure. \nSydney","Q_Score":0,"Tags":"python,database,mongodb","A_Id":53287682,"CreationDate":"2018-11-13T18:35:00.000","Title":"MongoDB collection design","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I download the some of pdf and stored in directory. Need to insert them into mongo database with python code so how could i do these. 
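A minimal sketch of the SESSION_FILE_DIR fix suggested in the flask_session answer above, assuming Flask-Session with the filesystem backend; the directory path is an assumption -- use any location the Apache/WSGI user can write to.

import os
from flask import Flask
from flask_session import Session

app = Flask(__name__)
app.config["SESSION_TYPE"] = "filesystem"
# Point the session cache at a writable directory instead of the default '/flask_session'.
app.config["SESSION_FILE_DIR"] = os.path.join(app.instance_path, "flask_session")
os.makedirs(app.config["SESSION_FILE_DIR"], exist_ok=True)
Session(app)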
Need to store them by making three columns (pdf_name, pdf_ganerateDate, FlagOfWork)like that.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1972,"Q_Id":53318410,"Users Score":1,"Answer":"You can use GridFS. Please check this url http:\/\/api.mongodb.com\/python\/current\/examples\/gridfs.html.\nIt will help you to store any file to mongoDb and get them. In other collection you can save file metadata.","Q_Score":0,"Tags":"python,mongodb,insert,store","A_Id":53320112,"CreationDate":"2018-11-15T11:25:00.000","Title":"How Do I store downloaded pdf files to Mongo DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm fairly new to MongoDB. I need my Python script to query new entries from my Database in real time, but the only way to do this seems to be replica sets, but my Database is not a replica set, or with a Tailable cursor, which is only for capped collections.\nFrom what i understood, a capped collection has a limit, but since i don't know how big my Database is gonna be and for when i'm gonna need to send data there, i am thinking of putting the limit to 3-4 million documents. Would this be possible?.\nHow can i do that?.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":203,"Q_Id":53342146,"Users Score":1,"Answer":"so do you want to increase the size of capped collection ? \nif yes then if you know average document size then you may define size like: \ndb.createCollection(\"sample\", { capped : true, size : 10000000, max : 5000000 } ) here 5000000 is max documents with size limit of 10000000 bytes","Q_Score":0,"Tags":"python,mongodb","A_Id":53372398,"CreationDate":"2018-11-16T16:47:00.000","Title":"MongoDB - how can i set a documents limit to my capped collection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing an application in which I am explicitly using memcache with Google Appengine's NDB library. I want something like this.\n1) Get 100 records from datastore and put them in memcache.\n2) Now whenever user wants these records I would get these records from memcache instead of datastore.\n3) I would invalidate the memcache if there is a new record in datastore and then populate the memcache with 101 records.\nI am thinking of an approach like I compare the number of records in memcache and datastore and if there is a difference, I would update the memcache.\nBut if we see documentation of NDB, we can only get count by retrieving all the records, and this is not required as datastore query is not being avoided in this way.\nAny help anyone? Or any different approach I could go with?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":53350054,"Users Score":0,"Answer":"Rather than relying on counts, you could give each record a creation timestamp, and keep the most recent timestamp in memcache. 
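A short sketch of the GridFS suggestion above for storing downloaded PDFs along with the three requested fields; the connection URI, database name, and file name are placeholders.

import datetime
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
db = client["pdf_store"]                            # placeholder database name
fs = gridfs.GridFS(db)

with open("report.pdf", "rb") as f:
    # Extra keyword arguments are stored as metadata on the file document.
    file_id = fs.put(
        f,
        filename="report.pdf",
        pdf_generateDate=datetime.datetime.utcnow(),
        FlagOfWork=False,
    )
print(file_id)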
Then to see if there are new records you just need to check if there are any timestamps newer than that, which assuming you have an index on that field is a very quick query.","Q_Score":0,"Tags":"django,python-2.7,nosql,app-engine-ndb,google-app-engine-python","A_Id":53350546,"CreationDate":"2018-11-17T09:52:00.000","Title":"Best way to count number of records from NDB library","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Trying to install with pip install mysqlclient\nI am installing MysqlClient in Python virtualenv but installation failed with error\nIt requires MS Visual C++ 10.0\nI downloaded it, which again requires .NET Framework 4.\nI again downloaded .NET Framework 4, which is giving error that you cannot install .NET Framework 4 as higher version is already installed.\nI searched all over the internet there isn't any solution to this problem is available.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":53399716,"Users Score":0,"Answer":"After struggling approximately 24 hrs. I found a solution that will fix this ridiculous problem which has no solution on the whole internet.\nWhile Installing mysqlclient use the following command, and the latest mysqlclient will be installed without any problem.\npip install --only-binary :all: mysqlclient","Q_Score":1,"Tags":"python,mysql,django","A_Id":53412861,"CreationDate":"2018-11-20T18:54:00.000","Title":"Mysqlclient requires MS Visual C++ 10.0 which again requires .Net Framework 4","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an issue which I am not sure where the root cause is:\nI use python cx_Oracle to connect to an Oracle DB.\ncursor.fetchall() returns me records in this format [(4352,)]\nI want to retrieve the '4352' so i proceed to do this: pk = cursor.fetchall()[0][0]\nHowever i get: IndexError: list index out of range\nI am not sure what I am doing wrong since when i manually create this return object on my python console as such: item = [(4352,)], I can retrieve the '4352' by calling item[0][0]\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1306,"Q_Id":53444838,"Users Score":0,"Answer":"Are you sure about the list returned by the fetchall() statement?\nIt looks like the resulting list is empty.","Q_Score":0,"Tags":"python-3.6,cx-oracle,index-error","A_Id":53451817,"CreationDate":"2018-11-23T10:22:00.000","Title":"cursor.fetchall() throws index out or range error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"(1064, \"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '')' at line 1\")\nI do not understand why \"')\" chucks an error\nI have tried substituting the values of the query in multiple different ways\nand they all route back to that.\nAny help would be greatly appreciated.\nquery in question:\nsql = \"INSERT INTO Teams VALUES (%s, %s)\"\ncursor.execute(sql, (self.varTeamID, self.varTeamName))\nalternate 
attempts chucking same error:\nsql = \"INSERT INTO Teams VALUES (\" + self.varTeamID + \", '\" + self.varTeamName + \"')\"\nPlease note that the query works and is added to the database however python thinks it is wrong crashing the program","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":161,"Q_Id":53464806,"Users Score":1,"Answer":"We found it, this statement was working fine, the error was actually in the next SQL statement just after this one.","Q_Score":0,"Tags":"python,mysql,python-3.x,kivy","A_Id":53474616,"CreationDate":"2018-11-25T05:05:00.000","Title":"SQL query is added to DB but thinks it is wrong crashing the python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"(1064, \"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '')' at line 1\")\nI do not understand why \"')\" chucks an error\nI have tried substituting the values of the query in multiple different ways\nand they all route back to that.\nAny help would be greatly appreciated.\nquery in question:\nsql = \"INSERT INTO Teams VALUES (%s, %s)\"\ncursor.execute(sql, (self.varTeamID, self.varTeamName))\nalternate attempts chucking same error:\nsql = \"INSERT INTO Teams VALUES (\" + self.varTeamID + \", '\" + self.varTeamName + \"')\"\nPlease note that the query works and is added to the database however python thinks it is wrong crashing the program","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":53464806,"Users Score":0,"Answer":"These may be the reasons why your SQL query is not working:\n\nIf the Teams table has more than two columns in its schema, then you should rephrase your statement as follows: \nINSERT INTO Teams (col1_name, col2_name) VALUES (\"%s\", \"%s\");\nwhere col1_name and col2_name are the actual names of the columns, between single or double quotation marks if they contains spaces.\nYou should enclose the values between single or double quotation marks: \nINSERT INTO Teams VALUES (\"%s\", \"%s\");\nYou may need to end the query with a semicolon.","Q_Score":0,"Tags":"python,mysql,python-3.x,kivy","A_Id":53465131,"CreationDate":"2018-11-25T05:05:00.000","Title":"SQL query is added to DB but thinks it is wrong crashing the python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to obtain an application variable (app user id) in before_execute(conn, clauseelement, multiparam, param) method. The app user id is stored in python http request object which I do not have any access to in the db event.\nIs there any way to associate a piece of sqlalchemy external data somewhere to fetch it in before_execute event later? \nAppreciate your time and help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":53551209,"Users Score":0,"Answer":"Answering my own question here with a possible solution :)\n\nFrom http request copied the piece of data to session object\nSince the session binding was at engine level, copied the data from session to connection object in SessionEvent.after_begin(session, transaction, connection). 
[Had it been Connection level binding, we could have directly set the objects from session object to connection object.]\n\nNow the data is available in connection object and in before_execute() too.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":53667953,"CreationDate":"2018-11-30T04:23:00.000","Title":"Sqlalchemy before_execute event - how to pass some external variable, say app user id?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I need to use a table of a database on a university server for test data for a project. \nI'm pretty new to databases and MySQL. My professor has send me username and password for the server. And an extra username&password for the MySQL server. \nIt took me a while but in the end I was able to connect to the server over ssh and then managed to navigate to $cd \/ $cd usr\/bin\/MySQL then logged in and found the data\/sentences in a table in one of the databases.\nNow there is the question: How do I get the data on my computer? I thought about a python script. But I cannot write a script what is logging in on a different server and then navigates to the MySQL folder to log in there to copy somehow the sentences in the table to a txt file I can use?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":482,"Q_Id":53558775,"Users Score":2,"Answer":"You might not need to ssh into the remote server; depending how their server and database are set up you may be able to connect a mysql client on your local machine to the database server. While there are security advantages to limiting where connections are permitted from, accessing a database at localhost is actually just a special case.\nYou might not even need a python script. You can export directly from mysql to a text file, or your client may have a feature to copy data directly from the remote server into a local database. \nI would guess that something like this would work for you, although getting the output into the format you want can be tricky:\nmysql -h \"host address\" -u \"username\" -p -e \"SELECT * FROM `table`\" > localFile.txt\nIf you wanted to do it with a python script running on the server as you're describing, you'll want to use the ssh credentials to do FTP over SSH to get the files back and forth. Your FTP client will certainly support that.","Q_Score":0,"Tags":"python,mysql,database,ssh","A_Id":53558994,"CreationDate":"2018-11-30T13:45:00.000","Title":"Copying tables from database from MySQL Server on different server to my computer","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am executing the query using pymysql in python. \n\nselect (sum(acc_Value)) from accInfo where acc_Name = 'ABC'\n\nThe purpose of the query is to get the sum of all the values in acc_Value column for all the rows matchin acc_Name = 'ABC'.\nThe output i am getting when using cur.fetchone() is \n\n(Decimal('256830696'),)\n\nNow how to get that value \"256830696\" alone in python. 
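A rough sketch of the session-to-connection hand-off described in the SQLAlchemy answer above, assuming an engine-level binding; the engine URL and the app_user_id value are placeholders.

from sqlalchemy import create_engine, event
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")          # placeholder engine
Session = sessionmaker(bind=engine)

@event.listens_for(Session, "after_begin")
def copy_user_to_connection(session, transaction, connection):
    # Copy request-scoped data from the session onto the connection
    # so engine-level events can see it.
    connection.info["app_user_id"] = session.info.get("app_user_id")

@event.listens_for(engine, "before_execute")
def log_app_user(conn, clauseelement, multiparams, params):
    print("executing for app user:", conn.info.get("app_user_id"))

# In the request handler (hypothetical): stash the id on the session first.
session = Session()
session.info["app_user_id"] = 42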
\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":283,"Q_Id":53559241,"Users Score":-1,"Answer":"It's a tuple, just take the 0th index","Q_Score":0,"Tags":"python,pymysql","A_Id":53559488,"CreationDate":"2018-11-30T14:16:00.000","Title":"pymysql - Get value from a query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just learnt about GCP Composer and am trying to move the DAGs from my local airflow instance to cloud and had a couple of questions about the transition. \n\nIn local instance I used HiveOperator to read data from hive and create tables and write it back into hive. If I had to do this in GCP how would this be possible? Would I have to upload my data to Google Bucket and does the HiveOperator work in GCP?\nI have a DAG which uses sensor to check if another DAG is complete, is that possible on Composer?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":238,"Q_Id":53560978,"Users Score":0,"Answer":"Composer have connection store. See menu Admin--> Connection. Check connection type available.\nSensors are available.","Q_Score":1,"Tags":"python,google-cloud-platform,airflow,google-cloud-composer","A_Id":53672600,"CreationDate":"2018-11-30T16:03:00.000","Title":"Creating Airflow DAGs on GCP Composer","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I just learnt about GCP Composer and am trying to move the DAGs from my local airflow instance to cloud and had a couple of questions about the transition. \n\nIn local instance I used HiveOperator to read data from hive and create tables and write it back into hive. If I had to do this in GCP how would this be possible? Would I have to upload my data to Google Bucket and does the HiveOperator work in GCP?\nI have a DAG which uses sensor to check if another DAG is complete, is that possible on Composer?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":238,"Q_Id":53560978,"Users Score":0,"Answer":"Yes, Cloud Composer is just managed Apache Airflow so you can do that.\nMake sure that you use the same version of Airflow that you used locally. Cloud Composer supports Airflow 1.9.0 and 1.10.0 currently.","Q_Score":1,"Tags":"python,google-cloud-platform,airflow,google-cloud-composer","A_Id":53564941,"CreationDate":"2018-11-30T16:03:00.000","Title":"Creating Airflow DAGs on GCP Composer","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Since Postgres supports JSON fields, is that more preferential than say saving each config item in a row in some table the way WP still does?\nFull disclosure: I'm asking as a Django user new to postgres.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":53566994,"Users Score":2,"Answer":"It depends. Here are some things to think about:\n\nDoes the data structure map well to JSON? Try to visualize it. Is the data more relational (think of links to other data), or a hierarchy? 
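Expanding the tuple-indexing answer just above into a runnable sketch (connection details are placeholders): fetchone() returns a one-element tuple, so take index 0 and convert the Decimal if a plain number is needed.

import pymysql

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", db="mydb")   # placeholder credentials
with conn.cursor() as cur:
    cur.execute("SELECT SUM(acc_Value) FROM accInfo WHERE acc_Name = %s", ("ABC",))
    row = cur.fetchone()                 # e.g. (Decimal('256830696'),)
    total = int(row[0]) if row[0] is not None else 0
print(total)                             # 256830696
conn.close()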
If the data you are storing needs to mix a lot with relationally stored data, it could become a pain.\nDo you do complicated searches on these fields? There is powerful functionality for querying jsonb fields, but there will likely be complications and learning involved. I like the power of querying jsonb and manipulating it in my queries, but it's never as easy or natural to me as regular SQL.\nWill your ORM and any other tools like query builders play well with it? It's an especially important question if you are using older technology. If you use an ORM, it may become annoying to have to work with plain SQL just to do jsonb queries.\nLazy loading is another consideration. If you have a lazy load strategy, it may or may not translate well into jsonb fields. If your lazy loading strategy is complex, it may require some work to get it to work well with jsonb fields.\nA good strategy for serialization and deserialization will be necessary. If you don't have a good implementation of that, it will get clumsy.\n\nBy the way...I am a big advocate for using jsonb. In my opinion, it has been a good game changer. So don't think that I am discouraging it. That said, if you just try to make everything jsonb, you will probably soon regret it. You just need to sort of evaluate what works best for each case. In general, I think it does work well for config, usually, though I'd have to know about the particular case.I hope this helps a bit!","Q_Score":0,"Tags":"python,django,postgresql","A_Id":53567055,"CreationDate":"2018-12-01T01:27:00.000","Title":"Saving data in rows in a Config table vs. saving as keys in a Postgres JSONField--which is better?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The only way I can find of adding new data to a TinyDB table is with the table.insert() method. However this appends the entry to the end of the table, but I would like to maintain the sequence of entries and sometimes I need to insert into an arbitrary index in the middle of the table. Is there no way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":260,"Q_Id":53575563,"Users Score":0,"Answer":"There is no way to do what you are asking. Normally, the default index created tracks insertion order. When you add data, it will go at the end. If you need to maintain a certain order, you could create a new property the handle that case, and retrieve with a sort on that property. \nIf you truly want to insert in a specific id, you would need to add some logic to cascade the documents down. The logic would flow as:\n\nInsert a new record which is equal to the last record.\nThen, go backwards and cascade the records to the new open location\nStop when you get to the location you need, and update the record with what you want to insert by using the ID.\n\nThe performance would drag since you are having to shift the records down. There are other ways to maintain the list - it would be similar to inserting a record in the middle of an array. Similar methods would ally here. 
Good Luck!","Q_Score":0,"Tags":"python,tinydb","A_Id":53576107,"CreationDate":"2018-12-01T22:23:00.000","Title":"TinyDB insert in between table rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pretty simple question with no literature to expound on it here. If you start a connection, then run a job continually for 12 hours inserting data to the .db, if you do not commit and the python script terminates, can you go back in, connect to the database and commit and see the changes?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":53605132,"Users Score":0,"Answer":"This may be a silly question but way are you don't committing records as you process? Isn't it more efficient to commit say 100,000 records now than try to commit 1,000,000 at once at the end of the process?","Q_Score":0,"Tags":"python,sqlite","A_Id":53636522,"CreationDate":"2018-12-04T03:07:00.000","Title":"SQLite3 Python - Comit() in a later session","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The following are query results are from two different tables in the same database, both id columns are integer. How should I remove the number 16789 from the first table id to match the second id?\nnews=> select id from log limit 10;\n id\n1678923\n 1678924\n 1678925\n 1678926\n 1678927\n 1678928\n 1678929\n 1678930\n 1678931\n 1678932\n(10 rows)\nnews=> select id from articles limit 10;\nid\n23\n 24\n 25\n 26\n 27\n 28\n 30\n 29\n(8 rows)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":53615450,"Users Score":0,"Answer":"In the database:\nSELECT\n TRIM (\n LEADING '16789'\n FROM 1678929\n CAST (news AS TEXT)\n ); -- 29","Q_Score":0,"Tags":"python,database,psql","A_Id":53615582,"CreationDate":"2018-12-04T14:44:00.000","Title":"How do I remove part of the number from psql query return?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have just created a virtual environment on my machine (I am running on ubuntu 18.04 LTS). I have the python version of 3.6.7 and now I want to install mysqlclient into my virtual environment.\nAfter I do pip install mysqlclient it didn't work, instead it gave me errors saying; \n\nCommand \"python.py egg_info\" failed with error code 1 in \/tmp\/pip-install-zd21vfb3\/mysqlclient\/', and that the msql_config file is not found. \n\nMy setup tools are all up to date.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":13288,"Q_Id":53641541,"Users Score":23,"Answer":"mysqlclient has a dependency on the mysql client & dev packages being installed. In order to fix this on ubuntu, you have to use apt-get to install a couple of mysql packages.\nIn your case, it looks like the missing mysql_config might be missing on your system. 
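A small sketch of the explicit-order idea from the TinyDB answer above: store a position field yourself and sort on it when reading. The field and document contents are assumptions.

from tinydb import TinyDB

db = TinyDB("db.json")
table = db.table("items")

# Keep an explicit 'position' field instead of relying on insertion order.
table.insert({"name": "a", "position": 1})
table.insert({"name": "c", "position": 3})
table.insert({"name": "b", "position": 2})   # logically sits "in the middle"

ordered = sorted(table.all(), key=lambda doc: doc["position"])
print([doc["name"] for doc in ordered])      # ['a', 'b', 'c']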
You can fix that by installing libmysqlclient-dev on ubuntu bionic.","Q_Score":8,"Tags":"python,mysql,django","A_Id":53641741,"CreationDate":"2018-12-05T22:12:00.000","Title":"why 'pip install mysqlclient' not working in ubuntu 18.04 LTS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pycharm for the current project. \nWhen using the sqlite console under pycharm it shows that it has the version 3.25.1, which means that the sqlite upsert works perfectly. But on python, when I do import sqlite3 it imports the version 3.20.1 of it. \nI don't know why that difference in versions and I want to import the latest version of sqlite in python to be able to work with upserts.\nEdit: I'm using Fedora 27 and python 3.7.0","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":493,"Q_Id":53671129,"Users Score":0,"Answer":"It seems that sqlite 3.24+ requires Fedora 29+.\nI just upgraded my fedora to version 29 and I got sqlite 3.24.0","Q_Score":0,"Tags":"python,sqlite,pycharm,upsert","A_Id":54520508,"CreationDate":"2018-12-07T14:03:00.000","Title":"upgrade sqlite to 3.24+ pycharm","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It doesn't have to be exactly a trigger inside the database. I just want to know how I should design this, so that when changes are made inside MySQL or SQL server, some script could be triggered.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":146,"Q_Id":53678628,"Users Score":0,"Answer":"One Way would be to keep a counter on the last updated row in the database, and then you need to keep polling(Checking) the database through python for new records in short intervals.\nIf the value in the counter is increased then you could use the subprocess module to call another Python script.","Q_Score":0,"Tags":"python,mysql,sql-server","A_Id":53678958,"CreationDate":"2018-12-08T01:12:00.000","Title":"Is it possible to trigger a script or program if any data is updated in a database, like MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created some models and when I run python manage.py db migrate command it creates migrations file, so that is fine.\npython manage.py db upgrade command also creates table in Database.\nIf I again run the python manage.py db migrate command then it is creating migrations file for those models that I have upgraded recently.\nCan you please help me to resolve it.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":221,"Q_Id":53737789,"Users Score":0,"Answer":"I think the problem is to manage.py. If you did it as described on flask-migration site and stored all your models in this file - flask-migration just get these models and generates migrations and will do it always. You wrapped the standard command in your file and this is the problem.\nIf you want to fix it - store models in another directory (or another file), add them to an app and use command flask db migrate. 
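A rough sketch of the polling approach from the trigger answer above, assuming pymysql, a hypothetical watched_table with an auto-increment id, and a hypothetical handle_new_rows.py script to launch.

import subprocess
import time
import pymysql

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", db="mydb")   # placeholder credentials
last_seen = 0

while True:
    with conn.cursor() as cur:
        cur.execute("SELECT MAX(id) FROM watched_table")   # hypothetical table
        (max_id,) = cur.fetchone()
    if max_id is not None and max_id > last_seen:
        last_seen = max_id
        # Launch another script when new rows appear.
        subprocess.run(["python", "handle_new_rows.py", str(max_id)])
    time.sleep(5)   # poll interval in seconds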
In this case, flask-migration will generate migration for models only at first time, for others, it will detect changes and generate migrations only for changes.\nBut be careful, flask-migration don't see all changes. From site:\n\nThe migration script needs to be reviewed and edited, as Alembic currently does not detect every change you make to your models. In particular, Alembic is currently unable to detect table name changes, column name changes, or anonymously named constraints. A detailed summary of limitations can be found in the Alembic autogenerate documentation.","Q_Score":1,"Tags":"python,flask,flask-sqlalchemy,flask-migrate","A_Id":53746095,"CreationDate":"2018-12-12T07:12:00.000","Title":"migrations are getting created repeatedly","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm attempting to create an in-memory sqlite3 cache to store oauth tokens, but am running into issues regarding multithreading. After running several tests, I've noticed the behavior differs substantially from non-in-memory databases and multithreading. \nNotably, reader threads immediately fail with \"table is locked\" if a writer thread has written without committing. This is true with multiple threads even with isolation_level=None.\nIt's not simply that readers are blocked until the transaction is complete, but rather they fail immediately, regardless of timeout or PRAGMA busy_timeout = 10000.\nThe only way I can get it working is to set isolation_level=None and to do PRAGMA read_uncommitted=TRUE. I would rather not do this, however.\nIs it possible to let the reader threads simply wait the lock instead of immediately failing?\n\n\nimport sqlite3\nimport threading\n\ndef get_conn(name, is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=None):\n uri = 'file:%s' % name\n if is_memory:\n uri = uri + '?mode=memory&cache=shared'\n conn = sqlite3.connect(uri, uri=True, timeout=timeout, isolation_level=isolation_level)\n if pragmas is None:\n pragmas = []\n if not isinstance(pragmas, list):\n pragmas = [pragmas]\n for pragma in pragmas:\n conn.execute(pragma)\n return conn\n\n\ndef work1(name, is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=None, loops=1):\n conn = get_conn(name, is_memory=is_memory, timeout=timeout, isolation_level=isolation_level, pragmas=pragmas)\n for i in range(loops):\n conn.execute('INSERT INTO foo VALUES (1)')\n\n\ndef work2(name, is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=None, loops=1):\n conn = get_conn(name, is_memory=is_memory, timeout=timeout, isolation_level=isolation_level, pragmas=pragmas)\n for i in range(loops):\n len(conn.execute('SELECT * FROM foo').fetchall())\n\n\ndef main(name, is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=None, loops=1, num_threads=16):\n conn = get_conn(name, is_memory=is_memory, timeout=timeout, isolation_level=isolation_level, pragmas=pragmas)\n try:\n conn.execute('CREATE TABLE foo(a int)')\n except sqlite3.OperationalError:\n conn.execute('DROP TABLE foo')\n conn.execute('CREATE TABLE foo(a int)')\n threads = []\n for i in range(num_threads):\n threads.append(threading.Thread(target=work1, args=(name, is_memory, timeout, isolation_level, pragmas, loops)))\n threads.append(threading.Thread(target=work2, args=(name, is_memory, timeout, isolation_level, pragmas, loops)))\n for thread in 
threads:\n thread.start()\n for thread in threads:\n thread.join()\n\n# In-Memory Tests\n# All of these fail immediately with table is locked. There is no delay; timeout\/busy_timeout has no effect.\nmain('a', is_memory=True, timeout=5, isolation_level='IMMEDIATE', pragmas=None)\nmain('b', is_memory=True, timeout=5, isolation_level='DEFERRED', pragmas=None)\nmain('c', is_memory=True, timeout=5, isolation_level='EXCLUSIVE', pragmas=None)\nmain('d', is_memory=True, timeout=5, isolation_level=None, pragmas=None)\nmain('e', is_memory=True, timeout=5, isolation_level='IMMEDIATE', pragmas=['PRAGMA busy_timeout = 10000'])\nmain('f', is_memory=True, timeout=5, isolation_level='DEFERRED', pragmas=['PRAGMA busy_timeout = 10000'])\nmain('g', is_memory=True, timeout=5, isolation_level='EXCLUSIVE', pragmas=['PRAGMA busy_timeout = 10000'])\nmain('h', is_memory=True, timeout=5, isolation_level=None, pragmas=['PRAGMA busy_timeout = 10000'])\nmain('i', is_memory=True, timeout=5, isolation_level='IMMEDIATE', pragmas=['PRAGMA read_uncommitted=TRUE'])\nmain('j', is_memory=True, timeout=5, isolation_level='DEFERRED', pragmas=['PRAGMA read_uncommitted=TRUE'])\nmain('k', is_memory=True, timeout=5, isolation_level='EXCLUSIVE', pragmas=['PRAGMA read_uncommitted=TRUE'])\n# This is the only successful operation, when isolation_level = None and PRAGMA read_uncommitted=TRUE\nmain('l', is_memory=True, timeout=5, isolation_level=None, pragmas=['PRAGMA read_uncommitted=TRUE'])\n# These start to take a really long time\nmain('m', is_memory=True, timeout=5, isolation_level=None, pragmas=['PRAGMA read_uncommitted=TRUE'], loops=100)\nmain('n', is_memory=True, timeout=5, isolation_level=None, pragmas=['PRAGMA read_uncommitted=TRUE'], loops=100, num_threads=128)\n\n# None of the on disk DB's ever fail:\nmain('o', is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=None)\nmain('p', is_memory=False, timeout=5, isolation_level='DEFERRED', pragmas=None)\nmain('q', is_memory=False, timeout=5, isolation_level='EXCLUSIVE', pragmas=None)\nmain('r', is_memory=False, timeout=5, isolation_level=None, pragmas=None)\nmain('s', is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=['PRAGMA busy_timeout = 10000'])\nmain('t', is_memory=False, timeout=5, isolation_level='DEFERRED', pragmas=['PRAGMA busy_timeout = 10000'])\nmain('u', is_memory=False, timeout=5, isolation_level='EXCLUSIVE', pragmas=['PRAGMA busy_timeout = 10000'])\nmain('v', is_memory=False, timeout=5, isolation_level=None, pragmas=['PRAGMA busy_timeout = 10000'])\nmain('w', is_memory=False, timeout=5, isolation_level='IMMEDIATE', pragmas=['PRAGMA read_uncommitted=TRUE'])\nmain('x', is_memory=False, timeout=5, isolation_level='DEFERRED', pragmas=['PRAGMA read_uncommitted=TRUE'])\nmain('y', is_memory=False, timeout=5, isolation_level='EXCLUSIVE', pragmas=['PRAGMA read_uncommitted=TRUE'])\nmain('z', is_memory=False, timeout=5, isolation_level=None, pragmas=['PRAGMA read_uncommitted=TRUE'])\n# These actually fail with database is locked\nmain('aa', is_memory=False, timeout=5, isolation_level=None, pragmas=['PRAGMA read_uncommitted=TRUE'], loops=100)\nmain('ab', is_memory=False, timeout=5, isolation_level=None, pragmas=['PRAGMA read_uncommitted=TRUE'], loops=100, num_threads=128)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":53769548,"Users Score":0,"Answer":"I do not believe that the SQLite3 interface is meant to be re-entrant. 
I think that each thread would have to obtain a mutex, perform the query, and then release the mutex. Attempt to perform only one database operation at a time. (Python's API-layer would not be expected to do this, as there would ordinarily be no need for any such thing.)","Q_Score":1,"Tags":"python,python-3.x,multithreading,sqlite","A_Id":53782653,"CreationDate":"2018-12-13T20:24:00.000","Title":"Sqlite3 In Memory Multithreaded Issues - Blocked threads immediately fail","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new at StackOverflow and I have researched everywhere for reasons why to pick Google BigQuery vs Jupyter Notebooks for creating new variables and preparing data for Machine Learning projects. Until now, I have lots of experience doing Data Science projects with Jupyter Notebooks (love python!) but now we are working with GCP at the office and no one has been able to answer why (or when) is better to choose one over the other one. \nDatalab does a great job with Jupyter Notebooks, and the data we have right now is stored part at GCS and part in Cloud SQL (I only retrieve data from there and start playing with variables). \nThanks a lot !","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":165,"Q_Id":53816842,"Users Score":2,"Answer":"BigQuery is a datalake, a large database. For your problems is a data source like Cloud SQL\/GCS. You need to store rows in BQ and use in your tools to write the charts\/algorithms. \nBigQuery cannot be compared to Jupyter Notebook, because is just two different products.","Q_Score":1,"Tags":"python,google-bigquery,jupyter-notebook","A_Id":53817851,"CreationDate":"2018-12-17T14:03:00.000","Title":"Difference between BigQuery and Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I tried to insert right double quotes (\u201d) using python MySQLdb it produces UnicodeEncodeError: 'latin-1' codec can't encode character u'\\u201d' in position 0: ordinal not in range(256). python MySQLdb uses latin-1 codec by default and from the index.xml file in the \/usr\/share\/mysql\/charsets\/, it is described as cp1252 West European. Hence I think that latin1 will cover cp1252 characters also. But latin1 won't cover cp1252 characters, If they does I will not get the Error.\nThe right double quotes are lies in cp1252 charset but not in ISO 8859-1( or latin1) charset. \nThere is no cp1252.xml file in \/usr\/share\/mysql\/charsets\/. Why python MySQLdb is missing cp1252 charset?\nOr whether the latin1 is same as cp1252 as they described in index.xml.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1135,"Q_Id":53828618,"Users Score":0,"Answer":"You really need cp1252, not utf-8?\nI strongly recommend using utf-8.\nWhat you need is:\n\nPass charset=\"utf8mb4\" option to MySQLdb.connect().\nConfigure database to use utf-8.\n\nYou can create database with utf-8 by CREATE DATABASE DEFAULT CHARACTER SET utf8mb4.\nIf you already have database, you can change default character set by ALTER DATABASE CHARACTER SET utf8mb4. 
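A minimal sketch of the mutex idea from the answer above: share one connection to the in-memory database and serialize every operation with a threading.Lock (check_same_thread=False is needed to share the connection across threads).

import sqlite3
import threading

db_lock = threading.Lock()
conn = sqlite3.connect("file:cache?mode=memory&cache=shared",
                       uri=True, check_same_thread=False)
conn.execute("CREATE TABLE IF NOT EXISTS foo (a int)")

def insert_row(value):
    # Only one thread touches the shared in-memory database at a time.
    with db_lock:
        conn.execute("INSERT INTO foo VALUES (?)", (value,))
        conn.commit()

def count_rows():
    with db_lock:
        return len(conn.execute("SELECT * FROM foo").fetchall())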
But you need to change all character set for existing tables in the database too.","Q_Score":0,"Tags":"python-2.7,character-encoding,mysql-python,cp1252","A_Id":53862166,"CreationDate":"2018-12-18T08:00:00.000","Title":"How to insert cp1252 characters using MySQLdb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that repopulates a large database and would generate id values from other tables when needed. \nExample would be recording order information when given customer names only. I would check to see if the customer exists in a CUSTOMER table. If so, SELECT query to get his ID and insert the new record. Else I would create a new CUSTOMER entry and get the Last_Insert_Id(). \nSince these values duplicate a lot and I don't always need to generate a new ID -- Would it be better for me to store the ID => CUSTOMER relationship as a dictionary that gets checked before reaching the database or should I make the script constantly requery the database? I'm thinking the first approach is the best approach since it reduces load on the database, but I'm concerned for how large the ID Dictionary would get and the impacts of that. \nThe script is running on the same box as the database, so network delays are negligible.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":53842401,"Users Score":0,"Answer":"\"Is it more efficient\"?\nWell, a dictionary is storing the values in a hash table. This should be quite efficient for looking up a value.\nThe major downside is maintaining the dictionary. If you know the database is not going to be updated, then you can load it once and the in-application memory operations are probably going to be faster than anything you can do with a database.\nHowever, if the data is changing, then you have a real challenge. How do you keep the memory version aligned with the database version? This can be very tricky.\nMy advice would be to keep the work in the database, using indexes for the dictionary key. This should be fast enough for your application. If you need to eke out further speed, then using a dictionary is one possibility -- but no doubt, one possibility out of many -- for improving the application performance.","Q_Score":0,"Tags":"python,mysql,sql,database,python-2.7","A_Id":53844012,"CreationDate":"2018-12-18T23:06:00.000","Title":"Is it more efficient to store id values in dictionary or re-query database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem in installing the mysql-python package for flask.\nI had tried using the command:\npip install mysql-python\nBut i did'nt workout for me\nFailed building wheel for mysql-python\n Running setup.py clean for mysql-python\nFailed to build mysql-python\nInstalling collected packages: mysql-python\n Running setup.py install for mysql-python ... 
error\n Complete output from command c:\\users\\yuvan\\appdata\\local\\programs\\python\\python37-32\\python.exe -u -c \"import setuptools, tokenize;file='C:\\Users\\Yuvan\\AppData\\Local\\Temp\\\n\\pip-install-c23xj5e_\\mysql-python\\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, file, 'exec'))\" install -\n-record C:\\Users\\Yuvan\\AppData\\Local\\Temp\\pip-record-7h3c9v2a\\install-record.txt --single-version-externally-managed --compile:\n running install\n running build\n running build_py\n creating build\n creating build\\lib.win32-3.7\n copying _mysql_exceptions.py -> build\\lib.win32-3.7\n creating build\\lib.win32-3.7\\MySQLdb\n copying MySQLdb__init__.py -> build\\lib.win32-3.7\\MySQLdb\n copying MySQLdb\\converters.py -> build\\lib.win32-3.7\\MySQLdb\n copying MySQLdb\\connections.py -> build\\lib.win32-3.7\\MySQLdb\n copying MySQLdb\\cursors.py -> build\\lib.win32-3.7\\MySQLdb\n copying MySQLdb\\release.py -> build\\lib.win32-3.7\\MySQLdb\n copying MySQLdb\\times.py -> build\\lib.win32-3.7\\MySQLdb\n creating build\\lib.win32-3.7\\MySQLdb\\constants\nGit\nGitHub\nInitialize E:\\chakra with a Git repository\nCreate repository\nI expect that this package would be installed successfully","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1288,"Q_Id":53893304,"Users Score":1,"Answer":"You can install mysqlclient for python using the following command :\n\npip install mysqlclient\n\nAnd If you are looking for particular version :\n\npip install mysqlclient== (e.g. version -> 1.3.6)","Q_Score":1,"Tags":"mysql,python-3.x,flask","A_Id":56610692,"CreationDate":"2018-12-22T05:30:00.000","Title":"How to fix the error of Failed building wheel for mysqlclient in flask","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As a legacy from the previous version of our system, I have around 1 TB of old video files on AWS S3 bucket. Now we decided to migrate to AWS Media Services and all those files should be moved to MediaStore for the access unification.\nQ: Is there any way to move the data programmatically from S3 to MediaStore directly?\nAfter reading AWS API docs for these services, the best solution I've found is to run a custom Python script on an intermediate EC2 instance and pass the data through it.\nAlso, I have an assumption, based on pricing, data organization and some pieces in docs, that MediaStore built on top of S3. That's why I hope to find a more native way to move the data between them.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":376,"Q_Id":53897584,"Users Score":0,"Answer":"I've clarified this with AWS support. There is no way to transfer files directly, although, it's a popular question and, probably, will be implemented.\nNow I'm doing this with an intermediate EC2 server, a speed of internal AWS connections between this, S3 and MediaStore is quite good. 
So I would recommend this way, at least, for now.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,amazon-ec2,aws-mediastore","A_Id":53904676,"CreationDate":"2018-12-22T16:52:00.000","Title":"Move objects from AWS S3 to MediaStore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have created a machine learning algorithm in Python that is served through a REST API and would like to implement it on Google Cloud \/ Amazon EC2 to make real-time predictions. Before I do this, I would like to create a 'log' of every request\/prediction that comes in\/out of the model - this seems like good practice to me and may also help with creating unique prediction identifiers. Just a simple 1 or 0 stored in a database with a datetime stamp and unique ID.\nHow should I send this data to the database without impacting the run time of the model? An INSERT INTO statement in the API? A seperate API altogether?\nThank you very much for your help!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":344,"Q_Id":53948225,"Users Score":1,"Answer":"It depends on the latency of the results. If you want it to persist data in the db instantly then an API has to be made instantly when you have received the request. As these will be log files by nature my recommendation would be to store locally and insert the logs once in a day to reduce the network congestion over the time. If your existing API is already connected to db and then I do not see a point of creating a new API altogether for a simple post call","Q_Score":0,"Tags":"python,database,machine-learning,server,google-cloud-platform","A_Id":53948363,"CreationDate":"2018-12-27T16:43:00.000","Title":"How to connect machine learning algorithm to database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am making a browser-based multiplayer game. \nIn the game, registered players can \"attack\" other players. Once a player has \"attacked\" a player he must wait 30 minutes before his ability to 'attack' again is reset.\nI have an idea of implementing this but am not sure if my approach is bad practice:\nI thought about adding a \"TimeToReset\" field in the database to each registered user and fill that field with a timer object. Once a player 'attacks' his 'TimeToReset' field starts counting down from 30 minutes to 0.\nand then have the application continuously query all the users in the database with a while True loop, looking for users that their \"TimeToReset\" reached 0. And then run code to reset their ability to 'attack' again.\nI am not sure how efficient my approach or if it is even possible is and would love some input. So to summarize:\n1)Is it ok to store a timer\/stopwatch object(which continuously changes) in a database?\n2)Is it efficient to continuously run a while true loop to query the database?\nOr if is there a better approach to implement this feature I would love to hear it.\nThank you","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":102,"Q_Id":53958039,"Users Score":3,"Answer":"I'm not sure what the benefit of doing this would be. 
Surely all you need to do is to store the time of the last attack for each user, and just disallow attacks if that is less than 30 minutes before the current time.","Q_Score":1,"Tags":"python,django","A_Id":53958195,"CreationDate":"2018-12-28T11:41:00.000","Title":"Is it bad practice to store a timer object in a database and also have the program continuously query in Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using the peewee ORM to manage a few Postgres databases. I've recently had a problem where the primary keys are not being automatically added when save() or execute() is called like it should be.\nHere's the code that's being called:\nMacro.insert(name=name, display_text=text).on_conflict(conflict_target=(Macro.name,), preserve=(Macro.display_text,), update={Macro.name: name}).execute()\n\nHere's the error:\nCommand raised an exception: IntegrityError: null value in column \"id\" violates non-null constraint; \nDETAIL: Failing row contains (null, nametexthere, displaytexthere)\nThe macro class has an id (AutoField [set to be primary key]), name (CharField), and display_text (CharField). I've tried using the built in PrimaryKeyField and an IntegerField set to primary key to no change.\nBefore, I was using Heroku with no issue. I've since migrated my apps to my Raspberry Pi and that's when this issue popped up.\nThis also isn't the only case where I've had this problem. I have another database with the same AutoField primary key that seems to have broken from the transition from Heroku to Pi. That one uses the save() method rather than insert()\/execute(), but the failing row error still shows up.\nShould also mention that other non-insert queries work fine. I can still select without issue.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":344,"Q_Id":53992747,"Users Score":1,"Answer":"The problem didn't have anything to do with Peewee, it had to do with the dump. Heroku does not dump sequences for you automatically, so I had to add them all again manually. Once those was added the connections worked fine.","Q_Score":0,"Tags":"python,postgresql,peewee","A_Id":53996728,"CreationDate":"2019-01-01T02:50:00.000","Title":"Peewee primary keys not showing up (Failing row contains null)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to generate an avro file from mysql table. I'm currently using pandavro. But pandavro not yet supports datetime datatype. How can I solve the problem? 
Not using pandavro is fine.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":210,"Q_Id":54003472,"Users Score":1,"Answer":"(1) Convert the datetimes to strings via strftime.\n(2) The strings should then write to the avro.\n(3) Convert datestrings back to datetime when reading the avro.\nSomething else to consider is using a parquet file, which supports datetime.","Q_Score":1,"Tags":"python,mysql,avro","A_Id":57539779,"CreationDate":"2019-01-02T08:50:00.000","Title":"Python : Generate avro schema using pandavro invalid datatype64[ns]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are using H2O (latest version 3.22.1.1) to read parquet data from s3. We use python to talk to H2O. This is single H2O instance - not cluster.\nSometimes we get this error:\nServer error water.exceptions.H2OIllegalArgumentException:\n Error: Cannot determine file type. for s3a:\/\/BUCKET_NAME\/5c2e3fdc0c9c1800019c73f9\/part-00001-c33635a2-76dc-4e49-948b-465726b7e3d9-c000.snappy.parquet\nFile exists and is valid parquet file. Subsequent imports work fine.\nThis is our python code to import file into H2O\nh2o.import_file(path='s3a:\/\/BUCKET_NAME\/5c2e3fdc0c9c1800019c73f9\/part-00001-c33635a2-76dc-4e49-948b-465726b7e3d9-c000.snappy.parquet')\nIs there any way to force h2o to use parquet type?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":755,"Q_Id":54039594,"Users Score":0,"Answer":"H20 Manual says to do it like df = h2o.import_file(\"\/pathToFile\/fileName\") When \n you need to load data from the machine(s) running H2O to the machine running Python. \nSo if your server is not running H20 probably that's why it is showing error.","Q_Score":0,"Tags":"python,amazon-s3,parquet,h2o","A_Id":54040066,"CreationDate":"2019-01-04T13:10:00.000","Title":"h2o and parquet - can not determine file type error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I had a problem when I needed to update a lot of records using write example: self.sudo().write({'field': True})\nIn this case, took me like 10-15 minutes to do it. However, when I tried with a sql query it took me a few seconds. \nMy doubts are, Why does that happen?, why is it better to use one or the other? or in which cases should I use one or the other?.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":274,"Q_Id":54043580,"Users Score":3,"Answer":"Because there is a lot going on in write not just a query executing. For example:\n\nChecking Model access right.\nChecking record rules access right if there is, and ggis the most heavy step. \nComputing others values that depends on this update if there is. 
\nPosting mail.thread messages.\nUsing api.one in some method this way odoo will repeat the same steps for each record and execute the query N times (very very bad thing to do) \n\nKeep in mind that using plain SQL will not trigger computing values or security checking so don't use it or use it carefully.","Q_Score":2,"Tags":"python,odoo","A_Id":54045437,"CreationDate":"2019-01-04T17:31:00.000","Title":"Why does sql query works faster with many records in odoo?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"while uploading csv file on BigQuery through storage , I am getting below error:\nCSV table encountered too many errors, giving up. Rows: 5; errors: 1. Please look into the error stream for more details.\nIn schema , I am using all parameter as string.\nIn csv file,I have below data:\nIt's Time. Say \"I Do\" in my style.\nI am not able upload csv file in BigQuery containing above sentence","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1178,"Q_Id":54071304,"Users Score":0,"Answer":"Thanks to all for a response.\nHere is my solution to this problem:\n\nwith open('\/path\/to\/csv\/file', 'r') as f:\n text = f.read()\nconverted_text = text.replace('\"',\"'\") print(converted_text)\nwith open('\/path\/to\/csv\/file', 'w') as f:\n f.write(converted_text)","Q_Score":0,"Tags":"python,google-cloud-platform,google-bigquery,google-cloud-storage","A_Id":54090809,"CreationDate":"2019-01-07T09:06:00.000","Title":"How to fix upload csv file in bigquery using python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried to setup cx_oracle latest version for connecting to a remote oracle installation. I was able to connect to oracle and import data on my local installation on Ubuntu. But When i tried the same thing on an AWS server redhat instance it fails and throws a connection timeout issue.\nI am using python 3.5 and cx_oracle 7.0 setup oracle instant client on \/opt\/ and exported the LD_LIBRARY_PATH also tried same with sqlalchemy connector with cx_oracle.\nI even tried installing using rpm but still shows the same issue in server\nplease guide me if am doing anything wrong","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":54072172,"Users Score":0,"Answer":"I need to close this question since the issue was not with the library it was the oracle instance which has a inbound policy for a specific ip","Q_Score":0,"Tags":"database,python-3.x,oracle,amazon-web-services,cx-oracle","A_Id":54074372,"CreationDate":"2019-01-07T10:03:00.000","Title":"Cx_Oracle connection issue Redhat","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using Pentaho service to import all tables and data from a SQL database to a pgSQL database. I'm using the 'sort row' transformation for this.\nNow what I need is to sync the two databases frequently. 
(ie, changes occurred in SQL db needs to reflects on pgSQL db)\nHow can I do this or which transformation do I need to use?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":515,"Q_Id":54092072,"Users Score":0,"Answer":"you can create one job which executes every 2 minutes or 5 minutes depending on your new data frequency in sql db and takes the new data and dumps into pgsql db. there are various way to do it.\none of such is check for look-up,explore how it works and you will get idea.","Q_Score":2,"Tags":"database,python-3.x,postgresql,pentaho,pentaho-data-integration","A_Id":54192086,"CreationDate":"2019-01-08T12:43:00.000","Title":"How to Synchronise two database using Pentaho?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My App consists of a notification module. If the notifications are consecutively arrived from a same user, I would like to show \"n notifications from john doe\".\neg:\nThe database rows are as:\n\nid | user | notif |\n------------------------------------\n1 john doe liked your pic \n2 john doe commented on your pic\n3 james liked your pic\n4 james commented on your pic\n5 john doe pinged you\n6 john doe showed interest\n7 john doe is busy\n\nThe above notifications are to be shown as:\n\n2 notifications from john doe\n2 notification from james\n3 notofications from john doe\n\nHow would I count these consecutive rows with same value in a column using django orm?\n\nNotification.objects.all().values('user', 'notif_count').group_consecutive_by('user').as(notif_count=Sum())\n\nSomething like that. Please help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":247,"Q_Id":54123705,"Users Score":1,"Answer":"Let my model Notification model be:\n\nClass Notification(models.Model):\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL,\n related_name='notifications',\n on_delete=models.CASCADE)\n notif = models.CharField(max_length=255)\n date_created = models.DateTimeField(auto_now_add=True)\n\n\nThe database rows are as:\n\nid | user | notif |\n------------------------------------\n1 john doe liked your pic \n2 john doe commented on your pic\n3 james liked your pic\n4 james commented on your pic\n5 john doe pinged you\n6 john doe showed interest\n7 john doe is busy\n\nBasically, I am trying to join consecutive rows by user\nThe above notifications then are to be shown as:\n\n2 notifications from john doe\n2 notification from james\n3 notofications from john doe\n\ninstead of\n\n5 notifications from john doe\n2 notification from james\n\nor\n\n1 notifications from john doe\n1 notifications from john doe\n1 notification from james\n1 notification from james\n1 notofications from john doe\n1 notofications from john doe\n1 notofications from john doe\n\nIn order to achieve this, we are looking for a dictionary like:\n\n{\n\"john doe\": [\"notif1\", \"notif2\"],\n\"james\": [\"notif1\", \"notif2\"],\n\"john doe\": [\"notif1\", \"notif2\", \"notif3\"] #duplicate key.\n}\n\nBut, that's not possible as duplicate keys are not allowed. Hence I am going with array of tuples instead.\n\n[\n ('john doe', ['notif1', 'notif2']),\n ('james', ['notif1', 'notif2']),\n ('john doe', ['notif1', 'notif2', 'notif3']),\n]\n\nSo, we first sort the Notifications by date_created. 
Then we use itertools.groupby to make groups per user.\n\nfrom itertools import groupby\nfrom operator import attrgetter\n\nqs = Notification.objects.select_related('user').order_by('date_created')\nnotifs= [(u, list(nf)) for (u, nf) in groupby(qs, attrgetter('user'))]\n\nYou have everything sorted as needed in notifs.\nDone!","Q_Score":0,"Tags":"python,django,django-models,django-views","A_Id":54140405,"CreationDate":"2019-01-10T07:25:00.000","Title":"Count consecutive rows with same value for a column in a database using django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I received a data dump of the SQL database.\nThe data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python.\nCan anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. \nTLDR; Received an .sql file and no clue how to process\/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3296,"Q_Id":54131953,"Users Score":1,"Answer":"It would be an extraordinarily difficult process to try to construct any sort of Python program that would be capable of parsing the SQL syntax of any such of a dump-file and to try to do anything whatsoever useful with it.\n\"No. Absolutely not. Absolute nonsense.\" (And I have over 30 years of experience, including senior management.) You need to go back to your team, and\/or to your manager, and look for a credible way to achieve your business objective ... because, \"this isn't it.\"\nThe only credible thing that you can do with this file is to load it into another mySQL database ... and, well, \"couldn't you have just accessed the database from which this dump came?\" Maybe so, maybe not, but \"one wonders.\"\nAnyhow \u2013 your team and its management need to \"circle the wagons\" and talk about your credible options. Because, the task that you've been given, in my professional opinion, \"isn't one.\" Don't waste time \u2013 yours, or theirs.","Q_Score":3,"Tags":"python,mysql,sql","A_Id":54132286,"CreationDate":"2019-01-10T15:30:00.000","Title":"How to handle SQL dump with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I received a data dump of the SQL database.\nThe data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python.\nCan anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. \nTLDR; Received an .sql file and no clue how to process\/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":3296,"Q_Id":54131953,"Users Score":2,"Answer":"Eventually I had to install MAMP to create a local mysql server. 
I imported the SQL dump with a program like SQLyog that let's you edit SQL databases. \nThis made it possible to import the SQL database in Python using SQLAlchemy, MySQLconnector and Pandas.","Q_Score":3,"Tags":"python,mysql,sql","A_Id":54251454,"CreationDate":"2019-01-10T15:30:00.000","Title":"How to handle SQL dump with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting this error from trying to create a superuser for my Django project. Unsure what table requires a default value for its 'name' column.\nAfter successfully creating migrations for my Django project I ran python manage.py createsuperuser to create the superuser and got the following error:\ndjango.db.utils.IntegrityError: (1364, \"Field 'name' doesn't have a default value\"). I installed mysql (8.0) am using homebrew on OSX and using python 3 in a virtual env.\nI'm not sure which database the above command tries to engage, talk less of which table. In any case I have gone through all tables in the db relevant to my project as well as in the mysql database and have run this command on the only name column found:\nALTER TABLE django_migrations ALTER COLUMN name SET DEFAULT '-'\nBut I am still getting this error. I have read up on createsuperuser in the Django docs as well as looked into some of the Django code but have gleaned very little of value to solving this. Any help with this would be greatly appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2779,"Q_Id":54149866,"Users Score":0,"Answer":"By default Django has a table called auth_user serving the user authentication which doesn't contain a field called name, so my assumption is that you have a custom AUTH_USER_MODEL defined in your settings.py which contains a field called name with not set default value.","Q_Score":3,"Tags":"mysql,django,python-3.x","A_Id":54149997,"CreationDate":"2019-01-11T15:52:00.000","Title":"How to resolve django.db.utils.IntegrityError: (1364, \"Field 'name' doesn't have a default value\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have collected a large Twitter dataset (>150GB) that is stored in some text files. Currently I retrieve and manipulate the data using custom Python scripts, but I am wondering whether it would make sense to use a database technology to store and query this dataset, especially given its size. If anybody has experience handling twitter datasets of this size, please share your experiences, especially if you have any suggestions as to what database technology to use and how long the import might take. Thank you","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":729,"Q_Id":54154891,"Users Score":-2,"Answer":"you can try using any NOSql DB. 
Mongo DB would be a good place to start","Q_Score":1,"Tags":"python,database,twitter","A_Id":54154924,"CreationDate":"2019-01-11T22:29:00.000","Title":"Storing large dataset of tweets: Text files vs Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In s3 bucket daily new JSON files are dumping , i have to create solution which pick the latest file when it arrives PARSE the JSON and load it to Snowflake Datawarehouse. may someone please share your thoughts how can we achieve","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1762,"Q_Id":54193979,"Users Score":0,"Answer":"There are some aspects to be considered such as is it a batch or streaming data , do you want retry loading the file in case there is wrong data or format or do you want to make it a generic process to be able to handle different file formats\/ file types(csv\/json) and stages. \nIn our case we have built a generic s3 to Snowflake load using Python and Luigi and also implemented the same using SSIS but for csv\/txt file only.","Q_Score":2,"Tags":"python,amazon-s3,snowflake-cloud-data-platform","A_Id":54209309,"CreationDate":"2019-01-15T06:54:00.000","Title":"Automate File loading from s3 to snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writting such float value from Python (6.481044303797468) converted to string via StringIO '6.481044303797468' into Postgresql column of type NUMERIC(13,8). I read it back into Python which is returned as Decimal('6.48104430').\nWhy the precision is smaller?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":233,"Q_Id":54218474,"Users Score":2,"Answer":"NUMERIC(13,8) means: 8 decimal digits. So you are getting exactly what you saved.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":54218534,"CreationDate":"2019-01-16T13:48:00.000","Title":"Python writes float to Postgresql but it is precision is lesser","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writting such float value from Python (6.481044303797468) converted to string via StringIO '6.481044303797468' into Postgresql column of type NUMERIC(13,8). 
I read it back into Python which is returned as Decimal('6.48104430').\nWhy the precision is smaller?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":54218474,"Users Score":0,"Answer":"You asekd to remember just 8 digits after decimal point in here\nNUMERIC(13,8) change type for what your precision is desired","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":54218576,"CreationDate":"2019-01-16T13:48:00.000","Title":"Python writes float to Postgresql but it is precision is lesser","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to add a vertex to JanusGraph by GremlinPython, and I already set graph.set-vertex-id=true in config, but I always get error:GremlinServerError: 500: Not a valid vertex id: 5678\nI want to set a custom id to vertex, I only know the id should be a long type, some id set success, like:2048, 123456...; But more id set failed, it raise a error: GremlinServerError: 500: Not a valid vertex id: 5678. \nMy add vertex code is:\nvip = g.addV().property(T.id, 5678).property(\"name\", \"domain\").property(\"value\", \"www.google.com\").next()\n\nPlease tell me what is a valid id?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":950,"Q_Id":54219472,"Users Score":0,"Answer":"The type is Long, but you need to send it in a string.\n\nvip = g.addV().property(T.id, '5678').property(\"name\", \"domain\").....","Q_Score":3,"Tags":"python,gremlin,janusgraph","A_Id":72497919,"CreationDate":"2019-01-16T14:41:00.000","Title":"gremlin-python: what is a valid vertex custom id?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"As I am new in Big Data Platform, I would like like to do some feature engineering work with my data. The Database size is about 30-50 Gb. Is is possible to load the full data (30-50Gb) in a data frame like pandas data frame? \nThe Database used here is Oracle. I tried to load it but I am getting out of memory error. Furthermore I like to work in Python.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":224,"Q_Id":54232066,"Users Score":1,"Answer":"pandas is not good if you have GBS of data it would be better to use distributed architecture to improve speed and efficiency. There is a library called DASK that can load large data and use distributed architecture.","Q_Score":0,"Tags":"python-3.x,oracle,jupyter-notebook,bigdata","A_Id":68094492,"CreationDate":"2019-01-17T08:50:00.000","Title":"Big Data Load in Pandas Data Frame","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need your help please, I want to automize my everyday tasks with python code. 
I need to open an existing excel document, modify some information in it(ex: date) then save it as pdf and print it.\nIs it possible to do all these via python?\nI have tried to do this with openpyxl, I can open and modify the sheets, but' I can't save as pdf only one sheet of the workbook and print it then.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":418,"Q_Id":54252699,"Users Score":0,"Answer":"Try using xlwings, it allows you to use more or less any Excel feature because it's actually opening the file and working on it (you can decide if that's done in the background or you can actually see it).","Q_Score":1,"Tags":"excel,python-3.x,pdf,printing","A_Id":54254629,"CreationDate":"2019-01-18T11:05:00.000","Title":"How to Modify an excel document, save as pdf and print it with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need your help please, I want to automize my everyday tasks with python code. I need to open an existing excel document, modify some information in it(ex: date) then save it as pdf and print it.\nIs it possible to do all these via python?\nI have tried to do this with openpyxl, I can open and modify the sheets, but' I can't save as pdf only one sheet of the workbook and print it then.","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":418,"Q_Id":54252699,"Users Score":-1,"Answer":"Why you need to use Python?\nI think easiest way is write macro in VBA excel (which can updating values in your sheet) and than print it out as PDF or .","Q_Score":1,"Tags":"excel,python-3.x,pdf,printing","A_Id":54252906,"CreationDate":"2019-01-18T11:05:00.000","Title":"How to Modify an excel document, save as pdf and print it with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a python script, I need to search for the user input string in an excel sheet, find the string , then display the respective row\/column details of the cell where the user input was found.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":54285209,"Users Score":0,"Answer":"use argparser to get the input from the cmd\nand use the xlrd to read the excel sheet","Q_Score":0,"Tags":"python","A_Id":54286774,"CreationDate":"2019-01-21T07:29:00.000","Title":"Searching user input string in excel sheet and find the string respective cell information and display range of column from the sheet","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We currently are receiving reports via email (I believe they are SSRS reports) which are embedded in the email body rather than attached. The reports look like images or snapshots; however, when I copy and paste the \"image\" of a report into Excel, the column\/row format is retained and it pastes into Excel perfectly, with the columns and rows getting pasted into distinct columns and rows accordingly. 
So it isn't truly an image, as there is a structure to the embedded report.\nRight now, someone has to manually copy and paste each report into excel (step 1), then import the report into a table in SQL Server (step 2). There are 8 such reports every day, so the manual copy\/pasting from the email into excel is very time consuming. \nThe question is: is there a way - any way - to automate step 1 so that we don't have to manually copy and paste each report into excel? Is there some way to use python or some other language to detect the format of the reports in the emails, and extract them into .csv or excel files? \nI have no code to show as this is more of a question of - is this even possible? And if so, any hints as to how to accomplish it would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":54299206,"Users Score":0,"Answer":"The most efficient solution is to have the SSRS administrator (or you, if you have permissions) set the subscription to send as CSV. To change this in SSRS right click the report and then click manage. Select \"Subscriptions\" on the left and then click edit next to the subscription you want to change. Scroll down to Delivery Options and select CSV in the Render Format dropdown. Viola, you receive your report in the correct format and don't have to do any weird extraction.","Q_Score":0,"Tags":"python,html,csv,email","A_Id":54312283,"CreationDate":"2019-01-21T23:31:00.000","Title":"Is it possible to extract an SSRS report embedded in the body of an email and export to csv?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've installed NGINX, GUNICORN and my project properly on Ubuntu server, \nbut when I run the project using\npython manage.py runserver, I get the following error; \n\ndjango.db.utils.OperationalError: (2003, \"Can't connect to MySQL server on '127.0.0.1' (111)\"\n\nBefore, installing gunicorn, my site was running properly at my_public_ip_address:8000","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":7818,"Q_Id":54312110,"Users Score":1,"Answer":"as asked before if you are running your website on a remote server, you should make sure that you add the ip address to the ALLOWED_HOSTS-list otherwise you might get another error. \nSolutions could be:\n\nDid you try to uninstall gunicorn? gunicorn is used later for the deployment of the website so it should actually be fine, as you are starting a development server with python manage.py runserver\nAs nginx is also used to ship your website into production if would assume that this should note be related directly to the database. 
you might want to check if nginx is running with service nginx status if this allocates the localhost port django can maybe not access the mysql database\nDid you check the port config of your mysql database?","Q_Score":1,"Tags":"python,mysql,django","A_Id":54312442,"CreationDate":"2019-01-22T16:02:00.000","Title":"django.db.utils.OperationalError: (2003, \"Can't connect to MySQL server on '127.0.0.1' (111)\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've installed NGINX, GUNICORN and my project properly on Ubuntu server, \nbut when I run the project using\npython manage.py runserver, I get the following error; \n\ndjango.db.utils.OperationalError: (2003, \"Can't connect to MySQL server on '127.0.0.1' (111)\"\n\nBefore, installing gunicorn, my site was running properly at my_public_ip_address:8000","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":7818,"Q_Id":54312110,"Users Score":2,"Answer":"check your mysql server is running or not\nrestart the mysql server","Q_Score":1,"Tags":"python,mysql,django","A_Id":63745689,"CreationDate":"2019-01-22T16:02:00.000","Title":"django.db.utils.OperationalError: (2003, \"Can't connect to MySQL server on '127.0.0.1' (111)\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have installed python 3.5.3 installed on my Windows machine. I check the SQLite version via the command sqlite3.sqlite_version. It is version 3.8.11. \nMy question is how can I update the SQLite version to 3.26? I wasn't sure if there was a 3rd party library or if I need to update sqlite3 library. \nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":870,"Q_Id":54382087,"Users Score":5,"Answer":"Just update the sqlite in the system by a newer version. Python will use it. It is not 3rd party. It is included in Python. I am not completely sure but I think it is a dynamically loaded library installed with Python but that you can upgrade by yourself. At least in my system different Python versions report the same sqlite3 version.","Q_Score":1,"Tags":"python,sqlite","A_Id":54383650,"CreationDate":"2019-01-26T19:44:00.000","Title":"python 3.5 update sqlite3 version","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Django 2.0.7, Python 3.7 and Oracle 11g. I cannot change these configurations. Whenever I am trying to migrate my models, I always get the error, 'Missing ALWAYS' which is an error associated with Identity columns.\nI know that Identity columns were introduced from oracle 12. Is there a way to overcome this issue with the version requirements I have?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":54393556,"Users Score":0,"Answer":"I resolved the issue by downgrading django to 1.11. 
Identity column is not required in versions prior to 2.0.","Q_Score":0,"Tags":"python,django,oracle,oracle11g,cx-oracle","A_Id":54411219,"CreationDate":"2019-01-27T22:43:00.000","Title":"How to solve Identity column missing Always error for oracle 11g, django 2.0.7?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"As a pet project I have been writing my own ORM to help me better understand the decisions made by production grade ORMs like Peewee or the more complex sqlalchemy.\nIn line with my titles question, is it better to spawn one cursor and reuse it for multiple SQL executions or spawn a new cursor for each transaction?\nI've already guessed about avoid state issues (transactions with no commit) but is there another reason why it would be better to have one cursor for each operation (insert, update, select, delete, or create)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1295,"Q_Id":54395773,"Users Score":2,"Answer":"Have you profiled and found that the creation of cursors is a significant source of overhead?\nCursors are a DB-API 2.0 artifact, not necessarily an actual \"thing\" that exists. They are designed to provide a common interface for executing queries and handling results\/iteration. How they are implemented under-the-hood is up to the database driver. If you're aiming to support DB-API 2.0 compatible drivers, I suggest just use the cursor() method to create a cursor for every query execution. I would recommend to NEVER have a singleton or shared cursor.\nIn SQLite, for example, a cursor is essentially a wrapper around a sqlite3_stmt object, as there's no such thing as a \"sqlite3_cursor\". The stdlib sqlite3 driver maintains an internal cache of sqlite3_stmt objects to avoid the cost of compiling queries that are frequently used.","Q_Score":3,"Tags":"python,orm,sqlite","A_Id":54410755,"CreationDate":"2019-01-28T05:00:00.000","Title":"What are the side-effects of reusing a sqlite3 cursor?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to automate creating of a powerpoint ppt via linking template charts to some Excel files. Updating the excel file values changes the powerpoint slides automatically. I have created my powerpoint template and linked charts to sample excel files data.\nI want to send the folder with the powerpoint and excel files to someone else. But this will break the link to excel files due to change in the path. (As path is not relative). I can edit the paths manually by going under the \"edit links to files\" option under File Menu but this is tedious as charts are numerous with multiple files.\nI want to update the same via Python code using the Python-Pptx package. \nPlease help!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":982,"Q_Id":54420740,"Users Score":1,"Answer":"There's no API support for this in the current version of python-pptx.\nYou would need to modify the underlying XML directly, perhaps using python-pptx internals as a starting point and using lxml calls on the appropriate element objects. 
If you search on \"python-pptx workaround function\" you will find some examples.\nAnother thing to consider is modifying the XML by cruder but still possibly effective means by accessing the XML files in the .pptx package directly (the .pptx file is a Zip archive of largely XML files) and using regular expressions or perhaps a command line tool like sed or awk to do simple text substitution.\nEither way you're going to need to want it pretty badly, depending on your Python skill level. You'll also of course need to discover just which strings in which parts of the XML are the ones that need changing. opc-diag can be helpful for that, but it's a bit of detective work even with the best tools.","Q_Score":2,"Tags":"python-3.x,powerpoint,python-pptx","A_Id":54432815,"CreationDate":"2019-01-29T12:08:00.000","Title":"Update linked excel path in PowerPoint via Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to remotely connect to a MongoDB database but don't want to store the password for the database in plaintext in the code. What's a good method for encrypting\/decrypting the password so it's not available to anyone with the source code? The source code will be on GitHub.\nI'm working with Python and PyMongo for connecting to the database. The database has authentication enabled in the mongod.conf file. The database is hosted on a Ubunutu 18.04 instance running in AWS.\nIt would also be nice to have the IP address of the server encrypted also as i've had security issues before with people accessing the database due to the code being available on GitHub and then presumably scraped by bots. \nMy current URI looks like this \nURI = \"mongo serverip --username mongo --authenticationDatabase admin -p\"\nI would like the IP address and password to be encrypted in some way so that the password and IP aren't publicly available in the source code.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1301,"Q_Id":54422176,"Users Score":1,"Answer":"There is only and and simple way:\nIf you don't want the password and the server name to be included in your public repository don't write it into a file that is pushed into that repository. \nOne way to do so would be to create a config file for secret data and add it to the .gitignore file. At run-time open the config file, read the secret data from it and use it in your script.\nAnother way would be to provide the secret data (password an server name) as command line parameters to your script. \nAny other way that \"encrypts\" (obfuscates) the password is insecure as long as the repository contains also the obvious or hidden key. This can be decoded with a little effort.","Q_Score":2,"Tags":"python,mongodb,encryption,pymongo,password-protection","A_Id":54428514,"CreationDate":"2019-01-29T13:28:00.000","Title":"Connecting remotely to a MongoDB database without storing password in plaintext","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to write a python script which will run in AWS Lambda, back up a PostgreSQL database table which is hosted in Amazon RDS, then dump a resulting .bak file or similar to S3. 
\nI'm able to connect to the database and make changes to it, but I'm not quite sure how to go about the next steps. How do I actually back up the DB and write it to a backup file in the S3 bucket?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1755,"Q_Id":54431154,"Users Score":0,"Answer":"The method that worked for me was to create an AWS data pipeline to back up the database to CSV.","Q_Score":0,"Tags":"python,postgresql,amazon-s3,aws-lambda","A_Id":54791550,"CreationDate":"2019-01-29T23:25:00.000","Title":"Backing up a postgresql database table with python in lambda","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a REST to SQL server in Python and am trying to add an additional WHERE condition to all SQL queries that I receive. For example, let's say I want to filter all queries for values of b > 4. I would do the following:\nGiven a SQL query that contains a WHERE clause:\nSELECT * FROM my_table WHERE a < 5\nbecomes\nSELECT * FROM my_table WHERE a < 5 AND b > 4\nGiven a SQL query that contains no WHERE clause:\nSELECT * FROM my_table\nbecomes\nSELECT * FROM my_table WHERE b > 4\nGiven a SQL query that contains a GROUP BY and no WHERE clause:\nSELECT id FROM my_table GROUP BY id\nbecomes\nSELECT id FROM my_table WHERE b > 4 GROUP BY id\nI also need to handle queries that contain many combinations of other clauses, such as LIMIT, HAVING, etc.\nIs there a clean way in SQL to handle a substitution like this for all queries? Or do I simply have to use regexes and pattern matching in Python to achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":250,"Q_Id":54471426,"Users Score":0,"Answer":"There is no generic existing solution to your problem. The examples you've provided cover part of the possible space of queries, but SQL allows for far more complicated queries that might need different solutions still (JOINs, UNIONs, etc.). A generic solution that would cover all possible queries would be very complicated, if possible at all.\nIf you know the examples you're giving cover the entire need for your specific use case, then yes - using regex and pattern matching is probably a good approach. I would recommend encapsulating your solution in a class that holds your query and exposes operations that allow you to add conditions and perform similar operations if you need them. \nThat way, you can easily extend it later without affecting the use of the class in the rest of your code.","Q_Score":0,"Tags":"python,sql","A_Id":54471594,"CreationDate":"2019-02-01T00:52:00.000","Title":"How to add an additional WHERE clause to any generic SQL query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using latest version macOS and homebrew, brew doctor find nothing wrong. 
and I just use brew install python, brew install python@2 to get latest version of python.\nWhen I type python -c \"import sqlite3\", I get following error messages:\n\npython2.7 -c \"import sqlite3\"\n 130 \u21b5 Traceback (most recent call last): File \"\", line 1, in\n File\n \"\/usr\/local\/Cellar\/python@2\/2.7.15_2\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/sqlite3\/init.py\",\n line 24, in \n from dbapi2 import * File \"\/usr\/local\/Cellar\/python@2\/2.7.15_2\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/sqlite3\/dbapi2.py\",\n line 28, in \n from _sqlite3 import * ImportError: dlopen(\/usr\/local\/Cellar\/python@2\/2.7.15_2\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/lib-dynload\/_sqlite3.so,\n 2): Symbol not found: _sqlite3_enable_load_extension Referenced\n from:\n \/usr\/local\/Cellar\/python@2\/2.7.15_2\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/lib-dynload\/_sqlite3.so\n Expected in: \/usr\/lib\/libsqlite3.dylib in\n \/usr\/local\/Cellar\/python@2\/2.7.15_2\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/lib-dynload\/_sqlite3.so\npython -c \"import sqlite3\"\n 1 \u21b5 Traceback (most recent call last): File \"\", line 1, in\n File\n \"\/usr\/local\/Cellar\/python\/3.7.2_1\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/sqlite3\/init.py\",\n line 23, in \n from sqlite3.dbapi2 import * File \"\/usr\/local\/Cellar\/python\/3.7.2_1\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/sqlite3\/dbapi2.py\",\n line 27, in \n from _sqlite3 import * ImportError: dlopen(\/usr\/local\/Cellar\/python\/3.7.2_1\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/lib-dynload\/_sqlite3.cpython-37m-darwin.so,\n 2): Symbol not found: _sqlite3_enable_load_extension Referenced\n from:\n \/usr\/local\/Cellar\/python\/3.7.2_1\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/lib-dynload\/_sqlite3.cpython-37m-darwin.so\n Expected in: \/usr\/lib\/libsqlite3.dylib in\n \/usr\/local\/Cellar\/python\/3.7.2_1\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/lib-dynload\/_sqlite3.cpython-37m-darwin.so\n\nwhat may cause the problem? I tried to download python source code and compile it, and move the _sqlite3.so or _sqlite3.cpython-37m-darwin.so file into the brew installed folder, and everything works just fine. Could brew just forget something in the formula? What can I do except for compiling .so file from source and manually solve the problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":518,"Q_Id":54476008,"Users Score":0,"Answer":"I finally find out where the bug hides.\nAll my macOS devices(including 2 Pros and 1 Air) are sharing the same .zshrc file, and for some reason, I add a\nexport DYLD_LIBRARY_PATH=\"\/Users\/myname\/Library\/Developer\/Xcode\/iOS DeviceSupport\/10.0.1 (14A403)\/Symbols\/usr\/lib\/:\/usr\/lib\/\"\nwhich ruins the build of python sqlite shared library file, for sqlite recently add a feature called '_sqlite3_enable_load_extension'.\nwhen I removed the DYLD_LIBRARY_PATH to the outdated dir, and brew reinstall python everything is fine.\nBy the way, brew config and brew doctor provides no information about DYLD_LIBRARY_PATH. 
Next time I'll follow the rules and provide these informations.\nProblem solved!","Q_Score":1,"Tags":"python,homebrew,homebrew-cask","A_Id":54684608,"CreationDate":"2019-02-01T08:56:00.000","Title":"homebrew python@2 and python provides broken sqlite3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have developing kind of chat app. \nThere are python&postgresql in server side, and xcode, android(java) side are client side(Web will be next phase).\nServer program is always runing on ubuntu linux. and I create thread for every client connection in server(server program developed by python). I didnt decide how should be db operations?. \n\nShould i create general DB connection and i should use this\nconnection for every client's DB\noperation(Insert,update,delete..etc). In that case If i create\ngeneral connection, I guess i got some lock issue in future. (When i try to get chat message list while other user inserting)\nIF I create DB connection when each client connected to my server. In that case, Is there too many connection. and it gaves me performance issue in future.\nIf i create DB connection on before each db operation, then there is so much db connection open and close operation. \n\nWhats your opinion? Whats the best way?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":54494695,"Users Score":1,"Answer":"The best way would be to maintain a pool of database connections in the server side.\nFor each request, use the available connection from the pool to do database operations and release it back to the pool once you're done.\nThis way you will not be creating new db connections for each request, which would be a costly operation.","Q_Score":0,"Tags":"python,postgresql,webserver","A_Id":54495609,"CreationDate":"2019-02-02T15:52:00.000","Title":"should i open db connection for every thread?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a table on Hive and I am trying to insert data in that table. I am taking data from SQL but I don't want to insert id which already exists in the Hive table. I am trying to use the same condition like where not exists. I am using PySpark on Airflow.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":4869,"Q_Id":54550161,"Users Score":-1,"Answer":"IMHO I don't think exists such a property in Spark. 
I think you can use 2 approaches:\n\nA workaround with the UNIQUE condition (typical of relational DB): in this way when you try to insert (in append mode) an already existing record you'll get an exception that you can properly handle.\nRead the table in which you want to write, outer join it with the data that you want to add to the aforementioned table and then write the result in overwrite mode (but I think that the first solution may be better in performance).\n\nFor more details feel free to ask","Q_Score":2,"Tags":"python,hive,pyspark,airflow,pyspark-sql","A_Id":54550984,"CreationDate":"2019-02-06T09:21:00.000","Title":"How can I use \"where not exists\" SQL condition in pyspark?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that gathers data from an API, and running this manually on my local machine I can save the data to a CSV or SQLite .db file.\nIf I put this on AWS lambda how can I store and retrieve data?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6322,"Q_Id":54602932,"Users Score":0,"Answer":"with aws lambda you can use database like dynamo db which is not sql database and from there you can download csv file.\nwith lambda to dynamo bd integration is so easy lambda is serverless and dynamo db is nosql database.\nso you can save data into dynamo db also you can use RDS(Mysql) and use man other service but best way will be dynamo db.","Q_Score":2,"Tags":"python,amazon-web-services,sqlite,aws-lambda","A_Id":54603114,"CreationDate":"2019-02-09T03:39:00.000","Title":"Using AWS Lambda to run Python script, how can I save data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that gathers data from an API, and running this manually on my local machine I can save the data to a CSV or SQLite .db file.\nIf I put this on AWS lambda how can I store and retrieve data?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6322,"Q_Id":54602932,"Users Score":0,"Answer":"It really depends on what you want to do with the information afterwards.\nIf you want to keep it in a file, then simply copy it to Amazon S3. It can store as much data as you like.\nIf you intend to query the information, you might choose to put it into a database instead. There are a number of different database options available, depending on your needs.","Q_Score":2,"Tags":"python,amazon-web-services,sqlite,aws-lambda","A_Id":54603135,"CreationDate":"2019-02-09T03:39:00.000","Title":"Using AWS Lambda to run Python script, how can I save data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building social network platform where users can register to their profile, fill it up and create events for the others. 
My problem is that I don't know what is the best approach to create tables.\nOne guy told me I should normalize table, meaning - he wants me to create separated tables for city, country, university, company and later connect those information with an SQL Query, which makes sense for me. If I will get 100 of students to sign up from the same University it makes sense for me to call only one University Name from University table instead of having rows and rows with university name filled in - it's data redundancy. \nHowever, the other guy told me, it's a bad practice, and I should put all information inside one user table - firstName, lastName, profileIMG, universityName, CompanyName, cityName, CountryName and so on. He says more tables create more problems. \nFrom my part, I do understand the logic of the first guy, but here is my other problem. As I mentioned, users fill up their resume in their profile and I want them to be allowed to add up to 3 universities they had been attending - bachelor degree, master degree, and postdoc. The same I want to allow them with companies - they can add up to three previous companies they worked for. \nI thought I will create University table where I will have this: universityName_1, universityName_2, universityName_3. The same I want to do with the company table. \nIs this a good practice? \nMaybe, I just should create the university Table with an UniversityName column, and when it comes to retrieving data from database, I would just use SQL query inside my Django project to call a specific University for the specific position? Like I call Columbia University for 2nd position (universityName_2)? \nI am very new to this topic! I hope that I presented you my problem clearly!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1006,"Q_Id":54621204,"Users Score":0,"Answer":"The \u201csecond guy\u201d probably doesn't understand relational database very well.\nIf a person has relationships with universitys and companys, add a table person_university that has foreign keys to both person and university and contains the details of the relationship. The primary key of that table would be a composite one, consisting of the two foreign keys. The same for companies.\nThat is the canonical way to store such relationships in a database. What you cannot model that way is the limit of three, but that can be handled by your application.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":54626857,"CreationDate":"2019-02-10T21:24:00.000","Title":"How to normalize the tables inside PostgreSQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Background of our environment:\n\nData Warehouse system is running with SQL Server 2012. 
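A minimal Django-style sketch of the junction-table idea from the answer above; the model and field names are illustrative, and the limit of three degrees per person would be enforced in application code as the answer notes:

from django.db import models

class University(models.Model):
    name = models.CharField(max_length=255, unique=True)

class Person(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    universities = models.ManyToManyField(University, through="PersonUniversity")

class PersonUniversity(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    university = models.ForeignKey(University, on_delete=models.CASCADE)
    degree = models.CharField(max_length=50)          # e.g. "bachelor", "master", "postdoc"

    class Meta:
        unique_together = ("person", "university", "degree")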
\nData Sources are Excel files and other APIs\n\nIssue:\nThe business metrics are changing frequently and source file is changing frequently and data load failing for multiple reasons.\n\nColumn mismatch\nData type mismatch\nWrong files\nOld or same file, updated twice\n\nSome of the above issues are managed via process guidelines and others at SQL level.\nBut, whenever, there is a new file \/ column added, developer has to manually add the Column \/ table for that change to be impacted.\nMost of the times, the changes came to light only after the job failed or huge data quality \/ mismatch issue identified.\nQuestion:\nIs there any way, this can automated using Python \/ Powershell \/ Any Other scripting languages? In a way, whenever source files are ready, it can read and do the below steps:\n\nRead the column headers.\nGenerate SQL for table structure with identified column headers and create temporary (Staging) table.\nLoad the data into the newly created temporary table.\nAfter some basic data processing, load data into main table (presentation area) mostly through SQL.\n\nChallenges:\n\nThere are 18 unique files, and each file columns are different and it may modified or added anytime according to the business requirement.\nWhen there is an addition of column, how do add that column on main table - altering a table is a good idea here? is it okay to done via script?\n\nNote:\n\nWe have control only from source data file, we cannot do anything with how source file is generated or when can be new column added to source file.\nI am not sure, whether to ask this question on SO OR DBA SE, so if it is not fit here, please move it appropriate forum.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":555,"Q_Id":54692916,"Users Score":0,"Answer":"1.I'm guessing you can identify the file types based on file_names or header.You could create a SSIS package with a Source Script within a foreach loop , for the script define input and output columns manually and give Generic Names and fixed string length , ColumnNr1,ColumnNr2,ColumnNrN (Where N is max number of Columns from your files +10 for safety) .Create a staging table using the same logic as above,ColumnNr1,2... this will be used for all the files, if the file load is sequencial(As i have assumed), in your script you will read the header and insert it into a data table or list , compare the numbers of columns between file header and Final Table, create Alter Table statements for new columns based on differences and execute it , send column data from file to OutputBuffer columns .\n2. Create dynamic SQL procedure based on data processing needs .","Q_Score":0,"Tags":"python,sql-server,excel,etl,data-warehouse","A_Id":55093265,"CreationDate":"2019-02-14T14:37:00.000","Title":"Create tables from Excel column headers using Python and load data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am storing multiple time-series in a MongoDB with sub-second granularity. The DB is updated by a bunch of Python scripts, and the data stored serve two main purposes:\n(1) It's a central information source for the latest data from all series. Multiple scripts access it every second or so to read the latest datapoint in each collection.\n(2) It's a long-term data store. 
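A rough Python sketch of the header-driven staging-table idea described in the question and answer above, using pandas to read only the header row and pyodbc to run the generated DDL; the file name, DSN, staging table name, and the blanket NVARCHAR(255) type are all assumptions:

import pandas as pd
import pyodbc

headers = pd.read_excel("source_file.xlsx", nrows=0).columns          # read header row only
columns_sql = ", ".join("[{}] NVARCHAR(255)".format(h) for h in headers)
ddl = "CREATE TABLE stg_source_file ({})".format(columns_sql)

conn = pyodbc.connect("DSN=warehouse")                                 # placeholder connection string
conn.execute(ddl)
conn.commit()

From there the rows can be bulk-inserted into the staging table and moved to the presentation table with SQL, as the question outlines.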
I often load the whole DB into Python to analyse trends in the data.\nTo keep the DB as efficient as possible, I want to bucket my data (ideally holding one document per day in each collection). Because of (1), however, the bigger the buckets, the more expensive the sorting required to access the last datapoint.\nI can think of two solutions here, but I'm not sure what alternatives there are, or which is the best way:\na) Store the latest timestamp in a one-line document in a separate db\/collection. No sorting required on read, but an additional write required every time a any series gets a new datapoint.\nb) Keep the buckets smaller (say 1-hour each) and sort.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":54763401,"Users Score":0,"Answer":"With a) you write smallish documents to a separate collection, which is performance wise preferable to updating large documents. You could write all new datapoints in this collection and aggregate them for the hour or day, depending on your preference. But as you said this requires an additional write operation.\nWith b) you need to keep the index size for the sort field in mind. Does the index size fit in memory? That's crucial for the performance of the sort, as you do not want to do any in memory sorting of a large collection.\nI recommend exploring the hybrid approach, of storing individual datapoints for a limited time in an 'incoming' collection. Once your bucketing interval of hour or day approaches, you can aggregate the datapoints into buckets and store them in a different collection. Of course there is now some additional complexity in the application, that needs to be able to read bucketed and datapoint collections and merge them.","Q_Score":0,"Tags":"python,mongodb,time-series","A_Id":54764199,"CreationDate":"2019-02-19T09:57:00.000","Title":"How to optimally access the latest datapoint in MongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build a machine learning system with large amount of historical trading data for machine learning purpose (Python program). \nTrading company has an API to grab their historical data and real time data. Data volume is about 100G for historical data and about 200M for daily data. \nTrading data is typical time series data like price, name, region, timeline, etc. The format of data could be retrieved as large files or stored in relational DB. \nSo my question is, what is the best way to store these data on AWS and what'sthe best way to add new data everyday (like through a cron job, or ETL job)? Possible solutions include storing them in relational database like Or NoSQL databases like DynamoDB or Redis, or store the data in a file system and read by Python program directly. I just need to find a solution to persist the data in AWS so multiple team can grab the data for research. \nAlso, since it's a research project, I don't want to spend too much time on exploring new systems or emerging technologies. I know there are Time Series Databases like InfluxDB or new Amazon Timestream. Considering the learning curve and deadline requirement, I don't incline to learn and use them for now. \nI'm familiar with MySQL. If really needed, i can pick up NoSQL, like Redis\/DynamoDB. \nAny advice? 
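To illustrate the MongoDB answer above: with a compound index on the small "incoming" collection, reading the latest datapoint for a series stays cheap even before bucketing. The connection URI, database, collection, and field names are assumptions:

from pymongo import MongoClient, ASCENDING, DESCENDING

db = MongoClient("mongodb://localhost:27017")["telemetry"]        # placeholder connection and db name
db.incoming.create_index([("series", ASCENDING), ("ts", DESCENDING)])

latest = db.incoming.find_one({"series": "sensor_1"}, sort=[("ts", DESCENDING)])
print(latest)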
Many thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":54770900,"Users Score":0,"Answer":"If you want to use AWS EMR, then the simplest solution is probably just to run a daily job that dumps data into a file in S3. However, if you want to use something a little more SQL-ey, you could load everything into Redshift. \nIf your goal is to make it available in some form to other people, then you should definitely put the data in S3. AWS has ETL and data migration tools that can move data from S3 to a variety of destinations, so the other people will not be restricted in their use of the data just because of it being stored in S3. \nOn top of that, S3 is the cheapest (warm) storage option available in AWS, and for all practical purposes, its throughout is unlimited. If you store the data in a SQL database, you significantly limit the rate at which the data can be retrieved. If you store the data in a NoSQL database, you may be able to support more traffic (maybe) but it will be at significant cost. \nJust to further illustrate my point, I recently did an experiment to test certain properties of one of the S3 APIs, and part of my experiment involved uploading ~100GB of data to S3 from an EC2 instance. I was able to upload all of that data in just a few minutes, and it cost next to nothing. \nThe only thing you need to decide is the format of your data files. You should talk to some of the other people and find out if Json, CSV, or something else is preferred. \nAs for adding new data, I would set up a lambda function that is triggered by a CloudWatch event. The lambda function can get the data from your data source and put it into S3. The CloudWatch event trigger is cron based, so it\u2019s easy enough to switch between hourly, daily, or whatever frequency meets your needs.","Q_Score":0,"Tags":"mysql,database,amazon-web-services,amazon-dynamodb,mysql-python","A_Id":54779427,"CreationDate":"2019-02-19T16:30:00.000","Title":"What is a good AWS solution (DB, ETL, Batch Job) to store large historical trading data (with daily refresh) for machine learning analysis?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database that contains md5 hashs, i want to convert them to another type of hash so that the users can login to the new website.\nI am using the werkzeug.security library to generate the hashs.\nI there is any way to do that ??","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":683,"Q_Id":54828377,"Users Score":3,"Answer":"No. 
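As a sketch of the Lambda-to-S3 approach recommended in the answer above (the bucket name, key layout, and fetch_from_api stub are placeholders, not part of the original answer):

import json
import datetime
import boto3

s3 = boto3.client("s3")

def fetch_from_api():
    return {"price": 101.5}                             # stand-in for the real trading-data API call

def lambda_handler(event, context):
    key = "daily/{}.json".format(datetime.date.today().isoformat())
    s3.put_object(Bucket="my-trading-data", Key=key, Body=json.dumps(fetch_from_api()))
    return {"stored": key}

A CloudWatch scheduled (cron) rule would trigger this handler daily, as the answer suggests.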
Hashes are not reversible, so you can't do that directly.\nThe way you solve this is that when an old user log in, you validate their password against the md5 hash, and if it matches, you create the SHA256 hash from the plain text password, sets the new SHA256 hash in the database (either as a separate field or by using a hash type identifier in front of the hash itself) and then remove the MD5 hash value.\nAfter a while (for example a year), you remove all the existing MD5 hashes and make people that attempt to log in without a valid hash reset their password through existing means and then only populate the SHA256 field.","Q_Score":3,"Tags":"python,werkzeug","A_Id":54828443,"CreationDate":"2019-02-22T13:40:00.000","Title":"How to convert md5 32 bytes hash to corresponding sha256 in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Flask and web development in general. I have a Flask web-application that is using SQLAlchemy, is it ok to put session.rollback at the beginning of the app in order to keep it running even after a transaction fails?\nI had a problem with my website when it stopped working after I was attempting to delete records of one table. The error log showed that the deletion failed due to entries in another table still referencing these records as their foreign key. The error log suggested using session.rollback to rollback this change, so I put it at the beginning of my app just after binding my database and creating the session and my website worked. This gave me the hint to leave that line there. Is my move right, safe and ok? Can anyone tell me what is the correct thing to do if this is somewhat endangering the functionality or logic of my website by any chance?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":68,"Q_Id":54861925,"Users Score":1,"Answer":"You should not use the rollback at the beginning but when a database operation fails.\nThe error is due to an integrity condition in your database. Some rows in your table are being referenced by another table. So, you have to remove referencing rows first.","Q_Score":0,"Tags":"python,session,flask,sqlalchemy,rollback","A_Id":54868874,"CreationDate":"2019-02-25T08:10:00.000","Title":"In a Flask application that is using SQLAlchemy, is it ok to permanently put `session.rollback` at the beginning of the app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am sending data to sql server and the data absolutly perfect , but all of sudden i was encountered with following error , can any one please suggestes me whats the problem is:\nERROR\n\n\"Column must be constructed with a non-blank name or \"\nArgumentError: Column must be constructed with a non-blank name or assign a non-blank .name before adding to a Table.\n\nI am currently using:\nServer: SQL 2012","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":211,"Q_Id":54912698,"Users Score":0,"Answer":"The column that you are trying to send data to cannot contain NULL values is what I am assuming. 
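A sketch of the verify-then-rehash flow described in the answer above, using werkzeug.security; the user object and its password_hash attribute are assumptions about the application's schema:

import hashlib
from werkzeug.security import generate_password_hash, check_password_hash

def verify_and_upgrade(user, password):
    stored = user.password_hash
    if len(stored) == 32 and all(c in "0123456789abcdef" for c in stored.lower()):
        # Looks like a legacy MD5 hex digest.
        if hashlib.md5(password.encode("utf-8")).hexdigest() == stored.lower():
            user.password_hash = generate_password_hash(password)   # re-hash with werkzeug's scheme
            return True
        return False
    # Already migrated to a werkzeug-generated hash.
    return check_password_hash(stored, password)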
However if you check the column in your dataframe that you are sending to your MySQL server I bet that there are NULL or empty values within that column. \nYou need to make sure that if your column in your MySQL server cannot contain null values, the dataframe column that you are sending to the server cannot have null values also. You will need to insert a value into the dataframe for every null value in that column for the MySQL server to accept the data, or change your database design to allow null values in that column. \nThat would explain why it was working and why it stopped working. It worked because the server was not receiving NULL values but broke the second you tried to send it a NULL value into the database.","Q_Score":1,"Tags":"python,sql-server","A_Id":54912810,"CreationDate":"2019-02-27T19:01:00.000","Title":"Sending data by using python to sql server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. \nShe has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone has 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste.\nI can handle Python openpyxl reasonably and I have heard of web-scraping, which I believe Python can do.\nI don't know what .asp is.\nCould you please give me some tips, pointers, about how to get the data with Python? \nCan I automate this task? \nIs this a case for MySQL? (About which I know nothing.)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":60,"Q_Id":54943792,"Users Score":1,"Answer":"This is a really broad question and not really in the style of Stack Overflow. To give you some pointers anyway. In the end .asp files, as far as I know, behave like normal websites. Normal websites are interpreted in the browser like HTML, CSS etc. This can be parsed with Python. There are two approaches to this that I have used in the past that work. One is to use a library like requests to get the HTML of a page and then read it using the BeautifulSoup library. This gets more complex if you need to visit authenticated pages. The other option is to use Selenium for python. This module is more a tool to automate browsing itself. You can use this to automate visiting the website and entering login credentials and then read content on the page. There are probably more options which is why this question is too broad. Good luck with your project though! \nEDIT: You do not need MySql for this. Especially not if the required output is an Excel file, which I would generate as a CSV instead because standard Python works better with CSV files than Excel.","Q_Score":0,"Tags":"python","A_Id":54944088,"CreationDate":"2019-03-01T11:32:00.000","Title":"Get data from an .asp file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My girlfriend has been given the task of getting all the data from a webpage. 
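A bare-bones sketch of the requests + BeautifulSoup approach mentioned in the answer above; the login URL, form field names, and table markup are assumptions about the site, and a Selenium-based version would follow the same overall shape:

import csv
import requests
from bs4 import BeautifulSoup

session = requests.Session()
session.post("https://example.edu/login.asp", data={"user": "me", "pass": "secret"})   # placeholder login form

html = session.get("https://example.edu/students.asp?page=1").text
rows = BeautifulSoup(html, "html.parser").select("table tr")

with open("students.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in rows:
        writer.writerow(cell.get_text(strip=True) for cell in row.find_all(["td", "th"]))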
The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. \nShe has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone has 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste.\nI can handle Python openpyxl reasonably and I have heard of web-scraping, which I believe Python can do.\nI don't know what .asp is.\nCould you please give me some tips, pointers, about how to get the data with Python? \nCan I automate this task? \nIs this a case for MySQL? (About which I know nothing.)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":60,"Q_Id":54943792,"Users Score":1,"Answer":"Try using the tool called Octoparse.\nDisclaimer: I've never used it myself, but only came close to using it. So, from my knowledge of its features, I think it would be useful for your need.","Q_Score":0,"Tags":"python","A_Id":54945063,"CreationDate":"2019-03-01T11:32:00.000","Title":"Get data from an .asp file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Python 3.7.2\nI write the strings from my Python code into my database. My strings contain Latin and Cyrillic characters, so in the database I use 1-byte encoding koi8-r. The miracle is that my strings without distortion are written to the database, although utf8 and koi8r have completely different sequence of characters (for example, as in ascii and utf8). Sometimes characters of other layouts appear in the text and then write errors appear.\nTherefore, the question appears:\n\nWho converts strings: the database or the aiomysql library, that I use to write to the database.\nHow quickly in Python \/ MariaDB to remove non-koi8-r characters to avoid errors.\nIs there a multibyte encoding that stores the Latin and Cyrillic characters in the first byte, and other layouts in other bytes.\n\nThank you in advance for participating in the conversation.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":76,"Q_Id":54977938,"Users Score":2,"Answer":"Here's the processing when INSERTing:\n\nThe Client has the characters encoded with charset-1.\nYou told MySQL that that was the case when you connected or via SET NAMES.\nThe column that the characters will be inserted into is declared to be charset-2.\nThe INSERT converts from charset-1 to charset-2. So, all is well.\n\nUpon SELECTing, the same thing happens, except that the conversion is in the other direction.\nWhat you are doing is OK. But, going forward, everyone 'should' use UTF-8 characters in clients and CHARACTER SET utf8mb4 for columns. 
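For example, the client character set is declared at connect time; pymysql is shown here for brevity (aiomysql, which the question uses and which wraps pymysql, accepts the same charset argument), and the host, credentials, and database name are placeholders:

import pymysql

conn = pymysql.connect(host="127.0.0.1", user="app", password="secret",
                       db="mydb", charset="utf8mb4")     # declares what the client sends and expects
with conn.cursor() as cur:
    cur.execute("SELECT @@character_set_client")         # confirm what the server thinks the client uses
    print(cur.fetchone())
conn.close()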
You will essentially have to change to such if you ever branch out beyond what your character sets allow, which may be nothing more than Russian and English.","Q_Score":1,"Tags":"python,mysql,python-3.x,character-encoding","A_Id":55033080,"CreationDate":"2019-03-04T06:49:00.000","Title":"Writing from Python to a database with an encoding different from utf8","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"my problem is the following: I have an application in ruby \u200b\u200bon rails that I would like to update in realtime, second by second. I would not, however, overload the database unnecessarily (because too many users and small server). I would like the ruby \u200b\u200bon rails application to be notified in some way by the mysql database when an update occurred in some datqabase table. It's possible?\nI have a python script that in realtime could populate with new data the mysql tables.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":55060125,"Users Score":0,"Answer":"I'm not sure what problem you're trying to solve. Won't the rails app get the updated data from the db on each request anyway? Are you caching the data? If that's the case just have the python script invalidate the cache.","Q_Score":0,"Tags":"python,mysql,ruby-on-rails,web-applications,real-time","A_Id":55074611,"CreationDate":"2019-03-08T09:22:00.000","Title":"How to correctly update webapp when cron brings new data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a scenario to migrate SQL Server tables(30- 40 tables) to Oracle. I Cannot depend on SSIS as the no of tables to be migrated to Oracle will change regularly and I cannot always create or update a DFT when ever there is a change in schema.\nIs there any other way where the movement of data can be handled dynamically and can work effectively ? 
Like using Python or any other Programming languages ?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":862,"Q_Id":55104895,"Users Score":0,"Answer":"Here is the approach I have decided to go considering the time constraint( using C# is taking more time).For 8 GB table it is taking 11 minutes to move the data SQL to Oracle.\nSteps:\n\nDump the SQL tables data into flat files.(Used BIML for automating\nthe DFT creation) \nTransfer these flat files to the Destination server.\nUsing SQL*Loader to load data from flat files to Oracle.","Q_Score":1,"Tags":"python,sql-server,oracle,ssis,etl","A_Id":55210016,"CreationDate":"2019-03-11T15:08:00.000","Title":"Migrating multiple tables data from SQL Server to Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the error\ndjango.db.utils.OperationError: FATAL:database \"\/path\/to\/current\/project\/projectname\/databasename\" does not exist.\nI have accessed the database both manually through psql, as well as through pgadmin4, and have verified in both instances that the database does exist, and I have verified that the port is correct.\nIm not sure why I cant access the database, or why it would say the database cannot be found.\nAccording to pgAdmin4, the database is healthy, and it is receiving at least 1 I\/O per second, so it can be read and written to by...something?\nI have installed both the psycopg2 and the psycopg2-binary just to be safe.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":55109206,"Users Score":0,"Answer":"I figured out the answer, or at least I do believe I did. It was a two part problem.\nPart of it was I left os.path.join(base_dir...) included as part of the '' name section.\nThe other was I used an \"@\" character as part of my password. Once I changed the password, and I removed the os.path.join(base_dir...) portion, it worked.","Q_Score":0,"Tags":"python,django,web","A_Id":55109207,"CreationDate":"2019-03-11T19:42:00.000","Title":"Having issues connecting to PostgreSQL Database in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I looked at google for a solution based on python, but did not find any... \nMy python script is trying to edit an xlsx that might be opened by another user from MS excel.\nIf I try to overwrite the .xlsx file or the ~$*.xlsx file, I get a winError 32:\n 'process cannot access the file because it is being used by another process'\nMy problem is that users around me use MS excel to look at this output... And MS excel always lock the files that are open, by default.\nIt there a way to 'steal' the access from the other users. 
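To illustrate the fix described in the Django answer above: the PostgreSQL NAME entry should be just the database name, not a filesystem path built with os.path.join. All values below are placeholders:

# settings.py (illustrative values only)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydatabase",        # plain database name, no os.path.join(BASE_DIR, ...)
        "USER": "myuser",
        "PASSWORD": "mypassword",    # the asker also reported trouble with an '@' in the password
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}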
(As they are not editing it anyway).\nI cannot not change the user permission (I think) as I am not admin of the files.\nI am using windows 10.\nThanks for your advices.\ncheers.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":619,"Q_Id":55114265,"Users Score":0,"Answer":"There really is no way around this - it is Excel preventing any other process on the system from obtaining write access.\nIf it were running on the same machine, you could consider connecting to the running Excel instance and getting it to close and reopen the document after opening it for writing yourself, but in your example it would likely be opened by someone on another machine.\nThe only solution here is to instruct your users to open the worksheet as read-only, which is an option every version of Excel allows, in which case you might be able to open it for writing. Whether that will allow you to update it while they are looking at it, is doubtful - you likely may want to look into connecting to an Excel sheet on OneDrive or SharePoint (or Teams etc. that use SharePoint as a back-end).","Q_Score":0,"Tags":"python,excel,xlsx","A_Id":55114378,"CreationDate":"2019-03-12T04:37:00.000","Title":"Python: Edit a xlsx file open by another user","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id ? And why original array is updated with _id? Please explain with example, if anybody knows? Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":55150525,"Users Score":0,"Answer":"Pymongo driver explicitly inserts _id of type ObjectId into the original array and hence original array gets updated before inserting into mongo. This is the expected behaviour of pymongo for insertmany query as per my previous experiences. Hope this answers your question.","Q_Score":0,"Tags":"python,pymongo","A_Id":55150655,"CreationDate":"2019-03-13T20:16:00.000","Title":"Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import nltk in my python file but i am getting this error\n\nFile \"mycode.py\", line 5, in \nfrom utilities import TextCleaner, TF_IDF_FeatureExtraction File\n\"\/home\/myhome\/Baseline\/utilities.py\", line 1, in import nltk\nFile\n\"\/home\/myhome\/.local\/lib64\/python3.5\/site-packages\/nltk\/init.py\",\nline 152, in from nltk.stem import * File\n\"\/home\/myhome\/.local\/lib64\/python3.5\/site-packages\/nltk\/stem\/init.py\",\nline 29, in from nltk.corpus.reader.panlex_lite import *\nFile\n\"\/home\/myhome\/.local\/lib64\/python3.5\/site-packages\/nltk\/corpus\/reader\/panlex_lite.py\",\nline 15, in \nimport sqlite3 ImportError: No module named\n'sqlite3'\n\nThe python version on server is 3.5.3 and i have sqlite version 3.13.0 installed\ni am currently running code on remote server and i cant use sudo command since its restricted for remote users. 
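A small sketch of working around the insert_many behaviour described above: pass a copy so the original list of dicts is left without _id fields. The database and collection names are placeholders:

import copy
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]

docs = [{"name": "a"}, {"name": "b"}]
coll.insert_many(copy.deepcopy(docs))   # the copies get _id added, not the originals

print(docs)                             # still [{'name': 'a'}, {'name': 'b'}]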
is there any thing i can do without sudo command to solve this problem?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":10077,"Q_Id":55170966,"Users Score":2,"Answer":"I Solved this issue by commenting out \nimport sqlite3 in the panlex_lite.py file present inside nltk library folder and also commented out sqlite3 connection string present inside this file and the code works now. This solution will only work if you are intented to use nltk only but not sqlite3","Q_Score":2,"Tags":"python,linux,sqlite,nltk","A_Id":55188552,"CreationDate":"2019-03-14T19:53:00.000","Title":"ImportError: No module named 'sqlite3'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import nltk in my python file but i am getting this error\n\nFile \"mycode.py\", line 5, in \nfrom utilities import TextCleaner, TF_IDF_FeatureExtraction File\n\"\/home\/myhome\/Baseline\/utilities.py\", line 1, in import nltk\nFile\n\"\/home\/myhome\/.local\/lib64\/python3.5\/site-packages\/nltk\/init.py\",\nline 152, in from nltk.stem import * File\n\"\/home\/myhome\/.local\/lib64\/python3.5\/site-packages\/nltk\/stem\/init.py\",\nline 29, in from nltk.corpus.reader.panlex_lite import *\nFile\n\"\/home\/myhome\/.local\/lib64\/python3.5\/site-packages\/nltk\/corpus\/reader\/panlex_lite.py\",\nline 15, in \nimport sqlite3 ImportError: No module named\n'sqlite3'\n\nThe python version on server is 3.5.3 and i have sqlite version 3.13.0 installed\ni am currently running code on remote server and i cant use sudo command since its restricted for remote users. is there any thing i can do without sudo command to solve this problem?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":10077,"Q_Id":55170966,"Users Score":0,"Answer":"What you have installed on server, is not a python module, but the sqlite3 utility. If you have pip3 installed, you can run pip3 install pysqlite3 from user, so it will install the module sqlite3 in your home directory.","Q_Score":2,"Tags":"python,linux,sqlite,nltk","A_Id":55171646,"CreationDate":"2019-03-14T19:53:00.000","Title":"ImportError: No module named 'sqlite3'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I spinup postgres container and its data path \/var\/lib\/postgresql\/data is mapped to my local using volumes. As soon as container is up and database is setup the local path populates with all db data. I need to some how check programatically (using Python) if local location is proper postgres db data. This is needed if I need to create tables or not. I create if local directory is blank or invalid postgres data and I don't if it is valid. The reason I am trying to achieve this is if I want to hook up local db created due to postgers_container_1 to postgres_container_2","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":381,"Q_Id":55255883,"Users Score":1,"Answer":"If the file \/var\/lib\/postgresql\/data\/PG_VERSION exists, then it's probably a valid data directory. 
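A minimal check along the lines of the answer above, using the mapped volume path from the question:

import os

def looks_like_pg_datadir(path):
    # Heuristic only: a populated PostgreSQL data directory contains a PG_VERSION file.
    return os.path.isfile(os.path.join(path, "PG_VERSION"))

print(looks_like_pg_datadir("/var/lib/postgresql/data"))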
This is the first thing Postgres will check when you try to start the server.\nOf course, there are many, many other things required to make it a valid data directory - too many to check by yourself. If you need to be 100% sure, the only practical way is to start the Postgres server and try to connect to it.","Q_Score":0,"Tags":"python,database,postgresql","A_Id":55257430,"CreationDate":"2019-03-20T07:54:00.000","Title":"How to check if directory is valid Postgres database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a snippet of code from my python 2.7 program:\ncur.execute(\"UPDATE echo SET ? = ? WHERE ID = ?\", (cur_class, fdate, ID,))\nthat when run, keeps throwing the following error:\nsqlite3.OperationalError: near \"?\": syntax error\nThe program is supposed to insert today's date, into the class column that matches the student ID supplied. If I remove the first \"?\" like so and hard code the parameter:\ncur.execute(\"UPDATE echo SET math = ? WHERE ID = ?\", (fdate, ID,))\neverything works just fine. I've googled all over the place and haven't found anything that works yet so I'm throwing out a lifeline.\nI've tried single quotes, double quotes, with and without parenthesis and a few other things I can't remember now. So far nothing works other than hard coding that first parameter which is really inconvenient.\nAs a troubleshooting step I had my program print the type() of each of the variables and they're all strings. The data type of the the cur_class field is VARCHAR, fdate is DATE, and ID is VARCHAR.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":55310428,"Users Score":0,"Answer":"Thanks to the tip from @Shawn earlier I solved the issue with the following code and it works great:\nsqlcommand = \"UPDATE echo SET \" + cur_class + \" = \" + fdate + \" WHERE ID = \" + ID\ncur.execute(sqlcommand)\nThis way python does the heavy lifting and constructs my string with all the variables expanded, then has the db execute the properly formatted SQL command.","Q_Score":0,"Tags":"python-2.7,sqlite","A_Id":55313778,"CreationDate":"2019-03-23T03:45:00.000","Title":"SQLite 3 Error when using parameter string, but not when implicitly typed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python code which I want to deploy on firebase as my Application database is firebase realtimeDB.\nA part of this APP is developed on python so I want to integrate in with my App. Which can be done by deploying python piece of code on firebase.\nI am unable to find a way to deploy a python code via firebase hosting.\nAnyone have any solution I would really appreciate it.\nI have tried to deploy it with firebase CLI tools. But I think it supports Javascript","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1824,"Q_Id":55369357,"Users Score":1,"Answer":"You can't deploy any backend code with Firebase Hosting. It only serves static content. 
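Relating to the sqlite3 answer above: a slightly safer variant keeps the values parameterised and only interpolates the column name after checking it against a known list (the allowed set here is hypothetical):

ALLOWED_CLASSES = {"math", "science", "history"}   # hypothetical set of real column names

def mark_class_date(cur, cur_class, fdate, student_id):
    if cur_class not in ALLOWED_CLASSES:
        raise ValueError("unexpected column: %r" % cur_class)
    # Placeholders (?) can only bind values, not identifiers, so the column name
    # is interpolated after the whitelist check while the values stay parameterised.
    cur.execute("UPDATE echo SET {} = ? WHERE ID = ?".format(cur_class), (fdate, student_id))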
You will have to look into other ways of running your backend, such as Cloud Functions or App Engine.","Q_Score":0,"Tags":"python,firebase,firebase-hosting","A_Id":55369376,"CreationDate":"2019-03-27T03:31:00.000","Title":"How to deploy a python code on firebase server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to execute a large select query (about 50 000 000 from 200 000 000 rows, 15 columns) and fetch all of this data to pandas data frame using psycopg2. In pgadmin server status tool i can see, that my query is active for about half an hour and then become idle. I read it means that server is waiting for a new command. On the other hand, my python script still don't have data and it waiting for them too (there is no errors, it looks like data are downloading). \nTo sum up, database is waiting, python is waiting, should I still waiting? Is there a chance for happy ending? Or python is not able to process that big amount od data?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":677,"Q_Id":55432663,"Users Score":1,"Answer":"Holy smokes, Batman! If your query takes more than a few minutes to execute, you ought to think of a different way to process your data! If you are returning 200 000 000 rows of 15 single-byte columns, this is already 3 gigabytes of raw data, assuming not a single byte of overhead, which is very unlikely. If those columns are 64-bit integers instead, that is already 24 gigabytes. This is a lot of in-memory data to handle for Python.\nHave you considered what happens if your process fails during execution, or if the connection is interrupted? Your program will benefit from processing rows of data in chunks, if it is possible for your process. If it really is not possible, consider approaches that operate on the database itself, such as using PL\/pgSQL.","Q_Score":1,"Tags":"python,postgresql,psycopg2","A_Id":55432769,"CreationDate":"2019-03-30T14:58:00.000","Title":"Executing a large query with psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a task to compare data of two tables in two different oracle databases. We have access of views in both of db. Using SQLAlchemy ,am able to fetch rows from views but unable to parse it. \nIn one db the type of ID column is : Raw \nIn db where column type is \"Raw\", below is the row am getting from resultset . \n(b'\\x0b\\x975z\\x9d\\xdaF\\x0e\\x96>[Ig\\xe0\/', 1, datetime.datetime(2011, 6, 7, 12, 11, 1), None, datetime.datetime(2011, 6, 7, 12, 11, 1), b'\\xf2X\\x8b\\x86\\x03\\x00K|\\x99(\\xbc\\x81n\\xc6\\xd3', None, 'I', 'Inactive')\nID Column data: b'\\x0b\\x975z\\x9d\\xdaF\\x0e\\x96>[_Ig\\xe0\/'\nActual data in ID column in database: F2588B8603004B7C9928BC816EC65FD3\nThis data is not complete hexadecimal format as it has some speical symbols like >|[_ etc. 
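One way to act on the chunking advice in the psycopg2 answer above is a named (server-side) cursor, which streams rows in batches instead of materialising tens of millions of them at once; the DSN, query, and batch size are placeholders:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")        # placeholder DSN
with conn.cursor(name="big_read") as cur:             # named cursor => server-side, streamed
    cur.itersize = 50000                              # rows fetched per network round trip
    cur.execute("SELECT * FROM big_table")            # placeholder query
    for row in cur:
        pass                                          # process each row / chunk here
conn.close()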
I want to know that how can I parse the data in ID column and get it as a string.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":55471457,"Users Score":0,"Answer":"bytes.hex() solved the problem","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":55590150,"CreationDate":"2019-04-02T09:36:00.000","Title":"Unable to parse the rows in ResultSet returned by connection.execute(), Python and SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The table has two keys: filename (partition key) and eventTime (sort key). \nI want to update eventTime for certain filename. Tried put_item() and update_item() sending the same filename with new eventTime but those functions add a new item instead of update. \nWhat should I use for that purpose?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":16115,"Q_Id":55474664,"Users Score":5,"Answer":"According to DynamoDB\/AttributeValueUpdate aws docs:\n\nYou cannot use UpdateItem to update any primary key attributes.\n Instead, you will need to delete the item, and then use PutItem to\n create a new item with new attributes.","Q_Score":6,"Tags":"python,amazon-web-services,amazon-dynamodb","A_Id":58924104,"CreationDate":"2019-04-02T12:20:00.000","Title":"DynamodDB: How to update sort key?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using cx_Oracle to fetch tables with ROWID ends up crashing the python. I read that the solution was to set the environemtn variable 'ORA_OCI_NO_OPTIMIZED_FETCH' to 1. But using os.environ (in python) or Get-ChildItem Env: (in powershell), I don't see this particular variable. Then what should I do?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":134,"Q_Id":55487001,"Users Score":1,"Answer":"Then what should I do?\n\nCreate it. $env:ORA_OCI_NO_OPTIMIZED_FETCH = 1 in PowerShell, just before you run Python in the same shell.","Q_Score":0,"Tags":"python-3.x,powershell,windows-10,environment-variables,cx-oracle","A_Id":55487085,"CreationDate":"2019-04-03T04:17:00.000","Title":"How to set the environment \" ORA_OCI_NO_OPTIMIZED_FETCH\"?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to access the data in google clound bucket to my VM instance (jupyter notebook). I got this error and also getting problems related to this.\nERROR: Python 3 and later is not compatible with the Google Cloud SDK. Please use Python version 2.7.x.\nIf you have a compatible Python interpreter installed, you can use it by setting\nthe CLOUDSDK_PYTHON environment variable to point to it.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":74,"Q_Id":55490289,"Users Score":1,"Answer":"The Google cloud SDK is not compatible with Python3 for now. 
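A sketch of the delete-then-put approach quoted in the DynamoDB answer above, using boto3; the table name and key values are placeholders matching the filename/eventTime schema in the question:

import boto3

table = boto3.resource("dynamodb").Table("files")     # placeholder table name

def change_event_time(filename, old_time, new_time):
    key = {"filename": filename, "eventTime": old_time}
    item = table.get_item(Key=key).get("Item")
    if item is None:
        return
    table.delete_item(Key=key)
    item["eventTime"] = new_time
    table.put_item(Item=item)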
You will have to default to a better version as such 2.7.9 or later to work with the SDK.","Q_Score":0,"Tags":"python,google-cloud-platform,jupyter-notebook","A_Id":55490697,"CreationDate":"2019-04-03T08:18:00.000","Title":"Google cloud Bucket access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to import a load data from an Excel file to Abaqus Amplitude object by using python script, so that I can automate the preprocess to realize a large number of load conditions. But there is an Error: \"ValueError: File 'load.xlsx' is already in use. Close the file before importing the data.\"\nHowever, I have never opened this Excel file. I have reboot the computer to make sure the file is not opened. This error still appears. Below is what I entered in the Kernel Command Line Interface in Abaqus and the response: \n\n\n\nimport abq_ExcelUtilities\nabq_ExcelUtilities.excelUtilities.ExcelToAmplitude(inputFileForAmp='load.xlsx',sheetNameForAmp='Sheet1', ampStartCell='A1', ampEndCell='B34972', ampNameStr='Amp-1', amplitudeType=0)\n Importing file \"load.xlsx\"...\n File \"SMAPyaModules\\SMAPyaPluginsPy.m\\src\\abaqus_plugins\\excelUtilities\\abq_ExcelUtilities\\excelUtilities.py\", line 465, in ExcelToAmplitude\n File \"SMAPyaModules\\SMAPyaPluginsPy.m\\src\\abaqus_plugins\\excelUtilities\\abq_ExcelUtilities\\excelUtilities.py\", line 682, in CreateObject\n File \"SMAPyaModules\\SMAPyaPluginsPy.m\\src\\abaqus_plugins\\excelUtilities\\abq_ExcelUtilities\\excelUtilities.py\", line 512, in ExtractDataFromExcel\n ValueError: File 'load.xlsx'\n is already in use. Close the file before importing the data.\n\n\n\nI have no idea where to begin to address this problem. Any help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":55499586,"Users Score":0,"Answer":"I solved this problem by close all software in the Windows tray. One of the softwares that startup along with Windows may obstructs my script. But when I reboot my PC again to trial out which one caused this trouble, the problem totally disappeared, and never show up. \nThanks to @Tom for his concern! Any good insight is welcomed.","Q_Score":0,"Tags":"python,excel,abaqus","A_Id":55511552,"CreationDate":"2019-04-03T16:05:00.000","Title":"How to solve \"ValueError: File 'load.xlsx' is already in use.\" error encountered when use 'abq_ExcelUtilities.excelUtilities.ExcelToAmplitude' method?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How are you today?\nI'm a newbie in Python. I'm working with SQL server 2014 and Python 3.7. So, my issue is: When any change occurs in a table on DB, I want to receive a message (or event, or something like that) on my server (Web API - if you like this name). \nI don't know how to do that with Python. \nI have an practice (an exp. maybe). I worked with C# and SQL Server, and in this case, I used \"SQL Dependency\" method in C# to solve that. It's really good!\nHave something like that in Python? 
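Relating to the Cloud SDK answer above: the error message itself suggests pointing CLOUDSDK_PYTHON at a compatible interpreter, which can also be done from Python when shelling out to the SDK tools; the interpreter path and bucket name are placeholders:

import os
import subprocess

env = dict(os.environ, CLOUDSDK_PYTHON="/usr/bin/python2.7")              # placeholder interpreter path
subprocess.run(["gsutil", "ls", "gs://my-bucket"], env=env, check=True)   # placeholder bucket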
Many thank for any idea, please!\nThank you so much.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":681,"Q_Id":55507064,"Users Score":0,"Answer":"I do not know many things about SQL. But I guess there are tools for SQL to detect those changes. And then you could create an everlasting loop thread using multithreading package to capture that change. (Remember to use time.sleep() to block your thread so that It wouldn't occupy the CPU for too long.) Once you capture the change, you could call the function that you want to use. (Actually, you could design a simple event engine to do that). I am a newbie in Computer Science and I hope my answer is correct and helpful. :)","Q_Score":0,"Tags":"python,sql-server,change-tracking","A_Id":55507101,"CreationDate":"2019-04-04T02:39:00.000","Title":"Tracking any change in an table on SQL Server With Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use a python script (docker container) to write to Redis db (docker container). The script main objective is to write to Redis db. But there are also other scripts that write to the same Redis db. So where should i make the connection to redis db inside a function in script or globally ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":55528464,"Users Score":0,"Answer":"If your python project is long running (e.g. a web app or a daemon script that runs forever) and making repeated calls, open a single connection and reuse it.\nIf your python code is short lived script (e.g. it runs for a few seconds then exits) then it doesn't matter so much. Even then, if it's making multiple reads\/writes it's better to open one connection and reuse it in the script.\nBy the wording of your question, it sounds as though you might be thinking of opening the connection outside the script? I'm not really sure where you're going with that, so I can't answer there.","Q_Score":0,"Tags":"python,redis","A_Id":55528560,"CreationDate":"2019-04-05T04:55:00.000","Title":"Is it a good practice to open connection to Redis database inside the function that writes to it or outside globally?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Google Form that dumps responses into a Google Sheets file. I need to pipeline these responses to MongoDB. Can someone give me some info on where I should start? I want the responses to be taken from the Google Sheet and put into MongoDB. I'm looking to do this in Python (Although I'm new to it). I've never had a task like this and I'm super eager to conquer it! Thanks for any insight you can give!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":4696,"Q_Id":55603692,"Users Score":1,"Answer":"If you download the Google Sheet files and use a library called openpyxl, you can interact directly with the .xlsx files from a Python script. \nFrom there, you should be able to convert rows in the spreadsheet to Python dictionary objects, and pipe those objects right into MongoDB using pymongo or the like. \nSounds like a useful tool! 
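A very rough polling sketch in the spirit of the answer above; the pyodbc connection string, table, and LastModified column are all assumptions, and SQL Server change tracking (as in the question's tags) would be the more robust source of "what changed":

import time
import pyodbc

conn = pyodbc.connect("DSN=mydb")                      # placeholder connection string
last_seen = None

while True:
    row = conn.execute("SELECT MAX(LastModified) FROM dbo.MyTable").fetchone()   # assumed column
    if row and row[0] != last_seen:
        last_seen = row[0]
        print("change detected at", last_seen)          # call your handler / event engine here
    time.sleep(1)                                       # block so the loop doesn't hog the CPU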
Best of luck.","Q_Score":1,"Tags":"python,mongodb,google-sheets","A_Id":55603957,"CreationDate":"2019-04-10T02:02:00.000","Title":"How to connect Google Sheets to MongoDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Reading the documentation it's pretty much clear how to build queries, but I couldn't find any tutorial how to connect to the database - i.e. instruct pypika which DB to use, which credentials etc. How can I connect to the database with pypika?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1475,"Q_Id":55608307,"Users Score":6,"Answer":"As far as I know, you can't connect to a database with pypika. Pypika is only a tool that makes SQL query strings. It doesn't have the functionality you are looking for.\nYou make the query string with the help of pypika, and you throw that string to your database system with packages that can do that, like pymysql for MySQL or psycopg for PostgreSQL.","Q_Score":6,"Tags":"python,query-builder,pypika","A_Id":55608640,"CreationDate":"2019-04-10T08:44:00.000","Title":"How can I connect to the database with pypika?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Data:\n\nTransaction table with region field (Oracle database-Only read access).\nUsers table in excel with region field. (I can add this table in secondary database)\n\nLDAP Authentication is used.\nUsing views(raw SQL query is used), I am showing aggregated data of transaction table.\nAll users are seeing same data, as there is no filter on the region.\nNow, I want to aggregate only those records which login user's region belongs to.\nHow can these implemented?\nMy approach:\n\nCreate region model\nImplement a Foreign Key with transaction tables(Is it possible?)\n\nWhat is best approach to this scenario?\nPlease explain clearly in steps.\nNOTE: I have solved my problem. Please look my answer. Is there any drawbacks of my approach? (New best approach is appreciated)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":189,"Q_Id":55616807,"Users Score":0,"Answer":"I solved this by following below steps.\nSteps:\n\nCreated a DataFrame of User Table present in excel using Pandas in my views.py.\nCaptured a login user ID into a variable.\nFiltered this User ID in DataFrame and got his\/her region and then I have passed this variable to final aggregating query and filtered there.\n\nShort and simple.","Q_Score":1,"Tags":"python,django,authentication,authorization","A_Id":55672098,"CreationDate":"2019-04-10T15:58:00.000","Title":"Django: how to fetch records only that matches the region belongs to login user?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to remove the last several columns from a data frame. 
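To make the pypika answer above concrete: build the SQL string with pypika, then execute it with a driver such as pymysql. The table, columns, and credentials are placeholders:

import pymysql
from pypika import Query, Table

users = Table("users")
sql = Query.from_(users).select(users.id, users.name).where(users.age > 18).get_sql()

conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", db="mydb")
with conn.cursor() as cur:
    cur.execute(sql)
    rows = cur.fetchall()
conn.close()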
However I get a syntax error when I do this:\ndb = db.drop(db.columns[[12:22]], axis = 1)\nThis works but it seems clumsy...\ndb = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)\nHow do I refer to a range of columns?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":55623798,"Users Score":0,"Answer":"Steven Burnap's explanation is correct, but the solution can be simplified - just remove the internal parentheses:\ndb = db.drop(db.columns[12:22], axis = 1)\nthis way, db.columns[12:22] is a 'slice' of the columns array (actually index, but doesn't matter here), which goes into the drop method.","Q_Score":0,"Tags":"python,pandas","A_Id":55625183,"CreationDate":"2019-04-11T02:25:00.000","Title":"Syntax Error In Python When Trying To Refer To Range Of Columns","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to remove the last several columns from a data frame. However I get a syntax error when I do this:\ndb = db.drop(db.columns[[12:22]], axis = 1)\nThis works but it seems clumsy...\ndb = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)\nHow do I refer to a range of columns?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":99,"Q_Id":55623798,"Users Score":1,"Answer":"The first example uses [12:22] is a \"slice\" of nothing. It's not a meaningful statement, so as you say, it gives a syntax error. It seems that what you want is a list containing the numbers 12 through 22. You need to either write it out fully as you did, or use some generator function to create it.\nThe simplest is range, which is a generator that creates a list of sequential values. So you can rewrite your example like:\ndb = db.drop(db.columns[list(range(12,23)]], axis = 1)\nThough it looks like you are using some sort of library. If you want more detailed control, you need to look the documentation of that library. It seems that db.columns is an object of a class that has defined an array operator. Perhaps that class's documentation shows a way of specifying ranges in a way other than a list.","Q_Score":0,"Tags":"python,pandas","A_Id":55623865,"CreationDate":"2019-04-11T02:25:00.000","Title":"Syntax Error In Python When Trying To Refer To Range Of Columns","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to save a file to a NoSQL Database using python. Which libraries could be useful\/ how should I go about this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":55699455,"Users Score":0,"Answer":"There is no single standard driver for NoSql databases. Most NoSql database have native python driver and other 3rd party driver. \nFor MongoDB : Pymongo is the native python library\/driver. 
\nMonogoengine is a good option if you are looking for something like an orm.","Q_Score":0,"Tags":"python,database,nosql,save","A_Id":55700027,"CreationDate":"2019-04-16T01:49:00.000","Title":"Importing & saving a file to a NOSQL Database with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I tried to import psycopg2, the following errors occured. \n\n\n\nimport psycopg2\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/site-packages\/psycopg2\/init.py\", line 50, in \n from psycopg2._psycopg import ( # noqa\n ImportError: dlopen(\/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/site-packages\/psycopg2\/_psycopg.cpython-37m-darwin.so, 2): Library not loaded: @rpath\/libssl.1.1.dylib\n Referenced from: \/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/site-packages\/psycopg2\/_psycopg.cpython-37m-darwin.so\n Reason: image not found","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3954,"Q_Id":55699874,"Users Score":0,"Answer":"I've had the same issue. After digging a little into the thread provided by @singingstone, the solution that worked for me was to pip uninstall psycopg2, then pip install psycopg2-binary.","Q_Score":0,"Tags":"python-3.x,terminal,psycopg2,importerror,dlopen","A_Id":70896283,"CreationDate":"2019-04-16T02:51:00.000","Title":"After installing psycopg2, I cannot import it properly","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Flask-Migrate (Alembic) to manage SQLAlchemy database migrations. I'm working on two different branches with different migrations. \n\nIf I switch branches, I get an error that the migration is not found.\nIf i merge this branches into the parent branch, I need downgrade migration's on both multiple branches and create new one. If I will not, I get migration's conflict error. \n\nHow can I do it easier? Maybe another tool that more like Django's migrations, but for Flask?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2068,"Q_Id":55715129,"Users Score":16,"Answer":"Alembic requires the chain of migrations to match the database marker for the current migration. If you create and run some migrations on a branch, then switch to another branch, the database is marked as being on a migration that no longer exists.\nTo work on multiple branches while using migrations, you'll need to figure out what the latest common migration on the branch you're switching to is, then downgrade to that version first. Then checkout the branch, and run any migrations that are unique to it.\nFor example, assume you created two branches off the \"dev\" branch called \"feature1\" and \"feature2\", and each have one new migration since \"dev\". 
To switch from \"feature1\" to \"feature2\":\n\nDowngrade the migrations added to the branch, in this case 1: flask db downgrade -1.\nCheckout the branch: git checkout feature2\nApply any upgrades for the new branch: flask db upgrade\n\nIf you don't want to lose data due to downgrades that remove columns or tables, you'll need to dump and restore the database for each branch instead.\n\nIf you're working on \"feature1\" and merge it into \"dev\", you need to update \"feature2\" so it knows about the new migrations that were merged in. Alembic will support having multiple branches as long as all the migrations are present. Once you merge \"feature2\", you can generate a merge migration to consolidate the two migration branches back to one.\n\nMerge \"feature1\" into \"dev\": git checkout dev, git merge feature1\nSwitch to \"feature2\" and merge \"dev\": git checkout feature2, git merge dev\nRun the migrations from \"dev\" and \"feature2\": flask db upgrade\nContinue working on \"feature2\".\nMerge \"feature2\" into \"dev\": git checkout dev, git merge feature2\nFind the migration ids that need to be merged: flask db heads\nflask db merge id1 id2, substituting the ids from the previous step.\nUpdate to the merge, note that there is only one head: flask db upgrade, flask db heads\n\n\nUnfortunately, this is a manual process. Alembic requires the migration chain to match the database marker, and there is currently no way around that. You may be able to write a git hook to help with this, but it's not something that already exists.","Q_Score":9,"Tags":"python,git,sqlalchemy,alembic,flask-migrate","A_Id":55715541,"CreationDate":"2019-04-16T19:11:00.000","Title":"Work on multiple branches with Flask-Migrate","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a MongoDB running on my machine locally with a few collections of data. I want to migrate it to Atlas, but the Live Migration Services are not available for the Free Tier. Is there another way to move the data I current have on my machine to Atlas?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":1995,"Q_Id":55785533,"Users Score":2,"Answer":"You can use mongodump and mongorestore option.\nmongodump --host x.x.x.x --port 27017 --db dbname --gzip --out \/data\/\nmongorestore --host x.x.x.x --port 27017 --db dbname --gzip \/data\/\nTake mongoDB dump from your machine and restore it to the atlas.","Q_Score":3,"Tags":"python-3.x,mongodb,mongodb-atlas","A_Id":55967685,"CreationDate":"2019-04-21T18:25:00.000","Title":"Migrating a MongoDB on a local machine to Mongo Atlas","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Looking for suggestions on how to store Price data using MAN AHL's Arctic Library for 5000 stocks EOD data as well as 1 minute data. Separate solutions for EOD and 1-minute data are also welcome. Once the data is stored, I want to perform the following operations:\n\nFetch data for a subset of stocks (lets say around 500-1000 out of the entire universe of 5000 stocks) between a given datetime range.\nAny update to historical data (data once stored in database) should have versioning. 
Data prior to the update should not be discarded. I should be able to fetch data as of a particular version\/timestamp.\n\nExample format of data:\n\n Date Stock Price\n0 d1 s1 100\n1 d2 s1 110\n2 d3 s1 105\n3 d1 s2 50\n4 d2 s2 45\n5 d3 s2 40","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":55785898,"Users Score":0,"Answer":"Arctic supports a few different storage engines. The only one that will do what you're looking for is VersionStore. It keeps versions of data, so any update you make to the data will be versioned, and you can retrieve data by timestamp ranges and by version. \nHowever it does not let you do a subsetting of stock like you want to do. I'd recommend subsetting your universe (say into US, EMEA, EUR, etc) or into whatever other organization makes sense for your use case.","Q_Score":0,"Tags":"python,pandas,finance","A_Id":60627318,"CreationDate":"2019-04-21T19:14:00.000","Title":"Storing and fetching multiple stocks in Arctic Library","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to find a better way to push data to sql db using python. I have tried \ndataframe.to_sql() method and cursor.fast_executemany()\nbut they don't seem to increase the speed with that data(the data is in csv files) i'm working with right now. Someone suggested that i could use named tuples and generators to load data much faster than pandas can do. \n[Generally the csv files are atleast 1GB in size and it takes around 10-17 minutes to push one file]\nI'm fairly new to much of concepts of python,so please suggest some method or atleast a reference any article that shows any info. Thanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":678,"Q_Id":55852550,"Users Score":0,"Answer":"If you are trying to insert the csv as is into the database (i.e. without doing any processing in pandas), you could use sqlalchemy in python to execute a \"BULK INSERT [params, file, etc.]\". Alternatively, I've found that reading the csvs, processing, writing to csv, and then bulk inserting can be an option.\nOtherwise, feel free to specify a bit more what you want to accomplish, how you need to process the data before inserting to the db, etc.","Q_Score":1,"Tags":"python,python-3.x,pandas,sqlalchemy,pyodbc","A_Id":55852914,"CreationDate":"2019-04-25T15:12:00.000","Title":"How to improve the write speed to sql database using python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I am doing a etl process in which I use Apache NiFi as an etl tool along with a postgresql database from google cloud sql to read csv file from GCS. As a part of the process, I need to write a query to transform data read from csv file and insert to the table in the cloud sql database. So, based on NIFi, I need to write a python to execute a sql queries automatically on a daily basis. But the question here is that how can I write a python to connect with the cloud sql database? What config that should be done? I have read something about cloud sql proxy but can I just use an cloud sql instance's internal ip address and put it in some config file and creating some dbconnector out of it? 
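To illustrate the kind of "dbconnector" the question is describing, here is a generic psycopg2 sketch, not tied to any particular answer below; the host, credentials, and table name are placeholders, and it simply connects to a PostgreSQL instance by IP and runs a query.

import psycopg2

# Placeholder connection details; in practice these would live in a config file.
conn = psycopg2.connect(
    host="10.0.0.5",      # e.g. the instance's private IP
    dbname="mydb",
    user="postgres",
    password="secret",
)

def run_sql(query):
    # Execute a statement and return any rows it produces.
    with conn.cursor() as cur:
        cur.execute(query)
        return cur.fetchall()

rows = run_sql("SELECT * FROM my_table LIMIT 5")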
\nThank you\nEdit: I can connect to cloud sql database from my vm using psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres but I need to run python script for the etl process and there's a part of the process that need to execute sql. What I am trying to ask is that how can I write a python file that use for executing the sql \ne.g. In python, query = 'select * from table ....' and then run\npostgres.run_sql(query) which will execute the query. So how can I create this kind of executor?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":526,"Q_Id":55891829,"Users Score":0,"Answer":"I don't understand why you need to write any code in Python? I've done a similar process where I used GetFile (locally) to read a CSV file, parse and transform it, and then used ExecuteSQLRecord to insert the rows into a SQL server (running on a cloud provider). The DBCPConnectionPool needs to reference your cloud provider as per their connection instructions. This means the URL likely reference something.google.com and you may need to open firewall rules using your cloud provider administration.","Q_Score":0,"Tags":"python,google-cloud-storage,etl,google-cloud-sql,apache-nifi","A_Id":55894286,"CreationDate":"2019-04-28T15:35:00.000","Title":"Cloud SQL\/NiFi: Connect to cloud sql database with python and NiFi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to access RDS Instance from AWS Glue, I have a few python scripts running in EC2 instances and I currently use PYODBC to connect, but while trying to schedule jobs for glue, I cannot import PYODBC as it is not natively supported by AWS Glue, not sure how drivers will work in glue shell as well.","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":7319,"Q_Id":55936816,"Users Score":0,"Answer":"If anyone needs a postgres connection with sqlalchemy using python shell, it is possible by referencing the sqlalchemy, scramp, pg8000 wheel files, it's important to reconstruct the wheel from pg8000 by eliminating the scramp dependency on the setup.py.","Q_Score":5,"Tags":"python,amazon-web-services,amazon-rds,aws-glue","A_Id":63319983,"CreationDate":"2019-05-01T13:12:00.000","Title":"How to Connect to RDS Instance from AWS Glue Python Shell?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to access RDS Instance from AWS Glue, I have a few python scripts running in EC2 instances and I currently use PYODBC to connect, but while trying to schedule jobs for glue, I cannot import PYODBC as it is not natively supported by AWS Glue, not sure how drivers will work in glue shell as well.","AnswerCount":6,"Available Count":3,"Score":-0.0333209931,"is_accepted":false,"ViewCount":7319,"Q_Id":55936816,"Users Score":-1,"Answer":"These are the steps that I used to connect to an RDS from glue python shell job:\n\nPackage up your dependency package into an egg file (these package must be pure python if I remember correctly). 
Put it in S3.\nSet your job to reference that egg file under the job configuration > Python library path\nVerify that your job can import the package\/module\nCreate a glue connection to your RDS (it's in Database > Tables, Connections), test the connection make sure it can hit your RDS\nNow in your job, you must set it to reference\/use this connection. It's in the require connection as you configure your job or edit your job.\n\nOnce those steps are done and verify, you should be able to connect. In my sample I used pymysql.","Q_Score":5,"Tags":"python,amazon-web-services,amazon-rds,aws-glue","A_Id":58142147,"CreationDate":"2019-05-01T13:12:00.000","Title":"How to Connect to RDS Instance from AWS Glue Python Shell?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to access RDS Instance from AWS Glue, I have a few python scripts running in EC2 instances and I currently use PYODBC to connect, but while trying to schedule jobs for glue, I cannot import PYODBC as it is not natively supported by AWS Glue, not sure how drivers will work in glue shell as well.","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":7319,"Q_Id":55936816,"Users Score":2,"Answer":"For AWS Glue use either Dataframe\/DynamicFrame and specify the SQL Server JDBC driver. AWS Glue already contain JDBC Driver for SQL Server in its environment so you don't need to add any additional driver jar with glue job. \ndf1=spark.read.format(\"jdbc\").option(\"driver\", \"com.microsoft.sqlserver.jdbc.SQLServerDriver\").option(\"url\", url_src).option(\"dbtable\", dbtable_src).option(\"user\", userID_src).option(\"password\", password_src).load()\nif you are using a SQL instead of table:\ndf1=spark.read.format(\"jdbc\").option(\"driver\", \"com.microsoft.sqlserver.jdbc.SQLServerDriver\").option(\"url\", url_src).option(\"dbtable\", (\"your select statement here\") A).option(\"user\", userID_src).option(\"password\", password_src).load()\nAs an alternate solution you can also use jtds driver for SQL server in your python script running in AWS Glue","Q_Score":5,"Tags":"python,amazon-web-services,amazon-rds,aws-glue","A_Id":55957809,"CreationDate":"2019-05-01T13:12:00.000","Title":"How to Connect to RDS Instance from AWS Glue Python Shell?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When a number is long in my excel document, Excel formats the cell value to scientific notation (ex 1.234567e+5) while the true number still exists in the formula bar at the top of the document (ex 123456789012).\nI want to convert this number to a string for my own purposes, but when I do, the scientific notation is captured, rather than the true number. How can I assure that it's the true number that is being converted to a string?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":696,"Q_Id":55959512,"Users Score":0,"Answer":"Python will ignore the formatting that Excel uses for anything other than dates and times, so you should just be able to convert the number to a string. You will, however, be limited by Excel's precision. 
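For example, a minimal openpyxl sketch along the lines of the answer above (the file name and cell reference are placeholders):

from openpyxl import load_workbook

wb = load_workbook("numbers.xlsx")
ws = wb.active

value = ws["A1"].value  # openpyxl sees the stored number, not Excel's display format
text = "{:.0f}".format(value) if isinstance(value, float) else str(value)
print(text)             # e.g. '123456789012' rather than '1.23457E+11'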
The OOXML file format is not suitable for some tasks notably those with historical dates or high precision times.","Q_Score":0,"Tags":"python,openpyxl","A_Id":55969896,"CreationDate":"2019-05-02T20:06:00.000","Title":"Removing scientific-notation from number in openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing web applications, APIs, and backends using the Django MVC framework. A major aspect of Django is its implementation of an ORM for models. It is an exceptionally good ORM. Typically when using Django, one utilizes an existing interface that maps one's Django model to a specific DBMS like Postgres, MySQL, or Oracle for example.\nI have some specific needs, requirements regarding performance and scalability, so I really want to use AWS's Dynamo DB because it is highly cost efficient, very performant, and scales really well.\nWhile I think Django allows one to implement their own interface for a DBMS if one wishes to do so, it is clearly advantageous to be able to use an existing DBMS interface when constructing one's Django models if one exists.\nCan someone recommend a Django model interface to use so I can construct a model in Django that uses AWS's Dynamo DB?\nHow about one using MongoDB?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":17688,"Q_Id":55976471,"Users Score":1,"Answer":"DynamoDB is non-relational which I think makes it architecturally incompatible with an ORM like Django's.","Q_Score":13,"Tags":"python,django,orm,nosql,amazon-dynamodb","A_Id":61207563,"CreationDate":"2019-05-03T20:08:00.000","Title":"How can I use AWS's Dynamo Db with Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am developing web applications, APIs, and backends using the Django MVC framework. A major aspect of Django is its implementation of an ORM for models. It is an exceptionally good ORM. 
Typically when using Django, one utilizes an existing interface that maps one's Django model to a specific DBMS like Postgres, MySQL, or Oracle for example.\nI have some specific needs, requirements regarding performance and scalability, so I really want to use AWS's Dynamo DB because it is highly cost efficient, very performant, and scales really well.\nWhile I think Django allows one to implement their own interface for a DBMS if one wishes to do so, it is clearly advantageous to be able to use an existing DBMS interface when constructing one's Django models if one exists.\nCan someone recommend a Django model interface to use so I can construct a model in Django that uses AWS's Dynamo DB?\nHow about one using MongoDB?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":17688,"Q_Id":55976471,"Users Score":0,"Answer":"There is no Django model interface for AWS DynamoDB, but you may retrieve data from that kind of db using boto3 software provided by AWS.","Q_Score":13,"Tags":"python,django,orm,nosql,amazon-dynamodb","A_Id":58988747,"CreationDate":"2019-05-03T20:08:00.000","Title":"How can I use AWS's Dynamo Db with Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am developing web applications, APIs, and backends using the Django MVC framework. A major aspect of Django is its implementation of an ORM for models. It is an exceptionally good ORM. Typically when using Django, one utilizes an existing interface that maps one's Django model to a specific DBMS like Postgres, MySQL, or Oracle for example.\nI have some specific needs, requirements regarding performance and scalability, so I really want to use AWS's Dynamo DB because it is highly cost efficient, very performant, and scales really well.\nWhile I think Django allows one to implement their own interface for a DBMS if one wishes to do so, it is clearly advantageous to be able to use an existing DBMS interface when constructing one's Django models if one exists.\nCan someone recommend a Django model interface to use so I can construct a model in Django that uses AWS's Dynamo DB?\nHow about one using MongoDB?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":17688,"Q_Id":55976471,"Users Score":2,"Answer":"You can try Dynamorm or pynamoDB. I haven't tried them maybe they can help.","Q_Score":13,"Tags":"python,django,orm,nosql,amazon-dynamodb","A_Id":62535835,"CreationDate":"2019-05-03T20:08:00.000","Title":"How can I use AWS's Dynamo Db with Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to Python and to train myself, I would like to use Python build a database that would store information about wine - bottle, date, rating etc. 
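Picking up the boto3 suggestion from the DynamoDB answers above, a minimal read/write sketch outside of Django's ORM; the table name, key, and region are made up, and AWS credentials are assumed to be configured in the environment.

import boto3

# Hypothetical table in a hypothetical region.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("users")

# Write (or overwrite) an item.
table.put_item(Item={"user_id": "abc-123", "plan": "pro"})

# Fetch it back by primary key.
item = table.get_item(Key={"user_id": "abc-123"}).get("Item")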
The idea is that:\n\nI could use to database to add a new wine entries\nI could use the database to browse wines I have previously entered\nI could run some small analyses\n\nThe design of my Python I am thinking of is: \n\nDesign database with Python package sqlite3\nMake a GUI built on top of the database with the package Tkinter, so that I can both enter new data and query the database if I want.\n\nMy question is: would your recommend this design and these packages? Is it possible to build a GUI on top of a database? I know StackOverflow is more for specific questions rather than \"project design\" questions so I would appreciate if anyone could point me to forums that discuss project design ideas.\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2745,"Q_Id":55985778,"Users Score":0,"Answer":"If it's just for you, sure there is no problem with that stack.\nIf I were doing it, I would skip Tkinter, and build something using Flask (or Django.) Doing a web page as a GUI yields faster results, is less fiddly, and more applicable to the job market.","Q_Score":0,"Tags":"python,database,sqlite,user-interface","A_Id":55985903,"CreationDate":"2019-05-04T18:47:00.000","Title":"Python: how to create database and a GUI together?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building an automation in Python which fetches some data from a database table and populates an excel sheet. I'm using cx_Oracle module for setting up a connection. There are around 44 queries, and around 2 million rows of data are fetched for each query, which makes this script run for an hour. So I'm planning to use threading module to speed up the process. Although I'm confused whether to use multiple connections (around 4) or have less connections (say, 2) and multiple cursors per connection.\nThe queries are independent of each other. They are select statements to fetch the data and are not manipulating the table in any way.\nI just need some pros and cons of using both approaches so that I can decide how to go about the script. I tried searching for it a lot, but curiously I'm not able to find any relevant piece of information at all. If you point me to any kind of blog post, even that will be really helpful.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1060,"Q_Id":56034411,"Users Score":2,"Answer":"An Oracle connection can really do just one thing at a time. Specifically while a database session can have multiple open cursors at any one time, it can only be executing one of them.\nAs such, you won't see any improvement by having multiple cursors in a single connection. \nThat said, depending on the bottleneck, you MIGHT not see any improvement from going with multiple connections either. It might be choked on bandwidth in returning the data, disk access etc. 
If you can code in such a way as to keep the number of threads \/ connections variable, then you can tweak until you find the best result.","Q_Score":0,"Tags":"python-3.x,oracle,cx-oracle","A_Id":56034544,"CreationDate":"2019-05-08T06:00:00.000","Title":"Multiple Cursors versus Multiple Connections","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm starting to work with Django, already done some models, but always done that with 'code-first' approach, so Django handled the table creations etc. Right now I'm integrating an already existing database with ORM and I encountered some problems. \nDatabase has a lot of many-to-many relationships so there are quite a few tables linking two other tables. I ran inspectdb command to let Django prepare some models for me. I revised them, it did rather good job guessing the fields and relations, but the thing is, I think I don't need those link tables in my models, because Django handles many-to-many relationships with ManyToManyField fields, but I want Django to use that link tables under the hood.\nSo my question is: Should I delete the models for link tables and add ManyToManyFields to corresponding models, or should I somehow use this models?\nI don't want to somehow mess-up database structure, it's quite heavy populated.\nI'm using Postgres 9.5, Django 2.2.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":56045891,"Users Score":0,"Answer":"In many cases it doesn't matter. If you would like to keep the code minimal then m2m fields are a good way to go. If you don't control the database structure it might be worth keeping the inspectdb schema in case you have to do it again after schema changes that you don't control. If the m2m link tables can grow properties of their own then you need to keep them as models.","Q_Score":0,"Tags":"python,django,many-to-many,django-orm","A_Id":56046079,"CreationDate":"2019-05-08T17:12:00.000","Title":"Handling many-to-many relationship from existing database using Django ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Currently our system is in live proving phase. So, we need to check whether the set of tables populated in production are matching with the tables populated in sandbox (test). At the moment we have written a query for each table comparison and then run it in sql client to check it. There will be few more tables to check in future. I thought of automating the process in python by supplying the table names to a function which can then load the two tables in dataframes and then do a comparison which could highlight the differences.\nSome of the tables have 2.7 millions rows for a day and are wide having 400 columns. When I tried to load the data (2.7 m rows * 400 columns) into dataframe, I get an error as it runs out of memory as I run my query in Jupyter where I have only 20 GB limit. what are the options here? Is Pandas dataframes only way to compare this large dataset? 
or are there any other library to achieve the same?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":218,"Q_Id":56129032,"Users Score":0,"Answer":"For handling this kind of data I would recommend using something like Hadoop rather than pandas\/python. This isn't much of an answer but I can't comment yet.","Q_Score":0,"Tags":"python,python-3.x,pandas,pandasql","A_Id":56129689,"CreationDate":"2019-05-14T11:08:00.000","Title":"Python comparing millions of rows and hundreds of columns between two tables from relational DB","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking for ages to find a suitable module to interact with excel, which needs to do the following:\n\nCheck a column of cells for an \"incorrect\" value and change it\nCheck for empty cells, and if so, replace it\nCheck a cell value is consistent with the contents of another cell(for example, if called Datasheet, the code in another cell = DS)and if not, change it.\n\nI've looked at openpxyl but I am running Python 3 and I can only seem to find it working for 2. \nI've seen a few others but they seem to be mainly focusing creating a new spreadsheet and simple writing\/reading.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":56168115,"Users Score":1,"Answer":"The Pandas library is amazing to work with excel files. It can read excel files easily and you then have access to a lot of tools. You can do all the operations you mentionned above. You can also save your result in the excel format","Q_Score":1,"Tags":"python,excel,module","A_Id":56168145,"CreationDate":"2019-05-16T12:03:00.000","Title":"Python 3 and Excel, Finding complex module to use","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know that the file db.sqlite3 in Django holds the entire database and the entire content within it. Is it safe to keep all the project files, the *.py files, the migrations files, but replace the db.sqlite3 file with a different one. If both these db.sqlite3 files work on the same database model, with the same tables, rows, columns, and everything, then if I swap out that file it should work seamlessly.\nI want to copy the original db.sqlite3 file into a different directory. Then I want to create a new db.sqlite3 file in my project. Then I want to work with the new database file, and give it other data to test how the project would work with it. Then I want to delete the new db.sqlite3 file, and I want to restore the old one, which I've saved into another directory.\nWould that work? And how can I create a new db.sqlite3 file, a clean state to put test data into?\nAlso, what if I build my project on another sever, can I copy my old db.sqlite3 file there too, and have the database with all it's saved data restored?\nBasically, the main idea of my question is: are we to treat the db.sqlite3 file as a simple \"text file\" with input\/output data for our program, something that is freely interchangeable?","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":631,"Q_Id":56196650,"Users Score":3,"Answer":"The whole of the database lives in the .db file. 
You can safely copy it in either direction when there is no process running against the database. If there is a process running against the database then you might see rollback journal files or write-ahead log files in the same directory (if the database is in WAL mode) and if you leave these behind you might risk losing some pending transactions. Closing the database properly generally causes these files to disappear.","Q_Score":1,"Tags":"python,django,sqlite,django-models","A_Id":56196702,"CreationDate":"2019-05-18T06:49:00.000","Title":"Can I arbitrarily replace and restore the db.sqlite3 file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I know that the file db.sqlite3 in Django holds the entire database and the entire content within it. Is it safe to keep all the project files, the *.py files, the migrations files, but replace the db.sqlite3 file with a different one. If both these db.sqlite3 files work on the same database model, with the same tables, rows, columns, and everything, then if I swap out that file it should work seamlessly.\nI want to copy the original db.sqlite3 file into a different directory. Then I want to create a new db.sqlite3 file in my project. Then I want to work with the new database file, and give it other data to test how the project would work with it. Then I want to delete the new db.sqlite3 file, and I want to restore the old one, which I've saved into another directory.\nWould that work? And how can I create a new db.sqlite3 file, a clean state to put test data into?\nAlso, what if I build my project on another sever, can I copy my old db.sqlite3 file there too, and have the database with all it's saved data restored?\nBasically, the main idea of my question is: are we to treat the db.sqlite3 file as a simple \"text file\" with input\/output data for our program, something that is freely interchangeable?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":631,"Q_Id":56196650,"Users Score":1,"Answer":"Yes, whole sqlite database is contained in one file, so you can freely move, replace, push it with django project (althought it's not recommended) and it will work fine.\nEven if you have 2 different projects with same apps, same model structure and migrations, you can swap them.\nIf you remove your db.sqlite3 and want to create new one, just run python manage.py migrate and it will create new database and apply all migrations.","Q_Score":1,"Tags":"python,django,sqlite,django-models","A_Id":56196701,"CreationDate":"2019-05-18T06:49:00.000","Title":"Can I arbitrarily replace and restore the db.sqlite3 file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am programmatically creating csv files using Python. Many end users open and interact with those files using excel. The problem is that Excel by default mutates many of the string values within the file. For example, Excel converts 0123 > 123.\nThe values being written to the csv are correct and display correctly if I open them with some other program, such as Notepad. 
If I open a file with Excel, save it, then open it with Notepad, the file now contains incorrect values.\nI know that there are ways for an end user to change their Excel settings to disable this behavior, but asking every single user to do so is not possible for my situation.\nIs there a way to generate a csv file using Python that a default copy of Excel will NOT mutate the values of?\nEdit: Although these files are often opened in Excel, they are not only opened in Excel and must be output as .csv, not .xlsx.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":190,"Q_Id":56227867,"Users Score":1,"Answer":"Have you tried expressly formatting the relevant column(s) to 'str' before exporting?\ndf['column_ex'] = df['column_ex'].astype('str')\ndf.to_csv('df_ex.csv')\nAnother workaround may be to open Excel program (not file), go to Data menu, then Import form Text. Excel's import utility will give you options to define each column's data type. I believe Apache's Liibre office defaults to keep the leading 0s but Excel doesn't.","Q_Score":1,"Tags":"python,excel,python-3.x,string,csv","A_Id":56228359,"CreationDate":"2019-05-20T20:37:00.000","Title":"Create a csv file that Excel will not mutate the data of when opening","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On the limited Azure Machine Learning Studio, one can import data from an On-Premises SQL Server Database.\nWhat about the ability to do the exact same thing on a python jupyter notebook on a virtual machine from the Azure Machine Learning Services workspace ?\nIt does not seem possible from what I've found in the documentation.\nData sources would be limited in Azure ML Services : \"Currently, the list of supported Azure storage services that can be registered as datastores are Azure Blob Container, Azure File Share, Azure Data Lake, Azure Data Lake Gen2, Azure SQL Database, Azure PostgreSQL, and Databricks File System\"\nThank you in advance for your assistance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1048,"Q_Id":56240481,"Users Score":0,"Answer":"You can always push the data to a supported source using a data movement\/orchestration service. Remember that all Azure services are not going to have every source option like Power BI, Logic Apps or Data Factory...this is why data orchestration\/movement services exist.","Q_Score":0,"Tags":"python,sql,azure,jupyter-notebook,azure-machine-learning-service","A_Id":56327491,"CreationDate":"2019-05-21T14:26:00.000","Title":"Can I import data from On-Premises SQL Server Database to Azure Machine Learning virtual machine?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having some text files in S3 location. I am trying to compress and zip each text files in it. I was able to zip and compress it in Jupyter notebook by selecting the file from my local. While trying the same code in S3, its throwing error as file is missing. 
Could someone please help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":56252619,"Users Score":0,"Answer":"Amazon S3 does not have a zip\/compress function.\nYou will need to download the files, zip them on an Amazon EC2 instance or your own computer, then upload the result.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,databricks","A_Id":56254221,"CreationDate":"2019-05-22T08:37:00.000","Title":"Zipping the files in S3","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run python 2.7 code (Queries) on Postgres DB. Regarding the version of pyodbc installed either it crashes python or I got problem relative to UTF8. Si I cannot use my python code.\nI installed python 2.7, pyodbc 3.0.7 on MAC Mojave 10.14 (Then I get UTF8 error). \n\npyodbc.DataError: ('22021', '[22021] ERROR: invalid byte sequence for\n encoding \"UTF8\": 0xe0 0x81 0xa9;\\nError while executing the query (1)\n (SQLExecDirectW)')\n\nI installed python 2.7 pyodbc > 3.0.7 on MAC Mojave 10.14 (Then Python is crashing)\nI should be able to connect to my database using ODBC driver.\nAny help?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":56255969,"Users Score":0,"Answer":"Would recommend installing Python 3.X and see if that works. It is more updated and many new libraries are utilizing it more and more - will provide better use in the future.","Q_Score":0,"Tags":"python,postgresql,macos,odbc,macos-mojave","A_Id":56256264,"CreationDate":"2019-05-22T11:35:00.000","Title":"Cannot run postgres request from python 2.7 on MACOS mojave","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When we are writing a pyspark dataframe to s3 from EC2 instance using pyspark code the time taken to complete write operation is longer than usual time. Earlier it used to take 30 min to complete the write operation for 1000 records, but now it is taking more than an hour. Also after completion of the write operation the context switch to next lines of code is taking longer time(20-30min). We are not sure whether this is AWS-s3 issue or else because of lazy computation of Pyspark. Could anybody throw some light on this quesion.\nThanking in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":75,"Q_Id":56256999,"Users Score":1,"Answer":"It seems an issue with the cloud environment. Four things coming to my mind, which you may check:\n\nSpark version: For some older version of spark, one gets S3 issues.\nData size being written in S3, and also the format of data while storing\nMemory\/Computation issue: The memory or CPU might be getting utilized to maximum levels.\nTemporary memory storage issue- Spark stores some intermediate data in temporary storage, and that might be getting full. 
\n\nSo, with more details, it may become clear on the solution.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,amazon-ec2,pyspark","A_Id":56291641,"CreationDate":"2019-05-22T12:38:00.000","Title":"writing a pyspark dataframe to AWS - s3 from EC2 instance using pyspark code the time taken to complete write operation is longer than usual time","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a large table, and I wish to update only a single column. The values for that column is present in a CSV file. I want to avoid a single insert to event column because it would take a long time. I would prefer something like a COPY so that I can directly dump the new values over the older values. But dumping a specific using copy appends it to the end of the table rather than overwriting it. \nAny suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":142,"Q_Id":56290182,"Users Score":0,"Answer":"Suggestion: Insertion is faster than update in DB so can follow below steps:\n\nLoad CSV to a temp table.Not all columns are required just primary\nkey and the column which need to be updated in main table.\nRename main table to main_temp\nrecreate main table (no records now)\nJoin main_temp and temp table based on primary key and insert into\nmain(select particular column from temp table instead from main table)\ndrop main_temp","Q_Score":0,"Tags":"python,python-3.x,database,postgresql,postgresql-9.3","A_Id":56292646,"CreationDate":"2019-05-24T09:54:00.000","Title":"Overwrite a column in Postgres from a column in a csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to connect to a customer's DB via ODBC. The ODBC DSN was setup on the the Windows machine. I don't know the path to the DB or the DB name so I have to rely on the DSN.\nFrom what I've seen in general this does not seem to be possible, but I know Pervasive ODBC allows it and seems like MSSQL also allows it.\nQuestion is, does Firebird 2.5 allow this? If it does could you please help me with the connection string\nI've set up a Firebird DB on a local windows machine, created an ODBC DSN (and tested the connection locally).\nI then tested connections from unixODBC (isql) and python pyodbc and they all connect fine, but I have to specify DB location and name and credentials.\nI need to connect to the remote (windows) Firebird ODBC DSN from python 3.6 pyodbc (linux)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":307,"Q_Id":56323397,"Users Score":2,"Answer":"What you want isn't possible*. An ODBC DSN exists only on the machine where it is defined. It is basically a connection configuration that is decoupled from your application, and your application references the configuration by a name.\nYou can't use a DSN remotely (if that were possible, that would be a pretty big security leak). 
You will need to define a DSN (or use a DSN-less connection string), on your specific machine to be able to use it from that machine.\nGiven you're using Python, consider using FDB or pyfirebirdsql instead of using pyODBC.\n\n* Or at least, not possible without some middleware service on the remote machine that mediates between your application and the ODBC DSN on the remote machine.","Q_Score":0,"Tags":"python-3.x,odbc,firebird,pyodbc","A_Id":56325957,"CreationDate":"2019-05-27T09:36:00.000","Title":"Connect from python3 - pyodbc (linux) to a firebird 2.5 (windows) ODBC DSN on remote machine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using openpyxl to create charts. For some reason, I do not want to insert row names when adding data. So, I want to edit the legend entries manually. I am wondering if anyone know how to do this.\nMore specifically \nclass openpyxl.chart.legend.Legend(legendPos='r', legendEntry=(), \n layout=None, overlay=None, spPr=None, txPr=None, extLst=None). I want to edit the legendEntry field","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":621,"Q_Id":56333244,"Users Score":0,"Answer":"You cannot do that. You need to set the rows when creating the plots. That will create the titles for your charts","Q_Score":0,"Tags":"python,excel,openpyxl","A_Id":56371107,"CreationDate":"2019-05-27T23:00:00.000","Title":"Setting legend entries manually","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a BigQuery table 'A' with schema {'UUID': 'String'}. I want to join this table with a ADH(Ads Data Hub) table 'B' having schema {'UUID': 'String', 'USER_ID': 'INT'} on UUID and fetch all user_ids to a new table. \nI am having trouble in joining ADH table with BigQuery table.\nCan someone please help me out?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":774,"Q_Id":56339554,"Users Score":0,"Answer":"If your table A locates in the same region as your ADH account, you should be able to run cross table queries in ADH.\nYou cannot query user_id as part of the output by using ADH, it's not allowed due to privacy protection. However, if you had your UUID passed to Google Marketing Platform using floodlight with custom-variable enabled, you can use this UUID as join key to map both tables, and do analysis from there.","Q_Score":0,"Tags":"google-bigquery,google-ads-data-hub,ads-data-hub,python-bigquery","A_Id":68575038,"CreationDate":"2019-05-28T09:54:00.000","Title":"How to join BigQuery table with ADH(Ads Data Hub) table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have two related problems. I'm working on Arabic dataset using Excel. I think that Excel somehow reads the contents as \u061f\u061f\u061f\u061f\u061f , because when I tried to replace this character '\u061f' with this '?' it replaces the whole text in the sheet. 
But when I replace or search for another letter it works.\nSecond, I'm trying to edit the sheet using python, but I'm unable to write Arabic letters (I'm using jGRASP). For example when I write the letter '\u0644' it appears as 0644, and when I run the code this message appears : \"\u064fError encoding text. Unable to encode text using charset windows-1252 \".","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":224,"Q_Id":56358762,"Users Score":0,"Answer":"0644 is the character code of the character in hex. jGRASP displays that when the font does not contain the character. You can use \"Settings\" > \"Font\" in jGRASP to choose a CSD font that contains the characters you need. Finding one that has those characters and also works well as a coding font might not be possible, so you may need to switch between two fonts.\njGRASP uses the system character encoding for loading and saving files by default. Windows-1252 is an 8-bit encoding used on English language Windows systems. You can use \"File\" > \"Save As\" to save the file with the same name but a different encoding (charset). Once you do that, jGRASP will remember it (per file) and you can load and save normally. Alternately, you can use \"Settings\" > \"CSD Windows Settings\" > \"Workspace\" and change the \"Default Charset\" setting to make the default something other than the system default.","Q_Score":0,"Tags":"excel,python-3.x,utf-8,character-encoding,arabic","A_Id":56370930,"CreationDate":"2019-05-29T10:47:00.000","Title":"How to enable my python code to read from Arabic content in Excel?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"pip install MySQL-python-1.2.4b4.tar.gz returns this error on Python 2.7 (ubuntu 18.04):\nCan you help me?\n\n # pip install MySQL-python-1.2.5.zip\n Processing .\/MySQL-python-1.2.5.zip\n Building wheels for collected packages: MySQL-python\n Running setup.py bdist_wheel for MySQL-python ... 
error\n Complete output from command \/usr\/bin\/python -u -c \"import setuptools, tokenize;__file__='\/tmp\/pip-00mbCK-build\/setup.py';f=getattr(tokeni ze, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" bdist_wheel -d \/tmp\/tmpePf4 ITpip-wheel- --python-tag cp27:\n running bdist_wheel\n running build\n running build_py\n creating build\n creating build\/lib.linux-x86_64-2.7\n copying _mysql_exceptions.py -> build\/lib.linux-x86_64-2.7\n creating build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/__init__.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/converters.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/connections.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/cursors.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/release.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/times.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n creating build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/__init__.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/CR.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/FIELD_TYPE.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/ER.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/FLAG.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/REFRESH.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/CLIENT.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n running build_ext\n building '_mysql' extension\n creating build\/temp.linux-x86_64-2.7\n x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -f debug-prefix-map=\/build\/python2.7-3hk45v\/python2.7-2.7.15~rc1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Dversion_in fo=(1,2,5,'final',1) -D__version__=1.2.5 -I\/usr\/include\/mysql -I\/usr\/include\/python2.7 -c _mysql.c -o build\/temp.linux-x86_64-2.7\/_mysql.o\n _mysql.c:44:10: fatal error: my_config.h: No such file or directory\n #include \"my_config.h\"\n ^~~~~~~~~~~~~\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\n ----------------------------------------\n Failed building wheel for MySQL-python\n Running setup.py clean for MySQL-python\n Failed to build MySQL-python\n Installing collected packages: MySQL-python\n Running setup.py install for MySQL-python ... 
error\n Complete output from command \/usr\/bin\/python -u -c \"import setuptools, tokenize;__file__='\/tmp\/pip-00mbCK-build\/setup.py';f=getattr(toke nize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record \/tmp\/pip -vhjCMt-record\/install-record.txt --single-version-externally-managed --compile:\n running install\n running build\n running build_py\n creating build\n creating build\/lib.linux-x86_64-2.7\n copying _mysql_exceptions.py -> build\/lib.linux-x86_64-2.7\n creating build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/__init__.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/converters.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/connections.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/cursors.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/release.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n copying MySQLdb\/times.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\n creating build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/__init__.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/CR.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/FIELD_TYPE.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/ER.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/FLAG.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/REFRESH.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n copying MySQLdb\/constants\/CLIENT.py -> build\/lib.linux-x86_64-2.7\/MySQLdb\/constants\n running build_ext\n building '_mysql' extension\n creating build\/temp.linux-x86_64-2.7\n x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=\/build\/python2.7-3hk45v\/python2.7-2.7.15~rc1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Dversion_ info=(1,2,5,'final',1) -D__version__=1.2.5 -I\/usr\/include\/mysql -I\/usr\/include\/python2.7 -c _mysql.c -o build\/temp.linux-x86_64-2.7\/_mysql.o\n _mysql.c:44:10: fatal error: my_config.h: No such file or directory\n #include \"my_config.h\"\n ^~~~~~~~~~~~~\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\n ----------------------------------------\n Command \"\/usr\/bin\/python -u -c \"import setuptools, tokenize;__file__='\/tmp\/pip-00mbCK-build\/setup.py';f=getattr(tokenize, 'open', open)(__fi le__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record \/tmp\/pip-vhjCMt-record\/install-r ecord.txt --single-version-externally-managed --compile\" failed with error code 1 in \/tmp\/pip-00mbCK-build\/\n\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":443,"Q_Id":56361162,"Users Score":0,"Answer":"If you use: \n\npip install mysqlclient==1.4.6\n\ninstead, then you'll find it works. 
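mysqlclient is a maintained fork of MySQL-python and still exposes the MySQLdb module name, so existing imports keep working; a quick smoke test (credentials are placeholders):

import MySQLdb  # provided by the mysqlclient package

conn = MySQLdb.connect(host="localhost", user="root", passwd="secret", db="test")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()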
The mariadb package has changed the way it stores some header files, and the MySQL-python pip package hasn't been updated for many years.\nI had the same problem and switching worked for me.","Q_Score":2,"Tags":"mysql-python","A_Id":60338668,"CreationDate":"2019-05-29T12:52:00.000","Title":"Installing MySQLdb for python 2.7 returns error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Formulas in the excel sheet are getting removed when it is saved through an openpyxl python script.\nIs there any way to save excel file without removing formulas using a python script\nExpected: Formulas should not be removed and data should be read through openpyxl lib\nActual: Data is read, but formulas are getting removed","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":589,"Q_Id":56390764,"Users Score":0,"Answer":"Though xlswings, this issue is resolved","Q_Score":0,"Tags":"python,openpyxl","A_Id":56521506,"CreationDate":"2019-05-31T07:40:00.000","Title":"Unable to save formulas under excel file when it is saved using openpyxl lib","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2000 inserts in a loop\nI do execute sql\nMy question is: should i do commit after every execute or after loop in order to minimise my affect on locking table and don't care about buffer?\nthe problem: my script sending to much queries and they stand in wait\nI don't ask about limits of sql for x rows in commit and not about rollback in code. My question is about queue that standing half day in oracle server inactive and some waiting and prevent to new one proccess to run.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":623,"Q_Id":56393924,"Users Score":2,"Answer":"2000 inserts is close to nothing. Though, it would be far better if you could insert them at once, using a single INSERT statement, than doing it in a loop.\nGenerally speaking, commit once you're done with the transaction. It is most probably not ended at every turn of the loop, is it? Besides, committing in a loop (frequently) leads to ORA-01555 snapshot too old error.\nSaying that \"your script sends many queries\" - what kind of them? SELECTs aren't blocked by anything. INSERTs aren't blocked either (I guess you don't lock the whole table, do you)? If you're trying to update rows locked by other user(s), that - obviously - won't work until they are released. The question is: why do those queries wait half a day? Smells like bad management.","Q_Score":0,"Tags":"python,oracle","A_Id":56394192,"CreationDate":"2019-05-31T11:05:00.000","Title":"should i do commit after every execute or after loop","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to figure out the logic for processing multiple files in S3 at once as files are added randomly. 
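Going back to the Oracle answer above about inserting in bulk and committing once, a hedged cx_Oracle sketch (credentials, DSN, and table name are placeholders):

import cx_Oracle

# Placeholder credentials and DSN.
conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb1")
cur = conn.cursor()

rows = [(1, "a"), (2, "b"), (3, "c")]  # the ~2000 rows would go here
cur.executemany("INSERT INTO demo_tab (id, val) VALUES (:1, :2)", rows)

conn.commit()  # a single commit at the end of the transaction, not one per row
conn.close()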
For discussion sake here's an example:\n\nFiles are added randomly to S3 bucket; an by bursty or at random intervals\nLambda function is triggered once 9 files are in the S3 bucket; the lambda function post processes or combines these files together.\nOnce processed, the files will be moved to another bucket or deleted.\n\nHere's what I've tried:\n\nI have S3 triggers for all S3 puts\nIn my lambda function I ignore the filename itself and list the S3 bucket based on the key to count how many files exist\nproblem is when traffic is bursty or arrives steady but at a rapid pace it is difficult to identify unique groups of 9 files\nI have uuid prefixes on file names for performance reasons so sequential filenames don't exist.\nI've considered writing meta data to a nosql db but haven't gone down that route yet.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":4988,"Q_Id":56437033,"Users Score":1,"Answer":"One possible solution is to use a scheduled lambda (can be as often or as sparse as you want based on your traffic) that pulls events from a SQS queue populated by S3 put events. The assumes that you're focused on batch processing n files at a time and the order does not matter (given the uuid naming). \nTo create this workflow would be something like this:\n\nCreate SQS queue for holding S3 PUT events\nAdd trigger to S3 bucket on PUTs to create event in SQS queue from 1.\nCreate Lambda with env variables (for bucket and queue)\n\n\nThe lambda should check the queue if there are any in-flight messages and use just the bucket\nIf there are, stop run (to prevent a file from being processed multiple times)\nIf no in-flight messages, list objects from S3 with limit of n (your batch size)\nRun your process logic if enough objects are returned (could be less than n) \nDelete files\n\nCreate CloudWatch rule for running lambda every n seconds\/minutes\/hours\n\nSome other things to keep in mind based on your situation's specifics:\n\nIf there are a lot of files rapidly being sent and n is significantly small, single-tracking processing (step 3.2 would result in long processing times). This also depends on the length of processing time, whether data can be processed multiple times, etc...\nListObjectsV2 could return less than the MaxKeys parameter, if this is an issue, could have a larger MaxKeys and just process the first n.","Q_Score":2,"Tags":"python-3.x,aws-lambda,amazon-sqs","A_Id":56439707,"CreationDate":"2019-06-04T03:38:00.000","Title":"Execute lambda function for multiple files in S3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to figure out the logic for processing multiple files in S3 at once as files are added randomly. 
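A rough sketch of the scheduled-Lambda batching approach described in the answer above. The bucket name, queue URL and batch size are assumptions, the in-flight check is approximated via the queue's not-visible message count, and the actual combine step is left as a stub.

```python
import boto3

BUCKET = "my-ingest-bucket"                                              # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/s3-puts"   # placeholder
BATCH_SIZE = 9

s3 = boto3.client("s3")
sqs = boto3.client("sqs")


def handler(event, context):
    # 1. Skip this run if messages are still in flight (another run is busy).
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessagesNotVisible"],
    )
    if int(attrs["Attributes"]["ApproximateNumberOfMessagesNotVisible"]) > 0:
        return "busy, skipping this run"

    # 2. List up to BATCH_SIZE objects currently sitting in the bucket.
    listing = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=BATCH_SIZE)
    keys = [obj["Key"] for obj in listing.get("Contents", [])]
    if len(keys) < BATCH_SIZE:
        return "not enough files yet"

    # 3. Combine / post-process the batch (stub for the real logic).
    process_batch(keys)

    # 4. Delete the processed files so the next run sees a fresh batch.
    s3.delete_objects(Bucket=BUCKET, Delete={"Objects": [{"Key": k} for k in keys]})
    return "processed %d files" % len(keys)


def process_batch(keys):
    pass  # placeholder
```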
For discussion sake here's an example:\n\nFiles are added randomly to S3 bucket; an by bursty or at random intervals\nLambda function is triggered once 9 files are in the S3 bucket; the lambda function post processes or combines these files together.\nOnce processed, the files will be moved to another bucket or deleted.\n\nHere's what I've tried:\n\nI have S3 triggers for all S3 puts\nIn my lambda function I ignore the filename itself and list the S3 bucket based on the key to count how many files exist\nproblem is when traffic is bursty or arrives steady but at a rapid pace it is difficult to identify unique groups of 9 files\nI have uuid prefixes on file names for performance reasons so sequential filenames don't exist.\nI've considered writing meta data to a nosql db but haven't gone down that route yet.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4988,"Q_Id":56437033,"Users Score":0,"Answer":"You could also think of using the step function that triggers lambda\/glue job to copy the files further to Redshift\/s3, introduce some file counts logic (assuming fix number of files are arriving)\/a wait time (e.g., 30 minutes assuming all files have landed). This is not the perfect solution, but if you fix the flow of files, it may work perfectly fine.","Q_Score":2,"Tags":"python-3.x,aws-lambda,amazon-sqs","A_Id":66265134,"CreationDate":"2019-06-04T03:38:00.000","Title":"Execute lambda function for multiple files in S3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to save a dataset using partitionBy on S3 using pyspark. I am partitioning by on a date column. Spark job is taking more than hour to execute it. If i run the code without partitionBy it just takes 3-4 mints. \nCould somebody help me in fining tune the parititonby?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1717,"Q_Id":56496387,"Users Score":1,"Answer":"Use version 2 of the FileOutputCommiter\n.set(\"mapreduce.fileoutputcommitter.algorithm.version\", \"2\")","Q_Score":0,"Tags":"python,apache-spark,amazon-s3,pyspark,amazon-emr","A_Id":57731033,"CreationDate":"2019-06-07T14:37:00.000","Title":"partitionBy taking too long while saving a dataset on S3 using Pyspark","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded postgresql as well as django and python but when I try running the command \"python manage.py runserver\" it gives me an error saying \"Fatal: password authentication failed for user\" . I am trying to run it locally but am unable to figure out how to get past this issue. 
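To make the FileOutputCommitter suggestion above concrete, here is one way to apply it from PySpark: the `spark.hadoop.` prefix forwards a setting to the underlying Hadoop configuration. Paths and the partition column are placeholders.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("partitioned-write")
    # Forwarded to the Hadoop conf, where the committer algorithm is read from.
    .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
    .getOrCreate()
)

df = spark.read.parquet("s3://my-bucket/input/")      # placeholder input path

(
    df.write
    .partitionBy("event_date")                        # placeholder partition column
    .mode("overwrite")
    .parquet("s3://my-bucket/output/")                # placeholder output path
)
```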
\nI was able to connect to the server in pgAdmin but am still getting password authentication error message","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":227,"Q_Id":56548140,"Users Score":0,"Answer":"You need to change the password used to connect to your local Database, and this can be done, modifying your setting.py file in \"DATABASES\" object","Q_Score":0,"Tags":"python,django,postgresql","A_Id":63626428,"CreationDate":"2019-06-11T16:32:00.000","Title":"Password authentication failed when trying to run django application on server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm wondering how sqlite3 works when working in something like repl.it? I've been working on learning chatterbot on my own computer through Jupiter notebook. I'm a pretty amateur coder, and I have never worked with databases or SQL. When working from my own computer, I pretty much get the concept that when setting up a new bot with chatterbot, it creates a sqlite3 file, and then saves conversations to it to improve the chatbot. However, if I create a chatbot the same way only through repl.it and give lots of people the link, is the sqlite3 file saved online somewhere? Is it big enough to save lots of conversations from many people to really improve the bot well?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":56571509,"Users Score":0,"Answer":"I am not familiar with repl.it, but for all the answers you have asked the answer is yes. For example, I have made a simple web page that uses the chatterbot library. Then I used my own computer as a server using ngrok and gather training data from users.","Q_Score":0,"Tags":"python,sqlite,chatterbot,repl.it","A_Id":56579507,"CreationDate":"2019-06-12T23:20:00.000","Title":"Chatterbot sqlite store in repl.it","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently trying to develop macros\/programs to help me edit a big database in Excel.\nJust recently I successfully wrote a custom macro in VBA, which stores two big arrays into memory, in memory it compares both arrays by only one column in each (for example by names), then the common items that reside in both arrays are copied into another temporary arrays TOGETHER with other entries in the same row of the array. So if row(11) name was \"Tom\", and it is common for both arrays, and next to Tom was his salary of 10,000 and his phone number, the entire row would be copied.\nThis was not easy, but I got to it somehow.\nNow, this works like a charm for arrays as big as 10,000 rows x 5 columns + another array of the same size 10,000 rows x 5 columns. It compares and writes back to a new sheet in a few seconds. Great!\nBut now I tried a much bigger array with this method, say 200,000 rows x 10 columns + second array to be compared 10,000 rows x 10 columns...and it took a lot of time.\nProblem is that Excel is only running at 25% CPU - I checked that online it is normal.\nThus, I am assuming that to get a better performance I would need to use another 'tool', in this case another programming language.\nI heard that Python is great, Python is easy etc. 
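To illustrate the settings.py answer above, a minimal `DATABASES` block for a local PostgreSQL instance. All values are placeholders and must match the role and password that actually exist in the database (the ones that work in pgAdmin).

```python
# settings.py -- placeholder values; they must match your local PostgreSQL setup.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "myproject",
        "USER": "myproject_user",
        "PASSWORD": "change-me",
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}
```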
but I am no programmer, I just learned a few dozen object names and I know some logic so I got around in VBA.\nIs it Python? Or perhaps changing the programming language won't help? It is really important to me that the language is not too complicated - I've seen C++ and it stings my eyes, I literally have no idea what is going on in those codes.\nIf indeed python, what libraries should I start with? Perhaps learn some easy things first and then go into those arrays etc.?\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":56607263,"Users Score":0,"Answer":"I have no intention of condescending but anything I say would sound like condescending, so so be it.\nThe operation you are doing is called join. It's a common operation in any kind of database. Unfortunately, Excel is not a database. \nI suspect that you are doing NxM operation in Excel. 200,000 rows x 10,000 rows operation quickly explodes. Pick a key in N, search a row in M, and produce result. When you do this, regardless of computer language, the computation order becomes so large that there is no way to finish the task in reasonable amount of time.\nIn this case, 200,000 rows x 10,000 rows require about 5,000 lookup per every row on average in 200,000 rows. That's 1,000,000,000 times.\nSo, how do the real databases do this in reasonable amount of time? Use index. When you look into this 10,000 rows of table, what you are looking for is indexed so searching a row becomes log2(10,000). The total order of computation becomes N * log2(M) which is far more manageable. If you hash the key, the search cost is almost O(1) - meaning it's constant. So, the computation order becomes N.\nWhat you are doing probably is, in real database term, full table scan. It is something to avoid for real database because it is slow.\nIf you use any real (SQL) database, or programming language that provides a key based search in dataset, your join will become really fast. It's nothing to do with any programming language. It is really a 101 of computer science.\nI do not know anything about what Excel can do. If Excel provides some facility to lookup a row based on indexing or hashing, you may be able to speed it up drastically.","Q_Score":0,"Tags":"python,arrays,excel,performance","A_Id":56608095,"CreationDate":"2019-06-15T04:10:00.000","Title":"Editing a big database in Excel - any easy to learn language that provides array manipulation except for VBA? Python? Which library?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"thanks for hearing me out.\nI have a dataset that is a matrix of shape 75000x10000 filled with float values. Think of it like heatmap\/correlation matrix. I want to store this in a SQLite database (SQLite because I am modifying an existing Django project). The source data file is 8 GB in size and I am trying to use python to carry out my task.\nI have tried to use pandas chunking to read the file into python and transform it into unstacked pairwise indexed data and write it out onto a json file. But this method is eating up my computational cost. For a chunk of size 100x10000 it generates a 200 MB json file.\nThis json file will be used as a fixture to form the SQLite database in Django backend.\nIs there a better way to do this? Faster\/Smarter way. 
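A small sketch of the key-based (hashed) join described in the answer above: the smaller table is indexed once in a dictionary, so each lookup is near constant time instead of a scan of all rows. The column layouts are invented for illustration; `pandas.merge` does the same thing for tabular data.

```python
# Invented sample data: in the real case big_table has ~200,000 rows
# and small_table ~10,000 rows.
big_table = [("Tom", 10000, "555-0100"), ("Ann", 12000, "555-0101")]  # (name, salary, phone)
small_table = [("Tom", "Sales"), ("Bob", "IT")]                       # (name, department)

# Build a hash index over the smaller table once: O(M).
index = {name: dept for name, dept in small_table}

# Probe the index for every row of the big table: O(N) lookups, not N*M comparisons.
joined = [
    (name, salary, phone, index[name])
    for name, salary, phone in big_table
    if name in index
]
print(joined)  # [('Tom', 10000, '555-0100', 'Sales')]
```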
I don't think a 90 GB odd json file written out taking a full day is the way to go. Not even sure if Django databases can take this load.\nAny help is appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":124,"Q_Id":56633576,"Users Score":2,"Answer":"SQLite is quite impressive for what it is, but it's probably not going to give you the performance you are looking for at that scale, so even though your existing project is Django on SQLite I would recommend simply writing a Python wrapper for a different data backend and just using that from within Django.\nMore importantly, forget about using Django models for something like this; they are an abstraction layer built for convenience (mapping database records to Python objects), not for performance. Django would very quickly choke trying to build 100s of millions of objects since it doesn't understand what you're trying to achieve.\nInstead, you'll want to use a database type \/ engine that's suited to the type of queries you want to make; if a typical query consists of a hundred point queries to get the data in particular 'cells', a key-value store might be ideal; if you're typically pulling ranges of values in individual 'rows' or 'columns' then that's something to optimize for; if your queries typically involve taking sub-matrices and performing predictable operations on them then you might improve the performance significantly by precalculating certain cumulative values; and if you want to use the full dataset to train machine learning models, you're probably better off not using a database for your primary storage at all (since databases by nature sacrifice fast-retrieval-of-full-raw-data for fast-calculations-on-interesting-subsets), especially if your ML models can be parallelised using something like Spark.\nNo DB will handle everything well, so it would be useful if you could elaborate on the workload you'll be running on top of that data -- the kind of questions you want to ask of it?","Q_Score":0,"Tags":"python,sql,django,pandas,bigdata","A_Id":56656088,"CreationDate":"2019-06-17T14:39:00.000","Title":"I want to write a 75000x10000 matrix with float values effectively into a database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to connect to an Oracle database within a python script, I'm not allowed to use any 3rd party imports\/downloads, only the python standard library, like cx_oracle, which is the only solution to this I've found. I'm not super familiar with oracle databases, could someone explain how to connect and query without using cx_oracle and things like it.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":963,"Q_Id":56669750,"Users Score":0,"Answer":"Oracle's network protocol isn't public so you need either (i) some Oracle technology installed on your computer that knows that protocol - this is cx_Oracle and Oracle Instant Client (ii) or something like Oracle's ORDS product running on the database which will let you use REST calls.\nIf you need to interact with an Oracle Database you could make a very strong argument that you need to install cx_Oracle and Oracle Instant Client. cx_Oracle is on PyPI so it can be installed like any other Python package you need. 
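Returning to the "no third-party packages" Oracle question: a standard-library-only sketch of the ORDS route mentioned in the answer above. It assumes a DBA has already enabled Oracle REST Data Services and published an endpoint for the table; the URL below is hypothetical.

```python
# Standard library only: query a hypothetical ORDS REST endpoint over HTTP.
import json
import urllib.request

url = "https://dbhost.example.com/ords/hr/employees/"  # hypothetical ORDS endpoint

with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

# ORDS AutoREST collections return rows under an "items" key.
for row in payload.get("items", []):
    print(row)
```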
Instant Client needs to be installed separately, but is the Oracle product that you could be expected to require to connect to Oracle DB.","Q_Score":1,"Tags":"python,sql,database,oracle","A_Id":56676662,"CreationDate":"2019-06-19T14:10:00.000","Title":"Connecting to Oracle DB in python without using 3rd parties","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm pretty new to Postgres and very new to SQLAlchemy, so I apologise if this is a silly question.\nI've spent sometime googling this and reading the documentation on SQLAlchemy but I cant seem to find a straight answer.\nSo, my question is this..\nAs relationships are defined in code when using ORM, providing the database table structures define the necessary column fields, do you actually need to define ForeignKey constraints in the database itself as well?\nI know that the constraints can help with enforcing integrity but do they need to be there for a successful ORM implementation?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":136,"Q_Id":56673047,"Users Score":0,"Answer":"Foreign key constraints don't need to be there in order to have a successful ORM implementation. I think it's probably most common for the ORM to try to manage that kind of thing itself rather than follow database best practices. \nPersonally, I have a problem with that approach. I usually deal with enterprise databases that have many programs written in many different languages accessing the database. The other programs are simply not going to delete rows and cascade the deletes by calling the ORM, even if that's possible. \nSome ORMs support \"legacy\" databases, meaning the ORM can be configured to deal with databases that already implemented arbitrary constraints, primary keys that have multiple columns rather than an ID number per table, cascading updates and deletes, and so on.\nIn any case, the database belongs to the business, not to the ORM. Support for that idea varies, too.","Q_Score":0,"Tags":"python,postgresql,sqlalchemy","A_Id":56674138,"CreationDate":"2019-06-19T17:28:00.000","Title":"Do you need to define relationship in the DB if you use ORM?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am repeatedly doing an operation which creates a record in my Cassandra table at every iteration. However, for my purposes I only need a limited number of the most recent results stored. Stale rows are not interesting, and also the database would quickly inflate in size because the operation is meant to run many times a second over many days.\nI am essentially using the Cassandra table as a buffer. That is by design.\nIs there a way to set Cassandra to have a limit on how many rows a table can have, and drop old rows automatically with minimum performance impact when new rows are pushed?\nMy codebase is in Python so I'd prefer a Python solution.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":265,"Q_Id":56732357,"Users Score":3,"Answer":"No, there is no such method built-in. 
\nThe traditional approach in Cassandra is for removing old information is not by count, but rather by date: When you insert a row (or even modify a single cell), you can put an expiration time (a.k.a. TTL) on this data. E.g., you write a row that is set to expire on one day. Cassandra will then take care of dropping the expired data from disk - automatically and efficiently (actually dropping the data happens during compaction). \nThis is of course not the same as saying you always want to keep exactly the newest 1000 rows, but maybe this is good enough for your use case, if your main intention is to keep your database size from exploding, and not really to keep a specific number of rows.","Q_Score":1,"Tags":"python,cassandra","A_Id":56733645,"CreationDate":"2019-06-24T08:10:00.000","Title":"Keep only newest N rows in Cassandra","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am repeatedly doing an operation which creates a record in my Cassandra table at every iteration. However, for my purposes I only need a limited number of the most recent results stored. Stale rows are not interesting, and also the database would quickly inflate in size because the operation is meant to run many times a second over many days.\nI am essentially using the Cassandra table as a buffer. That is by design.\nIs there a way to set Cassandra to have a limit on how many rows a table can have, and drop old rows automatically with minimum performance impact when new rows are pushed?\nMy codebase is in Python so I'd prefer a Python solution.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":265,"Q_Id":56732357,"Users Score":0,"Answer":"Use can use TTL. It will automatically delete the rows as the time mentioned in TTL","Q_Score":1,"Tags":"python,cassandra","A_Id":56752884,"CreationDate":"2019-06-24T08:10:00.000","Title":"Keep only newest N rows in Cassandra","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to run some PySpark code whenever a BigQuery table is updated?\nI have something similar running whenever a file is uploaded to Google Cloud Storage using Cloud Functions but I can't find anything in the BigQuery documentation that offers similar functionality.\nWould appreciate any help, thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":56740463,"Users Score":0,"Answer":"There is currently no BigQuery trigger for Google Cloud Functions, however this feature is currently in progress and should be launched soon (as of June 2019).","Q_Score":0,"Tags":"python,google-cloud-platform,google-bigquery,google-cloud-functions","A_Id":56781347,"CreationDate":"2019-06-24T16:15:00.000","Title":"How do I run some code whenever a BigQuery table is updated?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a .sql file containing insert statement from a table. 
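A sketch of the TTL approach described in the Cassandra answer above, using the DataStax cassandra-driver. The keyspace, table and TTL value are hypothetical; rows written this way expire automatically (here after 24 hours) and are dropped during compaction.

```python
import uuid
from cassandra.cluster import Cluster  # assumes cassandra-driver is installed

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("buffer_ks")  # hypothetical keyspace

insert_cql = """
    INSERT INTO recent_results (id, payload)
    VALUES (%s, %s)
    USING TTL 86400
"""

# Each row carries its own expiry; no manual cleanup job is needed.
session.execute(insert_cql, (uuid.uuid4(), "some result"))
```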
Basically, I get the data from one table, do some modification, then create insert statement from a list of dict and write it to a file. However, the issue is with the text field. The values are not escaped. Is there any utility\/helper function that can help me get the insert statement with the handling of escape characters.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":56756004,"Users Score":0,"Answer":"Maybe you should use text.replace('', '') to escape characters that do not fit in the SQL query.","Q_Score":0,"Tags":"python,mysql,cursor","A_Id":56756765,"CreationDate":"2019-06-25T14:09:00.000","Title":"Creating insert into mysql script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a way to get the size of a database with SQL Alchemy. Ideally, it will be agnostic to which underlying type of database is used. Is this possible?\nEdit:\nBy size, I mean total number of bytes that the database uses.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1584,"Q_Id":56761389,"Users Score":1,"Answer":"The way I would do is to find out if you can run a SQL query to get the answer. Then, you can just run this query via SQLAlchemy and get the result.","Q_Score":3,"Tags":"python,sqlalchemy","A_Id":57019257,"CreationDate":"2019-06-25T20:15:00.000","Title":"How to get database size in SQL Alchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a website to get to know aws and Django better. The idea is to let a user upload an excel file, convert it to csv and then let the user download the converted csv file. \nI am using amazon s3 for file storage. My question is, what is the best way to make the conversion? Is there any way to access the excel file once it is stored in the s3 bucket and convert it to csv via Django? Sorry if my question is silly but I haven\u2019t been able to find much information on that online. Thanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1128,"Q_Id":56777595,"Users Score":0,"Answer":"On every put event of Bucket you can trigger a AWS Lambda function which will convert your File format and save in desired bucket location.","Q_Score":0,"Tags":"python,django,amazon-web-services,amazon-s3,file-conversion","A_Id":56777922,"CreationDate":"2019-06-26T16:45:00.000","Title":"What is the best way to convert a file in amazon s3 with Django\/python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to make a temporary table a create on pyspark available via Thrift. 
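Following the SQLAlchemy answer above about running a size query through the engine, a sketch for PostgreSQL. The connection URL is a placeholder, and other backends need their own size query, so this is not database-agnostic.

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@localhost/mydb")  # placeholder URL

with engine.connect() as conn:
    size_bytes = conn.execute(
        text("SELECT pg_database_size(current_database())")
    ).scalar()

print("database size: %d bytes" % size_bytes)
```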
My final goal is to be able to access that from a database client like DBeaver using JDBC.\nI'm testing first using beeline.\nThis is what i'm doing.\n\nStarted a cluster with one worker in my own machine using docker and added spark.sql.hive.thriftServer.singleSession true on spark-defaults.conf\nStarted Pyspark shell (for testing sake) and ran the following code:\nfrom pyspark.sql import Row\nl = [('Ankit',25),('Jalfaizy',22),('saurabh',20),('Bala',26)]\nrdd = sc.parallelize(l)\npeople = rdd.map(lambda x: Row(name=x[0], age=int(x[1])))\npeople = people.toDF().cache()\npeebs = people.createOrReplaceTempView('peebs')\nresult = sqlContext.sql('select * from peebs')\nSo far so good, everything works fine.\nOn a different terminal I initialize spark thrift server:\n.\/sbin\/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001 --conf spark.executor.cores=1 --master spark:\/\/172.18.0.2:7077\nThe server appears to start normally and I'm able to see both pyspark and thrift server jobs running on my spark cluster master UI.\nI then connect to the cluster using beeline\n.\/bin\/beeline\nbeeline> !connect jdbc:hive2:\/\/172.18.0.2:10001\nThis is what I got\n\nConnecting to jdbc:hive2:\/\/172.18.0.2:10001\n Enter username for jdbc:hive2:\/\/172.18.0.2:10001: \n Enter password for jdbc:hive2:\/\/172.18.0.2:10001: \n 2019-06-29 20:14:25 INFO Utils:310 - Supplied authorities: 172.18.0.2:10001\n 2019-06-29 20:14:25 INFO Utils:397 - Resolved authority: 172.18.0.2:10001\n 2019-06-29 20:14:25 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2:\/\/172.18.0.2:10001\n Connected to: Spark SQL (version 2.3.3)\n Driver: Hive JDBC (version 1.2.1.spark2)\n Transaction isolation: TRANSACTION_REPEATABLE_READ\n\nSeems to be ok.\nWhen I list show tables; I can't see anything.\n\nTwo interesting things I'd like to highlight is:\n\nWhen I start pyspark I get these warnings\n\nWARN ObjectStore:6666 - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0\nWARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException\nWARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException\n\nWhen I start the thrift server I get these:\n\nrsync from spark:\/\/172.18.0.2:7077\n ssh: Could not resolve hostname spark: Name or service not known\n rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\n rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to ...\n\n\nI've been through several posts and discussions. I see people saying we can't have temporary tables exposed via thrift unless you start the server from within the same code. If that's true how can I do that in python (pyspark)?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1684,"Q_Id":56820752,"Users Score":0,"Answer":"createOrReplaceTempView creates an in-memory table. The Spark thrift server needs to be started on the same driver JVM where we created the in-memory table.\nIn the above example, the driver on which the table is created and the driver running STS(Spark Thrift server) are different.\nTwo options\n1. Create the table using createOrReplaceTempView in the same JVM where the STS is started.\n2. 
Use a backing metastore, and create tables using org.apache.spark.sql.DataFrameWriter#saveAsTable so that tables are accessible independent of the JVM(in fact without any Spark driver. \nRegarding the errors:\n1. Relates to client and server metastore version.\n2. Seems like some rsync script trying to decode spark:\\\\ url\nBoth doesnt seems to be related to the issue.","Q_Score":1,"Tags":"python,apache-spark,pyspark,thrift,spark-thriftserver","A_Id":56824231,"CreationDate":"2019-06-29T20:41:00.000","Title":"How to view pyspark temporary tables on Thrift server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I just tried to find out how to install additional Python packages for the standalone installation of Orange3.\nI work on MacOS and want to use the \"SQL Table\" widget which needs pymysql installed. After installing all add-ons, including Prototypes, the said widget still tells me to \"Please install a backend to this widget\". This issue remains when pymysql is installed system wide - which by itself is less than ideal and apparently also not the way Orange is intended to work.\nI was expecting some click and play install for packages similar to the one for add-ons or some prominently displayed information in the documentation which I failed to find (if it is there).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":832,"Q_Id":56847107,"Users Score":2,"Answer":"You can install any pip-installable package in the Add-on dialog, if you use the \"Add more...\" button.\nYou can also install packages into the Orange app from a terminal if you run python or pip from the app. On my Mac, I would call \/Applications\/Orange3.app\/Contents\/MacOS\/pip + any arguments. In fact, this is what the Add-on dialog does.","Q_Score":2,"Tags":"python,orange","A_Id":56851044,"CreationDate":"2019-07-02T07:10:00.000","Title":"How to install Python packages for Orange3 standalone installation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to join two bigquery tables in such a way that the data is read from table using query and inner join should be performed by beam coGroupBY key. 
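Returning to the Spark Thrift server answer above, a rough sketch of option 2: persist the data through the metastore with `saveAsTable` so any Thrift/JDBC client can see it, instead of an in-memory temp view tied to one driver JVM. It assumes a shared Hive metastore is configured for both the job and the Thrift server.

```python
from pyspark.sql import Row, SparkSession

spark = (
    SparkSession.builder
    .appName("expose-table")
    .enableHiveSupport()   # use the shared metastore
    .getOrCreate()
)

l = [("Ankit", 25), ("Jalfaizy", 22), ("saurabh", 20), ("Bala", 26)]
people = spark.createDataFrame([Row(name=n, age=a) for n, a in l])

# Written into the metastore-backed warehouse, so `show tables;` in beeline lists it.
people.write.mode("overwrite").saveAsTable("peebs")
```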
How can I pass the primary key to join both tables?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":287,"Q_Id":56934861,"Users Score":1,"Answer":"Read data from two different bigquery tables in two different pcollection.\nThen create tuples with your join key using lamda or map function from beam.\nOnce you have these two tuples for tables , go ahead and use coGroupBY key to join these two pcollections.","Q_Score":0,"Tags":"python-2.7,google-cloud-platform,google-bigquery","A_Id":56951506,"CreationDate":"2019-07-08T12:22:00.000","Title":"How to join two bigquery tables using python and beam coGroupby concept without directly passing the join condition in the query?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is going to be more of an abstract question, since there's no code I can provide related to this question. I'm a bit new to working with databases, so I'm not familiar with conventional designs (yet).\nI have these tables: users and servers. \nI currently have a method of keeping score for each user by incrementing the score field in the users table. However, this results in global scores, which is fine, but I would like to be able to track server-specific scores as well.\nWhat would be the best approach for keeping a user's score for each server they use?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":56960555,"Users Score":0,"Answer":"To track server specific scores you can just add a table with servers ids by using guild.id.","Q_Score":0,"Tags":"python-3.x,sqlite,discord.py","A_Id":57065038,"CreationDate":"2019-07-09T21:10:00.000","Title":"How to store values for each user for each server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I want to add a button in my text area in Spotfire which will open an excel file (that is connected to my spotfire visualisation) or at least to a network folder with that file. \nI believe I can write an ironpython script just to open that file and make changes. How will I do that? \nUpdate: \nAfter some googling I have tried to run a simple script smth like: \nt=open('D:\/data\/folderA\/folderB\/file.xlsx','w')\nTo avoid problems with \"\/\" or \"\\\", I also tried importing os\nimport os\nt=open('D:','data', 'folderA', 'folderB', 'file.xlsx', 'w')\nNeither of these work.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":827,"Q_Id":56970160,"Users Score":0,"Answer":"For those who is still struggling to find the solution, it turned out to be simpler than I thought it would. 
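For the Beam/BigQuery join answer above, a hedged sketch of keying two BigQuery reads and joining them with CoGroupByKey. It assumes a recent Beam Python SDK (which provides `beam.io.ReadFromBigQuery`); project, dataset and column names are made up.

```python
import apache_beam as beam


def inner_join(element):
    key, grouped = element
    # Emit a merged record only when the key exists on both sides (inner join).
    for order in grouped["orders"]:
        for user in grouped["users"]:
            merged = dict(user)
            merged.update(order)
            yield merged


with beam.Pipeline() as p:
    orders = (
        p
        | "ReadOrders" >> beam.io.ReadFromBigQuery(
            query="SELECT user_id, amount FROM `my-proj.my_ds.orders`",
            use_standard_sql=True)
        | "KeyOrders" >> beam.Map(lambda row: (row["user_id"], row))
    )
    users = (
        p
        | "ReadUsers" >> beam.io.ReadFromBigQuery(
            query="SELECT user_id, name FROM `my-proj.my_ds.users`",
            use_standard_sql=True)
        | "KeyUsers" >> beam.Map(lambda row: (row["user_id"], row))
    )

    joined = (
        {"orders": orders, "users": users}
        | "CoGroup" >> beam.CoGroupByKey()
        | "InnerJoin" >> beam.FlatMap(inner_join)
    )
```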
\nfrom System.Diagnostics import Process\nProcess.Start(r 'start c:\\test\\abc.xlsx')","Q_Score":0,"Tags":"python,ironpython,spotfire","A_Id":58213112,"CreationDate":"2019-07-10T11:51:00.000","Title":"Python script to open a file in Spotfire","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Connect to mongodb 4 server with pymongo 3.8 but get serverselection timeout error\npymongo.errors.ServerSelectionTimeoutError: IP:host: [Errno 104] Connection reset by peer","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":56982791,"Users Score":0,"Answer":"I have similar issue. \nI could not connect at first then changed to master host. \nAfter that the connection problem fixed. However, I still cannot CRUD or list collections. It throws \n OperationFailure: Authentication failed.","Q_Score":1,"Tags":"python-3.x,pymongo","A_Id":58973297,"CreationDate":"2019-07-11T06:24:00.000","Title":"pymongo 3.8 not work with mondob 4 and python 3.5","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pivot table in excel that I want to read the raw data from that table into python. Is it possible to do this? I do not see anything in the documentation on it or on Stack Overflow.\nIf the community could be provided some examples on how to read the raw data that drives pivot tables, this could greatly assist in routine analytical tasks.\nEDIT: \nIn this scenario there are no raw data tabs. I want to know how to ping the pivot table get the raw data and read it into python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":888,"Q_Id":57078501,"Users Score":0,"Answer":"First, recreate raw data from the pivot table. The pivot table has full information to rebuild the raw data.\n\nMake sure that none of the items in the pivot table fields are hidden -- clear all the filters and Slicers that have been applied.\nThe pivot table does not need to contain all the fields -- just make sure that there is at least one field in the Values area.\nShow the grand totals for rows and columns. If the totals aren't visible, select a cell in the pivot table, and on the Ribbon, under PivotTable Tools, click the Analyze tab. In the Layout group, click Grand totals, then click On for Rows and Columns.\nDouble-click the grand total cell at the bottom right of the pivot table. 
This should create a new sheet with the related records from the original source data.\n\nThen, you could read the raw data from the source.","Q_Score":5,"Tags":"python,excel,pandas,pivot-table","A_Id":71120859,"CreationDate":"2019-07-17T14:40:00.000","Title":"Getting the Raw Data Out of an Excel Pivot Table in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Red Hat, apache 2.4.6, worker mpm, mod_wsgi 4.6.5, and Python 3.7 When I start httpd I get the above error and: \nModuleNotFoundError: No module named 'encodings'\nIn the httpd error_log.\nI'm using a python virtual environment created from a python installed from source under my home directory. I installed mod_wsgi from source using --with-python= option pointing to the python binary in my virtual environment, then I copied the mod_wsgi.so file into my apache modules directory as mod_wsgi37.so\nI ran ldd on this file, and have a .conf file loading it into httpd like this:\nLoadFile \/home\/myUser\/pythonbuild\/lib\/libpython3.7m.so.1.0\n LoadModule wsgi_module modules\/mod_wsgi37.so\nThen within my VirtualHost I have:\nWSGIDaemonProcess wsgi group=www threads=12 processes=2 python-path=\/var\/\n www\/wsgi-scripts python-home=\/var\/www\/wsgi-scripts\/wsgi_env3\nWSGIProcessGroup wsgi\nWSGIScriptAlias \/test \/var\/www\/wsgi-scripts\/test.py\n\nfrom my virtual environment:\nsys.prefix:'\/var\/www\/wsgi-scripts\/wsgi_env3'\nsys.real_prefix:'\/home\/myUser\/pythonbuild'\nWhen I switch to the system-installed mod_wsgi\/python combo (remove python-home line from WSGIDaemonProcess, and change the .conf file to load the original mod_wsgi.so) it works fine. It seems like some path variables aren't getting set properly. Is there another way to set variables like PYTHONHOME that I'm missing? How can I fix my install?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2265,"Q_Id":57082540,"Users Score":1,"Answer":"I had a very similar issue and I found that my manually specified LoadModule wsgi_module \"\/path_to_conda\/\" was being ignored because the previously apache-wide wsgi mod was being loaded. 
You can check if wsgi.* is present in \/etc\/apache2\/mods-enabled.\nIf that is the case, consider a2dismod wsgi to disable the apache wsgi that loads the wrong python.","Q_Score":2,"Tags":"python,apache,redhat,wsgi","A_Id":68673425,"CreationDate":"2019-07-17T18:56:00.000","Title":"mod_wsgi - Fatal Python error: initfsencoding: unable to load the file system codec","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"import mysql.connector\nModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package\npip install mysql-connector-python-rf\npython version-3.7.2\npip install mysql-connector-python-rf\npip install mysql-connector-python \nsuccesfully installed\nimport mysql.connector\nModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package\nwhereas when I import mysql gives no error message","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":6971,"Q_Id":57130245,"Users Score":5,"Answer":"resolved by renaming the file to anything else than MySQL.py as it just tries to load itself I guess","Q_Score":3,"Tags":"python,mysql","A_Id":61525699,"CreationDate":"2019-07-21T04:06:00.000","Title":"import mysql.connector ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Loading the excel file using read_excel takes quite long. Each Excel file has several sheets. The first sheet is pretty small and is the sheet I'm interested in but the other sheets are quite large and have graphs in them. Generally this wouldn't be a problem if it was one file, but I need to do this for potentially thousands of files and pick and combine the necessary data together to analyze. 
If somebody knows a way to efficiently load in the file directly or somehow quickly make a copy of the Excel data as text that would be helpful!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":768,"Q_Id":57173573,"Users Score":-1,"Answer":"The method read_excel() reads the data into a Pandas Data Frame, where the first parameter is the filename and the second parameter is the sheet.\ndf = pd.read_excel('File.xlsx', sheetname='Sheet1')","Q_Score":0,"Tags":"python,pandas,python-2.7","A_Id":61889824,"CreationDate":"2019-07-23T23:56:00.000","Title":"How to best(most efficiently) read the first sheet in Excel file into Pandas Dataframe?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python script not running from SQL Server Agent\nthe python script makes use of the requests library (installed)\nwhen i run from CMD :\npython \"C:\\Program Files (x86)\\Python37-32\\main.py\"\nthis works fine.\nwhen i run it from SQL agent as an Operating System (CmdExec) all i get is \"System cannot find the file specified\"\ni have set the enviorment paths.\nI have created a proxy account (sys admin)\ni have copied the requests library so its in the same folder as Python.\nnothing is working\n37 failed attempts today , and counting !\ncan anyone assist>\npython \"C:\\Program Files (x86)\\Python37-32\\main.py\"","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":57187789,"Users Score":-1,"Answer":"i managed to get it working.\nmanually moved some libraries around in python.","Q_Score":0,"Tags":"python-2.7","A_Id":57218865,"CreationDate":"2019-07-24T16:48:00.000","Title":"Executing a python script from SQL Server Agent","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a table I need to add columns to it, one of them is a column that dictates business logic. So think of it as a \"priority\" column, and it has to be unique and a integer field. It cannot be the primary key but it is unique for business logic purposes.\nI've searched the docs but I can't find a way to add the column and add default (say starting from 1) values and auto increment them without setting this as a primarykey..\nThus creating the field like\n\nexample_column = IntegerField(null=False, db_column='PriorityQueue',default=1)\n\nThis will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1')\nSo, is it possible to do the above somehow and get the column to auto increment?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":252,"Q_Id":57209258,"Users Score":1,"Answer":"It should definitely be possible, especially outside of peewee. You can definitely make a counter that starts at 1 and increments to the stop and at the interval of your choice with range(). 
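A small follow-up sketch for the read_excel discussion above. In current pandas the keyword is `sheet_name` (the `sheetname` spelling is the old one), and `sheet_name=0` selects only the first sheet; whether the heavier sheets are skipped entirely depends on the engine. File paths are placeholders.

```python
import glob
import pandas as pd

frames = []
for path in glob.glob("data/*.xlsx"):          # thousands of workbooks
    df = pd.read_excel(path, sheet_name=0)     # parse just the first (small) sheet
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
```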
You can then write each incremented variable to the desired field in each row as you iterate through.","Q_Score":1,"Tags":"python,python-3.x,peewee","A_Id":57209625,"CreationDate":"2019-07-25T19:50:00.000","Title":"Peewee incrementing an integer field without the use of primary key during migration","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a table I need to add columns to it, one of them is a column that dictates business logic. So think of it as a \"priority\" column, and it has to be unique and a integer field. It cannot be the primary key but it is unique for business logic purposes.\nI've searched the docs but I can't find a way to add the column and add default (say starting from 1) values and auto increment them without setting this as a primarykey..\nThus creating the field like\n\nexample_column = IntegerField(null=False, db_column='PriorityQueue',default=1)\n\nThis will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1')\nSo, is it possible to do the above somehow and get the column to auto increment?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":57209258,"Users Score":0,"Answer":"Depends on your database, but postgres uses sequences to handle this kind of thing. Peewee fields accept a sequence name as an initialization parameter, so you could pass it in that manner.","Q_Score":1,"Tags":"python,python-3.x,peewee","A_Id":57210489,"CreationDate":"2019-07-25T19:50:00.000","Title":"Peewee incrementing an integer field without the use of primary key during migration","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Using pyarrow I can write parquet files of version 2.0.\npyarrow.parquet.write_table method has parameter 'version'. But there is no parameter 'version' for pyarrow.parquet.read_table method. And seems like it only can read parquet files of version 1.0.\nHow to read parquet files of version 2.0 with pyarrow?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":57219611,"Users Score":0,"Answer":"pyarrow.parquet.read_table can read files written for Parquet version 2.0 automatically. No need to set parameter, this can be detected by reading the metadata of the given Parquet file.\nIn your specific case it is hard to give you an exact answer on why it seems that the read isn't working as you did not include any tracebacks in your question.","Q_Score":0,"Tags":"python,pandas,parquet,pyarrow","A_Id":57219950,"CreationDate":"2019-07-26T11:58:00.000","Title":"pyarrow read_table has no 'parquet version' parameter","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a user installable application the takes a 2-5 MB JSON file and then queries the data for metrics. It will pull metrics like the number of unique items, or the number of items with a field set to a certain value, etc. 
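A hedged sketch of the counter idea from the peewee answer above: backfill the new column with an incrementing value, one row at a time, inside a transaction. It assumes the column was first added as nullable and without the UNIQUE constraint (the constraint can be enforced after the backfill); the model and field names are made up.

```python
from peewee import SqliteDatabase, Model, IntegerField, CharField

db = SqliteDatabase("app.db")


class Task(Model):
    name = CharField()
    priority = IntegerField(null=True)  # the new "PriorityQueue"-style column

    class Meta:
        database = db


with db.atomic():
    # Assign 1, 2, 3, ... in a stable order so values end up unique.
    for counter, row in enumerate(Task.select().order_by(Task.id), start=1):
        Task.update(priority=counter).where(Task.id == row.id).execute()
```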
Sometimes, it pulls metrics that are more tabular like returning all items with certain properties and all their fields from the JSON.\nI need help making a technology choice. I am between using either Pandas or SQLite with peewee as an ORM. I am not concerned about converting the JSON file to a SQLite database, I already have this prototyped. I want help evaluating the pros and cons of a SQLite database versus Pandas.\nOther factors to consider are that my application may require analyzing metrics across multiple JSON files of the same structure. For example, how many unique items are there across 3 selected JSON files.\nI am news to Pandas so I can't make a strong argument for or against it yet. I am comfortable with SQLite with an ORM, but don't want to settle if this technology choice would be restrictive for future development. I don't want to factor in a learning curve. I just want an evaluation on the technologies head-to-head for my application.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":158,"Q_Id":57257180,"Users Score":1,"Answer":"You are comparing a database to an in-memory processing library. They are two seperate ideas. Do you need persistent storage over multiple runs of code? Use SQLite (since you're using metrics I would guess this is the path you need). You could use Pandas to write CSV's\/TSV's and use those as permanent storage but you'll eventually start to bottleneck having to load multiple CSV's into one Dataframe for processing.\nYour use case sounds better suited to using SQLite, in my opinion.","Q_Score":0,"Tags":"python,json,pandas,sqlite","A_Id":57257267,"CreationDate":"2019-07-29T15:39:00.000","Title":"Should I use a SQLite database or Pandas for my application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using read_sql function to pull data from a postgresql table. As I store that data in a dataframe, I could find that some integer dtype column is automatically getting converted to float, is there any way to prevent that while using read_sql functiononly","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":57290281,"Users Score":0,"Answer":"Since your column contains NaN values, which are floating point numbers, I don't think you can avoid this 'issue' loading from the Database without changing the query. 
\nIf you wish to change the query, you can insert a WHERE clause that would exclude None values, or check if the row contains such a column value.\nWhat I suggest would be to use .fillna(), and then to cast as integers using .astype('int')\nEdit : Just in case, your question is wrong, you are saying \n\nIs there any way to change columns datatype that should be int became a float while using read_sql from table\n\nBut since it includes NaN, it is not expected to be an int, but a float.","Q_Score":0,"Tags":"python,pandas","A_Id":57290466,"CreationDate":"2019-07-31T12:01:00.000","Title":"Is there any way to change columns datatype that should be int became a float while using read_sql from table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on Django app on branch A with appdb database in settings file. Now I need to work on another branch(B) which has some new DB changes(eg. new columns, etc). The easiest for me is to point branch B to a different DB by changing the settings.py and then apply the migrations. I did the migrations but I am getting error like 1146, Table 'appdb_b.django_site' doesn't exist. So how can I use a different DB for my branchB code without dropping database appdb?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":43,"Q_Id":57294233,"Users Score":1,"Answer":"The existing migration files have information that causes the migrate command to believe that the tables should exist and so it complains about them not existing.\nYou need to MOVE the migration files out of the migrations directory (everything except init.py) and then do a makemigrations and then migrate.","Q_Score":0,"Tags":"python,django","A_Id":57296935,"CreationDate":"2019-07-31T15:27:00.000","Title":"How to point Django app to new DB without dropping the previous DB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have some .xls datas in my Google Cloud Storage and want to use airflow to store it to GCP. Can I export it directly to BigQuery or can i use additional library (such a pandas and xlrd) to convert the files and store it into BigQuery? \nThanks","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":542,"Q_Id":57367921,"Users Score":2,"Answer":"Bigquery don't support xls format. The easiest way is to transform the file in CSV and to load it into big query.\nHowever, I don't know your xls format. If it's multisheet you have to work on the file.","Q_Score":0,"Tags":"excel,google-cloud-storage,airflow,xls,python-bigquery","A_Id":57369728,"CreationDate":"2019-08-06T01:50:00.000","Title":"Import XLS file from GCS to BigQuery","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm syncing a large amount of data and I'm getting this error back: A string literal cannot contain NUL (0x00) characters. Obviously this is a postgres problem, but I'm not quite sure how to solve it. Is there a way to strip null characters out at the Django model level? 
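Illustrating the suggestion above: columns read by `read_sql` become float when they contain NULLs, so either fill the gaps and cast back, or use pandas' nullable integer dtype. The connection URL, table and column names are placeholders.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost/mydb")  # placeholder
df = pd.read_sql("SELECT id, quantity FROM orders", engine)

# Option 1: fill NULLs with a sentinel, then cast back to int.
df["quantity"] = df["quantity"].fillna(0).astype("int")

# Option 2 (newer pandas): keep the NULLs and still get an integer column.
# df["quantity"] = df["quantity"].astype("Int64")
```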
I have a large set of fields that I'm syncing.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":9755,"Q_Id":57371164,"Users Score":2,"Answer":"Unless you definitely do want to store NUL characters, you should sanitize your text so it does not contain them. At the model level, you'd define a clean_fieldname method to do that.\nIf you do want to store them, you need to store them in a binary-compatible field in the database. Django 1.6+ has BinaryField which should work.","Q_Score":11,"Tags":"python,django,postgresql","A_Id":57372080,"CreationDate":"2019-08-06T07:41:00.000","Title":"Django + Postgres: A string literal cannot contain NUL (0x00) characters","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is the environment:\n\nAWS Aurora database compatible with MySql.\nDjango 2.0.3 (Python 3.6)\nPip-Mysql dependencies: django-mysql==2.2.2, mysqlclient==1.3.12.\nMaster-Slave database configuration.\n\nIt seems that django or mysql engine always fails on certain queries resulting in this specific error:\n\nTraceback (most recent call last): File\n \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/django\/db\/models\/fields\/related_descriptors.py\",\n line 158, in get\n rel_obj = self.field.get_cached_value(instance) File \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/django\/db\/models\/fields\/mixins.py\",\n line 13, in get_cached_value\n return instance._state.fields_cache[cache_name] KeyError: 'assigned_to'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last): File\n \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/django\/db\/backends\/utils.py\",\n line 85, in _execute\n return self.cursor.execute(sql, params) File \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/django\/db\/backends\/mysql\/base.py\",\n line 71, in execute\n return self.cursor.execute(query, args) File \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/MySQLdb\/cursors.py\",\n line 253, in execute\n self._warning_check() File \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/MySQLdb\/cursors.py\",\n line 148, in _warning_check\n warnings = db.show_warnings() File \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/MySQLdb\/connections.py\",\n line 381, in show_warnings\n self.query(\"SHOW WARNINGS\") File \"\/home\/ubuntu\/ivs\/vpython\/lib\/python3.6\/site-packages\/MySQLdb\/connections.py\",\n line 277, in query\n _mysql.connection.query(self, query)\n _mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL server during query')\n\nYes, one of my models have \"assigend_to\" field which is a foreign key. But why does it fail with a KeyError?\nDid anyone have any similar KeyErrors and MySql lost connections as a result?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":629,"Q_Id":57377238,"Users Score":0,"Answer":"Wow, what actually was happening is this:\n\nI was making queries with reverse-foreign keys.\nObjects returned with with reverse-foreign keys contained some other foreign keys.\nWhen I tried to access them e.g. 
'assigned_to' i got this exception every time.","Q_Score":0,"Tags":"mysql,django,python-3.x,django-models,amazon-aurora","A_Id":57509740,"CreationDate":"2019-08-06T13:29:00.000","Title":"Django ORM key error with lost MySql connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using IntelliJ for my Python project. I have created a database connection under Database where I can run queries with no problem. So I have a list of .sql files that I would like to run from within python using the existing connection. How do I go about it? \nI can import various packages to pass the queries across to the external database, but since I already have this connection, I was wondering if it was possible to use it to pull data by simply just referring to the connection.\nThanks,","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":57392208,"Users Score":0,"Answer":"Rightclick on needed database in InteliJ Database tab and click on \"Open Console\".\nThere you can paste your sql and run it.","Q_Score":0,"Tags":"python,intellij-idea,jdbc","A_Id":57392749,"CreationDate":"2019-08-07T10:22:00.000","Title":"Accessing IntelliJ Database from Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with AWS Lambda functions (in Python), that process new files that appear in the same Amazon S3 bucket and folders.\nWhen new file appears in s3:\/folder1\/folderA, B, C, an event s3:ObjectCreated:* is generated and it goes into sqs1, then processed by Lambda1 (and then deleted from sqs1 after successful processing).\nI need the same event related to the same new file that appears in s3:\/folder1\/folderA (but not folderB, or C) to go also into sqs2, to be processed by Lambda2. Lambda1 modifies that file and saves it somewhere, Lambda2 gets that file into DB, for example.\nBut AWS docs says that:\n\nNotification configurations that use Filter cannot define filtering rules with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping.\n\nSo question is how to bypass this limitation? Are there any known recommended or standard solutions?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1130,"Q_Id":57399129,"Users Score":1,"Answer":"Instead of set up the S3 object notification of (S3 -> SQS), you should set up a notification of (S3 -> Lambda).\nIn your lambda function, you parse the S3 event and then you write your own logic to send whatever content about the S3 event to whatever SQS queues you like.","Q_Score":2,"Tags":"python-3.x,amazon-web-services,amazon-s3,amazon-sqs","A_Id":57399292,"CreationDate":"2019-08-07T16:55:00.000","Title":"How to direct the same Amazon S3 events into several different SQS queues?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a job which copies a large file to a table temp_a and also creates an index idx_temp_a_j on a column j. 
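To make the S3-to-multiple-SQS answer above concrete: a sketch of a Lambda handler that receives the S3 ObjectCreated event and fans it out to the queues itself, routing on the object key prefix. The queue URLs and prefixes are placeholders, not values from the question.

```python
import json
import urllib.parse

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URLs; in practice these would come from environment variables.
QUEUE_1_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/sqs1"
QUEUE_2_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/sqs2"


def handler(event, context):
    for record in event.get("Records", []):
        # Object keys arrive URL-encoded in S3 notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = json.dumps(record)

        # Everything under folder1/ goes to the first queue (Lambda1's input).
        sqs.send_message(QueueUrl=QUEUE_1_URL, MessageBody=body)

        # Only folder1/folderA/ objects additionally go to the second queue.
        if key.startswith("folder1/folderA/"):
            sqs.send_message(QueueUrl=QUEUE_2_URL, MessageBody=body)
```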
Now once the job finishes copying all the data, I have to rename this table to a table prod_a which is production facing and queries are always running against it with very less idle time. But once I run the rename queries, the queries coming in and the queries which are already running, are backed up producing high API error rates. I want to know what are the possible strategies I can implement so the renaming of the table happens with less downtime.\nSo far, below are the strategies I came up with:\n\nFirst, just rename the table and allow queries to be backed up. This approach seems unreliable as rename table query acquires the EXCLUSIVE LOCK and all other queries are backed up, I am getting high level of API error rates.\nSecond, write a polling function which checks if there any queries running now if not then rename the table and index. In this approach the polling function will check periodically to see if any query is running, any queries are running, then wait , if not then run the alter table query. This approach will only queue up queries which are coming after the alter table rename query has placed an EXCLUSIVE LOCK on the table. Once the renaming finishes, the queued up queries will get executed. I still need to find database APIs which will help me in writing this function.\n\nWhat are the other strategies which can allow this \"seamless\" renaming of the table? I am using postgres (PostgreSQL) 11.4 and the job which does all this is in Python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":57403626,"Users Score":0,"Answer":"You cannot avoid blocking concurrent queries while a table is renamed.\nThe operation itself is blazingly fast, so any delay you experience must be because the ALTER TABLE itself is blocked by long running transactions using the table. All later operations on the table then have to queue behind the ALTER TABLE.\nThe solution for painless renaming is to keep database transactions very short (which is always desirable, since it also reduces the danger of deadlocks).","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":57406173,"CreationDate":"2019-08-08T00:05:00.000","Title":"Strategies for renaming table and indices while select queries are running","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file in S3. I want to run a python script using data present in S3. The S3 file will change once in a week. I need to pass an input argument to my python script which loads my S3 file into Pandas and do some calculation to return the result.\nCurrently I am loading this S3 file using Boto3 in my server for each input argument. This process takes more time to return the result, and my nginx returns with 504 Gateway timeout.\nI am expecting some AWS service to do it in cloud. 
Can anyone point me in a right direction which AWS service is suitable to use here","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":389,"Q_Id":57426946,"Users Score":0,"Answer":"You have several options:\n\nUse AWS Lambda, but Lambda has limited local storage (500mb) and memory (3gb) with 15 run time.\nSince you mentioned Pandas I recommend using AWS Glue which has ability:\n\n\nDetect new file\nLarge Mem, CPU supported\nVisual data flow\nSupport Spark DF\nAbility to query data from your CSV files\nConnect to different database engines.\n\n\nWe currently use AWS Glue for our data parser processes","Q_Score":0,"Tags":"python-3.x,amazon-s3,aws-lambda,job-scheduling","A_Id":57427988,"CreationDate":"2019-08-09T09:16:00.000","Title":"How to run python script using S3 data in AWS","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Apologize if this is simple, but documentation on using Python with Microsoft BI is sparse at best. I'm curious if there is a command that imports Microsoft BI files similar to the read_excel function in pandas.\nI have a Microsoft BI file that has 175 worksheets, each of which is currently being exported to excel and saved by hand each day. Looking for some way to automate, and thought that if I could essentially read each file into Python and then save as an excel file it would save a tremendous amount of time.\nAlso adding a note that I prefer to save as csv as opposed to xlsx.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1918,"Q_Id":57430217,"Users Score":1,"Answer":"From within PowerBI you basically have two options. I don't think there is any possibility to import a PowerBI file into Python.\n\nInside PowerQuery you add Run Python Script as step to your transformation (Transform menu > Run Python Script). This allows you to use Python in the way you are used to and makes sure all data before this step is moved into a dataframe called dataset. You can simply use dataset.to_excel() to store this dataset as Excel file. Be sure to change the working directory with something like os.chdir() becasue by default it is running in a temporary directory.\nInside PowerBI you can add a script visual. Although it says it requires visual output of your script, the reality is that any code is executed even if the script does not result in a rendered image. The same principal hold as mentioned for the step from PowerQuery. Use os.chdir() to specify your directory and use dataset.to_excel() to export.\n\nFor both make sure Python scripting is enabled in the options, should be by default when you have python installed on your machine, else have a look through the menu, you'll easily find it.","Q_Score":1,"Tags":"python,excel,pandas,export,powerbi","A_Id":57431270,"CreationDate":"2019-08-09T12:36:00.000","Title":"Python Solution - Exporting Microsoft BI sheets to excel using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem and I am looking for a solution. I want to save the number of users registered in mongodb. For example, in django, the admin page has the number of registered users, and all other data is saved there. 
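For the Power BI answer above (os.chdir() plus an export from the injected dataset DataFrame), a rough sketch of what the "Run Python Script" step could contain; the export folder is a placeholder, and the fallback DataFrame exists only so the snippet also runs outside Power BI.

```python
import os
import pandas as pd

# Inside Power BI's "Run Python Script" step the current query's rows are
# provided as a DataFrame named `dataset`; this fallback is only for running
# the sketch standalone.
try:
    dataset
except NameError:
    dataset = pd.DataFrame({"example": [1, 2, 3]})

os.chdir(r"C:\exports")                     # placeholder; the default dir is temporary
dataset.to_csv("export.csv", index=False)   # CSV, since the asker prefers it over xlsx
```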
I want it to be saved in mongodb database instead of showing it on admin page, because my other data is also saved in mongodb. How do I do this? Should I make separate a class in models.py or something else.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":46,"Q_Id":57431089,"Users Score":1,"Answer":"You are asking a wrong question, because you should not do that.\nThe number of users is User.objects.count() maybe with a filter to count only active users. \nNever save data that can be calculated\/derived from other data, as this will just lead to inconsistencies. Why do you want to save it? You'd have to make sure the number is updated every time a new user is added\/deleted and it's so easy to forget places in your code where this might happen.","Q_Score":1,"Tags":"python,html,django,mongodb","A_Id":57445831,"CreationDate":"2019-08-09T13:26:00.000","Title":"How to save the number of registered users in mongodb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a database with some tables in it. I want now on my website has the dropdown and the choices are the names of people from a column of the table from my database and every time I click on a name it will show me a corresponding ID\u00a0also from a column from this table. how I can do that? or maybe a guide where should I find an answer !\nmany thanks!!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":57439500,"Users Score":0,"Answer":"You have to do that in python(if that's what you are using in the backend). \nYou can create functions in python that gets the list of name of tables which then you can pass to your front-end code. Similarly, you can setup functions where you get the specific table name from HTML and pass it to python and do all sort of database queries.\nIf all these sounds confusing to you. I suggest you take a Full stack course on udemy, youtube, etc because it can't really be explained in one simple answer.\nI hope it was helpful. Feel free to ask me more","Q_Score":0,"Tags":"javascript,python,html,flask","A_Id":57439549,"CreationDate":"2019-08-10T05:15:00.000","Title":"SelectField to create dropdown menu","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The documentation just says\n\nTo save an object back to the database, call save()\n\nThat does not make it clear. Exprimenting, I found that if I include an id, it updates existing entry, while, if I don't, it creates a new row. Does the documentation specify what happens?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2418,"Q_Id":57459259,"Users Score":-1,"Answer":"Depends on how the Model object was created. If it was queried from the database, UPDATE. 
If it's a new object and has not been saved before, INSERT.","Q_Score":0,"Tags":"python,django","A_Id":57459281,"CreationDate":"2019-08-12T10:10:00.000","Title":"Does django's `save()` create or update?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I see an option for MySql and Postgres, and have read help messages for sqlite, but I don't see any way to use it or to install it. So it appears that it's available or else there wouldn't be any help messages, but I can't find it. I can't do any 'sudo', so no 'apt install', so I don't know how to invoke and use it!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":19,"Q_Id":57467554,"Users Score":1,"Answer":"sqlite is already installed. You don't need to invoke anything to install it. Just configure your web app to use it.","Q_Score":0,"Tags":"pythonanywhere","A_Id":57476932,"CreationDate":"2019-08-12T19:59:00.000","Title":"pythonanywhere newbie: I don't see sqlite option","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Cassandra.\nI want to run a get query using the Cassandra Python client, but I am not able to escape special characters. Can anyone help?\nBelow is the query I am trying, but I am getting a syntax error:\n\nSELECT pmid FROM chemical WHERE mentions=$$\n N,N'-((1Z,3Z)-1,4-bis(4-methoxyphenyl)buta-1,3-diene-2,3-diyl)diformamide\n $$ AND pmid=31134000 ALLOW FILTERING;\n\nIt gives me the error:\nError from server: code=2000 [Syntax error in CQL query] message=\"line 1:118 mismatched input '-' expecting ')' (...,source) VALUES ('be75372a-c311-11e9-ac2c-0a0df85af938','N,N'[-]...)\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":57567690,"Users Score":0,"Answer":"Based on the syntax provided, there is a single quote missing in your query.\nSuggestion:\nNote the use of ALLOW FILTERING, as it will scan your table, which will be a performance issue.","Q_Score":0,"Tags":"python,cassandra,cassandra-cluster","A_Id":57604403,"CreationDate":"2019-08-20T06:20:00.000","Title":"Not able to escape the query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently helping with some NLP code, and in the code we have to access a database to get the papers. I have run the code successfully before, but every time I try to run the code again I get the error sqlite3.DatabaseError: file is not a database. I am not sure what is happening here because the database is still in the same exact position and the path doesn't change. \nI've tried looking up this problem but haven't found similar issues. \nI am hoping that someone can explain what is happening here, because I don't even know how to start with this issue because it runs once but not again.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":131,"Q_Id":57580912,"Users Score":0,"Answer":"I got the same issue. 
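Looping back to the "Does django's `save()` create or update?" answer a few records above: a short sketch of the behaviour it describes, using a hypothetical Article model (the app and model names are assumptions).

```python
from myapp.models import Article  # hypothetical app and model

# Fresh instance, no primary key yet -> save() issues an INSERT.
article = Article(title="First post")
article.save()

# Fetched from the database, pk is set -> save() issues an UPDATE.
article = Article.objects.get(pk=article.pk)
article.title = "First post (edited)"
article.save()

# The behaviour can also be forced explicitly if needed:
article.save(force_update=True)   # or force_insert=True on a new object
```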
I have a program that print some information from my database but after running it again and again, I got an error that my database was unable to load. For me I think it may be because I have tried to be connected to my database that this problem occurs. And what I suggest you is to reboot your computer or to research the way of being connected several times to the database","Q_Score":0,"Tags":"python-3.x,sqlite","A_Id":57581507,"CreationDate":"2019-08-20T20:09:00.000","Title":"Getting error 'file is not a database' after already accessing the database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script which creates some objects. \nI would like to be able to save these objects into my postgres database for use later. \nMy thinking was I could pickle an object, then store that in a field in the db. \nBut I'm going round in circles about how to store and retrieve and use the data.\nI've tried storing the pickle binary string as text but I can't work out how to encode \/ escape it. Then how to load the string as a binary string to unpickle.\nI've tried storing the data as bytea both with psycopg2.Binary(data) and without.\nThen reading into buffer and encoding with base64.b64encode(result) but it's not coming out the same and cannot be unpickled. \nIs there a simple way to store and retrieve python objects in a SQL (postgres) database?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":5865,"Q_Id":57642165,"Users Score":4,"Answer":"Following the comment from @SergioPulgarin I tried the following which worked!\nN.B Edit2 following comment by @Tomalak\nStoring:\n\nPickle the object to a binary string\npickle_string = pickle.dumps(object)\nStore the pickle string in a bytea (binary) field in postgres. Use simple INSERT query in Psycopg2\n\nRetrieval:\n\nSelect the field in Psycopg2. (simple SELECT query)\nUnpickle the decoded result\nretrieved_pickle_string = pickle.loads(decoded_result)\n\nHope that helps anybody trying to do something similar!","Q_Score":5,"Tags":"python,postgresql,pickle","A_Id":57644761,"CreationDate":"2019-08-24T23:23:00.000","Title":"saving python object in postgres table with pickle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Postgres and sqlalchemy.\nI have a job scans a large table and for each row does some calculation and updates some related tables. I am told that I should issue periodic commits inside the loop in order not to keep a large amount of in-memory data. I wonder such commits have a performance penalty, e.g. restarting a transaction, taking db snapshot perhaps etc. 
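A runnable sketch of the pickle-to-bytea recipe from the accepted answer above, assuming a table created as CREATE TABLE pickled_objects (id serial PRIMARY KEY, payload bytea); the connection string and table name are placeholders.

```python
import pickle

import psycopg2

obj = {"name": "example", "values": [1, 2, 3]}

conn = psycopg2.connect("dbname=test user=postgres password=secret")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Store: pickle to bytes, wrap in psycopg2.Binary, insert into the bytea column.
    cur.execute(
        "INSERT INTO pickled_objects (payload) VALUES (%s) RETURNING id",
        (psycopg2.Binary(pickle.dumps(obj)),),
    )
    new_id = cur.fetchone()[0]

    # Retrieve: select the bytea column and unpickle it.
    cur.execute("SELECT payload FROM pickled_objects WHERE id = %s", (new_id,))
    restored = pickle.loads(bytes(cur.fetchone()[0]))

print(restored == obj)  # True
```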
\nWould using a flush() be better in this case?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":37,"Q_Id":57654621,"Users Score":2,"Answer":"An open transaction won't keep a lot of data in memory.\nThe advice you got was probably from somebody who is used to Oracle, where large transactions cause problems with UNDO.\nThe question is how you scan the large table:\n\nIf you snarf the large table to the client and then update the related tables, it won't matter much if you commit in between or not.\nIf you use a cursor to scan the large table (which is normally better), you'd have to create a WITH HOLD cursor if you want the cursor to work across transactions. Such a cursor is materialized on the database server side and so will use more resources on the database.\nThe alternative would be to use queries for the large table that fetch only part of the table and chunk the operation that way.\n\nThat said, there are reasons why one big transaction might be better or worse than many smaller ones:\nReasons speaking for a big transaction:\n\nYou can use a normal cursor to scan the big table and don't have to bother with WITH HOLD cursors or the alternative as indicated above.\nYou'd have transactional guarantees for the whole operation. For example, you can simply restart the operation after an error and rollback.\n\nReasons speaking for operation in batches:\n\nShorter transactions reduce the risk of deadlocks.\nShorter transactions allow autovacuum to clean up the effects of previous batches while later batches are being processed. This is a notable advantage if there is a lot of data churn due to the updates, as it will help keep table bloat small.\n\nThe best choice depends on the actual situation.","Q_Score":0,"Tags":"python,database,postgresql,performance,sqlalchemy","A_Id":57655178,"CreationDate":"2019-08-26T08:49:00.000","Title":"is there a performance penalty for issuing periodic commits during a long DB scan?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using celery to do some distributed tasks and want to override celery_taskmeta and add some more columns. I use Postgres as DB and SQLAlchemy as ORM. I looked up celery docs but could not find out how to do it.\nHelp would be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1097,"Q_Id":57688644,"Users Score":2,"Answer":"I would suggest a different approach - add an extra table with your extended data. This table would have a foreign-key constraint that would ensure each record is related to the particular entry in the celery_taskmeta. Why this approach? - It separates your domain (domain of your application), from the Celery domain. 
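Returning to the periodic-commit answer above: a sketch of the "WITH HOLD cursor" option it mentions, using psycopg2. A named (server-side) cursor streams the big table in chunks, and withhold=True lets it survive the intermediate commits. Table names, column names, and batch sizes are placeholders.

```python
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres password=secret")  # placeholder DSN

# Named cursor = server-side cursor; withhold=True declares it WITH HOLD,
# so it stays usable across the periodic commits below.
read_cur = conn.cursor(name="big_scan", withhold=True)
read_cur.itersize = 1000          # rows fetched from the server per round trip
read_cur.execute("SELECT id, amount FROM big_table")

write_cur = conn.cursor()
processed = 0
for row_id, amount in read_cur:
    write_cur.execute(
        "UPDATE related_table SET total = total + %s WHERE big_id = %s",
        (amount, row_id),
    )
    processed += 1
    if processed % 10000 == 0:
        conn.commit()             # keeps transactions short; the held cursor stays open

conn.commit()
read_cur.close()
```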
Also it does not involve modifying the table structure that may (in theory it should not) cause trouble.","Q_Score":1,"Tags":"python,postgresql,sqlalchemy,celery","A_Id":57689713,"CreationDate":"2019-08-28T08:59:00.000","Title":"Overriding celery result table (celery_taskmeta) for Postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When creating an mlflow tracking server and specifying that a SQL Server database is to be used as a backend store, mlflow creates a bunch of table within the dbo schema. Does anyone know if it is possible to specify a different schema in which to create these tables?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":670,"Q_Id":57693162,"Users Score":1,"Answer":"It is possible to alter mlflow\/mlflow\/store\/sqlalchemy_store.py to change the schema of the tables that are stored. \nIt is very likely that this is the wrong solution for you, since you will go out of sync with the open source and lose newer features that alter this, unless you maintain the fork yourself. Could you maybe reply with your use case?","Q_Score":1,"Tags":"python,sqlalchemy,mlflow","A_Id":57862088,"CreationDate":"2019-08-28T13:06:00.000","Title":"Specify database backend store creation in specific schema","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When creating an mlflow tracking server and specifying that a SQL Server database is to be used as a backend store, mlflow creates a bunch of table within the dbo schema. Does anyone know if it is possible to specify a different schema in which to create these tables?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":670,"Q_Id":57693162,"Users Score":0,"Answer":"I'm using MSSQLServer as the backend store. I could use a different schema than dbo by specifying the default schema for the SQLServer user being used by MLFlow.\nIn my case, if the MLFlow tables (e.g: experiences) exist in dbo, then those tables will be used. If not, MLFlow will create those tables in the default schema.","Q_Score":1,"Tags":"python,sqlalchemy,mlflow","A_Id":63085202,"CreationDate":"2019-08-28T13:06:00.000","Title":"Specify database backend store creation in specific schema","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use Raspberry Pis to communicate collected Data via Python (Modbus TCP and RTU) scripts to a Database. These scripts are constantly running on the Pi and are connected to the Products where the data is coming from.\nConsequently, we have to ship the already set up Raspberry Pi to the Customer. 
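To illustrate the celery_taskmeta answer just above (an extra table with a foreign key instead of altering Celery's own table), a SQLAlchemy sketch; the extra columns are invented, and the celery_taskmeta.id target assumes Celery's default database result-backend schema.

```python
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class TaskMetaExtension(Base):
    """Application-owned columns that reference Celery's result table."""

    __tablename__ = "celery_taskmeta_extension"

    id = Column(Integer, primary_key=True)
    taskmeta_id = Column(
        Integer,
        ForeignKey("celery_taskmeta.id", ondelete="CASCADE"),
        nullable=False,
        unique=True,
    )
    triggered_by = Column(String(255))   # illustrative extra columns
    finished_at = Column(DateTime)
```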
Now the Problem occurs, that the Database Credentials are stored in the Python Scripts running on the Raspberry Pi.\nIs there a possibility to overcome this Problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":57705953,"Users Score":1,"Answer":"Naive solution: Store database credentials on your server (or somewhere on internet) so every time Raspberry Pi run the script, it connect to the server to get the credentials first.\nMy recommended solution: Create an API (may be web API) to communicate with database and Rasp Pi only work with this API. By this way, the client side doesn't know about database's credentials and some private things you want to hide also.","Q_Score":0,"Tags":"python,raspberry-pi3,raspbian","A_Id":57706159,"CreationDate":"2019-08-29T08:19:00.000","Title":"Hide Database Credentials in Python Code on Raspberry Pi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a django website with PostgreSQL database hosted on one server with a different company and a mirror of that django website is hosted on another server with another company which also have the same exact copy of the PostgreSQL database . How can i sync or update that in real time or interval","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":103,"Q_Id":57750703,"Users Score":0,"Answer":"Postgresql has master-slave replication. Try That!","Q_Score":1,"Tags":"django,python-3.x,postgresql","A_Id":57750738,"CreationDate":"2019-09-02T01:58:00.000","Title":"How can i update one PostgreSQL database and sync changes\/updates to another PostgreSQL database on another server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two lamp stacks that are remote to one another. I have to send the results of a query from stack one to a table on stack 2. I'm not sure what the best method is to use.\nI have considered setting up an API but am not sure if this is the right application for an API. I have considered, having stack one export a sql dump and the other server download then import, but this feels very insecure. Any advice would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":57764658,"Users Score":0,"Answer":"I ended up solving this problem with rsync. I have the first lamp stack dump the data to a file then open an rsync connection to the second server and have a cron running the import 2 hours after the rsync is set to connect. The cron also unlinks the file once the import is complete. 
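For the Raspberry Pi credentials answer above (recommended solution: put an API in front of the database), a sketch of what the device-side code could look like; the endpoint URL, token, and payload fields are hypothetical.

```python
import requests

API_URL = "https://collector.example.com/api/readings"  # hypothetical endpoint
DEVICE_TOKEN = "per-device-token-from-local-config"     # not a database credential


def push_reading(register, value):
    """Send one Modbus reading to the backend API instead of writing to the DB directly."""
    response = requests.post(
        API_URL,
        json={"register": register, "value": value},
        headers={"Authorization": f"Bearer {DEVICE_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()


push_reading(40001, 17.3)
```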
Not perfect but the job is complete and I feel I didn't open up any security issues.","Q_Score":1,"Tags":"php,python,sql,lamp","A_Id":58984860,"CreationDate":"2019-09-03T03:21:00.000","Title":"Transfering the results of a query from lamp stack to another","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine.\nIs it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed?\nLike in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python.\nAny help is appreciated\nInstalling oracle client, connect is possible through cx_Oracle module.\nBut in systems where the client is not installed, how can we connect to the DB.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":6490,"Q_Id":57789704,"Users Score":2,"Answer":"It is not correct that java can connect to oracle without any oracle provided software.\nIt needs a compatible version of ojdbc*.jar to connect. Similarly python's cx_oracle library needs oracle instant-client software from oracle to be installed.\nInstant client is free software and has a small footprint.","Q_Score":2,"Tags":"python,database,oracle,connect,cx-oracle","A_Id":63163648,"CreationDate":"2019-09-04T13:40:00.000","Title":"Python Oracle DB Connect without Oracle Client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine.\nIs it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed?\nLike in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python.\nAny help is appreciated\nInstalling oracle client, connect is possible through cx_Oracle module.\nBut in systems where the client is not installed, how can we connect to the DB.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6490,"Q_Id":57789704,"Users Score":0,"Answer":"Installing Oracle client is a huge pain. Could you instead create a Webservice to a system that does have OCI and then connect to it that way? This might end being a better solution rather than direct access.","Q_Score":2,"Tags":"python,database,oracle,connect,cx-oracle","A_Id":70981244,"CreationDate":"2019-09-04T13:40:00.000","Title":"Python Oracle DB Connect without Oracle Client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am encountering a task and I am not entirely sure what the best solution is.\nI currently have one data set in mongo that I use to display user data on a website, backend is in Python. 
A different team in the company recently created an API that has additional data that I would let to show along side the user data, and the data from the newly created API is paired to my user data (Shows specific data per user) that I will need to sync up.\nI had initially thought of creating a cron job that runs weekly (as the \"other\" API data does not update often) and then taking the information and putting it directly into my data after pairing it up.\nA coworker has suggested caching the \"other\" API data and then just returning the \"mixed\" data to display on the website.\nWhat is the best course of action here? Actually adding the data to our data set would allow us to have 1 source of truth and not rely on the other end point, as well as doing less work each time we need the data. Also if we end up needing that information somewhere else in the project, we already have the data in our DB and can just use it directly without needing to re-organize\/pair it. \nJust looking for general pro's and cons for each solution. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":62,"Q_Id":57854727,"Users Score":2,"Answer":"Synchronization will always cost more than federation. I would either A) embrace CORS and integrate it in the front-end, or B) create a thin proxy in your Python App.\nWhich you choose depends on how quickly this API changes, whether you can respond to those changes, and whether you need graceful degradation in case of remote API failure. If it is not mission-critical data, and the API is reliable, just integrate it in the browser. If they support things like HTTP cache-control, all the better, the user's browser will handle it. \nIf the API is not scalable\/reliable, then consider putting in a proxy server-side so that you can catch errors and provide graceful degradation.","Q_Score":0,"Tags":"python,database,architecture","A_Id":57857492,"CreationDate":"2019-09-09T13:09:00.000","Title":"What is the best way to combine two data sets that depend on each other?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to connect to and read an on-premise data source using an AWS Glue Python Shell job. I am using Pygresql (which comes bundled on Glue) and Pandas. Everything works locally.\nBut when I push this job up to Glue, the database connections all timeout. Why is this happening? Do I need to do something magic with VPCs?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":226,"Q_Id":57861188,"Users Score":1,"Answer":"I guess you need to create a Glue connection with your VPC settings and attach it to the Glue job.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,aws-glue,amazon-vpc,pygresql","A_Id":57869524,"CreationDate":"2019-09-09T21:05:00.000","Title":"AWS Glue Python Shell script timing out","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to connect to and read an on-premise data source using an AWS Glue Python Shell job. I am using Pygresql (which comes bundled on Glue) and Pandas. Everything works locally.\nBut when I push this job up to Glue, the database connections all timeout. Why is this happening? 
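A sketch of option B from the data-combining answer above: a thin proxy in the Python backend that fetches the other team's API and degrades gracefully when the remote call fails. The remote URL and route are hypothetical.

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)
REMOTE_API = "https://other-team.example.com/api/users/{user_id}/extras"  # hypothetical


@app.route("/users/<user_id>/extras")
def user_extras(user_id):
    try:
        resp = requests.get(REMOTE_API.format(user_id=user_id), timeout=3)
        resp.raise_for_status()
        extras = resp.json()
    except requests.RequestException:
        extras = None  # degrade gracefully instead of failing the whole page
    return jsonify({"user_id": user_id, "extras": extras})
```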
Do I need to do something magic with VPCs?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":226,"Q_Id":57861188,"Users Score":0,"Answer":"Note that pygresql SQL query doesn't work in python shell. Recommended is postgresql","Q_Score":0,"Tags":"python-3.x,amazon-web-services,aws-glue,amazon-vpc,pygresql","A_Id":57875570,"CreationDate":"2019-09-09T21:05:00.000","Title":"AWS Glue Python Shell script timing out","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've made a file crawler using python to read all my pictures in my computer (around 250 K files) and saving this information on a MySQL Database. I also save all the EXIF metadata for each file. \nNext, I'll like to add tags to them associating them to an unique ID but that is always the same for the same picture, just in case I run my crawler again and the file changes it's location or it's name.\nFor that purpuse I created a hash using string with relevant Exif information.\nI've taken into consideration that over the years I've taken pictures with different camaras or phones, and some Exif tags are not present in all cameras. I've also have seen that most tags don't have many different values that can make the string unique.\nIm using: \nExif_Image_Length * Exif_Image_Width (area of the picture)\n+ Image_DateTime + Image_Make + Image_Model\nand making a hash out of that string. I still get duplicates hashes instead of unique hashes.\nI'll love if someone has a better approach for what I'm trying to do.\nThanks in advance,\nPablo\nEDIT: I need to get an unique ID for images that every time I proccess that filename \/ exif metadata I get the same ID considering the filename and location might change (but EXIF data will remain intact)","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":1001,"Q_Id":57898960,"Users Score":-1,"Answer":"did you get any new ideas? In photos with exif data there are sometimes some unique ids. 
One should be always the same even if your for example convert raw => psd => jpg => psd => jpg .\nDid you really get duplicates when all the data fields you mentioned are set?\nFrom the view of an photographer:\nWidth + Height is pretty useless (always the same for on camera, except its an cropped image)\nYou could use serialnumber of the camera including lens serial\nThank's to creation time the only duplicates should exist because of an short burst of photos with an short intervall.\nPossible Errors:\nModification of date\/time (for example sommer time conversion)\nManually created files or using clipboard or some weird stuff.\nIts just really an duplicate.","Q_Score":0,"Tags":"python,hash,exif","A_Id":59585404,"CreationDate":"2019-09-12T02:27:00.000","Title":"Get unique ID for pictures using Exif metada","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem where user chooses a month and on the basis of that i have to choose starting and ending date of that month.(e.g -> If user chooses January the output should come 01\/01\/2019 and 31\/01\/2019)\nI am able to fetch the current months starting and ending date by using postgresql query.\nThis gives current months starting date - \n @api.model\n def get_start_date(self):\n self.env.cr.execute(\"\"\" select date(date_trunc('month', \n current_date));\"\"\")\n first_date = self.env.cr.dictfetchall()\n for f in first_date:\n first_new_date = f['date']\n return first_new_date\nThis gives ending date -\n@api.model\n def get_end_date(self):\n self.env.cr.execute(\"\"\" select date(date_trunc('month', \n current_date) + interval '1 month - 1 day'); \"\"\")\n end_date = self.env.cr.dictfetchall()\n for f in end_date:\n end_last_date = f['date']\n return end_last_date\nI want if user select January for selection field it should give January's starting and ending date.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":57917255,"Users Score":0,"Answer":"if you want to store that data in db use store=True: \noutput_field = fields.Char(compute='_get_data', store=True) (or fields.Data)\nand then use onchange method:\n@api.onchange('selection_field_name')\ndef _get_data(self):\n if self.selection_field_name:\n self.output_field = **do some calculations**","Q_Score":0,"Tags":"python-3.x,postgresql,odoo-11","A_Id":57925859,"CreationDate":"2019-09-13T04:22:00.000","Title":"I need to fetch months starting date and ending date on the basis of the month chosen by the user in odoo","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python script (pymongo) to export collection from MongoDB and ingesting to other database. This workflow is scheduled to run once a day using Apache Airflow. Every time script run its exports whole collection and overwrite the whole data at target but I want to fetch only the changes made to collection in subsequent execution of script, especially new documents added to collection.\nI have read other related questions but there \"change streams\" is suggested as solution but \"change stream\" is for real time. I want periodic updates for examples fetch the new documents added since the last execution of script. 
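Side note on the month start/end record above: the answer keeps the PostgreSQL date_trunc approach, but the same bounds can be computed in plain Python with the standard library, which may be simpler when the month comes from a selection field.

```python
import calendar
from datetime import date


def month_bounds(year, month):
    """Return (first_day, last_day) for the given month."""
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, 1), date(year, month, last_day)


print(month_bounds(2019, 1))  # (datetime.date(2019, 1, 1), datetime.date(2019, 1, 31))
```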
\nDo I have to download and scan the whole new updated collection and compare it with the old collection?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":212,"Q_Id":58005421,"Users Score":1,"Answer":"Create a lookup table or collection where it saves the last run time and if the documents in the collection have timestamp then save the timestamp and _id in the very same lookup table.\nIf there aren't any timestamps in the documents then you can use the _id but the object ids in increasing order here are because the spec says that \ntime|machine|pid|inc is the format for creating the ObjectId.\nThere is already a time component in the ObjectId, but that is in seconds. The Date type in Mongo is the representation of the number of milliseconds from the epoch, which will give you some more precision for figuring out the time of insertion.\nI recommend to use a counter in the form of Sequence numbers if you need absolute precision beyond milliseconds and store the last sequence and the next run get query it by greater than to only get the delta data.","Q_Score":1,"Tags":"python,mongodb,pymongo,airflow","A_Id":58005686,"CreationDate":"2019-09-19T07:00:00.000","Title":"How to check for changes in collection periodically","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm working on a python script to grab every field in an mssql database and various metadata about each, then generating a series of data dictionaries in XLSX format. \nI've almost finished, but I'm now trying to grab 10 unique values from each field as an example of the data each field contains (for dates I'm using max & min). Currently I'm using select distinct top 10 X from table; for each field, but with a largish database, this is incredibly slow going. \nIs there a quicker\/better alternative?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":58054332,"Users Score":0,"Answer":"It would seem that by select distinct top 10 * from table; and then parsing that data with Python I save an incredible amount of time. I may not end up with 10 values per field, but it's good enough!","Q_Score":0,"Tags":"python,sql","A_Id":58054428,"CreationDate":"2019-09-23T00:14:00.000","Title":"Efficiently get a list of x unique values for every field in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an existing django project and need to create an instance of it in a new environment with new database. I have the database connection configured in the settings file. The schema does not exist. If I run the manage.py migrate command, does it also create the schema? It looks like it assumes the schema already exists because I am getting an error django.db.utils.OperationalError: (1049, \"Unknown database 'my_db'\"). 
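A pymongo sketch of the lookup-collection idea from the answer above: remember the last _id processed and fetch only documents inserted after it on the next scheduled run. Database and collection names are placeholders.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["mydb"]

state = db["sync_state"].find_one({"_id": "papers_export"}) or {}
last_seen = state.get("last_object_id")

# ObjectIds embed the insertion time, so sorting/filtering on _id gives the delta.
query = {"_id": {"$gt": last_seen}} if last_seen else {}
new_docs = list(db["papers"].find(query).sort("_id", 1))

if new_docs:
    # ... ship new_docs to the target database here ...
    db["sync_state"].update_one(
        {"_id": "papers_export"},
        {"$set": {"last_object_id": new_docs[-1]["_id"]}},
        upsert=True,
    )
```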
Just wondering if I have to create the database first or if some django command is available to create it if it does not exists.\nI can create the schema manually via sql script if it's not doable via python django command.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":746,"Q_Id":58103427,"Users Score":1,"Answer":"As already pointed out in the comments to your question, the missing database is the problem, not the schema. You have to create the database first, which might involve setting the user permissions for the new database. After that, the manage.py migrate command will work just fine and create the schema for you.","Q_Score":1,"Tags":"python,mysql,django","A_Id":58107976,"CreationDate":"2019-09-25T17:07:00.000","Title":"Does django manage.py migrate command creates database\/schema if not exists?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I been learning how to use Apache-Airflow the last couple of months and wanted to see if anybody has any experience with transferring CSV files from S3 to a Mysql database in AWS(RDS). Or from my Local drive to MySQL.\nI managed to send everything to an S3 bucket to store them in the cloud using airflow.hooks.S3_hook and it works great. I used boto3 to do this.\nNow I want to push this file to a MySQL database I created in RDS, but I have no idea how to do it. Do I need to use the MySQL hook and add my credentials there and then write a python function?\nAlso, It doesn't have to be S3 to Mysql, I can also try from my local drive to Mysql if it's easier.\nAny help would be amazing!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1067,"Q_Id":58119536,"Users Score":0,"Answer":"were you able to resolve the 'MySQLdb._exceptions.OperationalError: (2068, 'LOAD DATA LOCAL INFILE file request rejected due to restrictions on access' issue","Q_Score":1,"Tags":"python,mysql,amazon-s3,airflow","A_Id":70966957,"CreationDate":"2019-09-26T14:54:00.000","Title":"S3 file to Mysql AWS via Airflow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I saw lots of information about using multiple databases with one server but I wasn't able to find contents about sharing one database with multiple servers.\nUsing Micro Service Architectures, If I define a database and models in a django server, named Account, How can I use the database and models in Account server from another server named like Post??\nWhat I'm thinking is to write same models.py in both servers and use the django commands --fake\nThen, type these commands\npython manage.py makemigrations\npython manage.py migrate\nand in another server\npython manage.py makemigrations\npython manage.py migrate --fake\nI'm not sure if this would work and I wonder whether there is any good ways.","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":2031,"Q_Id":58126278,"Users Score":4,"Answer":"I doubt this is the best approach, but if you want two separate Django projects to use the same database you could probably create the first like normal then, in the second project, copy over all of the models.py and migration files. 
Django creates a database table behind the scenes to track which migrations have been applied, so as long as the apps, models, and migration files are identical in the second app it should work without having to fake any migrations.\nThat said, this sounds like a mess to maintain going forward. I think what I would do is create a single Django project that talks to the database, then create an API in that first project that all other apps can interface with to communicate with the database. That way you avoid duplicating code or having to worry about keeping multiple projects in sync.","Q_Score":5,"Tags":"python,django","A_Id":58126737,"CreationDate":"2019-09-27T00:27:00.000","Title":"How can I use one database with multiple django servers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am not able to connect to SQL Server 2005 using pyodbc through windows authentication.\nI'm getting error 4060 Login failed","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":58145618,"Users Score":0,"Answer":"The problem resolved. I was trying to put double quotes around my password while trying to login no \"\" required","Q_Score":0,"Tags":"python,sql-server-2005,windows-authentication,pyodbc","A_Id":63840832,"CreationDate":"2019-09-28T10:56:00.000","Title":"Not able to connect to SQL Server 2005 through pyodbc using windows authentication","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python package 'jira' for establishing connection with the jira. I basically use the information from excel file and create tickets automatically in JIRA based on the excel information. Sometime there might be changes in the excel information for the same ticket in which case I need to run the code manually. So I would like to know if it is possible to do this automatically whenever there is a change in the excel file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":58187671,"Users Score":0,"Answer":"I assume you are only interested in filing new tickets. i.e. adding new rows in excel sheet.\nTwo options:\n\nRun your code in a forever loop with sleep.\nHave your code run by a cron.\n\nNow, you can maintain the hash(md5 or sha256) of your file, and write the hash in some file on host machine if you are not using a database.\nYour code has to read from this file, and calculate fresh hash of that excel file. If they are not same, means something has changed in your file.\nNow, you also need to maintain till what row you have created the jira tickets. 
You can write this information also in some file.","Q_Score":0,"Tags":"python,excel,jira,python-jira","A_Id":59089182,"CreationDate":"2019-10-01T15:04:00.000","Title":"Synchronization using python code based on an excel file to JIRA","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to insert a column in an existing excel on Sikuli python ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":58229875,"Users Score":0,"Answer":"You can use excellibrary to add column ,\nDo by using pandas in python and then import that file to robot framework","Q_Score":0,"Tags":"python,excel,sikuli","A_Id":64276347,"CreationDate":"2019-10-04T04:24:00.000","Title":"How to insert a column in an existing excel on Sikuli python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any method where I can upsert into a SQL datawarehouse table ?\nSuppose I have a Azure SQL datawarehouse table :\ncol1 col2 col3 \n2019 09 10\n2019 10 15\nI have a dataframe \ncol1 col2 col3\n2019 10 20\n2019 11 30\nThen merge into the original table of Azure data warehouse table \ncol1 col2 col3\n2019 09 10 \n2019 10 20 \n2019 11 30 \nThanks for everyone idea","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":782,"Q_Id":58255818,"Users Score":0,"Answer":"you can save the output in a file and then use the stored procedure activity from azure data factory for the upsert. Just a small procedure which will upsert the values from the file. I am assuming that you are using the Azure data factory here.","Q_Score":1,"Tags":"python,databricks","A_Id":58305335,"CreationDate":"2019-10-06T09:13:00.000","Title":"Databricks: merge dataframe into sql datawarehouse table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on moving data from Postgres DB to AWS Redshift using Python3.7. I created an SQL query that retrieves a query set when executed (I'm using where clause to query. So for every query execution, I'll be changing the ID that I'm passing).\nI'm going to run this script in Flask docker container which will be ultimately run on Kubernetes.\nI have exposed an POST method enpoint in dockerized Flask app on which I'll receiving a list of IDs that needs to be queried on DB and data to be moved to Redshift using Python.\nI want to do multithreading for executing multiple queries at once and moving data as there could be lot of IDs in the POST request that I receive. 
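Circling back to the JIRA/Excel answer above: a sketch of the hash-comparison step it describes, so the script only re-processes the workbook when its contents have actually changed; file paths are placeholders.

```python
import hashlib
from pathlib import Path

EXCEL_FILE = Path("tickets.xlsx")         # placeholder paths
HASH_FILE = Path("tickets.xlsx.sha256")


def file_sha256(path):
    """Hash the file in chunks so large workbooks don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


current = file_sha256(EXCEL_FILE)
previous = HASH_FILE.read_text().strip() if HASH_FILE.exists() else None

if current != previous:
    # ... create/update the JIRA tickets here ...
    HASH_FILE.write_text(current)
```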
\nBut, as I'm using Python3.7, I came to know that GIL is going to be a bottleneck and it doesn't matter if how many threads you are running, and there will be only one thread executing at any time.\nHow do I overcome this problem and make the parallel execution of SQL queries on DB possible and that finally works on Kubernetes.\nCan I go with multiprocessing or is there any other better way to achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":291,"Q_Id":58317133,"Users Score":0,"Answer":"If you're using Flask app, you can host your app in Gunicorn with multiple workers and threads which allows parallelism by running multiple workers.\nGunicorn launches a master process that can create multiple (configured workers) where each worker can handle an HTTP request independently. The kernel handles the load balancing of request among the workers. \nIf you add Gevent to the mix, that also provides concurrency per worker i.e. each worker would be able to handle multiple requests concurrently (not in parallel).\nBoth of these are available as pip packages, Gevent requires the installation of libev. \nThe support for flask by Gunicorn is ensured by the fact that flask is a WSGI framework and Gunicorn is a WSGI server, they're simply plugged into each other.","Q_Score":0,"Tags":"python-3.x,postgresql,docker,flask,amazon-redshift","A_Id":58317462,"CreationDate":"2019-10-10T06:57:00.000","Title":"How to do parallel PostgreSQL query exections in Python3.7 docker container?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use Snowflake Python Connector through SQLAlchemy, While I am running pip install --upgrade snowflake-sqlalchemy I am getting error failed to build pyarrow during installation. I am using python version 3.7","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2238,"Q_Id":58318943,"Users Score":3,"Answer":"I recommend reinstalling pyarrow, and then you might want to upgrade snowflake-sqlalchemy after that. Probably couldn't hurt to redo the standard connector as well, steps as follows:\n1.\npip install --upgrade pyarrow\n\npip install --upgrade snowflake-connector-python\n\n3.\npip install --upgrade snowflake-sqlalchemy","Q_Score":2,"Tags":"python,pip,snowflake-cloud-data-platform","A_Id":58325992,"CreationDate":"2019-10-10T08:50:00.000","Title":"Failed to build pyarrow during installation of Snowflake-SQLAlchemy through pip installation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a cluster setup of 3 nodes. \nI am designing my microservice and I am wondering if each node should have their own Cassandra session or if all three should share the same session created by any of the nodes.\nI have read in the Cassandra docs:\n\n\"The Session instance is a long-lived object and it should not be used\n in a request\/response short-lived fashion. 
Basically you will want to\n share the same cluster and session instances across your application.\"\n\nWhat does this mean?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":55,"Q_Id":58320879,"Users Score":1,"Answer":"You share your session on application level. Different applications should have own sessions.\nYour quote means that you dont open a session for a query but rather have a singleton Session instance in your application.","Q_Score":1,"Tags":"python,cassandra","A_Id":58321657,"CreationDate":"2019-10-10T10:32:00.000","Title":"Should every instance of application have their own cassandra session or they should share the same session?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to store images in database for various users.Its just like famous blog app but instead of blogs i want to store images in database.How can i implement my idea?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":603,"Q_Id":58324502,"Users Score":0,"Answer":"You can do one thing -> change the image into base64 string and save it to database than when you want that image convert it from base64 to normal format.\nand you can find multiple tutorial on internet how to change image to base64 in python.\nwhy i am suggesting this because i used it in an android app.","Q_Score":0,"Tags":"python,django,image,django-models,storage","A_Id":58324611,"CreationDate":"2019-10-10T13:48:00.000","Title":"Django:How to store images in database instead of media file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Amazon suggests to not include big libraries\/dependencies in lambda functions.\nAs far as I know, SQLAlchemy is quite a big python library. Do you think it is a good idea to use it in lambda functions? An option would be to include it as a Lambda Layer and use it across all related Lambda functions.\nAnyways, what is the best practise?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7485,"Q_Id":58341292,"Users Score":0,"Answer":"From what I read, SQLAlchemy performs in memory caching of data it has read and uses that for future calls. Based on what you are doing, it would be good to check out the SQLAlchemy caching strategy so another Lambda does not change the data from under the first lambda with SQLAlchemy.","Q_Score":12,"Tags":"python,sqlalchemy,aws-lambda,serverless-framework,aws-lambda-layers","A_Id":64030658,"CreationDate":"2019-10-11T12:27:00.000","Title":"Is it a good idea to use Python SQLAlchemy in AWS Lambda?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Amazon suggests to not include big libraries\/dependencies in lambda functions.\nAs far as I know, SQLAlchemy is quite a big python library. Do you think it is a good idea to use it in lambda functions? 
An option would be to include it as a Lambda Layer and use it across all related Lambda functions.\nAnyways, what is the best practise?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":7485,"Q_Id":58341292,"Users Score":1,"Answer":"Serverless functions are meant to be small self-contained functions. SQLAlchemy is an ORM, which allows you to manipulate database objects like objects in python. If you're just writing a few serverless functions that do you're average CRUD operations on a database you're better off writing the SQL by composing the strings and directly executing that through your database driver (which you'll have to install anyways, even if you're using sqlalchemy). If you're building your own framework on top of AWS Lambda then perhaps consider sqlalchemy.","Q_Score":12,"Tags":"python,sqlalchemy,aws-lambda,serverless-framework,aws-lambda-layers","A_Id":62886393,"CreationDate":"2019-10-11T12:27:00.000","Title":"Is it a good idea to use Python SQLAlchemy in AWS Lambda?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a database in pythonanywhere, after doing my migrations from my django project etc.. i wanted to create a trigger in my db, but the following message appears:\n\nERROR 1419 (HY000): You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)\n\nMy user does not have the permissions","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":107,"Q_Id":58345033,"Users Score":1,"Answer":"You cannot create triggers in MySQL on PythonAnywhere","Q_Score":1,"Tags":"mysql,mysql-python,pythonanywhere","A_Id":58355715,"CreationDate":"2019-10-11T16:12:00.000","Title":"ERROR 1044 creating a trigger in mysql on pythonanywhere","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"_mysql_exceptions.OperationalError: (2026, 'SSL connection error: SSL_CTX_set_tmp_dh failed')\nis thrown at me when I try to run my script which connects to my SQL server. \nI installed MySQLdb via conda. I've read that this may be an openssl issue, but I'm having trouble downgrading that as well.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":58416787,"Users Score":0,"Answer":"I was able to fix this my using mysql.connector instead of importing MySQLdb in my python scripts","Q_Score":1,"Tags":"python,openssl,conda,mysql-python","A_Id":58613928,"CreationDate":"2019-10-16T15:24:00.000","Title":"Python MySQLdb cannot connect to server, SSL Issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently overhauling a project here at work and need some advice. We currently have a morning checklist that runs daily and executes roughly 30 SQL files with 1 select statement each. This is being done in an excel macro which is very unreliable. 
These statements will be executed against an oracle database.\nBasically, if you were re-implementing this project, how would you do it? I have been researching concurrency in python, but have not had any luck. We will need to capture the results and display them, so please keep that in mind.If more information is needed, please feel free to ask.\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":45,"Q_Id":58437542,"Users Score":2,"Answer":"There are lots of ways depending on how long the queries run, how much data is output, are there input parameters and what is done to the data output.\nConsider:\n1. Don't worry about concurrency up front\n2. Write a small python app to read in every *.sql file in a directory and execute each one.\n3. Modify the python app to summarize the data output in the format that it is needed\n4. Modify the python app to save the summary back into the database into a daily check table with the date \/ time the SQL queries were run. Delete all rows from the daily check table before inserting new rows\n5. Have the Excel spreadsheet load it's data from that daily check table including the date \/ time the data was put in the table \n6. If run time is slows, optimize the PL\/SQL for the longer running queries\n7. If it's still slow, split the SQL files into 2 directories and run 2 copies of the python app, one against each directory.\n8. Schedule the python app to run at 6 AM in the Windows task manager.","Q_Score":0,"Tags":"python,sql,multithreading","A_Id":58438628,"CreationDate":"2019-10-17T17:03:00.000","Title":"Most efficient way to execute 20+ SQL Files?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are trying to insert a batch of records(100k) in green plum. In case a particular record has some issue, is there a way to trace back to the specific record which is causing the issue? \nCurrently, it's failing the whole batch and we are trying to filter the error records.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":58447773,"Users Score":0,"Answer":"If you are using the COPY command to load or gpfdist and external tables to do the insert, you can set a segment reject limit and an error log that will allow the command to insert all good rows with any rejected rows placed in the error log. The default is to roll back everything on one error. You can check the Greenplum documentation or, in psql, use \\h COPY or \\h CREATE EXTERNAL TABLE. A simple example with COPY is:\nCOPY your_table from '\/your_path\/your_file' with delimiter as '|' \nLOG ERRORS\nSEGMENT REJECT LIMIT 1000 rows;\nThat means it will log up to 1000 bad rows before rolling everything back. Set as needed for your data. 
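If you are driving this COPY from Python with psycopg2 (as the question tags suggest), a minimal sketch might look like the following; the connection settings, file path and table name are placeholders, and the file path must be readable on the Greenplum master.

import psycopg2

# Placeholder connection settings for the Greenplum master.
conn = psycopg2.connect(host="gp-master", dbname="your_db", user="gpadmin", password="secret")
cur = conn.cursor()
cur.execute("""
    COPY your_table FROM '/your_path/your_file'
    WITH DELIMITER AS '|'
    LOG ERRORS
    SEGMENT REJECT LIMIT 1000 ROWS;
""")
conn.commit()  # good rows are kept; up to 1000 rejected rows go to the error log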
You can see what bad rows are in the log with:\nselect gp_read_error_log('your_table');\nOne of the columns in the log shows the bad row with exactly where and what the error is.\nJim McCann\nPivotal","Q_Score":1,"Tags":"python,postgresql,psycopg2,greenplum","A_Id":58478186,"CreationDate":"2019-10-18T09:20:00.000","Title":"Track error records when doing a multi row update in Greenplum(Postgres 8.4)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While trying to install psycopg2-binary with pip3, I get the following error message:\n\nSetup script exited with error: command 'C:\\Program Files (x86)\\Microsoft Visual\n Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.23.28105\\bin\\HostX86\\x86\\link.exe' failed with exit status 1120\n\nI have 83 error LNK2001.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":652,"Q_Id":58464206,"Users Score":1,"Answer":"On windows, I could not install psycopg2. When browsing around, people were saying to isntall directly the binary which I was trying to do. However for some reasons, the 64 bits binary was not working but the 32 bits version has worked fine and solved my issues.","Q_Score":0,"Tags":"python","A_Id":58971163,"CreationDate":"2019-10-19T13:31:00.000","Title":"How to install psycopg2 on windows 10?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a typical Django project that uses PostgreSQL as it's database backend. I need to set up a specific endpoint (\/status\/) that works even when the connection to the database is lost. The actual code is very simple (just returns the response directly without touching the DB) but when the DB is down I still get OperationalError when calling this endpoint. This is because I use some pieces of middleware that attempt to contact the database, e.g. session middleware and auth middleware. Is there any way to implement such \/status\/ endpoint? 
I could theoretically implement this as a piece of middleware and put it before any other middleware but that seems as kind of hack.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":58541016,"Users Score":0,"Answer":"Up to my knowledge no.\nIf some Middlewares requires a database and you enabled them then they will be used for each request.\nI did not read about a way of conditionally executing middlewares depending on the request's url.\nSo up to my knowledge a normal djangoview will not be able to handle such a status page.\nThe only solution, that I can imagine is your suggestion to implement a middleware, that is handled first, shortcuts all the other middlewares and returns the result.","Q_Score":3,"Tags":"python,django","A_Id":58541424,"CreationDate":"2019-10-24T12:03:00.000","Title":"Add Django endpoint that doesn't require database connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible?\nI know ws.set_row('B:B', options={'hidden': True})\nBut, is there something like ws.set_row('B:B', options={'delete_row': True})?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6148,"Q_Id":58541514,"Users Score":0,"Answer":"You cannot delete a column with XlsxWriter. The best option is to structure your application so it doesn't write data to the column in the first place.","Q_Score":7,"Tags":"python,xlsxwriter","A_Id":58544117,"CreationDate":"2019-10-24T12:34:00.000","Title":"Delete row\/column from Excel with xlsxwriter","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Someone worked before with streaming data into (google) BigQuery using Google Cloud Functions (insert_rows_from_dataframe())?\nMy problem is it seems like sometimes the table schema is not updated immediately and when you try to load some data into table immediately after creation of a new field in the schema it returns an error:\n\nBigQueryError: [{\"reason\": \"invalid\", \"location\": \"test\", \"debugInfo\": \"\", \"message\": \"no such field.\"}]\"\n\nHowever, if I try to load again after few seconds it all works fine, so my question if someone knows the maximum period of time in seconds for this updating (from BigQuery side) and if is possible somehow to avoid this situation?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1468,"Q_Id":58546090,"Users Score":1,"Answer":"Because the API operation on BigQuery side is not atomic, you can't avoid this case. 
\nYou can only mitigate the impact of this behavior and perform a sleep, a retries, or set a Try-catch to replay the insert_rows_from_dataframe() several times (not infinite, in case of real problem, but 5 times for example) until it pass.\nNothing is magic, if the consistency is not managed on a side, the other side has to handle it!","Q_Score":1,"Tags":"python,google-cloud-platform,google-bigquery","A_Id":58570577,"CreationDate":"2019-10-24T17:06:00.000","Title":"How to solve problem related to BigQueryError \"reason\": \"invalid\", \"location\": \"test\", \"debugInfo\": \"\", \"message\": \"no such field.\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on Chapter 12 of Automate the Boring Stuff with Python and it is about working with spreadsheets using openpyxl. I have an object called 'c' that is 'B1' of the spreadsheet. Whenever I use 'c.column' it returns the number '2' instead of the letter 'B'. The example on the page returns the letter so I'm wondering why mine is different.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1579,"Q_Id":58572156,"Users Score":0,"Answer":"I found a solution. Openpyxl has utilities to solve this problem, so here is the solution.\nfrom openpyxl.utils import get_column_letter\nthen c.column can be passed in the function get_column_letter(c.column) which will return the letter instead of the number.","Q_Score":0,"Tags":"python,openpyxl","A_Id":58572578,"CreationDate":"2019-10-26T15:29:00.000","Title":"How to return column letters instead of column numbers with openpyxl?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a support telegram bot using pytelegrambotapi. While I was making it, I met a problem. There a lot of global variables appeared in my code and I want to systematize it. I searched about this in internet and found few solutions: to make a json serializable class and save all data in json file or to make a database of all users using MySql. But I don't know what is better. The amount of users, who will use a bot will be about 100-150, so what is the better solution for my telegram bot? thnx in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":153,"Q_Id":58583279,"Users Score":0,"Answer":"Both solutions are ok, but if you want to increase the number of users you have to use database for integrity and consistency","Q_Score":0,"Tags":"mysql,python-3.x,telegram-bot","A_Id":58675828,"CreationDate":"2019-10-27T20:53:00.000","Title":"Use MySql vs json file for user database for python telegram bot","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am designing a web application that has users becoming friends with other users. I am storing the users info in a database using sqlite3. 
\nI am brainstorming on how I can keep track on who is friends with whom.\nWhat I am thinking so far is; to make a column in my database called Friendships where I store the various user_ids( integers) from the user's friends.\nI would have to store multiple integers in one column...how would I do that?\nIs it possible to store a python list in a column?\nI am also open to other ideas on how to store the friendship network information in my database....\nThe application runs through FLASK","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":182,"Q_Id":58595931,"Users Score":1,"Answer":"What you are trying to do here is called a \"many-to-many\" relationship. Rather than making a \"Friendships\" column, you can make a \"Friendship\" table with two columns: user1 and user2. Entries in this table indicate that user1 has friended user2.","Q_Score":1,"Tags":"python,sqlite","A_Id":58595975,"CreationDate":"2019-10-28T17:46:00.000","Title":"Storing multiple values in one column","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am designing a web application that has users becoming friends with other users. I am storing the users info in a database using sqlite3. \nI am brainstorming on how I can keep track on who is friends with whom.\nWhat I am thinking so far is; to make a column in my database called Friendships where I store the various user_ids( integers) from the user's friends.\nI would have to store multiple integers in one column...how would I do that?\nIs it possible to store a python list in a column?\nI am also open to other ideas on how to store the friendship network information in my database....\nThe application runs through FLASK","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":182,"Q_Id":58595931,"Users Score":1,"Answer":"It is possible to store a list as a string into an sql column. 
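As a rough illustration of the two-column Friendship table suggested in the first answer above (the table name, column names and database file are only examples):

import sqlite3

conn = sqlite3.connect("app.db")  # example database file
conn.execute("""
    CREATE TABLE IF NOT EXISTS friendship (
        user1 INTEGER NOT NULL,
        user2 INTEGER NOT NULL,
        PRIMARY KEY (user1, user2)
    )
""")
# record that user 1 has friended user 2
conn.execute("INSERT OR IGNORE INTO friendship (user1, user2) VALUES (?, ?)", (1, 2))
conn.commit()
# ids of everyone user 1 has friended
friend_ids = [row[0] for row in conn.execute("SELECT user2 FROM friendship WHERE user1 = ?", (1,))]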
\nHowever, you should instead be looking at creating a Friendships table with primary keys being the user and the friend.\nSo that you can call the friendships table to pull up the list of friends.\nOtherwise, I would suggest looking into a Graph Database, which handles this kind of things well too.","Q_Score":1,"Tags":"python,sqlite","A_Id":58595986,"CreationDate":"2019-10-28T17:46:00.000","Title":"Storing multiple values in one column","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When i call the below function through API;\nIn both try and except conditions I have to keep log in separate table named api.log.\nWhile the function enters in except condition, error occurs on creating record on api.log table","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":58602087,"Users Score":0,"Answer":"It is solved by using commit function.","Q_Score":0,"Tags":"python-2.7,odoo-10","A_Id":58852385,"CreationDate":"2019-10-29T06:00:00.000","Title":"Unable to create a record in except loop of try-except odoo10","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to use Pandas read_sql to validate some fields in my app.\nWhen i read my db using SQL Developer, i get these values:\n\n603.29\n1512.00\n488.61\n488.61\n\nBut reading the same sql query using Pandas, the decimal places are ignored and added to the whole-number part. So i end up getting these values:\n\n60329.0\n1512.0\n48861.0\n48861.0\n\nHow can i fix it?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":309,"Q_Id":58627984,"Users Score":3,"Answer":"I've found a workaround for now.\nConvert the column you want to string, then after you use Pandas you can convert the string to whatever type you want.\nEven though this works, it doesn't feel right to do so.","Q_Score":2,"Tags":"python,sql,pandas","A_Id":58631271,"CreationDate":"2019-10-30T14:47:00.000","Title":"Python - Pandas read sql modifies float values columns","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pyodbc is correctly connecting to the same db. When I run \nSELECT name FROM sys.databases;\nSELECT name FROM master.dbo.sysdatabases;\nI get the list of all the DBs I can see in MSSQLSMS. \nWhen I look at my Event Profiler in SSMS, I can see that Pyodbc is executing code actions on the same database in the same server as I look at with SSMS. I see my create table statements, select statements, that I'm running in Python with Pyodbc, executing on my SQL server. \nSo why can I not see the tables I've created in SSMS? Why, when I run the same queries in SSMS, do I not see the table I've created using Pyodbc? \nI am extremely confused. Pyodbc appears to be connecting to my local SQL server correctly, and executing SQL code on it, but I'm not able to view the results using SSMS. I can find the table with Pyodbc, and Pyodbc and SSMS are both telling me they're looking at the same places, but SSMS can't see anything Pyodbc has done. 
\nEDIT : Solved\nconn.autocommit=True is required for Pyodbc to make permanent changes.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":520,"Q_Id":58633363,"Users Score":1,"Answer":"SQL Server allows some DDL statements (e.g., CREATE TABLE) to be executed inside a transaction. Therefore we also have to remember to commit() those changes if we haven't specified autocommit=True on the Connection.","Q_Score":0,"Tags":"python,sql-server,ssms,pyodbc","A_Id":58634438,"CreationDate":"2019-10-30T20:43:00.000","Title":"Pyodbc can create\/alter tables, but I can't see them in SSMS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to copy files from 2 different bucket which are in two different aws account using same access key.\nSo it provide an error saying 403 forbidden.So what i want to do is, I want to check whether the access key that i am using to copy the file has permission to those bucket before i copy the file using boto3. Is there are anyway to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":579,"Q_Id":58638363,"Users Score":1,"Answer":"If you are copying objects between Amazon S3 buckets that belong to different AWS accounts, then you will need to use a single set of credentials that have:\n\nGetObject permission on the source bucket\nPutObject permission on the destination bucket\n\nAlso, the CopyObject command should be sent to the destination bucket to avoid problems with object ownership.\nTherefore, I would recommend:\n\nUse credentials from the destination account (dest-IAM-user)\nAdd a bucket policy to the source bucket that permits GetObject access by dest-IAM-user","Q_Score":3,"Tags":"python,amazon-s3,boto3","A_Id":58639883,"CreationDate":"2019-10-31T07:09:00.000","Title":"How to check whether s3 access key has access to a specific bucket or not in aws using boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a huge master data dump excel file. I have to append data to it on a regular basis. The data to be appended is stored as a pandas dataframe. Is there a way to append this data to the master dump file without having to read its contents. \nThe dump file is huge and takes a considerable amount of time for the program to load the file (using pandas).\nI have already tried openpyxl and XlsxWriter but it didn't work.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":152,"Q_Id":58669599,"Users Score":2,"Answer":"It isn't possible to just append to an xlsx file like a text file. 
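Tying back to the pyodbc answer just above, a minimal sketch of both options (the connection string and table are placeholders):

import pyodbc

# Option 1: open the connection with autocommit so DDL is visible immediately.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=test;Trusted_Connection=yes;",
    autocommit=True,
)
conn.execute("CREATE TABLE demo_table (id INT PRIMARY KEY, name VARCHAR(50))")
# Option 2: leave autocommit off (the default) and call conn.commit() after the DDL.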
An xlsx file is a collection of XML files in a Zip container so to append data you would need to unzip the file, read the XML data, add the new data, rewrite the XML file(s) and then rezip them.\nThis is effectively what OpenPyXL does.","Q_Score":2,"Tags":"python,excel,pandas","A_Id":58670829,"CreationDate":"2019-11-02T08:54:00.000","Title":"Is there a way to append data to an excel file without reading its contents, in python?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to do that the best way?\nHow to autostart and run the script every 5 seconds? (i read something from a rs232 device)\nI want to write some values every 5 seconds to a postgresql database and for this is it ok to open the database connection every 5 seconds and close it or can it be stay opend?\nthanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":58684337,"Users Score":0,"Answer":"I think the best way is to have a constantly running script that reads the value, sends to db, and sleeps for the remainer of the interval and keep the connection open. This way you can monitor and react if a read, write or both take too long for example. And then to have a separate script just to check if the main one is alive and notify you or restart the main one. I had some success with this model when reading from a bitcoin exchange api and inserting into mariadb every 6 seconds","Q_Score":0,"Tags":"python,python-3.x,postgresql,raspberry-pi,raspbian","A_Id":58684649,"CreationDate":"2019-11-03T20:30:00.000","Title":"Best and most efficient way to execute a Python script at Raspberry Pi (Raspbian Buster) every 5 seconds and store in PostgreSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've an excel file and I have to go through each row, always get columns say 2,3 and then in another owl file find the corresponding entity(which I get from column 2 of each row of the excel file) and populate it, and repeat the whole task for all rows of the excel file. Since both files are big doing the trivial way: go through each row of the excel file then go through each entity of the owl file, find the correct entity and then populate will take a lot time. \nIs there a different method I should try, which has lower complexity?\nAny help is highly appreciated.\nPS: I'm a CS student and done DSA in my previous sem. I now realise the practical importance of algos runtime now. 
\nFor working with excel sheet I'm using openpyxl, though unnecessary info.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":58710010,"Users Score":0,"Answer":"Depending on 'big' you may get away with reducing the big overhead of file-IO by storing one of the files (only the parts you need) in RAM and than iterate the other file only once.\nKomplexity keeps to be O(n*m)\nYou could reduce the complexity (if still needed) by storing the data of the file you keep in RAM in a HashMap which has access complexity of O(1) (in most cases)\nKomplexity is O(m) where m is the size of the file not kept in the HashMap (in RAM).\nIf 'big' means that even the data from the smaller file do not fit in RAM, You can use the same approach just do it in chunks of a size that fit in your RAM.","Q_Score":0,"Tags":"python,algorithm,owl,ontology","A_Id":58711528,"CreationDate":"2019-11-05T11:04:00.000","Title":"Faster way to populate a file after reading each line of another file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table named as TRENDS, containing around 20k records. I need to manipulate each row of TRENDS table based on the each column value and final output of the row is a string, named insight which is nothing but that manipulated row. And then i need to store that insight into a INSIGHTS table. Along with a insight i am generating 3 more queries which are in three seprate functions. Result of each query is get stored into another table called FACTS along with a insight_id to indicates that these 3 facts belongs to the same insight.\n\nSince the data is in mysql database I used mysql-connector library of python to run on my scripts for retrieval and insertion operations.\n With each insight and 3 facts i am performing execute() and commit() which is taking 3 sec for one set of record to insert and these is 20k recods in TRENDS table which is taking lot of time to complete.\n\nWhat is the fastest way to solve this problem?\nPlease suggest a better algo if possible.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":58756975,"Users Score":0,"Answer":"We'll be able to provide much more help if you can provide a sample of the data in each table and your desired output. Here are some very general pointers based on what you've said:\n\nThere are a lot of overheads to excuting a query, if you read and write one line at a time you will waste a huge amount of execution time waiting for queries to execute\nBulk operations are much faster, why not read all 20k rows at once then write 20k back, or if that's too demanding on your local system why not do 1000 at a time?\n...or see if you can write a query which completes the entire operation in SQL","Q_Score":0,"Tags":"mysql,sql,python-3.x,cx-oracle,mysql-connector-python","A_Id":58757322,"CreationDate":"2019-11-07T21:13:00.000","Title":"What is the fastest way to implementing algorithm to manipulate and insert sql records?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm assuming that the more commits I make to my database, the more put requests I make. 
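To make the batching advice in the previous answer concrete, a rough mysql-connector sketch (the table and column names are assumptions):

import mysql.connector

conn = mysql.connector.connect(host="localhost", database="your_db", user="user", password="secret")
cur = conn.cursor()
# Build e.g. 1000 insights in memory, then send them in one round trip.
rows = [("insight text 1",), ("insight text 2",)]
cur.executemany("INSERT INTO INSIGHTS (insight_text) VALUES (%s)", rows)
conn.commit()  # one commit per batch instead of one per record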
Would it be less expensive to commit less frequently (but commit larger queries at a time)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":58757794,"Users Score":0,"Answer":"I am assuming you're either using RDS for MySQL or MySQL-Compatible Aurora; in either case, you're charged based on the number of running hours, storage and I\/O rate, and data transferred OUT of the service (Aurora Serverless pricing is a different story).\nIn RDS, you're not charged by PUT requests, and there is not such a concept with pymysql.\nThe frequency of commits should be primarily driven by your application functional requirements, not cost. Let's break it down to give you a better idea of how each cost variable would relate to each approach (commit big batches less frequently vs. commit small batches more frequently).\n\nRunning hours: Irrelevant, same for both approaches.\nStorage: Irrelevant, you'll probably consume the same amount of storage. The amount of data is constant.\nI\/O rate: There are many factors involved in how the DB engine consumes\/optimizes I\/O. I wouldn't get to this level of granularity.\nData transferred IN: Irrelevant, free for both cases.","Q_Score":0,"Tags":"python,python-3.x,amazon-web-services,amazon-rds,pymysql","A_Id":58760421,"CreationDate":"2019-11-07T22:30:00.000","Title":"Is using connection.commit() from pymysql more frequently to AWS RDS more expensive?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table with 4 columns filled with integer. Some of the rows have a value \"null\" as its more than 1000 records with this \"null\" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?\nThanks","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1193,"Q_Id":58789936,"Users Score":0,"Answer":"use the 'drop.isnull()' function.","Q_Score":1,"Tags":"python-3.x,jupyter-notebook","A_Id":58818221,"CreationDate":"2019-11-10T15:08:00.000","Title":"How to remove rows from a datascience table in python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table with 4 columns filled with integer. Some of the rows have a value \"null\" as its more than 1000 records with this \"null\" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. 
Is there as faster way to do it?\nThanks","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1193,"Q_Id":58789936,"Users Score":0,"Answer":"To remove a row in a datascience package:\nname_of_your_table.remove() # number of the row in the bracket","Q_Score":1,"Tags":"python-3.x,jupyter-notebook","A_Id":63076734,"CreationDate":"2019-11-10T15:08:00.000","Title":"How to remove rows from a datascience table in python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table with 4 columns filled with integer. Some of the rows have a value \"null\" as its more than 1000 records with this \"null\" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?\nThanks","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1193,"Q_Id":58789936,"Users Score":0,"Answer":"#df is the original dataframe#\n#The '-' operator removes the null values and re-assigns the remaining ones to df#\ndf=idf[-(df['Column'].isnull())]","Q_Score":1,"Tags":"python-3.x,jupyter-notebook","A_Id":66193032,"CreationDate":"2019-11-10T15:08:00.000","Title":"How to remove rows from a datascience table in python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table with 4 columns filled with integer. Some of the rows have a value \"null\" as its more than 1000 records with this \"null\" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?\nThanks","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1193,"Q_Id":58789936,"Users Score":0,"Answer":"use dataframe_name.isnull() #To check the is there any missing values in your table.\nuse dataframe_name.isnull.sum() #To get the total number of missing values.\nuse dataframe_name.dropna() # To drop or delete the missing values.","Q_Score":1,"Tags":"python-3.x,jupyter-notebook","A_Id":68397031,"CreationDate":"2019-11-10T15:08:00.000","Title":"How to remove rows from a datascience table in python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Do I need SQLAlchemy if I want to use PostgreSQL with Python Pyramid, but I do not want to use the ORM? Or can I just use the psycopg2 directly? And how to do that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":58792135,"Users Score":0,"Answer":"Even if you do not want to use ORM, you can still use SQLAlchemy's query\nlanguage.\nIf you do not want to use SQLAlchemy, you can certainly use psycopg2 directly. 
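A bare-bones sketch of what using psycopg2 directly can look like inside a Pyramid view (connection settings, route name and query are placeholders; a real app would reuse connections or use a pool):

import psycopg2
from pyramid.view import view_config

@view_config(route_name="users", renderer="json")
def list_users(request):
    conn = psycopg2.connect(host="localhost", dbname="your_db", user="app", password="secret")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM users")
            return [{"id": r[0], "name": r[1]} for r in cur.fetchall()]
    finally:
        conn.close()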
Look into Pyramid cookbook - MongoDB and Pyramid or CouchDB and Pyramid for inspiration.","Q_Score":0,"Tags":"python,postgresql,pyramid","A_Id":58813231,"CreationDate":"2019-11-10T19:22:00.000","Title":"How to use PostgreSQL in Python Pyramid without ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am finished with my project and now I want to put it on my website where people could download it and use it. My project is connected to my MySQL and it works on my machine. On my machine, I can read, and modify my database with python. It obviously, will not work if a person from another country tries to access it. How can I make it so a person from another town, city, or country could access my database and be able to read it?\nI tried using SSH but I feel like it only works on a local network.\nI have not written a single line of code on this matter because I have no clue how to get started. \nI probably know how to read my database on a local network but I have no clue how to access it from anywhere else.\nAny help, tips, or solutions would be great and appreciated. \nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":558,"Q_Id":58794269,"Users Score":0,"Answer":"If I'm understanding correctly, you want to run a MySQL server from your home PC and allow others to connect and access data? Well, you would need to make sure the correct port is forwarded in your router and firewall, default is TCP 3306. Then simply provide the user with your current IP address (could change).\n\nDetermine the correct MySQL Server port being listened on.\nAllow port forwarding on the TCP protocol and the port you determined, default is 3306.\nAllow incoming connections on this port from software firewall if any.\nProvide the user with your current IP Address, Port, and Database name.\nIf you set login credentials, make sure the user has this as well.\nThat's it. The user should be able to connect with the IP Address, Port, Database Name, Username, and Password.","Q_Score":0,"Tags":"mysql,python-3.x,mysql-python,remote-server","A_Id":58794382,"CreationDate":"2019-11-11T00:39:00.000","Title":"How to access MySQL database that is on another machine, located in a different locations (NOT LOCAL) with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using pg_cron to schedule a task which should be repeated every 1 hour.\nI have installed and using this inside a docker environment inside the postgres container.\nAnd I am calling the query to create this job using python from a different container.\nI can see that job is created successfully but is not being executed due to lack of permission since the pg_hba.conf is not set to trust or due to no .pgpass file.\nBut if I enable any of those both, anyone can enter into database by using docker exec and do psql in the container.\nIs there anyway to avoid this security issue??? 
Since in production environment it should not be allowed for anyone to enter into the database without a password.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":58796008,"Users Score":0,"Answer":"Either keep people from running docker exec on the container or use something else than pg_cron.\nI would feel nervous if random people were allowed to run docker exec on the container with my database or my job scheduler in it.","Q_Score":1,"Tags":"python,postgresql,docker,dockerfile","A_Id":58796832,"CreationDate":"2019-11-11T05:34:00.000","Title":"How do i solve the security problems caused by pg_cron?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"After inserting a couple of thousand rows in my mysqldb (xampp) via the python interface (spyder python 3.7), the database is losing the port.\nSpyder error-message:\n\nInterfaceError: Can't connect to MySQL server on 'localhost:3306' (10048\n\nxampp error-message:\n\nNetStatTable] NetStat TCP service stopped. Please restart the control panel. Returned 122\n\nDoes anybody have any idea?\nthx in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":58822343,"Users Score":0,"Answer":"this are the mysql logfiles for the last two crashes: (sorry, now Idea how to format properly)\nInnoDB: using atomic writes.\n2019-11-12 16:40:02 0 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions\n2019-11-12 16:40:02 0 [Note] InnoDB: Uses event mutexes\n2019-11-12 16:40:02 0 [Note] InnoDB: Compressed tables use zlib 1.2.11\n2019-11-12 16:40:02 0 [Note] InnoDB: Number of pools: 1\n2019-11-12 16:40:02 0 [Note] InnoDB: Using SSE2 crc32 instructions\n2019-11-12 16:40:02 0 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M\n2019-11-12 16:40:02 0 [Note] InnoDB: Completed initialization of buffer pool\n2019-11-12 16:40:02 0 [Note] InnoDB: Starting crash recovery from checkpoint \nLSN=19733761912\n2019-11-12 16:40:02 0 [Note] InnoDB: 128 out of 128 rollback segments are active.\n2019-11-12 16:40:02 0 [Note] InnoDB: Removed temporary tablespace data file: \"ibtmp1\"\n2019-11-12 16:40:02 0 [Note] InnoDB: Creating shared tablespace for temporary tables\n2019-11-12 16:40:02 0 [Note] InnoDB: Setting file 'C:\\xampp\\mysql\\data\\ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...\n2019-11-12 16:40:02 0 [Note] InnoDB: File 'C:\\xampp\\mysql\\data\\ibtmp1' size is now 12 MB.\n2019-11-12 16:40:02 0 [Note] InnoDB: Waiting for purge to start\n2019-11-12 16:40:02 0 [Note] InnoDB: 10.4.8 started; log sequence number 19733761921; \ntransaction id 9577888\n2019-11-12 16:40:02 0 [Note] InnoDB: Loading buffer pool(s) from C:\\xampp\\mysql\\data\\ib_buffer_pool\n2019-11-12 16:40:02 0 [Note] Plugin 'FEEDBACK' is disabled.\n2019-11-12 16:40:02 0 [Note] Server socket created on IP: '::'.","Q_Score":0,"Tags":"python,mysql,xampp","A_Id":58822609,"CreationDate":"2019-11-12T15:59:00.000","Title":"SQL insert via python looses port 3306","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table in sql I'm looking to read into a pandas dataframe. 
I can read the table in but all column dtypes are being read in as objects. When I write the table to a csv then re-read it back in using read_csv, the correct data types are assumed. Obviously this intermediate step is inefficient and I just want to be able to read the data directly from sql with the correct data types assumed.\nI have 650 columns in the df so obviously manually specifying the data types is not possible.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":651,"Q_Id":58855925,"Users Score":0,"Answer":"So it turns out all the data types in the database are defined as varchar.\nIt seems read_sql reads the schema and assumes data types based off this. What's strange is then I couldn't convert those data types using infer_objects().\nThe only way to do it was to write to a csv then read than csv using pd.read_csv().","Q_Score":0,"Tags":"python,sql,pandas","A_Id":58861981,"CreationDate":"2019-11-14T11:38:00.000","Title":"Pandas not assuming dtypes when using read_sql?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a procedural function (written in pl\/python) which queries table A, does some calculations and then returns a set. I use this function as query for my materialized view B.\nEverything works perfectly except that when I want to restore my dump, I get the following error:\n\nDETAIL: spiexceptions.UndefinedTable: relation \"A\" does not exist.\n\nThe line which raises this error is the last line of my sql dump:\nREFRESH MATERIALIZED VIEW B;\nI know that I can ignore this error and refresh my materialized view after restoration process, but I want to know why this happens? Is it because this function runs in another transaction which doesn't know anything about current restoration process? And what can I do to prevent this error?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":58859643,"Users Score":1,"Answer":"For security reasons, pg_dump (or pg_restore) emits a command which empties the search_path, so when you restore the process gets run with an empty search path. But it does not edit the text body of your function at all but emits it as-is, so it can't alter it to specify the fully qualified name of the table. So the function can't find the table when run inside the process doing the restore.\nYou can fully qualify the table name in the function, or you can define the function with SET search_path = public. Or you can edit the dump file to remove the part that clears the search_path, if you are not concerned about the security implications.","Q_Score":0,"Tags":"postgresql,materialized-views,plpython","A_Id":58866874,"CreationDate":"2019-11-14T14:53:00.000","Title":"Postgresql functions execution process","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm connecting to an oracle database and trying to bring across a table with roughly 77 million rows. At first I tried using chunksize in pandas but I always got a memory error no matter what chunksize I set. I then tried using Dask since I know its better for large amounts of data. 
However, there're some columns that need to be made NULL, is there away to do this within read_sql_table query like there is in pandas when you can write out your sql query?\nCheers","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":83,"Q_Id":58868931,"Users Score":0,"Answer":"If possible, I recommend setting this up on the oracle side, making a view with the correct data types, and using read_sql_table with that.\nYou might be able to do it directly, since read_sql_table accepts sqlalchemy expressions. If you can phrase it as such, it ought to work.","Q_Score":0,"Tags":"python,pandas,dask","A_Id":58879617,"CreationDate":"2019-11-15T01:16:00.000","Title":"Is there a way to set columns to null within dask read_sql_table?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a python code which writes a hyperlink into a excel file.This hyperlink should open in a specific page in a pdf document.\nI am trying something like\nWorksheet.write_url('A1',\"C:\/Users\/.....\/mypdf#page=3\") but this doesn't work.Please let me know how this can be done.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":404,"Q_Id":58871922,"Users Score":1,"Answer":"Are you able to open the pdf file directly to a specific page even without xlsxwriter? I can not.\nFrom Adobe's official site:\n\nTo target an HTML link to a specific page in a PDF file, add\n #page=[page number] to the end of the link's URL.\nFor example, this HTML tag opens page 4 of a PDF file named\n myfile.pdf:\n\nNote: If you use UNC server locations (\\servername\\folder) in a link,\n set the link to open to a set destination using the procedure in the\n following section. \nIf you use URLs containing local hard drive addresses (c:\\folder), you cannot link to page numbers or set destinations.","Q_Score":0,"Tags":"python,excel,pdf,xlsxwriter","A_Id":59052912,"CreationDate":"2019-11-15T07:07:00.000","Title":"How do I link to a specific page of a PDF document inside a cell in Excel?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to Python and Django and want to install mysqlclient on windows. 
When I use the command pip install django mysqlclient in cmd it throws this error : \n\nFile \"d:\\myprojects\\python\\mytestdjangoprj\\myproject\\lib\\genericpath.py\", line 30, in isfile\n st = os.stat(path)\n TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType\n\nPlease help me.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":562,"Q_Id":58888555,"Users Score":2,"Answer":"As Alasdair said , using 64 bit solve problem.thanks Alasdair.","Q_Score":2,"Tags":"python,mysql,django,windows","A_Id":58889819,"CreationDate":"2019-11-16T07:31:00.000","Title":"pip install django mysqlclient 'path should be string, bytes, os.PathLike or integer, not NoneType' on windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to compare current datetime with the stored datetime in DB.(working on python)\nSo I tried by using this command\ncursor.execute('SELECT ID,URL,LAST_HEARD_TIME,NEXT_SCHEDULE_TIME,STATUS FROM VCP_THIN_AGENT WHERE datetime.datetime.now() > NEXT_SCHEDULE_TIME')\nBut it shows the error -cross-database references are not implemented: datetime.datetime.now\nHow can I solve this?\nThanks in Advance :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":58928695,"Users Score":0,"Answer":"I solve the Issue.\nwe have to use like\ncursor.execute('SELECT ID,URL,LAST_HEARD_TIME,NEXT_SCHEDULE_TIME,STATUS FROM VCP_THIN_AGENT WHERE %s> NEXT_SCHEDULE_TIME',(datetime.datetime.now(),))","Q_Score":0,"Tags":"python,database","A_Id":58929216,"CreationDate":"2019-11-19T07:28:00.000","Title":"cross-database references are not implemented: **datetime.datetime.now**","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The approach I am trying is to write a dynamic script that would generate mirror tables as in Oracle with similar data types in SQL server. Then again, write a dynamic script to insert records to SQL server. The challenge I see is incompatible data types. Has anyone come across similar situation? I am a sql developer but I can learn python if someone can share their similar work.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":648,"Q_Id":58943788,"Users Score":0,"Answer":"I used linked server, got all the metadata of the tables from dba_tab_columns in Oracle. Wrote script to create tables based on the metadata. I needed to use SSIS script task to save the create table script for source control. Then I wrote sql script to insert data from oracle, handled type differences through script.","Q_Score":0,"Tags":"python,sql-server,oracle,data-migration","A_Id":59293359,"CreationDate":"2019-11-19T22:25:00.000","Title":"What is the best way to migrate all data from Oracle 11.2 to SQL Server 2012?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The approach I am trying is to write a dynamic script that would generate mirror tables as in Oracle with similar data types in SQL server. 
Then again, write a dynamic script to insert records to SQL server. The challenge I see is incompatible data types. Has anyone come across similar situation? I am a sql developer but I can learn python if someone can share their similar work.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":648,"Q_Id":58943788,"Users Score":0,"Answer":"Have you tried the \"SQL Server Import and Export Wizard\" in SSMS?\ni.e. if you create an empty SQL server database and right click on it in SSMS then one of the \"tasks\" menu options is \"Import Data...\" which starts up the \"SQL Server Import and Export Wizard\". This builds a once-off SSIS package .. which can be saved if you want to re-use.\nThere is a data source option for \"Microsoft OLE DB Provider for Oracle\". \nYou might have a better Oracle OLE DB Provider available also to try.\nThe will require Oracle client software to be available.\nI haven't actually tried this (Oracle to SQL*Server) so am not sure if reasonable or not.\nHow many tables, columns?\nOracle DB may also have Views, triggers, constraints, Indexes, Functions, Packages, sequence generators, synonyms.","Q_Score":0,"Tags":"python,sql-server,oracle,data-migration","A_Id":58945520,"CreationDate":"2019-11-19T22:25:00.000","Title":"What is the best way to migrate all data from Oracle 11.2 to SQL Server 2012?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create an application using python, In which I would like to able to read a .csv or .xlsx file and display its contents on my application, I believe there should be some packages which helps to do this in python, can I have some suggestions?\nRegards,\nRam","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":36,"Q_Id":58994269,"Users Score":1,"Answer":"I think working with PyQt for large application is the best option ( for large applications ) but tkinter is the secondary option for fast small apps.","Q_Score":1,"Tags":"python","A_Id":58994326,"CreationDate":"2019-11-22T12:25:00.000","Title":"How to integrate spreadsheet\/excel kind of view to my application using python?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wish to use the latest version of SQLite3 (3.30.1) because of its new capability to handle SQL 'ORDER BY ... ASC NULLS LAST' syntax as generated by the SQLAlchemy nullslast() function.\nMy application folder env\\Scripts contains the existing (old) version of sqlite3.dll (3.24), however when I replace it, there is no effect. In fact, if I rename that DLL, the application still works fine with DB accesses.\nSo, how do I update the SQLite version for an application?\nMy environment:\nWindows 10, 64-bit (I downloaded a 64-bit SQlite3 DLL version). I am running with pyCharm, using a virtual env.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":390,"Q_Id":59011272,"Users Score":0,"Answer":"I have found that the applicable sqlite3.dll is determined first by a Windows OS defined lookup. 
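A quick way to confirm which SQLite build the interpreter actually loaded, before and after moving DLLs around:

import sqlite3

print(sqlite3.sqlite_version)  # version of the SQLite library that was picked up
print(sqlite3.version)         # version of the sqlite3 DB-API module itself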
It first goes through the PATH variable, finding and choosing the first version it finds in any of those paths.\nIn this case, probably true for all pyCharm\/VirtualEnv setups, a version found in my user AppData\\Local\\Programs\\Python\\Python37\\DLLs folder was selected.\nWhen I moved that out of the way, it was able to find the version in my env\\Scripts folder, so that the upgraded DLL was used, and the sQLAlchemy nullslast() function did its work.","Q_Score":0,"Tags":"python,sqlite,sqlalchemy","A_Id":59037581,"CreationDate":"2019-11-23T19:16:00.000","Title":"How can I update the version of SQLite in my Flask\/SQLAlchemy App?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when I enter this line pip3 install mysqlclient it gives me this error :\n\nERROR: Command errored out with exit status 1:\n\nthis happened after I tried using pip install mysqlclient","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":59039604,"Users Score":0,"Answer":"In order to use pip or pip3 in Windows operating system ; \n\nPYTHONPATH system variable should be added to environment variables for the path of\npython.exe file such as\nC:\\Users\\pc\\AppData\\Local\\Programs\\Python\\Python36\\\nPATH system variable should be edited by adding new paths such as\nC:\\Users\\pc\\AppData\\Local\\Programs\\Python\\Python36\\ and \nC:\\Users\\pc\\AppData\\Local\\Programs\\Python\\Python36\\Scripts\\\n\nThen, both of the variables might be checked whether they're set from command line by \nC:\\Users\\pc>echo %PYTHONPATH% and C:\\Users\\pc>echo %PATH%","Q_Score":0,"Tags":"python,mysql,pip","A_Id":59040353,"CreationDate":"2019-11-25T20:27:00.000","Title":"Error when trying to download mysql in a trial django project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Pandas DataFrame with a bunch of rows and labeled columns.\nI also have an excel file which I prepared with one sheet which contains no data but only\nlabeled columns in row 1 and each column is formatted as it should be: for example if I\nexpect percentages in one column then that column will automatically convert a raw number to percentage.\nWhat I want to do is fill the raw data from my DataFrame into that Excel sheet in such a way\nthat row 1 remains intact so the column names remain. 
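As a quick check for the sqlite3.dll lookup issue described a little earlier, a sketch that prints which SQLite build the interpreter actually loaded and where the binding module lives; nothing beyond the standard library is assumed:

import sqlite3
import _sqlite3

print(sqlite3.sqlite_version)   # version of the SQLite library that was actually loaded
print(sqlite3.version)          # version of the Python sqlite3 binding itself
print(_sqlite3.__file__)        # location of the extension module that links against sqlite3.dll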
The data from the DataFrame should fill\nthe excel rows starting from row 2 and the pre-formatted columns should take care of converting\nthe raw numbers to their appropriate type, hence filling the data should not override the column format.\nI tried using openpyxl but it ended up creating a new sheet and overriding everything.\nAny help?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1888,"Q_Id":59050052,"Users Score":0,"Answer":"If your # of columns and order is same then you may try xlsxwriter and also mention the sheet name to want to refresh: \ndf.to_excel('filename.xlsx', engine='xlsxwriter', sheet_name='sheetname', index=False)","Q_Score":1,"Tags":"python,python-3.x,pandas,dataframe","A_Id":59050279,"CreationDate":"2019-11-26T11:34:00.000","Title":"Fill an existing Excel file with data from a Pandas DataFrame","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My actual problem is that python sqlite3 module throws database disk image malformed.\nNow there must be a million possible reasons for that. However, I can provide a number of clues:\n\nI am using python multiprocessing to spawn a number of workers that all read (not write) from this DB\nThe problem definitely has to do with multiple processes accessing the DB, which fails on the remote setup but not on the local one. If I use only one worker on the remote setup, it works\nThe same 6GB database works perfectly well on my local machine. I copied it with git and later again with scp to remote. There the same script with the copy of the original DB gives the error\nNow if I do PRAGMA integrity_check on the remote, it returns ok after a while - even after the problem occurred\nHere are the versions (OS are both Ubuntu):\n\nlocal: sqlite3.version >>> 2.6.0, sqlite3.sqlite_version >>> 3.22.0\nremote: sqlite3.version >>> 2.6.0, sqlite3.sqlite_version >>> 3.28.0\n\n\nDo you have some ideas how to allow for save \"parallel\" SELECT?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":911,"Q_Id":59067407,"Users Score":0,"Answer":"The problem was for the following reason (and it had happened to me before):\nUsing multiprocessing with sqlite3, make sure to create a separate connection for each worker!\nApparently this causes problems with some setups and sometimes doesn't.","Q_Score":0,"Tags":"python,sqlite,select,multiprocessing,malformed","A_Id":59071448,"CreationDate":"2019-11-27T09:52:00.000","Title":"SQLITE3 \/ Python - Database disk image malformed but integrity_check ok","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the performance difference between two JSON loading methods into BigQuery:\nload_table_from_file(io.StringIO(json_data) vs create_rows_json\nThe first one loads the file as a whole and the second one streams the data. 
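For the Excel-template question just above, a sketch of filling a pre-formatted sheet in place with openpyxl so the header row and existing column formats survive; the file name, sheet name and sample data are placeholders:

import pandas as pd
from openpyxl import load_workbook

df = pd.DataFrame({"name": ["a", "b"], "share": [0.25, 0.75]})  # placeholder data

wb = load_workbook("template.xlsx")   # placeholder path to the pre-formatted workbook
ws = wb["Sheet1"]                     # placeholder sheet name

# Write values starting at row 2 so the formatted header row is left untouched.
for r, row in enumerate(df.itertuples(index=False), start=2):
    for c, value in enumerate(row, start=1):
        ws.cell(row=r, column=c, value=value)

wb.save("template.xlsx")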
Does it mean that the first method will be faster to complete, but binary, and the second one slower, but discretionary?\nAny other concerns?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":88,"Q_Id":59143310,"Users Score":3,"Answer":"It's for two different logics and they have their own limits.\n\nLoad from file is great if you can have your data placed in files. A file can be up to 5TB in size. This load is free. You can query data immediately after completion. \nThe streaming insert, is great if you have your data in form of events that you can stream to BigQuery. While a streaming insert single request is limited up to 10MB, it can be super parallelized up to 1 Million rows per second, that's a big scale. Streaming rows to BigQuery has it's own cost. You can query data immediately after streaming, but for some copy and export jobs data can be available later up to 90 minutes.","Q_Score":0,"Tags":"python,google-bigquery","A_Id":59144477,"CreationDate":"2019-12-02T16:47:00.000","Title":"Performance difference in json data into BigQuery loading methods","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using sqlite3 via sqlalchemy package in Python 3.7. The SQLite database file is stored on a USB 2.0 flash drive, plugged in a MacBook Pro.\nEven though many times I don't have problems, but sometimes I see a write operation (for example, a transaction of updating 2 tables in a commit) would be very slow, e.g. 8 seconds. This happens frequent enough that I have to debug it. In good times, the same operation takes less than 0.5 second.\nMy question is: how can I distinguish where this slowness is in sqlalchemy or in sqlite3 or just the USB driver itself? I was not able to find an existing post with such problem.\n(I was wondering if I should replace USB 2.0 drive with USB 3.0 drive, maybe because USB 2.0 is half-duplex? But I'm not sure and wanted to see if any way to confirm).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":59167216,"Users Score":0,"Answer":"Finally I went out and bought a USB 3.1 drive (SanDisk), so with nothing else changed, the performance is greatly improved: the max latency is reduced from 12 seconds to 0.25 second. Looks like USB 2.0 drive was the main cause of the slowness. (Sqlite3 WAL mode helped a bit too). Thanks for comments from @Shawn and @Selcuk.","Q_Score":0,"Tags":"python,sqlite,sqlalchemy,usb-drive","A_Id":59183169,"CreationDate":"2019-12-04T00:12:00.000","Title":"sqlite3 via sqlalchemy write operation is slow on USB drive","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been teaching myself Python to automate some of our work processes. So far reading from Excel files (.xls, .xlsx) has gone great.\nCurrently I have hit a bit of a snag. 
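To make the two BigQuery loading paths above concrete, a sketch with the google-cloud-bigquery client; the table ID and sample rows are placeholders, and insert_rows_json is assumed to be the current name of the streaming call referred to as create_rows_json in the question:

import io
import json
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"   # placeholder

# Batch load job: free, handles large files, data is queryable once the job finishes.
ndjson = "\n".join(json.dumps(r) for r in [{"id": 1}, {"id": 2}])
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON)
load_job = client.load_table_from_file(io.BytesIO(ndjson.encode("utf-8")), table_id, job_config=job_config)
load_job.result()   # wait for the load job to complete

# Streaming insert: billed per row, but rows are available to query almost immediately.
errors = client.insert_rows_json(table_id, [{"id": 3}])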
Although I can output .xlsx files fine, the software system that we have to use for our primary work task can only take .xls files as an input - it cannot handle .xlsx files, and the vendor sees no reason to add .xlsx support at any point in the foreseeable future.\nWhen I try to output a .xls file using either Pandas or OpenPyXl, and open that file in Excel, I get a warning that the file format and extension of the file do not match, which leads me to think that attempting to open this file using our software could lead to some pretty unexpected consequences (because it's actually a .xlsx file, just not named as such)\nI've tried to search for how to fix this all on Google, but all I can find are guides for how to convert a .xls file to a .xlsx file (which is almost the opposite of what I need). So I was wondering if anybody could please help me on whether this can be achieved, and if it can, how.\nThank you very much for your time","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2236,"Q_Id":59187722,"Users Score":0,"Answer":"Felipe is right the filename extension will set the engine parameter.\nSo basically all it's saying is that the old Excel format \".xls\" extension is no longer supported in Pandas. So if you specify the output spreadsheet with the \".xlsx\" extension the warning message disappears.","Q_Score":2,"Tags":"python,excel,pandas,openpyxl,xls","A_Id":69641984,"CreationDate":"2019-12-05T03:19:00.000","Title":"Outputting A .xls File In Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a replacement of MySQL\/Connector for Python and was wondering if PyMySQL can be used as a direct replacement. If I go that way should I expect some different output formatting of the output. For it I run a SELECT query on table that contains text, numbers and date\/time fields, would PyMySQL return the output formatted in the same way as MySQL\/Connector?\nAlso should I expect any issues running other type of queries like INSERT , DELETE etc.\nI'd appreciate if you share your experience.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":187,"Q_Id":59197601,"Users Score":0,"Answer":"Since, there are no other answers, I'm going to share my experience so far. \nI can say PyMySQL works fine as a replacement of MySQL\/Connector for Python. The query outputs are compatible and I had no issues updating the tables. \nOne thing that is worth noting is the slightly different error output. I didn't find much about error handling in PyMySQL documentation. Generally it returns a tuple consisting of the error number and error message. It looks slightly different than MySQL\/Connector's error output, so one should be careful when capturing them.\nAlso the list of .connect() arguments is different but these are well documented. 
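To illustrate the point about the .connect() arguments, a minimal PyMySQL sketch; the connection details and the items table are made up:

import pymysql
import pymysql.cursors

conn = pymysql.connect(
    host="localhost",          # placeholder connection details
    user="app",
    password="secret",
    database="mydb",
    cursorclass=pymysql.cursors.DictCursor,   # rows come back as dicts
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT id, name, created_at FROM items WHERE id = %s", (1,))
        row = cur.fetchone()
    conn.commit()
finally:
    conn.close()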
\nHope that may be helpful.","Q_Score":0,"Tags":"mysql,pymysql,mysql-connector-python","A_Id":59282966,"CreationDate":"2019-12-05T14:45:00.000","Title":"Can PyMySQL be used as a drop-in replacement of MySQL\/Connector for Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a gui in python to manipulate stored records and I have the mysql script to set up the database and enter all information. How do I get from the mysql script to the .db file so that python can access and manipulate it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":319,"Q_Id":59198212,"Users Score":0,"Answer":"db files are SQLite databases most of the time. What you are trying to do is converting a dumped MySQL database into an SQLite database. Doing this is not trivial, as I think both dialects are not compatible. If the input is simple enough, you can try running each part of it using an SQLite connection in your Python script. If it uses more complex features, you may want to actually connect to a (filled) MySQL database and fetch the data from there, inserting it back into a local SQLite file.","Q_Score":0,"Tags":"python,sql,database","A_Id":59198273,"CreationDate":"2019-12-05T15:19:00.000","Title":"How do I create a .db file from an sql script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to load data from our cloud environment (pivotal cloud foundry) into SQL Server. Data is fetched from API and held in memory and we use tds to insert data to SQL Server, but only way in documentation I see to use bulk load is to load a file. I cannot use pyodbc because we dont have odbc connection in cloud env.\nHow can I do bulk insert directly from dictionary?\npytds does not offer bulk load directly, only from file","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":59216350,"Users Score":0,"Answer":"The first thing that comes to mind is to convert the data into bulk insert sql. Similar to how you migrate mysql.\nOr if you could export the data into cvs, you could import use SSMS (Sql Server Managment Studio).","Q_Score":0,"Tags":"python,sql-server","A_Id":59217041,"CreationDate":"2019-12-06T15:50:00.000","Title":"Best way for bulk load of data into SQL Server with Python without pyodbc","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Do you know if it's possible to create a role\/user for the Postgresql database from within Python code?\nI would prefer to use asyncpg library, since my program is based on asynchronous code, but if there are better libraries for this specific task, I don't mind using them.\nI already have a pre-installed database on my server machine, so another possibility would be to just run the Shell command from withing the Python program to create a role. 
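Relating to the .db-file question above, the suggestion to run the script through an SQLite connection can look like this in the simple case where the dumped SQL is already SQLite-compatible; schema.sql and records.db are placeholder names:

import sqlite3

with open("schema.sql", "r", encoding="utf-8") as f:   # placeholder script path
    script = f.read()

conn = sqlite3.connect("records.db")    # creates the .db file if it does not exist
try:
    conn.executescript(script)          # runs every statement in the script
    conn.commit()
finally:
    conn.close()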
However, I am not sure if you can create a role in just one Shell line.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":898,"Q_Id":59272516,"Users Score":1,"Answer":"After some digging, the answer appeared to be very straightforward: \npool.execute(\"CREATE ROLE name ...\")","Q_Score":2,"Tags":"python,postgresql,powershell,roles,asyncpg","A_Id":59272865,"CreationDate":"2019-12-10T17:23:00.000","Title":"Create PostgreSQL Role\/User from within Python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been trying to look for ways to call my python script from my perl script and pass the database handle from there while calling it. I don't want to establish another connection in my python script and just use the db handle which is being used by the perl script. Is it even possible and if yes then how?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":84,"Q_Id":59298237,"Users Score":0,"Answer":"There answer is that almost all databases (Oracle, MySQL, Postgresql) will NOT allow you to pass open DB connections between processes (even parent\/child). This is a limit on the databases connection, which will usually be associated with lot of state information.\nIf it was possible to 'share' such a connection, it will be a challenge for the system to know where to ship the results for queries sent to the database (will the result go to the parent, or to the child ?).\nEven if it is possible somehow to forward connection between processes, trying to pass a complex object (database connection is much more the socket) between Perl (usually DBI), and Python is close to impossible.\nThe 'proper' solution is to pass the database connection string, username, and password to the Python process, so that it can establish it's own connection.","Q_Score":0,"Tags":"python,database,perl","A_Id":59437378,"CreationDate":"2019-12-12T05:29:00.000","Title":"Can a db handle be passed from a perl script to a python script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use mysql with python SQLAlchemy but while installing mysqlclient for python its giving error. kindly check details below:\n\nI'm running this on Windows 10 64 bit and Anaconda Python 3.7.4\nI have tried by installing another python version as well but no luck.\nTried to install MYSQL connector c++ as well. 
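Expanding the one-liner in the role-creation answer above into a runnable asyncpg sketch; the DSN, role name and password are placeholders, and note that identifiers cannot be bound as query parameters, so the DDL is built as a plain string:

import asyncio
import asyncpg

async def create_role() -> None:
    conn = await asyncpg.connect("postgresql://admin:secret@localhost/postgres")  # placeholder DSN
    try:
        await conn.execute("CREATE ROLE app_user LOGIN PASSWORD 'app_password'")
    finally:
        await conn.close()

asyncio.run(create_role())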
but still not working.\nIf I run code 'pip install mysqlclient'\n\nmysql.c(29): fatal error C1083: Cannot open include file: 'mysql.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.14.26428\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":512,"Q_Id":59302008,"Users Score":0,"Answer":"I just got an answer by looking at other options mentioned here.\nActually mistake was that,\n\nI have installed Python 3.7.4 with 32 bits on 64bits machine.\nCode was looking for below path but it was going to the wrong path in program files instead of program files(x86)\n\nC:\\Program Files (x86)\\MySQL\\MySQL Connector C 6.1\\lib\nI have installed a new Python with 3.7 with 64 bits and also while installing MySQL connector I have changed the path of installation to the above-mentioned path and now it's working fine.\nThank you, everyone, for your time and help","Q_Score":0,"Tags":"python,mysql","A_Id":59384975,"CreationDate":"2019-12-12T10:01:00.000","Title":"pip install mysqlclient on win64 not working giving error 'Cannot open file: 'mysql.h'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are calling postgres using psycopg2 2.7.5 this way that we perform a query then perform some operation on data that we received and then we open new connection and perform another query and so on. \nUsually the query takes between 15 s to 10 min. \nOccasinally after 2 h we receive error: Python Exception : connection already closed\nWhat may be the reason for that? Data is the same and query is the same and sometimes the same query gives results back in 3 min and sometimes it gets that timeout after 2 hrs. \nI wonder if it is possible that connection is broken earlier but in python we get that information for some reason after 2 hrs? \nI doubt that there are any locks on DB at the moment when we perform a query but it may be under huge load and max number of connections may be reached (not confirmed but this is an option).\nWhat would be the best way to track down the problem? Firewall is set to 30 min timeout.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":334,"Q_Id":59305899,"Users Score":2,"Answer":"We are calling postgres using psycopg2 2.7.5 this way that we perform a query then perform some operation on data that we received and then we open new connection and perform another query and so on.\n\nWhy do you keep opening new connections? What do you do with the old one, and when do you do it?\n\nI wonder if it is possible that connection is broken earlier but in python we get that information for some reason after 2 hrs?\n\nIn general, a broken connection won't be detected until you try to use it. 
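A sketch of what that on-use detection can look like with psycopg2, reusing a single connection and reconnecting only when an error surfaces; the DSN is a placeholder:

import psycopg2

DSN = "dbname=mydb user=me password=secret host=dbhost"  # placeholder

conn = psycopg2.connect(DSN)

def run_query(sql, params=None):
    global conn
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    except (psycopg2.OperationalError, psycopg2.InterfaceError):
        # The broken connection only shows up here, on use; reconnect once and retry.
        conn = psycopg2.connect(DSN)
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()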
If you are using a connection pooler, it is possible the pool manager checks up on the connection periodically in the background.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":59306103,"CreationDate":"2019-12-12T13:38:00.000","Title":"postgres\/psycopg2 strange timeout after 2 hrs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"all!\nI'm struggling with this. I have a crawler that runs my network to retrieve all live IP addresses and stores them on a table. Another service runs all the records on the table to update the status of the machines. The read and write access might be simultaneous.\nThese are the fields that I need to store. As you can see, it's quite simple.\n1 - The IP address is the PRIMARY KEY, so I it won't duplicate machines. (I can change it to the MAC address)\n2 - Timestamp of first contact\n3 - Timestamp of last contact\n4 - Response of the last contact (boolean)\nI'm thinking about using MySQL, but then it might come with an overhead. Already thought about using a flat text file, but the parsing would add up to the python scripts I already have.\n\nIs there any database solution that can fit my problem?\n\nThank you all!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":59326531,"Users Score":0,"Answer":"Try taking a look at sqlite - it might be just what you're looking for and it's much lighter than mysql.","Q_Score":0,"Tags":"python,mysql,database,nosql","A_Id":59328344,"CreationDate":"2019-12-13T16:35:00.000","Title":"Quick store, lookup and retrieve database design","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For context:\n\nUsing a Azure cloud instance of a PostGRES database.\nUsing RHEL7 with openssl and openssl-dev installed.\nUsing python2.7.\nI can import SSL in python2.7 shell without issue.\nI can connect to a locally hosted PostGRES database using psycopg2 without issue.\n\nWhen I try connecting to the remote database using sslmode='require' I receive an OperationalError that sslmode value \"require\" invalid when SSL support is not compiled in. Looking at the SSL settings for the PostGRES instance in Azure, I see that the SSL mode is \"prefer\", however if I try to use that for the psycopg2 connection, I'm told that a SSL connection is required. \nFor the record, I have no issue connecting to this remote database using python3.7 from a Windows 10 machine. This leads me to believe that there isn't some configuration issue with the remote instance, and that the issue lies somewhere in RHEL7 and python2.7. Has anyone else ran into this issue?\nedit: Pyscopg2 was installed in a virtual environment using 'pip install psycopg2'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1201,"Q_Id":59329759,"Users Score":0,"Answer":"I can import SSL in python2.7 shell without issue.\n\nSSL support for a PostgreSQL connection needs to come from libpq (which is what psycopg2 uses to establish and manage the database connection). Being able to load an SSL module at the python level won't help, as it can't stitch that module together with libpq. So it seems you somehow managed to install a libpq compiled without SSL support. 
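For the crawler-status question above, a sketch of the suggested SQLite route with the IP address as primary key and an upsert keeping the first/last contact timestamps; the table and file names are made up, and the ON CONFLICT clause needs SQLite 3.24 or newer:

import sqlite3
import time

conn = sqlite3.connect("hosts.db")   # placeholder file name
conn.execute(
    "CREATE TABLE IF NOT EXISTS hosts ("
    " ip TEXT PRIMARY KEY,"
    " first_seen INTEGER NOT NULL,"
    " last_seen INTEGER NOT NULL,"
    " responded INTEGER NOT NULL)"
)

def record_contact(ip: str, responded: bool) -> None:
    now = int(time.time())
    conn.execute(
        "INSERT INTO hosts (ip, first_seen, last_seen, responded)"
        " VALUES (?, ?, ?, ?)"
        " ON CONFLICT(ip) DO UPDATE SET"
        " last_seen = excluded.last_seen, responded = excluded.responded",
        (ip, now, now, int(responded)),
    )
    conn.commit()

record_contact("192.168.1.10", True)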
I don't know how you did that. Can you show us how you installed psycopg2 and its dependencies?\n\nLooking at the SSL settings for the PostGRES instance in Azure, I see that the SSL mode is \"prefer\", \n\n\"prefer\" is a setting for clients. There is no such setting on the server side. The server can allow SSL, or can demand SSL. If it allows but does not demand, then it is up to the client to decide what to do.\n\nhowever if I try to use that for the psycopg2 connection, I'm told that a SSL connection is required.\n\nThe server demands SSL. The client is unable to comply, because it does not have support for it compiled in.","Q_Score":1,"Tags":"postgresql,python-2.7,redhat,psycopg2","A_Id":59330442,"CreationDate":"2019-12-13T21:00:00.000","Title":"Invalid SSL mode for remote PostgreSQL connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having slight issues when it comes to running a crawler through my s3 buckets. My folders have data that was dumped from redshift that was sliced into many different files. These files naming convention go as the following:\ndump_0000_part_00.gz, dump_0001_part_01.gz ....\nHowever when my crawler fetches the metadata in this folder, it instead makes a few hundred tables, thinking each one of these sliced files is its own table. Is there a way to tell the crawler to group all these sliced files into ONE catalog table?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":130,"Q_Id":59363093,"Users Score":0,"Answer":"When configuring the Crawler (or editing an existing one), under the Output section, expand Grouping behavior for S3 data (optional) and select Create a single schema for each S3 path","Q_Score":0,"Tags":"python,database,amazon-web-services,amazon-s3,aws-glue","A_Id":59365512,"CreationDate":"2019-12-16T19:26:00.000","Title":"Adding only one of s3 partitioned files to AWS Glue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Presently, we send entire files to the Cloud (Google Cloud Storage) to be imported into BigQuery and do a simple drop\/replace. However, as the file sizes have grown, our network team doesn't particularly like the bandwidth we are taking while other ETLs are also trying to run. As a result, we are looking into sending up changed\/deleted rows only. \nTrying to find the path\/help docs on how to do this. Scope - I will start with a simple example. We have a large table with 300 million records. Rather than sending 300 million records every night, send over X million that have changed\/deleted. I then need to incorporate the change\/deleted records into the BigQuery tables. \nWe presently use Node JS to move from Storage to BigQuery and Python via Composer to schedule native table updates in BigQuery.\nHope to get pointed in the right direction for how to start down this path.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":284,"Q_Id":59381483,"Users Score":1,"Answer":"Stream the full row on every update to BigQuery.\nLet the table accommodate multiple rows for the same primary entity.\nWrite a view eg table_last that picks the most recent row. 
\nThis way you have all your queries near-realtime on real data. \nYou can deduplicate occasionally the table by running a query that rewrites self table with latest row only. \nAnother approach is if you have 1 final table, and 1 table which you stream into, and have a MERGE statement that runs scheduled every X minutes to write the updates from streamed table to final table.","Q_Score":0,"Tags":"python,node.js,google-bigquery","A_Id":59382310,"CreationDate":"2019-12-17T20:15:00.000","Title":"BigQuery - Update Tables With Changed\/Deleted Records","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I currently have a python project which basically reads data from an excel file, transforms and formats it, performs intensive calculations on the formatted data, and generates an output. This output is written back on the same excel file.\nThe script is run using a Pyinstaller EXE which basically is packing all the required libraries and the code itself, so every user is not required to prep the environment to run the script.\nBoth, the script EXE and the Excel file, sit on the user's machine.\nI need some suggestion on how this entire workflow could be achieved using AWS. Like what AWS services would be required etc.\nAny inputs would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":98,"Q_Id":59397424,"Users Score":1,"Answer":"One option would include using S3 to store the input and output files. You could create a lambda function (or functions) that does the computing work and that writes the update back to S3.\nYou would need to include the Python dependencies in your deployment zip that you push to AWS Lambda or create a Lambda layer that has the dependencies.\nYou could build triggers to run on things like S3 events (a file being added to S3 triggers the Lambda), on a schedule (EventBridge rule invokes the Lambda according to a specific schedule), or on demand using an API (such as an API Gateway that users can invoke via a web browser or HTTP request). It just depends on your need.","Q_Score":1,"Tags":"python,amazon-web-services","A_Id":59397565,"CreationDate":"2019-12-18T17:37:00.000","Title":"Suggestions to run a python script on AWS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have some graph data with date type values.\nMy gremlin query for the date type property is working, but output value is not the date value.\nEnvironment:\n\nJanusgraph 0.3.1 \ngremlinpython 3.4.3\n\nBelow is my example:\n\nData (JanusGraph): {\"ID\": \"doc_1\", \"MY_DATE\": [Tue Jan 10 00:00:00 KST 1079]}\nQuery: g.V().has(\"ID\", \"doc_1\").valueMap(\"MY_DATE\")\nOutput (gremlinpython): datetime(1079, 1, 16)\n\nThe error is 6 days (1079.1.10 -> 1079.1.16).\nThis mismatch does not occur when the years are above 1600.\nDoes the timestamp have some serialization\/deserialization problems between janusgraph and gremlinpython?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":434,"Q_Id":59405888,"Users Score":1,"Answer":"After some try & search, I found that there are some difference between java Date and python datetime. (Julian vs. 
Gregorian Calendar)\nSo I have replaced SimpleDateFormat with JodaTime and got the expected result as below:\n\nData (Raw): {\"ID\": \"doc_1\", \"MY_DATE\": \"1079-1-29\"}\nData (JanusGraph): {\"ID\": \"doc_1\", \"MY_DATE\": [Wed Jan 23 00:32:08 KST 1079]}\n\n\n(I think the JanusGraph uses java Date object internally..)\n\nQuery: g.V().has(\"ID\", \"doc_1\").valueMap(\"MY_DATE\")\nOutput (gremlinpython): datetime(1079, 1, 29)\n\nThanks","Q_Score":1,"Tags":"gremlin,janusgraph,gremlinpython","A_Id":59463174,"CreationDate":"2019-12-19T08:33:00.000","Title":"Mismatch between janusgraph date value and gremlin query result","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to use Apache Superset and am considering if it will allow for me to connect to both POSTGres and MSSQL at the same time. \nThere might be instances of creating queries from both databases however, i cant figure out based on the API documentation if this can be done. \nnon-dev here.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2689,"Q_Id":59413246,"Users Score":2,"Answer":"You can indeed connect to both databases. Superset lets you connect many databases (and types thereof!) by adding various connection strings.\nHowever, you cannot do JOIN queries between the two databases. Superset takes your query and ships it down to the database to do the work. In order to join two databases in a query, Superset would have to basically do an ETL job pulling in both databases and doing the query there. That's not how it's built. \nBut again, you CAN have multiple data sources and have queries\/charts that call out to each of them, all rolled into one dashboard.","Q_Score":1,"Tags":"python,apache-superset","A_Id":59417376,"CreationDate":"2019-12-19T16:04:00.000","Title":"Multiple database access on Apache Superset?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have inserted data into table from postgresql directly. Now when I try to insert data from django application, it's generating primary key duplication error. How can I resolve this issue?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":59425728,"Users Score":0,"Answer":"I think problem is not in database. please check your django code probably you use get_or_create","Q_Score":0,"Tags":"python,django,database,postgresql","A_Id":59428772,"CreationDate":"2019-12-20T13:02:00.000","Title":"i have 600 records in my postgres database, now when i inserted from django it's generating primary duplication error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have inserted data into table from postgresql directly. Now when I try to insert data from django application, it's generating primary key duplication error. 
How can I resolve this issue?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":59425728,"Users Score":0,"Answer":"Run\npython manage.py sqlsequencereset [app_name]\nand execute all or just one for the required table SQL statements in the database to reset sequences.\n\nExplanation:\nYou probably inserted with primary keys already present in it, not letting postgresql to auto-generate ids. This is ok.\nThis means, internal Postgresql sequence used to get next available id has old value. You need to rest with sequence to start with maximum id present in the table.\nDjango manage.py has command intended just for that - print sql one can execute in db to reset sequences.","Q_Score":0,"Tags":"python,django,database,postgresql","A_Id":59425929,"CreationDate":"2019-12-20T13:02:00.000","Title":"i have 600 records in my postgres database, now when i inserted from django it's generating primary duplication error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently using AWS S3 as a storage for many json files (2 million and counting).\nI want to put all of these files inside a db, in a Postgres RDS.\nI am currently using AWS Lambda which is parsing the files, and it is significantly slower than running it locally. In addition, the work of running the script and installing external modules in Python is pretty terrible with lambda.\nIs there a quicker and more efficient way to work with S3 files, parse them and put them in Postgres without the need to download them? \nIt needs to run on every new file (that's why I chose lambda) and it needs to be divided to couple of tables, so it's not just putting the files as-is (the script already takes the file and parses it to the right tables).","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":78,"Q_Id":59485716,"Users Score":1,"Answer":"You can use aws glue. But that will cost you for each job run.","Q_Score":1,"Tags":"python,postgresql,amazon-web-services,amazon-s3,amazon-rds","A_Id":59486341,"CreationDate":"2019-12-26T08:20:00.000","Title":"Working with s3 files into Postgres effciently","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Flask app development, i need some clarifications on using database with flask app. \nWe have a MySQL db which has some metrics, currently we generate some reports using python using MySQL.connector.\nWe have python modules\/functions to fetch the data from the db using mysql.connector and populate a dictionary which has data to be put in the report. \nMy question is can i use the same python module in the app.py (if this is where i create the app) and get the data as dictionary and pass it to some template html to render the report?\nIf i can do this, what is the advantage of using Flask-MySql or Flask-SqlAlchemy and doing the app.config[] things which are mentioned in many tutorials?\nI am trying to understand what should be used when.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":676,"Q_Id":59496972,"Users Score":0,"Answer":"well for ur case if ur have not much experience in sql i prefer u to use orm like Sqlalchemy. 
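The sequence-reset answer above can be applied in one step by piping the generated SQL straight into the project database; a sketch, with myapp standing in for the real app label:

python manage.py sqlsequencereset myapp | python manage.py dbshell

The command itself only prints SQL (for PostgreSQL, a series of setval(...) statements); piping it into dbshell, or pasting it into psql, is what actually resets the sequences.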
For Flask there is an extension called Flask-SQLAlchemy. It uses Python-style syntax instead of raw SQL, is very easy to learn and is well documented, though it is not always recommended for advanced use cases. If you want to run raw SQL queries through Flask and need more control on the database side, the Flask-MySQL extension is the better fit. It depends on your requirements, experience and use case.","Q_Score":1,"Tags":"flask,mysql-connector-python,flask-mysql","A_Id":59499709,"CreationDate":"2019-12-27T06:38:00.000","Title":"what is the Difference between using mysql connector and Flask-Mysql in Flask app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on an assignment and am stuck with the following problem:\nI have to connect to an oracle database in Python to get information about a table, and display this information for each row in an .html-file. Hence, I have created a python file with doctype HTML and many many \"print\" statements, but am unable to embed this to my main html file. In the next step, I have created a jinja2 template, however this passes the html template data (incl. \"{{ to be printed }}\") to python and not the other way round. I want to have the code, which is executed in python, to be implemented on my main .html file.\nI can't display my code here since it is an active assignment. 
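Coming back to the Flask-SQLAlchemy suggestion in the previous answer, a minimal sketch of wiring it to MySQL; the connection string and the Metric model are made up:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder credentials; PyMySQL acts as the MySQL driver behind SQLAlchemy here.
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql+pymysql://app:secret@localhost/mydb"
db = SQLAlchemy(app)

class Metric(db.Model):            # made-up table for illustration
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))
    value = db.Column(db.Float)

with app.app_context():
    db.create_all()                # creates the table if it does not exist
    rows = Metric.query.all()      # plain Python instead of hand-written SQL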
I am just interested in general opinions on how to pass my statements from python (or the python file) into an html file. I can't find any information about this, only how to escape html with jinja. \nAny ideas how to achieve this?\nMany thanks.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":59508664,"Users Score":0,"Answer":"You can't find information because that won't work. Browser cannot run python, meaning that they won't be able to run your code if you embed it into an html file. The setup that you need is a backend server that is running python (flask is a good framework for that) that will do some processing depending on the request that is being sent to it. It will then send some data to a template processor (jinja in this case work well with flask). This will in turn put the data right into the html page you want to generate. Then this html page will be returned to the client making the request, which is something the browser will understand and will show to the user. If you want to do some computation dynamically on the browser you will need to use javascript instead which is something a browser can run (since its in a sandbox mode).\nHope it helps!","Q_Score":1,"Tags":"python,html,jinja2","A_Id":59508694,"CreationDate":"2019-12-28T06:27:00.000","Title":"How can I embed a python file or code in HTML?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a web-app with Flask and have an philosophical\/architecture related question for all you with more experience than I.\nThe user enters some basic search criteria in my app, my app then queries multiple 3rd-party APIs for information related to the criteria and aggregates the results. \nUltimately, my app will send the user a bi-weekly email with an HTML-formatted table containing the information gathered by the API queries (as rows in the table). The information doesn't need to be stored long term, it becomes obsolete after a week or so there is really no point in storing it. The 3rd party APIs will always be queried anew each week or so. \nInitially I was thinking that I would need to maintain a database table for each user which would aggregate and store the results of their specific API queries. I was planning to create the contents of the emailed table from the rows in the database. \nI'm now wondering if there is a way to accomplish all of this without using a database to temporarily store the results of the API queries before emailing.\nMy question: What is the most efficient or optimal means for accomplishing what I'm trying to do? 
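Since the constraint in the assignment thread above is Jinja2 (and Django) without Flask, a sketch of rendering query results into a static HTML file with Jinja2 alone; the template name and the sample rows are placeholders:

from jinja2 import Environment, FileSystemLoader

# Placeholder rows standing in for the Oracle query results.
rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

env = Environment(loader=FileSystemLoader("templates"), autoescape=True)
template = env.get_template("report.html")   # placeholder template containing a {% for %} loop
html = template.render(rows=rows)

with open("report_out.html", "w", encoding="utf-8") as f:
    f.write(html)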
Is it possible to do this without a database for storing the results of the API queries?\nTo recap here was the sequence of operation for the initial concept:\nApp queries API for info --> App stores data returned by APIs in DB Table --> App puts info from DB table into formatted HTML table --> App sends HTML table to user in email --> The next time the App queries the APIs the DB tables would be over-written.\nFor context here are the different packages I'm using:\nFlask 1.1.1\nwerkzeug 0.15.5\nApplication server for both development and production.\ngunicorn 19.9.0\nTesting and static analysis.\npytest 5.1.0\npytest-cov 2.7.1\nmock 3.0.5\nflake8 3.7.8\nData and workers.\npsycopg2-binary 2.8.3\nFlask-SQLAlchemy 2.4.0\nSQLAlchemy 1.3.7\nalembic 1.0.11\nredis 3.3.7\ncelery 4.3.0\nForms.\nFlask-WTF 0.14.2\nWTForms-Components 0.10.4\nWTForms-Alchemy 0.16.9\nPayments.\nstripe 2.35.0\nUtils.\nfaker 2.0.0\nExtensions.\nflask-debugtoolbar 0.10.1\nFlask-Mail 0.9.1\nFlask-Login 0.4.1\nFlask-Limiter 1.0.1\nFlask-Babel 0.12.2\nFlask-Static-Digest 0.1.2","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":59521728,"Users Score":0,"Answer":"If you don't need it in a DB, it seems like you could work with the pandas module, and just use it as a DataFrame. The dataframe offers a lot of the easy manipulation of a database without having to actually use a database.","Q_Score":0,"Tags":"python,database,flask,architecture,web-development-server","A_Id":59521810,"CreationDate":"2019-12-29T17:34:00.000","Title":"Can I avoid using a database in this scenario?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to get 50 latest data (based on timestamp) from BigTable.\nI get the data using read_row and filter using CellsRowLimitFilter(50). But it didn't return the latest data. It seems the data didn't sorted based on timestamp? how to get the latest data?\nThank you for your help.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2096,"Q_Id":59527152,"Users Score":1,"Answer":"Turns out the problem was on the schema. It wasn't designed for timeseries data. I should have create the rowkey with id#reverse_timestamp and the data will be sorted from the latest. Now I can use CellsRowLimitFilter(50) and get 50 latest data.","Q_Score":0,"Tags":"python,google-cloud-bigtable,bigtable","A_Id":60200365,"CreationDate":"2019-12-30T07:27:00.000","Title":"How to get recent data from bigtable?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a GUI software using PyQt5 and want to connect it with MySQL to store the data. \nIn my computer, it will work fine, but what if I transfer this software to other computer who doesn't have MySQL, and if it has, then it will not have the same password as I will add in my code (using MySQL-connector)a password which I know to be used to connect my software to MySQL on my PC. 
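For the no-database suggestion above, a sketch of turning the aggregated API results straight into the HTML table for the bi-weekly email with pandas; the sample records are made up:

import pandas as pd

# Aggregated results from the third-party API calls (placeholder sample).
records = [
    {"source": "api_a", "price": 10.5, "updated": "2019-12-29"},
    {"source": "api_b", "price": 9.8, "updated": "2019-12-29"},
]

df = pd.DataFrame(records)
html_table = df.to_html(index=False, border=0)   # drop this into the Flask-Mail message body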
\nMy question is, how to handle this problem???","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":59530440,"Users Score":2,"Answer":"If you want your database to be installed with your application and NOT shared by different users using your application, then using SQLite is a better choice than MySQL. SQLite by default uses a file that you can bundle with your app. That file contains all the database tables including the connection username\/password.","Q_Score":0,"Tags":"python,mysql,python-3.x,mysql-python","A_Id":59538866,"CreationDate":"2019-12-30T11:55:00.000","Title":"Will pyqt5 connected with MySQL work on other computers without MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a Django project with around 30 models and there are lots of relations(For example, foreign key relations) between the models.\nMy doubt is \"After 6 months, let's say I want to add a new field(s) in one of the model\/table(s) in models.py, and make migrations, the new migration files will get created without affecting the initial migration files which were created 6 months ago.\"\nWill the relations be maintained after adding new columns in different tables? (or) do I have to go to pgadmin console and tweak them accordingly?\nOne way is to erase all the migrations and start fresh, but this is not recommended often especially if there is production data (or) there are frequent changes in the database schema.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2209,"Q_Id":59553858,"Users Score":1,"Answer":"If you don't change Django version, adding new fields on models will not create any problem, even after many years. But, there are some situations this might create problems. For example, if Django is updated and you have installed the latest version.","Q_Score":1,"Tags":"django,python-3.x,foreign-keys,pgadmin-4,rdbms","A_Id":59554133,"CreationDate":"2020-01-01T15:48:00.000","Title":"Adding new field in Django model","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If i defined a database schema where some field cannot contain a NULL field\nBut i enter a NULL value, would the validation occur on the database software?\nIf that database server where run on a different machine, would be sent through the network before receiving an error response?\nis this what the mean by Database validations and application validations?\nWhere application validations are enforced before data transmission?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":59579050,"Users Score":0,"Answer":"Application validation validate input filed value before the migrate the data in database.\nIn database validation such as datatype length (these are also validate in application side)but some validate likeunique relationship with other data and some null value are validate on database side.\nEg. 
like Django application unique username name validate in model label(database) validate.\nrequired filed validate on form label or say application label validate","Q_Score":0,"Tags":"python,database","A_Id":59579216,"CreationDate":"2020-01-03T12:57:00.000","Title":"Difference between applcation and database validations","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If i defined a database schema where some field cannot contain a NULL field\nBut i enter a NULL value, would the validation occur on the database software?\nIf that database server where run on a different machine, would be sent through the network before receiving an error response?\nis this what the mean by Database validations and application validations?\nWhere application validations are enforced before data transmission?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":59579050,"Users Score":0,"Answer":"It is better to have the database ensure data integrity. After all, your application layer is not be the only way to change data -- someone could run explicit INSERT and UPDATE statements within the database.\nIn addition, the optimizers in some databases can make use of NOT NULL constraints for query optimization.","Q_Score":0,"Tags":"python,database","A_Id":59579109,"CreationDate":"2020-01-03T12:57:00.000","Title":"Difference between applcation and database validations","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building my first ever web-app in python, haven't even decided on a framework yet,\ndoes it make sense to start out with a MySQL database to manage users and credentials?\nor is it a completely ridiculous way to approach it?\nwhat are some existing solutions and best practices for managing user credentials?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":133,"Q_Id":59637029,"Users Score":2,"Answer":"The answer is, as always, it depends. There are many ways to build a web app in Python so you'll first need to decide on what you want to build or what technology you want to learn. \nIf you just want to focus on a Python backend as a learning exercise then you could use Flask which can run as a server and includes many modules to help you get started including managing users. If you plan to expose your app publicly though it is generally recommended to use Apache or some other battle tested server which can route the requests to Flask.\nThere are other Python frameworks like bottle which I believe is meant to be even simpler than Flask and Django which is more complicated but has more features. It all depends on what you want to do. You can also look at things like dash if you're end goal has a data analysis flavor.\nOne thing to note though is that managing user credentials is not trivial. It can be a useful exercise if you like to learn and tinker, but to do it correctly youll need to learn about salting passwords, cryptographically secure hashing, session management, https (and ideally which ciphers should be deprecated), how to protect against sql injection (good to know how to do this anyway if you don't already), cross site scripting, CORS, etc. 
The list goes on. None of these things are exclusively just for managing user credentials but you should understand all the ways things can go south on you.","Q_Score":0,"Tags":"python,mysql,security,authentication","A_Id":59637223,"CreationDate":"2020-01-07T22:32:00.000","Title":"Managing User Authentication for a web-app with MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have installed python 3.7.4 (64 bit) and oracle client 12.2.0 (64 bit) in my machine which is having windows 10 operating system. \nAnd I connect to database via robotframework-databaselibrary=1.2.4, but its displayed error as DatabaseError: DPI-1050: Oracle Client library is at version 0.0 but version 11.2 or higher is needed\nNote: I have the same setup in my local and am able to connect to database successfully but when I implement the same setup and try to execute in remote machine its throwing error","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":164,"Q_Id":59685699,"Users Score":2,"Answer":"The likely issue is that you have an Oracle Client library that is too old earlier in the PATH than your 12.2 Oracle Client. If you upgrade to cx_Oracle 7.3 you should get an error message that says as much. Search all of the directories in your PATH environment variable for oci.dll and check the version of each of them. Frequently older versions of the Oracle Client library were placed in C:\\Windows\\System32.","Q_Score":0,"Tags":"python-3.x,oracle,robotframework,cx-oracle,oracleclient","A_Id":59720970,"CreationDate":"2020-01-10T16:42:00.000","Title":"'DatabaseError: DPI-1050: Oracle Client library is at version 0.0 but version 11.2 or higher is needed' error is displayed in remote machine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to retrieve the following (historical) information while using the \nek.get_data()\nfunction: ISIN, MSNR,MSNP, MSPI, NR, PI, NT\nfor some equity indices, take \".STOXX\" as an example. How do I do that? I want to specify I am using the get data function instead of the timeseries function because I need daily data and I would not respect the 3k rows limit in get.timeseries. \nIn general: how do I get to know the right names for the fields that I have to use inside the \nek.get_data()\nfunction? I tried with both the codes that the Excel Eikon program uses and also the names used in the Eikon browser but they differ quite a lot from the example I saw in some sample code on the web (eg. TR.TotalReturnYTD vs TR.PCTCHG_YTD. How do I get to understand what would be the right name for the data types I need?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":695,"Q_Id":59735432,"Users Score":0,"Answer":"Considering the codes in your function (ISIN, MSNR,MSNP, MSPI, NR, PI, NT), I'd guess you are interested in the Datastream dataset. You are probably beter off using the DataStream WebServices (DSWS) API instead of the Eikon API. 
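On the credential-handling answer a little earlier, a sketch of the salting-and-hashing part using werkzeug.security, which already ships with Flask's dependencies; the plaintext password is for illustration only:

from werkzeug.security import check_password_hash, generate_password_hash

# Store only the salted hash, never the plaintext password.
pw_hash = generate_password_hash("correct horse battery staple")

# At login time, compare the submitted password against the stored hash.
assert check_password_hash(pw_hash, "correct horse battery staple")
assert not check_password_hash(pw_hash, "wrong guess")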
This will also relieve you off your 3k row limit.","Q_Score":0,"Tags":"python,refinitiv-eikon-api","A_Id":60455231,"CreationDate":"2020-01-14T14:08:00.000","Title":"Eikon API - ek.get_data for indices","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've set up a service account using the GCP UI for a specific project Project X. Within Project X there are 3 datasets: \nDataset 1\nDataset 2\nDataset 3\nIf I assign the role BigQuery Admin to Project X this is currently being inherited by all 3 datasets. \nCurrently all of these datasets inherit the permissions assigned to the service account at the project level. Is there any way to modify the permissions for the service account such that it only has access to specified datasets? e.g. allow access to Dataset 1 but not Dataset 2 or Dataset 3.\nIs this type of configuration possible? \nI've tried to add a condition in the UI but when I use the Name resource type and set the value equal to Dataset 1 I'm not able to access any of the datasets - presumably the value is not correct. Or a dataset is not a valid name resource.\nUPDATE\nAdding some more detail regarding what I'd already tried before posting, as well as some more detail on what I'm doing.\nFor my particular use case, I'm trying to perform SQL queries as well as modifying tables in BigQuery through the API (using Python).\nCase A:\nI create a service account with the role 'BigQuery Admin'.\nThis role is propagated to all datasets within the project - the property is inherited and I can not delete this service account role from any of the datasets.\nIn this case I'm able to query all datasets and tables using the Python API - as you'd expect.\nCase B:\nI create a service account with no default role.\nNo role is propagated and I can assign roles to specific datasets by clicking on the 'Share dataset' option in the UI to assign the 'BigQuery Admin' role to them.\nIn this case I'm not able to query any of the datasets or tables and get the following error if I try:\n*Forbidden: 403 POST https:\/\/bigquery.googleapis.com\/bq\/projects\/project-x\/jobs: Access Denied: Project X: User does not have bigquery.jobs.create permission in project Project X.*\nEven though the permissions required (bigquery.jobs.create in this case) exist for the dataset I want, I can't query the data as it appears that the bigquery.jobs.create permission is also required at a project level to use the API.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2089,"Q_Id":59736056,"Users Score":3,"Answer":"I'm posting the solution that I found to the problem in case it is useful to anyone else trying to accomplish the same.\nAssign the role \"BigQuery Job User\" at a project level in order to have the permission bigquery.jobs.create assigned to the service account for that project. \nYou can then manually assign specific datasets the role of \"BigQuery Data Editor\" in order to query them through the API in Python. Do this by clciking on \"Share dataset\" in the BigQuery UI. 
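As a rough sketch of how the project-level "BigQuery Job User" plus dataset-level "BigQuery Data Editor" setup described above behaves from Python, assuming the google-cloud-bigquery client; the project, dataset, table and key-file names below are placeholders.

```python
# Sketch: query BigQuery with a service account that has "BigQuery Job User"
# on the project and "BigQuery Data Editor" shared on individual datasets.
# Project, dataset, table and key-file names are placeholders.
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("service-account.json")

# Succeeds for a dataset that was explicitly shared with the service account.
query = "SELECT COUNT(*) AS n FROM `project-x.dataset_1.table_1`"
for row in client.query(query).result():
    print(row.n)

# The same query against a dataset that was not shared (e.g. dataset_3) should
# fail with the 403 "does not have permission to query table" error quoted above.
```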
So for this example, I've \"Shared\" Dataset 1 and Dataset 2 with the service account.\nYou should now be able to query the datasets for which you've assigned the BigQuery Data Editor role in Python.\nHowever, for Dataset 3, for which the \"BigQuery Data Editor\" role has not been assigned, if you attempt to query a table this should return the error:\nForbidden: 403 Access Denied: Table Project-x:dataset_1.table_1: User does not have permission to query table Project-x:dataset_1.table_1.\nAs described above, we now have sufficient permissions to access the project but not the table within Dataset 3 - by design.","Q_Score":4,"Tags":"python,google-cloud-platform,google-bigquery","A_Id":59753770,"CreationDate":"2020-01-14T14:43:00.000","Title":"Is it possible to limit a Google service account to specific BigQuery datasets within a project?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My application is database heavy (full of very complex queries and stored procedures), it would be too hard and inefficient to write these queries in a lambda way, for this reason I'll have to stick with raw SQL.\nSo far I found these 2 'micro' ORMs but none are compatible with MSSQL:\nPonyORM\nSupports: SQLite, PostgreSQL, MySQL and Oracle\nPeewee\nSupports: SQLite, PostgreSQL, MySQL and CockroachDB\nI know SQLAlchemy supports MSSQL, however it would bee too big for what I need.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":638,"Q_Id":59741950,"Users Score":1,"Answer":"As of today - Jan 2020 - it seems that using pyodbc is still the way to go for SQL Server + Python if you are not using Django or any other big frameworks.","Q_Score":2,"Tags":"sql-server,python-3.x","A_Id":59856715,"CreationDate":"2020-01-14T21:20:00.000","Title":"Python 3.X micro ORM compatible with SQL Server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to import JSON data from an API to MySQL database using python. I can get the json data in python script but no idea how to insert this data to a mysql database.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1725,"Q_Id":59746712,"Users Score":0,"Answer":"finally, did it. 
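Since the accepted answer above lands on pyodbc for SQL Server, here is a minimal sketch of using it for raw SQL and stored procedures; the connection details, table and procedure names are placeholders, not anything from the question.

```python
# Sketch: raw SQL and a stored-procedure call against SQL Server with pyodbc.
# Server, credentials, table and procedure names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
cursor = conn.cursor()

# Parameterised query: pyodbc uses '?' placeholders.
cursor.execute("SELECT TOP 10 id, name FROM dbo.customers WHERE name LIKE ?", "A%")
for row in cursor.fetchall():
    print(row.id, row.name)

# Complex logic can stay in stored procedures and be called via the ODBC escape.
cursor.execute("{CALL dbo.usp_monthly_report (?)}", "2020-01")
conn.commit()
conn.close()
```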
Saved the JSON file locally first, then parsed it using key values in a for loop & lastly ran a query to insert into MySQL table.","Q_Score":0,"Tags":"python,mysql,json","A_Id":60133751,"CreationDate":"2020-01-15T07:21:00.000","Title":"import json data from an api to mysql database in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The processes would be running 24*7 and will be re-started periodically (like once in a week).\nIn this case which is a better option :\n\nOpening a postgres connection per processes which will persist until the life of the process.\nOpening a postgres connection pool and sharing it among the processes.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":59766207,"Users Score":0,"Answer":"The main objective is that there are not too many PostgreSQL connections at the same time, otherwise the danger increases that toi many of them will become active at the same time, thereby overloading the database.\nSo as long as you have some 20 processes, you can keep things simple and have a persistent connection per process. With many processes, you need a connection pool.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":59766701,"CreationDate":"2020-01-16T09:08:00.000","Title":"I am executing n no. of processes where n is could be between 5 - 50. Each process is making multiple dml operations on postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using databases package in my fastapi app. databases has execute and fetch functions, when I tried to return column values after inserting or updating using execute, it returns only the first value, how to get all the values without using fetch..\nThis is my query\n\nINSERT INTO table (col1, col2, col3, col4)\n VALUES ( val1, val2, val3, val4 ) RETURNING col1, col2;","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":124,"Q_Id":59822148,"Users Score":0,"Answer":"I had trouble with this also, this was my query:\n\nINSERT INTO notes (text, completed) VALUES (:text, :completed) RETURNING notes.id, notes.text, notes.completed\n\nUsing database.execute(...) will only return the first column.\nBut.. using database.fetch_one(...) inserts the data and returns all the columns.\nHopes this helps","Q_Score":1,"Tags":"python,database,fastapi","A_Id":60306879,"CreationDate":"2020-01-20T11:17:00.000","Title":"Get whole row using database package execute function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using databases package in my fastapi app. 
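Following on from the connection-pooling advice above, this is a minimal per-process pool sketch with psycopg2; the built-in pool is shared between the threads of one process, while pooling across many processes is usually delegated to an external pooler such as pgbouncer. The DSN and pool bounds are placeholders.

```python
# Sketch: a small per-process connection pool with psycopg2.
# DSN and pool bounds are placeholders; threads within this process share it.
from psycopg2 import pool

db_pool = pool.ThreadedConnectionPool(
    minconn=1,
    maxconn=5,
    dsn="dbname=appdb user=appuser password=secret host=localhost",
)

def run_query(sql, params=None):
    conn = db_pool.getconn()
    try:
        with conn, conn.cursor() as cur:   # commits or rolls back automatically
            cur.execute(sql, params)
            return cur.fetchall()
    finally:
        db_pool.putconn(conn)              # return the connection to the pool

print(run_query("SELECT 1"))
```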
databases has execute and fetch functions, when I tried to return column values after inserting or updating using execute, it returns only the first value, how to get all the values without using fetch..\nThis is my query\n\nINSERT INTO table (col1, col2, col3, col4)\n VALUES ( val1, val2, val3, val4 ) RETURNING col1, col2;","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":124,"Q_Id":59822148,"Users Score":1,"Answer":"INSERT INTO table (col1, col2, col3, col4) VALUES ( val1, val2, val3, val4 ) RETURNING (col1, col2);\n\nyou can use this query to get all columns","Q_Score":1,"Tags":"python,database,fastapi","A_Id":59823349,"CreationDate":"2020-01-20T11:17:00.000","Title":"Get whole row using database package execute function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to create a column in a table that is autoupdated if one or more columns (possibly in another table) are updated, but it also should be possible to edit this column directly (and value should be kept in sql unless said other cols are updated, in which case first logic is applied)\nI tried column_property but it seems that its merely a construction inside python and doesnt represent an actual column\nI also tried hybrid_property and default, both didnt accomplish this\nThis looks like insert\/update trigger, however i want to know \"elegant\" way to declare it if its even possible\nI use declarative style for tables on postgres\nI dont make any updates to sql outside of sqlalchemy","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":59837697,"Users Score":0,"Answer":"Definitely looks like insert\/update triggers. But if I were you, I would incapsulate this logic in python by using 2 queries , so it will be more clear","Q_Score":0,"Tags":"python,sql,python-3.x,postgresql,sqlalchemy","A_Id":59838713,"CreationDate":"2020-01-21T09:17:00.000","Title":"sqlalchemy create a column that is autoupdated depending on other columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I keep getting the following error when I try to execute the cell in a Jupyter notebook on VSCode\ncon = cx_Oracle.connect(\"\/@DB\")\nDatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: \"libclntsh.so: cannot open shared object file: No such file or directory\". 
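The two answers to the `databases`/FastAPI question above boil down to the same workaround: keep the RETURNING clause but run the statement through fetch_one() so the whole row comes back. A minimal sketch, with the notes table borrowed from the answer and the connection URL invented:

```python
# Sketch: INSERT ... RETURNING with the `databases` package, using fetch_one()
# so the full returned row is available. The connection URL is a placeholder.
import asyncio
import databases

database = databases.Database("postgresql://appuser:secret@localhost/appdb")

async def create_note(text: str, completed: bool) -> dict:
    query = (
        "INSERT INTO notes (text, completed) "
        "VALUES (:text, :completed) "
        "RETURNING id, text, completed"
    )
    # fetch_one() executes the INSERT and returns the whole RETURNING row.
    row = await database.fetch_one(query=query,
                                   values={"text": text, "completed": completed})
    return dict(row)

async def main():
    await database.connect()
    print(await create_note("write docs", False))
    await database.disconnect()

asyncio.run(main())
```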
See https:\/\/oracle.github.io\/odpi\/doc\/installation.html#linux for help\nBut the same works fine when I run it in Jupyter Lab.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":456,"Q_Id":59886075,"Users Score":0,"Answer":"Sometimes stuff just doesn't work in some IDEs, i am not sure what your question is, i suggest using anaconda (and maybe spyder specifically) it has worked for me with everything so far, but i don't know about cx_Oracle.","Q_Score":1,"Tags":"python-3.x,visual-studio-code,jupyter-notebook,vscode-settings","A_Id":59886122,"CreationDate":"2020-01-23T19:49:00.000","Title":"Unable to connect using cx_Oracle in a Jupyter notebook on VSCode","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to make 2 business-like reports about media. Therefore I have to analyze data and give colleagues an excel file with multiple custom formatted tables. \nIs there a way to make custom formatted tables in R or python and export them to excel? \nThis way I can automate formatting the 3000+ tables :) \nThanks in advance!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":178,"Q_Id":59931829,"Users Score":0,"Answer":"Have you tried using pandas ? In python.","Q_Score":0,"Tags":"python,r,excel,python-3.x","A_Id":59931906,"CreationDate":"2020-01-27T13:09:00.000","Title":"How to export custom tables from R\/Python to Excel?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I want to upload a temporary table, populate it using a csv file and then run a bunch of other queries with the same connection. Currently I'm uploading a normal table, doing my queries and then dropping it. But I want to make it temporary to avoid confusion and to avoid large amounts of data being left in the db if the code stops for some reason (exception\/debugging etc.) before it gets a chance to drop the table. I'm doing all this in python using psycopg2. \nFirstly, I've assumed the temporary table will hang around as long as the connection is alive. Is this true? But more importantly, does a psycopg2 db connection ever automatically handle a momentary connection drop out by reestablishing a connection? The queries I'm running are very time consuming so I worry that this could happen. In which case is there some way of knowing when the connection refreshes so I can reupload the temporary table?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":434,"Q_Id":59947300,"Users Score":1,"Answer":"does a psycopg2 db connection ever automatically handle a momentary connection drop out by reestablishing a connection?\n\nDo you mean does it get impatient, kill a live but \"stalled\" (e.g. network congestion) connection, and replace it with a new one? No. You could probably write code to do that if you wanted (but why would you?) 
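On the question about exporting thousands of custom-formatted tables, the pandas suggestion can be fleshed out with the xlsxwriter engine, which exposes cell formats. A rough sketch; the sheet names, columns and colours are invented.

```python
# Sketch: one DataFrame per sheet, with custom header formatting applied
# through the xlsxwriter engine. Sheet names, columns and colours are invented.
import pandas as pd

tables = {
    "media_2019": pd.DataFrame({"outlet": ["A", "B"], "reach": [120, 90]}),
    "media_2020": pd.DataFrame({"outlet": ["A", "B"], "reach": [150, 70]}),
}

with pd.ExcelWriter("report.xlsx", engine="xlsxwriter") as writer:
    header_fmt = writer.book.add_format({"bold": True, "bg_color": "#DDEBF7"})
    for sheet_name, df in tables.items():
        df.to_excel(writer, sheet_name=sheet_name, index=False)
        worksheet = writer.sheets[sheet_name]
        for col_idx, col_name in enumerate(df.columns):
            worksheet.write(0, col_idx, col_name, header_fmt)  # styled header
            worksheet.set_column(col_idx, col_idx, 15)         # column width
```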
but psycopg2 itself won't do that.","Q_Score":1,"Tags":"python,database,postgresql,psycopg2,temp-tables","A_Id":59953360,"CreationDate":"2020-01-28T11:04:00.000","Title":"Do I need to recreate a temp table if a postgres connection refreshes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see that there's a built-in I\/O connector for BigQuery, but a lot of our data is stored in Snowflake. Is there a workaround for connecting to Snowflake? The only thing I can think of doing is to use sqlalchemy to run the query and then dump the output to Cloud Storage Buckets, and then Apache-Beam can get the input data from the files stored in the Bucket.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":924,"Q_Id":59960706,"Users Score":1,"Answer":"Google Cloud Support here!\nThere's no direct connector from Snowflake to Cloud Dataflow, but one workaround would be what you've mentioned. First dump the output to Cloud Storage, and then connect Cloud Storage to Cloud Dataflow.\nI hope that helps.","Q_Score":3,"Tags":"python,google-cloud-dataflow,pipeline,apache-beam,snowflake-cloud-data-platform","A_Id":59965614,"CreationDate":"2020-01-29T04:59:00.000","Title":"Python: How to Connect to Snowflake Using Apache Beam?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm aware that we can manage db transaction to maintain transaction.atomic(), it works really well with SQL, just wanted to understand if I use mongoengine as ODM then will it work or if not\nwhat option do I have to maintain atomicity?\nAny help will be useful","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":287,"Q_Id":59986210,"Users Score":0,"Answer":"django's atomic feature only applies to django's ORM which applies to sql databases. MongoEngine currently has no support for transactions and to my knowledge no python ORM currently supports them.\nIf atomicity is a hard requirement and you need to use mongoDB, I guess you need to go with the underlying driver pymongo","Q_Score":0,"Tags":"django,python-3.x,mongoengine","A_Id":60012311,"CreationDate":"2020-01-30T12:36:00.000","Title":"Does transaction.atomic works with mongoengine as well","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I stored the smart contract instance which written in Solidity inside the MySQL database. \ncontract_instance = eth_provider.contract(\n abi=contract_abi,\n address=contract_address,\n ContractFactoryClass=ConciseContract)\nThe value stored is liked Ex: web3.contract.ConciseContract object at 0x00000187148C9F98\nWhen I retrieve the value in Python flask and access the smart contract function, the error shown AttributeError: 'str' object has no attribute 'getCustomerList'\n. \nHow to convert the value from str back to smart contract instance?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":108,"Q_Id":59991820,"Users Score":0,"Answer":"Only contract_address are different in all the contracts. 
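The Snowflake-to-Dataflow workaround described above (export to Cloud Storage, then read the file in the pipeline) could look roughly like this; the account details, bucket name and query are all placeholders, and credentials handling is omitted.

```python
# Sketch of the workaround: dump a Snowflake query to CSV, push it to GCS,
# then read it from the Beam pipeline. All names/credentials are placeholders.
import csv
import snowflake.connector
import apache_beam as beam
from google.cloud import storage

# 1) Export the query result to a local CSV file.
conn = snowflake.connector.connect(account="my_account", user="my_user",
                                    password="secret", warehouse="wh",
                                    database="db", schema="public")
with conn.cursor() as cur, open("sales.csv", "w", newline="") as fh:
    cur.execute("SELECT id, amount FROM sales")
    csv.writer(fh).writerows(cur.fetchall())

# 2) Upload the file to the bucket the pipeline will read from.
storage.Client().bucket("my-bucket").blob("exports/sales.csv") \
    .upload_from_filename("sales.csv")

# 3) Consume it in Apache Beam via the Cloud Storage path.
with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/exports/sales.csv")
     | "Print" >> beam.Map(print))
```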
Store the address only and compiled the contract each time the user needs to use it","Q_Score":0,"Tags":"python,flask,solidity","A_Id":60219499,"CreationDate":"2020-01-30T17:54:00.000","Title":"I had stored a smart contract instance in MySQL as string. How to convert it back to solidity instance?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently rendering an simple HTML table using Flask\/Pandas Dataframe (to_html) using rows from a table in an SQLite3 DB. \nHow do I add checkboxes to the table?\nAfter selections are made dump all the values from the Name column to a text file?\nExample:\n\nID Name Manager \n1 server1 manager1\n2 server2 manager2\n\nAny help would be appreciated.\nP.S. This is my first question on Stack overflow. If you need more information please let me know.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":451,"Q_Id":60023407,"Users Score":0,"Answer":"I would do it by workaround here.\n\nCreate html combo box in template.html\nPopulate combobox with values from database in routes.py\nAlso in routes.py read combobox selected value and according to that value change sql request to database - e.g. value 1 - sql query without where condition else add where condition name = (in) selected value(s)\nWhen you send request you can in routes.py choose if you want render_template with different data or if you submit to another route which will export you data to text file (you can stay in pandas and use to_csv)\n\nAlso your question contain multiple problems. It would be better to be more specific.\nIf you would like only to add checkboxes as another column in table then you can, do the filtering by javascript but to pass the selection to flask route by javascript is another much harder thing. Also I would do selection on distinct values rather than all rows values.","Q_Score":5,"Tags":"python,html,pandas,sqlite,flask","A_Id":72276049,"CreationDate":"2020-02-02T05:11:00.000","Title":"How to add checkboxes to a Pandas Dataframe table using Flask?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have been using Connect To DataBase Keyword in Robot Framework. Is it possible to add a custom parameter to the keyword?\nFor ex : i want to add dictionary=true to cursor instance below is the keyword that i have used -\n\nEMPDB.Connect To Database pymysql ${EMPNAME} ${EMPUSER} ${PWD} ${HOST} ${PORT}\n\nTo Above Keyword statement can i use dictionary=true? So that when i select query the result i want is along with the column and the values.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":100,"Q_Id":60034232,"Users Score":2,"Answer":"That's is not possible using the library as is, so you need to implement that feature yourself. 
Usually DB queries return lists and they do not include column names.","Q_Score":0,"Tags":"python,robotframework","A_Id":60034844,"CreationDate":"2020-02-03T06:40:00.000","Title":"How to pass a custom parameter to Connect To DataBase Keyword in Robot Framework","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a multi-sheet excel file saved in a .xlsb format that I wish to covert to .xlsx to utilize the openpyxl library - code already written to support the same workbook that used to be .xlsx until macro-enabled, and wouldn't save in .xlsm.\nI have managed to convert from .xlsb to .csv, but cannot convert any further and have hit roadblocks with various libraries due to various formatting errors.\nAs my file has multiple sheets (all tables) I only need to copy and paste the text on every sheet (keeping the sheet names) and get it to a .xlsx format.\nFor simplicity sake, imagine all I need to do is: get sheet names, access a sheet, determine max row\/column, loop: copy and paste cell values, write to .xlsx with sheet name. With the starting file being .xlsb.\nAny suggestion would be much appreciated.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10994,"Q_Id":60036199,"Users Score":0,"Answer":"I got copy code to test run,but that return error,above error .\nValueError Traceback (most recent call last)\n in ()\n----> 1 df = pd.read_excel(r'C:\\Users\\l84193928\\Desktop\\test.xlsb', engine='pyxlsb')\nD:\\Users\\l84193928\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\pandas\\util_decorators.py in wrapper(*args, **kwargs)\n176 else:\n177 kwargs[new_arg_name] = new_arg_value\n--> 178 return func(*args, **kwargs)\n179 return wrapper\n180 return _deprecate_kwarg\nD:\\Users\\l84193928\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\pandas\\util_decorators.py in wrapper(*args, **kwargs)\n176 else:\n177 kwargs[new_arg_name] = new_arg_value\n--> 178 return func(*args, **kwargs)\n179 return wrapper\n180 return _deprecate_kwarg\nD:\\Users\\l84193928\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\pandas\\io\\excel.py in read_excel(io, sheet_name, header, names, index_col, usecols, squeeze, dtype, engine, converters, true_values, false_values, skiprows, nrows, na_values, parse_dates, date_parser, thousands, comment, skipfooter, convert_float, **kwds)\n305\n306 if not isinstance(io, ExcelFile):\n--> 307 io = ExcelFile(io, engine=engine)\n308\n309 return io.parse(\nD:\\Users\\l84193928\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\pandas\\io\\excel.py in init(self, io, **kwds)\n367\n368 if engine is not None and engine != 'xlrd':\n--> 369 raise ValueError(\"Unknown engine: {engine}\".format(engine=engine))\n370\n371 # If io is a url, want to keep the data as bytes so can't pass\nValueError: Unknown engine: pyxlsb","Q_Score":2,"Tags":"python","A_Id":69023706,"CreationDate":"2020-02-03T09:18:00.000","Title":"Convert .xlsb to .xlsx - Multi-sheet Microsoft Excel File","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to remove pymssql and migrate to pyodbc on a python 3.6 project that I'm currently on. 
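For the .xlsb-to-.xlsx conversion above, the "Unknown engine: pyxlsb" traceback comes from a pandas version that predates pyxlsb support; with pandas 1.0+ plus the pyxlsb and openpyxl packages installed, a sheet-preserving conversion can be sketched like this (file names are placeholders).

```python
# Sketch: convert every sheet of an .xlsb workbook to .xlsx, keeping the sheet
# names. Requires pandas >= 1.0 plus the pyxlsb and openpyxl packages.
import pandas as pd

# sheet_name=None returns a dict of {sheet name: DataFrame}.
sheets = pd.read_excel("workbook.xlsb", sheet_name=None, engine="pyxlsb")

with pd.ExcelWriter("workbook.xlsx", engine="openpyxl") as writer:
    for name, df in sheets.items():
        df.to_excel(writer, sheet_name=name, index=False)
```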
The network topology involves two machines that are both on the same LAN and same subnet. The client is an ARM debian based machine and the server is a windows box. Port 1433 is closed on the MSSQL box but port 32001 is open and pymssql is still able to remotely connect to the server as it somehow falls back to using the named pipe port (32001). \nMy question is how is pymssql able to fall back onto this other port and communicate with the server? pyodbc is unable to do this as if I try using port 1433 it fails and doesn't try to locate the named pipe port. I've tried digging through the pymssql source code to see how it works but all I see is a call to dbopen which ends up in freetds library land. Also just to clarify, tsql -LH returns the named pip information and open port which falls in line with what I've seen using netstat and nmap. I'm 100% sure pymssql falls back to using the named pipe port as the connection to the named pipe port is established after connecting with pymssql.\nAny insight or guidance as to how pymssql can do this but pyodbc can't would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":113,"Q_Id":60065942,"Users Score":1,"Answer":"Removing the PORT= parameter and using the SERVER=ip\\instance in the connection string uses the named pipes to do the connection instead of port 1433. I'm still not sure how the driver itself knows to do this but it works and resolved my problem.","Q_Score":1,"Tags":"python,port,pyodbc,pymssql","A_Id":60189562,"CreationDate":"2020-02-04T21:42:00.000","Title":"How does the pymssql library fall back on the named pipe port when port 1433 is closed?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm studying BCP for export large amount of data (one time for initial and plan to do it in day-to-day job).\nSource data are in SQL Server tables, which consist of some small tables to larger ones (10M+ rows). Destination is in another machine (export to files).\nCurrently, I'm implementing it using python subprocess. \nBy using BCP command without specified batchsize (queryout, -U, -P, -S, -c).\nAnd the query is super straightforward (SELECT FROM ). Maybe adding WHERE dates in day-to-day job.\nI have tried with 100k data, it took around 2 minutes. However, I haven't tried with 10M+ data, due to my company's restriction to use production data in development environment. Also, I couldn't insert any data into the source SQL Server (only read access).\nCould anyone please suggest that is there any ways to optimize the BCP export process?\nMy understanding is that it should be able to make it better, since I did it in very straightforward way.\nThank you so much.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":428,"Q_Id":60068714,"Users Score":2,"Answer":"If you are moving data from one SQL Server to another MS SQL Server, then using the -N option to copy data to your files in native format will help reduce time to convert data types to text.\nUsing the -a option to specify network packet size. I cannot suggest a proper value here as this will depend on your network (is the file going to disk that is distributed from the server? if so, then try some different values here... if not, don't bother... 
not network involved).\nUse the -b option when importing data into your destination. I cannot suggest a proper value here as this will depend on your system architecture, but play with this value in testing to get a refined value. This does not work with export.\nWhen exporting a large table, thread out the copy to multiple files. Hopefully your large tables have a numeric key or some numeric value that has a high selectivity. This value can be used to partition your data into 10 or 100 threads. This will allow you to execute multiple bcp commands at the same time pulling from the same table. Use the \"queryout\" option and a command like:\n\"select * from db.dbo.mytable where key % 10 = 0\"\nto get 1\/10th of the data and:\n\"select * from db.dbo.mytable where key % 10 = 1\"\nto get the next or another 1\/10th of the data.\nExecute as many at the same time as your source server can withstand. This is great for speeding up a copy out, but be careful on loading into the destination. You wont be able to run as many together. This will likely be your biggest gain in performance. To get as many BCP commands running as your source server can withstand.","Q_Score":0,"Tags":"python,sql,sql-server,bcp,sqlbulkcopy","A_Id":60081589,"CreationDate":"2020-02-05T03:41:00.000","Title":"Any suggestion for optimizing BCP export from SQL Server using Python Subprocess","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How many rows I can insert in single time with execute_values command in Postgres database? I am using latest PostgreSQL (version 12) and executing this command in python.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":381,"Q_Id":60074813,"Users Score":0,"Answer":"It probably is limited by the amount of RAM. But rather than creating huge statements, use COPY and stream the data to the server.","Q_Score":0,"Tags":"python,python-3.x,database,postgresql","A_Id":60074908,"CreationDate":"2020-02-05T11:29:00.000","Title":"Maximum row insert limit | execute_values Command | Postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module.\n Did you install mysqlclient?\n\nGetting this error even installed the mysqlclient\npython version: Python 3.6.9","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":60092294,"Users Score":0,"Answer":"You have to install mysqlclient in the virtual environment using pip install mysqlclient.","Q_Score":1,"Tags":"python,django","A_Id":60092363,"CreationDate":"2020-02-06T10:02:00.000","Title":"Database error in django while running local sever?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a large macro enabled workbook that processes data and exports it in a usable format. Some of the data that it processes can be large and take a while to run through the workbook. I'd like to be able to open this workbook multiple times to process multiple data sets at once. 
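Pulling together the bcp switches discussed in that answer (-N, -a, and partitioned queryout queries), a rough way to drive several parallel exports from Python could look like the following; the server name, credentials, table and key column are placeholders.

```python
# Sketch: run partitioned bcp exports in parallel, mirroring the advice above
# (key % N buckets, native format, larger packet size). Names are placeholders.
import subprocess

SERVER, USER, PWD = "sqlprod01", "bcp_user", "secret"
PARTITIONS = 10

procs = []
for i in range(PARTITIONS):
    query = f"SELECT * FROM db.dbo.mytable WHERE key_col % {PARTITIONS} = {i}"
    cmd = [
        "bcp", query, "queryout", f"mytable_{i}.dat",
        "-S", SERVER, "-U", USER, "-P", PWD,
        "-N",           # native format (skips text conversion between SQL Servers)
        "-a", "16384",  # network packet size; tune per environment
    ]
    procs.append(subprocess.Popen(cmd))

# Wait for every partition and report failures.
for i, proc in enumerate(procs):
    if proc.wait() != 0:
        print(f"partition {i} failed with exit code {proc.returncode}")
```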
Is there any way that this can be done? \nI am using a python 3 app that I developed to manage the books and am more than open to using other languages and software.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":165,"Q_Id":60116133,"Users Score":1,"Answer":"Think about your problem here.\nAssuming the workbook that processes the data is essentially a model, with input and output, what you could simply do is make copies of that workbook ahead of i\/o model run.\nSo you've gout 1-10,000 rows of data for Model1.xlsm 10,001-20,000 rows of data for Model2.xlsm.\nObviously this is hackneyed, but the fact that you have to do this with Excel and not just use python for calculation means this is probably the easiest way to overcome the problem.","Q_Score":0,"Tags":"python,excel,vba","A_Id":60116365,"CreationDate":"2020-02-07T15:11:00.000","Title":"Any way to run multiple instances of the same excel workbook with macros?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database that is made of (3862900,19), each column is a different parameter and includes outliers, is it possible to detect outliers in each column simultaneously, or do I have to repeat it 19 times for each column?\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":60132600,"Users Score":1,"Answer":"Yes, It is possible to detect outliers in each column simultaneously","Q_Score":0,"Tags":"python,pandas,jupyter-notebook,random-forest,outliers","A_Id":60132922,"CreationDate":"2020-02-09T00:48:00.000","Title":"Isolation Forest large dataset","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to DB\/postgres SQL.\nScenario: \nNeed to load an csv file into postgres DB. This CSV data needs to loaded into multiple tables according DB schema. I'm looking for better design using python script.\nMy thought:\n1. Load CSV file to intermediate table in postgres\n2. Write a trigger on intermediate table to insert data into multiple tables on event of insert\n3. Trigger includes truncate data at end\nAny suggestions for better design\/other ways without any ETL tools, and also any info on modules in Python 3.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":743,"Q_Id":60143750,"Users Score":1,"Answer":"Rather than using a trigger, use an explicit INSERT or UPDATE statement. 
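A sketch of the explicit-statement approach suggested in that answer: COPY the CSV into a temporary staging table, then fan it out with INSERT ... SELECT inside one transaction instead of a per-row trigger. Table and column names are invented, and the ON CONFLICT clause assumes a unique constraint on customers.name.

```python
# Sketch: CSV -> temp staging table -> explicit INSERT ... SELECT statements.
# Table/column names are invented; ON CONFLICT assumes a unique name constraint.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=appuser password=secret host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("CREATE TEMP TABLE staging "
                "(cust_name text, order_no int, amount numeric)")
    with open("orders.csv") as fh:
        cur.copy_expert("COPY staging FROM STDIN WITH (FORMAT csv, HEADER true)", fh)

    # Explicit statements instead of a trigger on the staging table.
    cur.execute("""
        INSERT INTO customers (name)
        SELECT DISTINCT cust_name FROM staging
        ON CONFLICT (name) DO NOTHING
    """)
    cur.execute("""
        INSERT INTO orders (customer_id, order_no, amount)
        SELECT c.id, s.order_no, s.amount
        FROM staging s JOIN customers c ON c.name = s.cust_name
    """)
# The temporary table disappears with the session, so nothing is left behind.
```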
That is probably faster, since it is not invoked per row.\nApart from that, your procedure is fine.","Q_Score":0,"Tags":"sql,python-3.x,postgresql,database-design,triggers","A_Id":60145699,"CreationDate":"2020-02-10T03:33:00.000","Title":"How to load csv file to multiple tables in postgres (mainly concerned about best practice)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Flask-SQLAlchmey app running in Gunicorn connected to a PostgreSQL database, and I'm having trouble finding out what the pool_size value should be and how many database connections I should expect.\nThis is my understanding of how things work:\n\nProcesses in Python 3.7 DON'T share memory\nEach Gunicorn worker is it's own process\nTherefore, each Gunicorn worker will get it's own copy of the database connection pool and it won't be shared with any other worker\nThreads in Python DO share memory\nTherefore, any threads within a Gunicorn worker WILL share a database connection pool\n\nIs that correct so far? If that is correct, then for a synchronous Flask app running in Gunicorn: \n\nIs the maximum number of database connections = (number of workers) * (number of threads per worker)?\nAnd within a worker, will it ever use more connections from a pool than there are workers?\n\nIs there a reason why pool_size should be larger than the number of threads? So, for a gunicorn app launched with gunicorn --workers=5 --threads=2 main:app should pool_size be 2? And if I am only using workers, and not using threads, is there any reason to have a pool_size greater than 1?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":4431,"Q_Id":60233495,"Users Score":5,"Answer":"Adding my 2 cents. Your understanding is correct but some thoughts to consider:\n\nin case your application is IO bound (e.g. talking to the database) you really want to have more than 1 thread. Otherwise your CPU wont ever reach 100% of utilization. You need to experiment with number of threads to get the right amout, usually with load test tool and comparing requests per second and CPU utilization.\nHaving in mind the relation between number of workers and connections, you can see that when changing the number of workers, you will need to adjust the max pool size. This can be easy to forget, so maybe a good idea is to set the pool size a little above the number of workers e.g. twice of that number.\npostgresql creates a process per connection and might not scale well, when you will have lots of gunicorn processes. I would go with some connection pool that sits between your app and the database (pgbouncer being the most popular I guess).","Q_Score":19,"Tags":"python,database,sqlalchemy,flask-sqlalchemy,gunicorn","A_Id":60371314,"CreationDate":"2020-02-14T21:03:00.000","Title":"Choosing DB pool_size for a Flask-SQLAlchemy app running on Gunicorn","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use Flask-SQLAlchemy with Celery. 
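For the pool-sizing question above, a minimal sketch of the corresponding Flask-SQLAlchemy settings for a worker started with gunicorn --workers=5 --threads=2; it assumes Flask-SQLAlchemy 2.4+ (for SQLALCHEMY_ENGINE_OPTIONS), and the numbers are illustrative rather than recommendations.

```python
# Sketch: per-worker engine/pool options sized around 2 threads per worker.
# Values are illustrative; assumes Flask-SQLAlchemy 2.4+ and a placeholder DSN.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://appuser:secret@localhost/appdb"
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
    "pool_size": 2,         # roughly one connection per thread in this worker
    "max_overflow": 2,      # small headroom for bursts
    "pool_pre_ping": True,  # detect connections the server has dropped
    "pool_recycle": 1800,   # recycle before server-side idle timeouts
}
db = SQLAlchemy(app)
```

Total connections then approach workers * (pool_size + max_overflow) per application instance, which is the number to compare against the PostgreSQL connection limit or a pgbouncer pool.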
The two play poorly together if the Celery task takes a long time, as when it is done when the commit occurs, the MySQL connection will have timed out and \"gone away\".\nIs it possible to make changes to a SQLAlchemy object, attempt a commit, and when that fails, open a new session, attach the objects to the new session, and commit them? If so, how? What kind of SQLAlchemy function can do this? Or now that the commit failed as the session is gone, are the SQLAlchemy objects invalidated and all the work on them must be done again?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":88,"Q_Id":60291093,"Users Score":0,"Answer":"The answer is merge. merge is what can be used to attach objects to different sessions.","Q_Score":0,"Tags":"python,sqlalchemy,celery,flask-sqlalchemy","A_Id":60444294,"CreationDate":"2020-02-18T23:53:00.000","Title":"How to recover from a failed SQLAlchemy commit?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Multiple issues arise when I try to run tests in parallel.\nAccording to the docs, \"test_\" is prepended to the database name specified in DATABASES. I used the name \"postgres\", so the database created when running tests is called test_postgres. When running tests in parallel, the following databases are created (which is expected): test_postgres_1, test_postgres_2, test_postgres_3, and test_postgres_4. When running all tests with the --parallel=4 option, however, every test fails with the following message: django.db.utils.OperationalError: FATAL: database \"postgres_x\" does not exist where x can be 1, 2, 3 or 4. I can see that the following databases have been created: test_postgres_x where x can be 1, 2, 3 or 4. Where's \"postgres_x\" coming from? Why isn't \"test_\" being prepended to these?\nFurthermore, if I manually create the expected databases postgres_x (x = 1 to 4), the migrations applied to the \"main\" database aren't applied to the clones. This results in errors like this: django.db.utils.ProgrammingError: relation \"users_user\" does not exist. Roughly 1\/4 tests pass when using 4 cores.\nLastly, if I try to migrate postgres_x by using migrate --database=postgres_x, I get: django.db.utils.ConnectionDoesNotExist: The connection postgres_x doesn't exist.\nI have ensured that all tests are isolated just so I can run them in parallel. What am I supposed to do?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":881,"Q_Id":60291234,"Users Score":1,"Answer":"Instead of building your test harness yourself I suggest using pytest and pytest-django and pytest-xdist this will handle the db creation and migration for each parallel worker. 
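To make the one-word "merge" answer above a little more concrete, here is a rough sketch of re-attaching a detached object to a new session; the model, engine URL and SQLite backend are invented for the demo (SQLAlchemy 1.4-style API), and real recovery code may also need to guard against attribute expiry after a failed commit.

```python
# Sketch: merge() re-attaches an object from a dead/closed session to a new
# one so its pending changes can be committed there. Model and URL are demos.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Job(Base):
    __tablename__ = "jobs"
    id = Column(Integer, primary_key=True)
    status = Column(String(32))

engine = create_engine("sqlite:///merge_demo.db", future=True)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# First session loads and modifies the object (stands in for the Celery task).
s1 = Session()
s1.merge(Job(id=1, status="running"))   # insert-or-update the demo row
s1.commit()
job = s1.get(Job, 1)
job.status = "done"
s1.close()                  # pretend this session's connection has gone away

# Second session: merge() copies the object's state in, then commit succeeds.
s2 = Session()
merged = s2.merge(job)
s2.commit()
print(merged.status)        # "done"
```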
(pytest can run Django UnitTest tests without modification)","Q_Score":3,"Tags":"python,django,django-rest-framework,postgresql-11,django-tests","A_Id":60291722,"CreationDate":"2020-02-19T00:11:00.000","Title":"Errors when running tests in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I im trying to insert data read from a raspberry pi into a database hosted on another raspberry pi, i used mysql as database and my code is writtin in python on the \"client\" pi, this operation is all done on local network. \ni did all the config in order to connect as \"RaspberryPi\" user that i created and granted all permissions on the specific database and table on ip: 192.168.0.20 which is the client pi, i created and granted that user from root user of mysql which i granted all permission just before in case it needed to.\nmy mysql server is at 192.168.0.14. when i run my python program it shows this error: Failed to insert record into HumiditySensor table 2003: Can't connect to MySQL server on '192.168.0.14:3306' (111 Connection refused)\nThe thing is that i used all the correct infos regarding host,database,user,password in my mysql.connector.connect() \nI veryfied if the server was using the right port to communicate and it was port 3306 which is what i expected.\nI saw online that the problem might be caused by tcp\/ip skipping, i looked at my my.cnf file and all i have is: \n[client-server]\n!includedir \/etc\/mysql\/conf.d\/\n!includedir \/etc\/mysql\/mariadb.conf.d\/\nThe rest is commented.\ni couldnt see bind-address nor tcp\/ip skipping so i dont believe it's because of an ip binding or wtv \nI also looked if my mysql server was running by looking if the mysql.sock file was in \/var\/run\/mysqld folder and it was... \ni did this command to see if the grant permission worked on my RaspberrPi user by typing: \nSELECT * from information_schema.user_privileges where grantee like \"'RaspberryPi'%\"; \nin mysql shell on host raspberry pi and it showed me in the \"IS_GRANTABLE\" section that everything was at \"YES\" instead of \"NO\" which means that this user has all permissions. \nI've been trying to solve this for days i really wish someone can help me on this, thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":759,"Q_Id":60359505,"Users Score":0,"Answer":"thanks for the answer! But i found it! if anybody encounters the same error i had, to solve that problem, you need to modify the \/etc\/mysql\/mariadb.conf.d\/50-server.cnf file since today when you install mysql, it install mariadb instead and i guess the config files are different. then once you get into 50-server.cnf you just need to comment \"bind-address = 127.0.0.1\" and it will now listen to other IPs requests.","Q_Score":0,"Tags":"python,mysql,raspberry-pi","A_Id":60367326,"CreationDate":"2020-02-23T05:36:00.000","Title":"2003: Can't connect to MySQL server on '192.168.0.14:3306' (111 Connection refused)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Django Project and want to connect to SQL Server 2019. But I have a problem when doing migration. 
Here is the error message:\n\ndjango.db.utils.NotSupportedError: SQL Server v15 is not supported.\n\nI'm using Django 2.1.15 and Microsoft SQL Server 2019 (RTM) - 15.0.2000.5\nDoes it mean this Django version cannot use SQL Server 2019?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3906,"Q_Id":60364005,"Users Score":1,"Answer":"I just created an account to answer this question. I lost a lot of time before I could fix it.\nTo work with SQL Server 2019 I needed to update Django to the current version, 3.x, and install django-mssql-backend, django-pyodbc and pyodbc.\nIn your DATABASES definition, add this option to make it work:\n'OPTIONS': {\n 'driver': 'ODBC Driver 17 for SQL Server',\n },","Q_Score":2,"Tags":"python,sql,sql-server,django","A_Id":60697994,"CreationDate":"2020-02-23T15:50:00.000","Title":"Django Cannot Connect to SQL Server 2019","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a problem: I have an Excel file (.xlsx) and this file has some buttons in it to help change the language, and a button that makes a report based on the data.\nThe problem is... if I write something in the file and then save it with openpyxl, the file will lose those buttons and look like a normal Excel file.\nWhat can I use to save that file with the same format?\nI installed an add-in to see those buttons.\nWhat can I do?\nEDIT: I tried to save it as .xlsm but it doesn't open if I do that","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":115,"Q_Id":60390029,"Users Score":0,"Answer":".xlsx does not support macros. Save it as .xlsm instead.","Q_Score":1,"Tags":"python,excel","A_Id":60390105,"CreationDate":"2020-02-25T08:06:00.000","Title":"Saving Excel file with Python (Openpyxl)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to access the database configured in the settings via a shell command in Flask. Is there any equivalent command in Flask to python manage.py dbshell in Django?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":361,"Q_Id":60405637,"Users Score":0,"Answer":"No, there is no such command for Flask, but you can access your db with normal db clients.","Q_Score":0,"Tags":"python,database,flask","A_Id":60406419,"CreationDate":"2020-02-26T01:45:00.000","Title":"Shell command to access db in Flask","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a SQL query which contains date formatting on one of the columns, as shown below:\n%Y-%m-%d %h:%m:%s\nHere %d and %s are creating a problem since they are used for formatting in Python, just like in C.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17,"Q_Id":60420752,"Users Score":1,"Answer":"If this is a format string (used on the LHS of a %), then use %% to have a format that \"expands\" to a single %. 
In your case that would be %%Y-%%m-%%d %%h:%%m:%%s","Q_Score":0,"Tags":"python-3.x","A_Id":60421115,"CreationDate":"2020-02-26T19:00:00.000","Title":"How do I use %s and %d as a string literal rather than formatter in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I come from .NET world.\nIn python code I have database connection string. Is there any way to keep these connection strings encrypted within python code?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":877,"Q_Id":60431929,"Users Score":1,"Answer":"Securing the DB configuration or any sensitive configurations can be done by keeping configs encrypted. In python you can do it using packages like secureconfig or encrypted-config or DIY with some standard encryption techniques. \nKeeping the encryption keys outside is the next challenge, which you can tackle by making it a injected config via environmental variables or a command-line-parameter.","Q_Score":2,"Tags":"python,python-3.x,python-cryptography","A_Id":60447102,"CreationDate":"2020-02-27T11:10:00.000","Title":"Encrypting db connection string","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 20 symbols for which I need to record tick data continuously for 6 hours a day every week.\nSo I want 20 excel files to be created automatically by the module (if files don't exist) and a excel writer which stores tick data (row by row). Then I need to resample the data to 5 minutes timeframe after reading them through dataframe. Dataframe should be able to read the tick data created by the module.\nWhat best excel writer can be used for this function. I want to write to the files when they are closed.\nwhich of them will work better?\n\nIn built open function\nOpenpyxl\nXlwt","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":60462134,"Users Score":0,"Answer":"hello i would recommed you xlwings. as it is the best module to stream the tick data to excel when file is opened.","Q_Score":0,"Tags":"python","A_Id":60650847,"CreationDate":"2020-02-29T05:02:00.000","Title":"How to create excel files and then read the files through dataframe?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a workbook sheet that shows the first 3 rows with data when opened with LibreOffice Calc. If I use conditional formatting to set cell background color to red if a cell is blank, all cells in rows 4 and following show red. When I read the spreadsheet with Pandas, I get 20 rows with rows 4 through 20 all blank. When I read the spreadsheet with openpyxl, I also get 20 rows and here is the interesting part: rows 4 through 20 have values in column AC (29). That column has a data validation drop-down. While no data shows up in LibrOffice Calc or Pandas, data shows up in openpyxl. 
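Returning to the %%-escaping answer above: with DB-API drivers that use %s placeholders (MySQLdb, pymysql, psycopg2), a literal percent sign in the SQL has to be doubled whenever parameters are passed. A small sketch with pymysql; the connection details, table and column names are invented.

```python
# Sketch: doubling % for MySQL's DATE_FORMAT when the driver itself uses %s
# placeholders. Connection details and table/column names are invented.
import pymysql

conn = pymysql.connect(host="localhost", user="appuser",
                       password="secret", database="appdb")
with conn.cursor() as cur:
    cur.execute(
        "SELECT DATE_FORMAT(created_at, '%%Y-%%m-%%d %%H:%%i:%%s') "
        "FROM orders WHERE id = %s",
        (42,),
    )
    print(cur.fetchone())
conn.close()
```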
This appears to be a ghost of data that has been deleted.\nI can delete blank rows in Pandas after I read the worksheet but the read_excel method throws up data validation errors for rows 4 through 20 before I can delete them. I would like to detect and remove the rows before read_excel. Is there a way to detect this and remove the spurious rows using openpyxl? I could then use openpyxl.load_workbook, delete the bad rows, and then use read_excel giving it the openpyxl workbook.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":285,"Q_Id":60470936,"Users Score":0,"Answer":"I found the problem. The items for the data validation were in a hidden column (AD) on the same sheet.","Q_Score":0,"Tags":"python,pandas,validation,openpyxl,libreoffice-calc","A_Id":60471515,"CreationDate":"2020-03-01T00:29:00.000","Title":"Excel (LibreOffice Calc) show blank cells but openpyxl shows values","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using openpyxl module, but now it turns out that it doesnot support csv format. So, How can I differentiate whether the file incoming is .xlsx or .csv format","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1664,"Q_Id":60491746,"Users Score":0,"Answer":"Load it in try:..except:.. At least in case of xlsx you can be quite confident that if it's failing, then it's not xlsx. CSV is so primitive a format, that almost everything can be treated as such, but I think that's the maximum that you can squeeze out of it.","Q_Score":1,"Tags":"python","A_Id":60492038,"CreationDate":"2020-03-02T15:14:00.000","Title":"How to check if the browsed file is .xlsx or .csv?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"First of all thanks, I'm no expert in programming.\nI work with jupyter notebooks and with my boss we use a Dropbox folder where he is able to run all my codes( including exporting and importing files within the folder), since it is possible to run jupyter notebook from within a desired folder( in this case, the Dropbox one). Based on this, when importing files I can just type a path like this one: \"Dropxboxsharedfolder\/some-otherfolder\/jdjd.csv\" and it will find the file in both of our computers. \nNow we are running SQL scripts through Jupyter, of course, the .SQL file is within the Dropbox folder, but the script has within itself a code where I import a CSV file located inside the Dropbox folder. Nevertheless (of course, it won't) it won't let me just type the path as \"dropboxsharedfolder\/somefolder\/djdhjd.csv\" I have to type \"User\/username\/dropboxsharedfolder\/...\/jdjd.csv\". 
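For the .xlsx-versus-.csv detection answer above (try to load it, fall back on failure), a minimal sketch with openpyxl and the csv module; the function name is invented and the caught exceptions are kept deliberately narrow.

```python
# Sketch: try to open the upload as an xlsx workbook; if that fails, treat it
# as CSV. The function name is invented; exceptions kept deliberately narrow.
import csv
import zipfile
from openpyxl import load_workbook
from openpyxl.utils.exceptions import InvalidFileException

def load_rows(path):
    try:
        wb = load_workbook(path, read_only=True)
        ws = wb.active
        return [list(row) for row in ws.iter_rows(values_only=True)]
    except (InvalidFileException, zipfile.BadZipFile):
        # Not a real xlsx (openpyxl refuses it), so fall back to CSV parsing.
        with open(path, newline="") as fh:
            return list(csv.reader(fh))
```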
At the end, when my boss runs the notebook it won't work cause it won't find the file.\nIs there a solution for this situation?\nThank you so much for your time!\nPd: we use postgresSQL","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":60517828,"Users Score":0,"Answer":"You could use an environment variable to point to the shared dropbox folder on each computer (the value of the variable would be set accordingly on each computer) and use the environment variable in your script instead of hardcoded paths","Q_Score":0,"Tags":"python,postgresql,csv,path,jupyter-notebook","A_Id":60518145,"CreationDate":"2020-03-04T01:23:00.000","Title":"Importing a csv file to a Postgres table from different computers ( hence, different paths)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently learning MongoDb and creating an app on python. I need to select one random document from user collection using uMongo, I have tried aggregate but it says that umongo have no this function. How can i do that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":60521822,"Users Score":0,"Answer":"Indeed, you need to use the aggregation framework.\nTo do so, you must circumvent umongo to call the underlying driver (e.g. pymongo) directly.","Q_Score":0,"Tags":"python,mongodb,umongo","A_Id":64908144,"CreationDate":"2020-03-04T08:27:00.000","Title":"uMongo get random document from collection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Our IDs look something like this \"CS0000001\" which stands for Customer with the ID 1. Is this possible to to with SQL and Auto Increment or do i need to to that in my GUI ?\nI need the leading zeroes but with auto incrementing to prevent double usage if am constructing the ID in Python and Insert them into the DB.\nIs that possible?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":705,"Q_Id":60541482,"Users Score":1,"Answer":"You have few choices:\n\nConstruct the CustomerID in your code which inserts the data into\nthe Customer table (=application side, requires change in your code)\nCreate a view on top of the Customer-table that contains the logic\nand use that when you need the CustomerID (=database side, requires change in your code)\nUse a procedure to do the inserts and construct the CustomerID in\nthe procedure (=database side, requires change in your code)","Q_Score":0,"Tags":"python,mysql,sql,python-3.x","A_Id":60542261,"CreationDate":"2020-03-05T08:58:00.000","Title":"SQL - Possible to Auto Increment Number but with leading zeros?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to running the appserver with python manage.py runserver with python 3.8.2 and django 3.0.3. I've setup a mysql database connection and inserted my \"myApp.apps.myAppConfig\" into INSTALLED_APPS, declared a couple of database-view based models, a form and a view. 
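The umongo answer above (drop down to the underlying driver for aggregation) translates to a $sample stage in pymongo; the connection string, database and collection names are placeholders.

```python
# Sketch: fetch one random document via the aggregation framework on the
# underlying pymongo collection. Connection string and names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client["appdb"]["user"]

random_docs = list(users.aggregate([{"$sample": {"size": 1}}]))
if random_docs:
    print(random_docs[0])
```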
Nothing that seems too out of the way for the tutorials i've found. When i run the python manage.py runserver command, this is the output:\n\nWatching for file changes with StatReloader Performing system\n checks...\nException in thread django-main-thread: Traceback (most recent call\n last): File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\apps\\registry.py\",\n line 155, in get_app_config\n return self.app_configs[app_label] KeyError: 'admin'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last): File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\threading.py\",\n line 932, in _bootstrap_inner\n self.run() File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\threading.py\",\n line 870, in run\n self._target(*self._args, **self._kwargs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\autoreload.py\",\n line 53, in wrapper\n fn(*args, **kwargs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\management\\commands\\runserver.py\",\n line 117, in inner_run\n self.check(display_num_errors=True) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\management\\base.py\",\n line 392, in check\n all_issues = self._run_checks( File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\management\\base.py\",\n line 382, in _run_checks\n return checks.run_checks(**kwargs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\checks\\registry.py\",\n line 72, in run_checks\n new_errors = check(app_configs=app_configs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\checks\\urls.py\",\n line 13, in check_url_config\n return check_resolver(resolver) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\checks\\urls.py\",\n line 23, in check_resolver\n return check_method() File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\urls\\resolvers.py\",\n line 407, in check\n for pattern in self.url_patterns: File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\functional.py\",\n line 48, in get\n res = instance.dict[self.name] = self.func(instance) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\urls\\resolvers.py\",\n line 588, in url_patterns\n patterns = getattr(self.urlconf_module, \"urlpatterns\", self.urlconf_module) File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\functional.py\",\n line 48, in get\n res = instance.dict[self.name] = self.func(instance) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\urls\\resolvers.py\",\n line 581, in urlconf_module\n return import_module(self.urlconf_name) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\importlib__init__.py\",\n line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 1014, in _gcd_import File\n \"\", line 991, in _find_and_load File\n \"\", line 975, in _find_and_load_unlocked \n File \"\", line 671, in _load_unlocked\n File \"\", line 
783, in\n exec_module File \"\", line 219, in\n _call_with_frames_removed File \"C:\\Users\\celli\\Desktop\\Interventi Comuni\\Python\\django-projects\\zabbixPyFace\\zabbixPyFace\\urls.py\", line\n 21, in \n path('admin\/', admin.site.urls), File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\functional.py\",\n line 224, in inner\n self._setup() File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\contrib\\admin\\sites.py\",\n line 537, in _setup\n AdminSiteClass = import_string(apps.get_app_config('admin').default_site) File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\apps\\registry.py\",\n line 162, in get_app_config\n raise LookupError(message) LookupError: No installed app with label 'admin'.\n\nI tried searching big G for answers but there's many sources that can cause this problem, could any of you gurus provide some insight?\nUpdate:\nI've already checked the INSTALLED_APPS and django.contrib.admin is present: \nINSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # jupyter notebook plugin 'django_extensions', ]","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":8213,"Q_Id":60567760,"Users Score":1,"Answer":"You need to add \"django.contrib.admin\" to your INSTALLED_APPS setting.","Q_Score":0,"Tags":"python,django,python-3.x","A_Id":60567873,"CreationDate":"2020-03-06T16:07:00.000","Title":"Django LookupError: No installed app with label 'admin'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to running the appserver with python manage.py runserver with python 3.8.2 and django 3.0.3. I've setup a mysql database connection and inserted my \"myApp.apps.myAppConfig\" into INSTALLED_APPS, declared a couple of database-view based models, a form and a view. Nothing that seems too out of the way for the tutorials i've found. 
When i run the python manage.py runserver command, this is the output:\n\nWatching for file changes with StatReloader Performing system\n checks...\nException in thread django-main-thread: Traceback (most recent call\n last): File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\apps\\registry.py\",\n line 155, in get_app_config\n return self.app_configs[app_label] KeyError: 'admin'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last): File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\threading.py\",\n line 932, in _bootstrap_inner\n self.run() File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\threading.py\",\n line 870, in run\n self._target(*self._args, **self._kwargs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\autoreload.py\",\n line 53, in wrapper\n fn(*args, **kwargs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\management\\commands\\runserver.py\",\n line 117, in inner_run\n self.check(display_num_errors=True) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\management\\base.py\",\n line 392, in check\n all_issues = self._run_checks( File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\management\\base.py\",\n line 382, in _run_checks\n return checks.run_checks(**kwargs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\checks\\registry.py\",\n line 72, in run_checks\n new_errors = check(app_configs=app_configs) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\checks\\urls.py\",\n line 13, in check_url_config\n return check_resolver(resolver) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\core\\checks\\urls.py\",\n line 23, in check_resolver\n return check_method() File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\urls\\resolvers.py\",\n line 407, in check\n for pattern in self.url_patterns: File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\functional.py\",\n line 48, in get\n res = instance.dict[self.name] = self.func(instance) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\urls\\resolvers.py\",\n line 588, in url_patterns\n patterns = getattr(self.urlconf_module, \"urlpatterns\", self.urlconf_module) File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\functional.py\",\n line 48, in get\n res = instance.dict[self.name] = self.func(instance) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\urls\\resolvers.py\",\n line 581, in urlconf_module\n return import_module(self.urlconf_name) File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\importlib__init__.py\",\n line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 1014, in _gcd_import File\n \"\", line 991, in _find_and_load File\n \"\", line 975, in _find_and_load_unlocked \n File \"\", line 671, in _load_unlocked\n File \"\", line 783, in\n exec_module File \"\", line 219, in\n 
_call_with_frames_removed File \"C:\\Users\\celli\\Desktop\\Interventi Comuni\\Python\\django-projects\\zabbixPyFace\\zabbixPyFace\\urls.py\", line\n 21, in \n path('admin\/', admin.site.urls), File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\utils\\functional.py\",\n line 224, in inner\n self._setup() File \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\contrib\\admin\\sites.py\",\n line 537, in _setup\n AdminSiteClass = import_string(apps.get_app_config('admin').default_site) File\n \"C:\\Users\\celli\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\django\\apps\\registry.py\",\n line 162, in get_app_config\n raise LookupError(message) LookupError: No installed app with label 'admin'.\n\nI tried searching big G for answers but there's many sources that can cause this problem, could any of you gurus provide some insight?\nUpdate:\nI've already checked the INSTALLED_APPS and django.contrib.admin is present: \nINSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # jupyter notebook plugin 'django_extensions', ]","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":8213,"Q_Id":60567760,"Users Score":3,"Answer":"Answering my own question, searching the net for informations regarding this error leads to many ambigous results, since this error seems to be fired even if the root cause was of another nature. In my case i forgot to apply the python manage.py makemigrations directive.","Q_Score":0,"Tags":"python,django,python-3.x","A_Id":60576410,"CreationDate":"2020-03-06T16:07:00.000","Title":"Django LookupError: No installed app with label 'admin'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am facing issue with SQLite vulnerability which fixed in SQLite version 3.31.1.\nI am using the python3.7.4-alpine3.10 image, but this image uses a previous version of SQLite that isn't patched.\nThe patch is available in python3.8.2-r1 with alpine edge branch but this image is not available in docker hub.\nPlease help how can i fix this issue?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":128,"Q_Id":60668371,"Users Score":0,"Answer":"Your choices are limited to two options:\n\nWait for the official patched release\nPatch it yourself\n\nOption 1 is easy, just wait and the patch will eventually propagate through to docker hub. 
Option 2 is also easy, just get the code for the image from github, update the versions, and run the build yourself to produce the image.","Q_Score":1,"Tags":"python-3.x,sqlite,docker,security","A_Id":60668536,"CreationDate":"2020-03-13T10:02:00.000","Title":"how to fix CVE-2019-19646 Sqlite Vulnerability in python3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to format an SQL query, and it looks like this:\ns += \" t{}.{} = '{}' and\".format(t_c, filter_c, filter_value)\nbut when the filter_value is something like m's it will result in \npsycopg2.errors.SyntaxError: syntax error\nif I use the double quote, it will say there's no such column\nAny way I can resolve this problem, please?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":210,"Q_Id":60696210,"Users Score":0,"Answer":"Caused by injection vulnerability. Use parameters for filter_value and let the database API handle it.\nIf the table\/schema names are coming from user input, whitelist those too. Parameters aren't possible for table names).","Q_Score":0,"Tags":"python,python-3.x,postgresql,psycopg2","A_Id":60696279,"CreationDate":"2020-03-15T18:23:00.000","Title":"How to template sql with python and deal with sql composition problem?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge database of over 20 million rows. I can export the whole database (which takes hours), but when I try to filter the data using python (pandas) pycharm fails (due to memory issues).\nIs there a way to export the database in batches of 2 million rows for an example? Export 2mil, then other 2mil and have 10 files of 2 million rows at the end? This way I can filter every file using python (pandas) and I won't have memory issues.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1609,"Q_Id":60696669,"Users Score":0,"Answer":"You can use pg_dump to only extract one or more tables or exclude tables if that is going to help","Q_Score":0,"Tags":"python,sql,database,postgresql,csv","A_Id":60696938,"CreationDate":"2020-03-15T19:15:00.000","Title":"How to export huge postgresql database in batches?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When loading the output of query into a DataFrame using pandas, the standard behavior was to convert integer fields containing NULLs to float so that NULLs would became NaN. \nStarting with pandas 1.0.0, they included a new type called pandas.NA to deal with integer columns having NULLs. However, when using pandas.read_sql(), the integer columns are still being transformed in float instead of integer when NULLs are present. 
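For the query-templating record above (the one broken by values like m's), a minimal sketch of the accepted advice: bind the value as a parameter and, if the table or column names really must be dynamic, compose them with psycopg2.sql after whitelisting. The whitelist contents and names are placeholders:

from psycopg2 import sql

ALLOWED_COLUMNS = {"name", "city"}                  # identifier whitelist (placeholder values)

def fetch_filtered(cursor, table_name, filter_c, filter_value):
    if filter_c not in ALLOWED_COLUMNS:
        raise ValueError(f"unexpected column: {filter_c}")
    query = sql.SQL("SELECT * FROM {} WHERE {} = %s").format(
        sql.Identifier(table_name),                 # identifiers are quoted safely
        sql.Identifier(filter_c),
    )
    cursor.execute(query, (filter_value,))          # a value like "m's" is escaped by the driver
    return cursor.fetchall()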
Added to that, the read_sql() method doesn't support the dtype parameter to coerce fields, like read_csv().\nIs there a way to load integer columns from a query directly into a Int64 dtype instead of first coercing it first to float and then having to manually covert it to Int64?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":284,"Q_Id":60731612,"Users Score":1,"Answer":"Have you tried using \nselect isnull(col_name,0) from table_name. This converts all null values to 0.\nIntegers are automatically cast to float values just as boolean values are cast to objects when some values are n\/a.","Q_Score":2,"Tags":"python,pandas","A_Id":60731768,"CreationDate":"2020-03-17T23:29:00.000","Title":"Is there a way to load a sql query in a pandas >= 1.0.0 dataframe using Int64 instead of float?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to connect to RDS with python with IAM Database Authentication.\nI can find how to connect to RDS with IAM SSL certification or how to connect to RDS with psycopg2.\nBut I cannot find how to connect to RDS with python with IAM Database Authentication.\nIs there any way to do that?\nthanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":60735695,"Users Score":0,"Answer":"I can do that just only to use psycopg2 with sslrootsert.\nThanks!","Q_Score":0,"Tags":"python,amazon-iam,rds","A_Id":60749502,"CreationDate":"2020-03-18T08:16:00.000","Title":"Connect to AWS RDS with python with IAM Database Authentication","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":".execute(\"copy into tmp_cdp_score_feature from @cdp_json_acpt\/data\/1 file_format = (type=json)\"\nas the last value will be varying from 1 to 100 like data\/2, data\/3, so I need to pass as variable .\nsomething like \n.execute(\"copy into tmp_cdp_score_feature from @cdp_json_acpt\/data\/(%s) file_format = (type=json),(2)\"","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":976,"Q_Id":60743574,"Users Score":0,"Answer":"copy into tmp_stg from @stg_json_acpt\/\"'%s'\" file_format = (type=json), Force=True\" %(des1)\ndes1= path of the s3 object file\nIt works fine, tested","Q_Score":0,"Tags":"python,snowflake-cloud-data-platform","A_Id":60808996,"CreationDate":"2020-03-18T16:22:00.000","Title":"How to pass the bind variable to the copy into statement in snowflake from python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have MS Access File with a file size of 1.7GB. I have already tried to compact and repair, but the file size remains the same.\nThis is what I did. I download about 29 files from jupyter python to excel and csv files. The total amount of data is about 934MB.\nI need to update the data everyday, therefore I linked the Access file to all the exported files under linked table and create another table to have a relationship with each other. 
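For the pandas Int64 record above, a minimal sketch of converting the float columns back to the nullable integer dtype right after read_sql (pandas >= 1.0); the connection, query and column name are placeholders:

import sqlite3
import pandas as pd

conn = sqlite3.connect("app.db")                    # placeholder connection, any DB-API/SQLAlchemy conn works
df = pd.read_sql("SELECT id, age FROM customers", conn)

# Columns that arrived as float because of NULLs can be moved to the
# nullable integer dtype; NaN becomes pd.NA (recent pandas versions
# accept this cast directly).
df["age"] = df["age"].astype("Int64")

# Alternatively, let pandas pick nullable dtypes for every column at once:
# df = df.convert_dtypes()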
So I have 2 tables for each exported file, for example: customer_linked and customer.\nAnd this is the step by step of query:\n1. Delete query for all data in non-linked table\n2. Append query to append the linked-table to non-linked table\nI have no idea that this way, it will make the file super bloated to 1.7GB. Is there any way to make it smaller?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":60,"Q_Id":60752859,"Users Score":1,"Answer":"You can look into converting the excel files into csvs if you're looking to try and save space. Depending on how large the files are, there might be a lot of bloat in the excel files, full of extra stuff you don't need. If you're not using the excel files for images\/graphs\/formatting of some kind then you're best off just converting them all to csvs.","Q_Score":1,"Tags":"python,vba,ms-access","A_Id":60753022,"CreationDate":"2020-03-19T07:48:00.000","Title":"MS Access Big File Size Problem with exported file from Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have MS Access File with a file size of 1.7GB. I have already tried to compact and repair, but the file size remains the same.\nThis is what I did. I download about 29 files from jupyter python to excel and csv files. The total amount of data is about 934MB.\nI need to update the data everyday, therefore I linked the Access file to all the exported files under linked table and create another table to have a relationship with each other. So I have 2 tables for each exported file, for example: customer_linked and customer.\nAnd this is the step by step of query:\n1. Delete query for all data in non-linked table\n2. Append query to append the linked-table to non-linked table\nI have no idea that this way, it will make the file super bloated to 1.7GB. Is there any way to make it smaller?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":60752859,"Users Score":0,"Answer":"Surprisingly the file in csv take more space than the file in excel.\n\nNo surprise if that is an .xlsx file, as these are zip files. Try renaming it to .zip and unpack it to see the real size.\nIf you have compacted it, that is the size - Nothing to worry about.\nThe only concern could be, that you are approaching the size limit of 2GB for an Access file. If that could hit you, consider moving data to, say, the free SQL Server Express edition which allows for 10GB.","Q_Score":1,"Tags":"python,vba,ms-access","A_Id":60753727,"CreationDate":"2020-03-19T07:48:00.000","Title":"MS Access Big File Size Problem with exported file from Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently we are pulling database tables from Hadoop server through AMBARI or PUTTY connections. But I would like to know is there any efficient way to pull database tables through python jupyter directly without using AMBARI or PUTTY.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":60783782,"Users Score":0,"Answer":"It's not clear what you mean by Hadoop database because neither Hive nor Hadoop are databases.\nYes, Jupyter works fine. 
So do libraries like pyhive \nIf you specifically want to use sql only, look at HUE or Apache Superset. \nOf course, JDBC\/ODBC clients like DBVisualizer and Tableau work too","Q_Score":0,"Tags":"python,database,hadoop,ssh","A_Id":60785330,"CreationDate":"2020-03-21T02:02:00.000","Title":"connecting to hadoop database tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a relatively large queryset and i want to export it into excel file i was useing the XLWT library and django streaming response with csv file.\nwhen i export very large queryset of a table in sqldeveloper or navicat, the export operation is very fast but django's libraries is relatively slow. i think the excel write by row and column, or csv streaming response, write row by row in file but i looking for a way to write whole of queryset to excel.\nis there a way to export whole of queryset to excel in python django?\nSomething that comes to my mind is call os command in python code to run export command in database but i not tested it.\nthanks everybody","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":320,"Q_Id":60803795,"Users Score":0,"Answer":"I should have used pandas.\nadjust and add my queryset in data frame and to_excel or to_csv.\nthanks anyway","Q_Score":0,"Tags":"python,django,excel","A_Id":60824753,"CreationDate":"2020-03-22T19:13:00.000","Title":"django export large excel queryset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have written a python script that connects to a remote Oracle database and inserts some data into its tables.\nIn the process I had to first import cx_Oracle package and install Oracle InstantClient on my local computer for the script to execute properly.\nWhat I don't understand is why did I have to install InstantClient?\nI tried to read through the docs but I believe I am missing some fundamental understanding of how databases work and communicate.\nWhy do I need all the external drivers, dlls, libraries for a python script to be able to communicate with a remote db? I believe this makes packaging and distribution of a python executable much harder.\nAlso what is InstantClient anyway?\nIs it a driver? What is a driver? Is it simply a collection of \"programs\" that know how to communicate with Oracle databases? If so, why couldn't that be accomplished with a simple import of a python package?\nThis may sound like I did not do my own research beforehand, but I'm sorry, I tried, and like I said, I believe I am missing some underlying fundamental knowledge.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":71,"Q_Id":60840650,"Users Score":1,"Answer":"We have a collection of drivers that allow you to communicate with an Oracle Database. Most of these are 'wrappers' of a sort that piggyback on the Oracle Client. Compiled C binaries that use something we call 'Oracle Net' (not to be confused with .NET) to work with Oracle.\nSo our python, php, perl, odbc, etc drivers are small programs written such that they can be used to take advantage of the Oracle Client on your system. \nThe Oracle Client is much more than a driver. 
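For the "django export large excel queryset" record above, a minimal sketch of the pandas route the answerer settled on; the app, model, field and file names are assumptions:

import pandas as pd
from myapp.models import Order                      # hypothetical app and model

# values() yields plain dicts, which is lighter than full model instances.
rows = Order.objects.values("id", "customer", "total")
df = pd.DataFrame.from_records(rows)

df.to_excel("orders.xlsx", index=False)              # needs openpyxl or xlsxwriter installed
# df.to_csv("orders.csv", index=False)               # or write a CSV instead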
It can include user interfaces such as SQL*Plus, SQL*Loader, etc. Or it can be JUST a set of drivers - it depends on which exact package you choose to download and install. And speaking of 'install' - if you grab the Instant Client, there's nothing to install. You just unzip it and update your environment path bits appropriately so the drivers can be loaded.","Q_Score":0,"Tags":"python,sql,database,oracle,database-connection","A_Id":60840804,"CreationDate":"2020-03-24T23:04:00.000","Title":"Explain the necessity of database drivers, libraries, dlls in a python application that interacts with a remote database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to run a text file that has queries delimited by ';' in the impala-shell. However, I have some queries that require the results of another query. For example, if Query1 gives me name | age | birthday and then the following query is something like SELECT * FROM table1 WHERE age in (...), and those ages are from the age column from the first query. \nI know you can specify with --vars option, but that seems to be for inserting specific values. Is there a way to create Python script to handle something like this that would run in the impala-shell?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":737,"Q_Id":60869209,"Users Score":0,"Answer":"I think creating temp tables will help here. \nImpala Only solution -\nstep 0 - load the table from file.\nstep 1 - create temp table tmp_table as Query 1.\nstep 2 SELECT * FROM table1 WHERE age in (tmp_table).\nstep 3 - Drop table tmp_table.\nYou can use subquery as well in case all are in impala tables.\nSELECT * FROM table1 WHERE age in (select age from Query1) \nYes, you can always use python to run impala-shell queries. But they will be like call scripts one after another and for your requirement you can do them entirely in impala.","Q_Score":1,"Tags":"python,sql,variables,impala","A_Id":61185489,"CreationDate":"2020-03-26T14:11:00.000","Title":"Run Python script in impala-shell","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have fetched data from a CSV file, and it is held and manipulated in my Dask dataframe. From there I need to write the data into a data table. I have not really come across any solutions for this. Pandas have built-in functionality for this with its to_sql function, so I am unsure whether I need to convert to Pandas first? I currently think that converting the Dask dataframe to Pandas will cause it to be loaded fully into memory, which may defeat the purpose of using Dask in the first place. \nWhat would the best and fastest approach be to write a Dask dataframe to a datatable?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1297,"Q_Id":60871938,"Users Score":0,"Answer":"I have no problem with @kfk's answer, as I also investigated that, but my solution was as follows.\nI drop the DASK dataframe to a csv, and from there pick the CSV up with a Golang application that shoves the data into Mongo using multi-threading. 
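For the Dask-to-MySQL record being answered here, a minimal sketch of the dump-to-CSV step just described; file paths are placeholders, and the bulk-load into MySQL (for example LOAD DATA LOCAL INFILE, or an external loader as in the answer) happens afterwards:

import dask.dataframe as dd

ddf = dd.read_csv("raw/part-*.csv")                  # source files are placeholders

# One output file per partition; the '*' is replaced by the partition
# number, so the shards can be bulk-loaded into MySQL in parallel later.
ddf.to_csv("export/shard-*.csv", index=False)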
For 4.5 million rows, the speed went from 38 minutes using \"load local infile\" to 2 minutes using a multi-threaded app.","Q_Score":1,"Tags":"python,dask,dask-dataframe","A_Id":61123236,"CreationDate":"2020-03-26T16:37:00.000","Title":"How do I get a DASK dataframe into a MySQL datatable?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to insert a formula into Excel using Python. I am creating a dataframe and adding a column containing the formulas and then writing it into an Excel.\nThe formula has the format '=HYPERLINK(\"#'\"&F2&\"'!A1\",F2)'\nF2 is a variable and all other characters are constant. I need to dynamically generate this string. \nI tried doing =HYPERLINK(\"#'\"&F2&\"'!A1\",F2) but it is not working and I got '=HYPERLINK(\"#\\'\"&F2&\"\\'!A1\",F2)' which includes the back slash and the formula does not work.\nHow do I create a string like '=HYPERLINK(\"#'\"&F2&\"'!A1\",F2)' ?\nAny help would be highly appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":60881008,"Users Score":0,"Answer":"Since you are making string dynamically. There are two things you can do.\n\nr'=HYPERLINK(\"#'\"&F2&\"'!A1\",F2)' r signifies raw string (if you are not using any variable).\nIf you are making string dynamically using other variables. Use formatted string.\nvar1= '\"#'\"&F2&\"'!A1\"'\nvar2= 'F2'\nformula= f'=HYPERLINK({var1},{var2} )'\n\n\nprint(formula)\n'=HYPERLINK(\"#&F2&!A1\",F2 )'","Q_Score":0,"Tags":"python,excel,string,dataframe,formula","A_Id":60881203,"CreationDate":"2020-03-27T06:32:00.000","Title":"Create a string with single and double quotes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Looking for some tips here. I did a quiet a bit of coding and research using python3 and lambda. However, timeout is the biggest issue I am struggling with atm. I am trying to read a very large csv file (3GB) from S3 and push the rows into DynamoDB. I am currently reading about 1024 * 32 bytes at a time, then pushing the rows into dynamo DB (batch write with asyncio) using a pub\/sub pattern, and it works great for small files, i.e. ~500K rows. It times out when I have millions of rows. I\u2019m trying NOT to use AWS glue and\/or EMR. I have some constraints\/limitations with those.\nDoes anyone know if this can be done using Lambda or step functions? If so, could you please share your ideas? Thanks!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":334,"Q_Id":60898702,"Users Score":2,"Answer":"Besides lambda time constraint you might run into lambda memory constraint while you are reading file in AWS Lambda as lambda has just \/tmp directory storage of 512 MB and that again depends on how you are reading the file in lambda.\nIf you don't want to go via AWS Glue or EMR, another thing you can do is by provisioning an EC2 and run your same code you are running in lambda from there. To make it cost effective, you can make EC2 transient i.e. provision it when you need to run S3 to DynamoDB job and shut it down once the job is completed. This transient nature can be achieved by Lambda function. 
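For the S3-to-DynamoDB record above, a minimal sketch of the streaming read plus batched writes the questioner describes, wherever the job ends up running (Lambda, a transient EC2 instance, or a Step Functions-driven task); bucket, key and table names are placeholders:

import codecs
import csv
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("my_table")          # table name is a placeholder

body = s3.get_object(Bucket="my-bucket", Key="data/big.csv")["Body"]
reader = csv.DictReader(codecs.getreader("utf-8")(body))      # streamed, not downloaded whole

# batch_writer() buffers rows into 25-item BatchWriteItem calls and retries
# any unprocessed items for you.
with table.batch_writer() as batch:
    for row in reader:
        batch.put_item(Item=row)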
You can also orchestrate the same with Step Functions also. Another option that you can look into is via AWS Datapipeline.","Q_Score":0,"Tags":"python-3.x,aws-lambda,aws-step-functions","A_Id":61392536,"CreationDate":"2020-03-28T08:02:00.000","Title":"How to ETL very large csv from AWS S3 to Dynamo","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have looked at similar posts but could not find the solution to my problem. I had installed mySQL 8.0 version using homebrew on MacOS but then needed to downgrade it to mySql 5.6. I uninstalled the 8.0 version completely and deleted any left over files.I then installed the 5.6.1 using the native mySQL dmg package for Mac. On running my python project I get the Library not loaded error for \/usr\/local\/opt\/mysql\/lib\/libmysqlclient.21.dylib referenced from the _mysql.cpython-36m-darwin.so. I am not sure why this location is getting referenced as I have only libmysqlclient.18.dylib on my system under a different folder usr\/local\/mysql\/lib . How can I fix the issue ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":489,"Q_Id":60943218,"Users Score":0,"Answer":"My project with Python 3.6 was lookin for libmysqlclient.21.dylib.\nI installed brew install mysql-client. It installed mysql-client 8.0. it has libmysqlclient.21.dylib. Where as i wanted to use mysql@5.6.\nSo I copied the libmysqlclient.21.dylib from \/usr\/local\/Cellar\/mysql-client\/8.0.19\/lib to \/usr\/local\/lib\/ \nsudo ln -s \/usr\/local\/Cellar\/mysql-client\/8.0.19\/lib\/libmysqlclient.21.dylib \/usr\/local\/lib\/libmysqlclient.21.dylib","Q_Score":0,"Tags":"mysql,mysql-python","A_Id":61588197,"CreationDate":"2020-03-31T03:56:00.000","Title":"Library not loaded: \/usr\/local\/opt\/mysql\/lib\/libmysqlclient.21.dylib error when it does not exist","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need a python scripts to load the multiple excel sheet data into hive table using python. Any one helping on this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":251,"Q_Id":60943498,"Users Score":0,"Answer":"Yes, it is very easy!!\nYou should have pandas library installed or install it using pip if you don't have by typing this in the command prompt - py -m pip install pandas\nThen, use the following code -\nimport pandas as pd\ndf = pd.read_excel('', '')\nprint(df)\nYou will see that the table is available in excel.","Q_Score":0,"Tags":"python,python-3.x","A_Id":60943912,"CreationDate":"2020-03-31T04:31:00.000","Title":"How to load the excel data into hive using python script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write a program for Windows using python and python frameworks (pandas, numpy etc.), which I want to use to replace several engineering excel spreadsheets (which are repetitive). 
Please point me in the right direction regarding which python frameworks to use.\nThe program should contain and be able to do the following:\n- Have a GUI to input variables\n- Compute excel formulas using said variables\n- Create graphs\n- Export information to excel spreadsheets\n- Create PDFs\nI know I can set up a single excel spreadsheet to do the above, but I'm using this as a way to learn programming...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":60946507,"Users Score":0,"Answer":"A good way to learn, maybe a less good way to solve the said problem.\nGUI - ?\nMatplotlib can be used for the graphs.\nExporting to excel can be done easily in pandas with df.to_excel()\nweasyprint library can be used to export to pdf.","Q_Score":0,"Tags":"python,excel,frameworks","A_Id":60947622,"CreationDate":"2020-03-31T08:41:00.000","Title":"Writing a Windows program to replace several engineering excel spreadsheets","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"this has weighed me down for a week now. seems like there's no straight-forward solution anywhere, I'm really devastated.\nI have hosted my python flask webapp and it's postgres database successfully. Now I only need to link it with any cloud (not Google pls!) service that would enable the webapp to save images, and retrieve these images on request. \nI plan to use redis for caching these images, greatly reducing the rate of requests sent to the cloud storage. \nPlease help!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":446,"Q_Id":60951615,"Users Score":1,"Answer":"when you talk about save or retrieve on cloud storage what kind of storage that you have in mind? because there's few approach:\nA. Store image \/ video as Binary\nUsing this approach it means all uploaded file or image will be written into db and all requested file will be read from db.\nB. Store image \/ video as path \nUsing this approach it means if its local path then it all uploaded file will stored as local file in the server where the code is hosted.\nif its remote path then it will stored on some cloud (Google Cloud Storage, AWS S3)\nAlso in your question you mention about redis as cache. I want to clarify some few things about redis. In approach A if you want to reduce query sent to db you can utilize redis to store binary of frequent accessed file, so on the next request it will be fetch from cache instead from db. In approach B redis doesn't really reduce 'requests sent to the cloud storage' because client still fetch the file from cloud storage and your app just return where it stored. if you want to reduce no of request sent to cloud storage then maybe you looking about client-side cache or CDN.","Q_Score":0,"Tags":"python,cloud","A_Id":60962425,"CreationDate":"2020-03-31T13:21:00.000","Title":"save and retrieve images\/videos on cloud storage linked to python flask server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to process an excel file with ~600k rows and ~10 columns.\nI want to execute some program line by line (or row by row) as if it is an iterator (like txt\/csv files). 
However, if I use xlrd or pandas to read the excel file, it takes ~2-3min for opening the file.\nI wonder if it is possible to read the excel file line by line efficiently so that, for example, I can verify my program with the first 10 rows without waiting for a long time for every trial.\nEdit: Thank you for suggesting to convert the excel file to a csv before processing. However, I have to create an one-click program to the user. So, I still hope to find out a way to directly read the excel file efficiently, if possible.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":943,"Q_Id":60966543,"Users Score":0,"Answer":"Excel files are zip folder with xml file for each worksheet, maybe it's more efficient to open this file has a zip and read your line with a xml library ?","Q_Score":3,"Tags":"python,pandas","A_Id":60966649,"CreationDate":"2020-04-01T08:27:00.000","Title":"Read Excel file line by line efficiently","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been using SQLAlchemy's Column(default=) when I really wanted Column(server_default=), so that the default would be in the schema and work in raw SQL. Is there a reason to use both when they are the same value, or should I switch over to using server_default= only?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":60975696,"Users Score":0,"Answer":"If you have server_default then there's no reason that I'm aware of to be inserting the same value via default.\nI haven't tested this but logically sending a default as well seems like it would preempt the server default from being used at all since the value would no longer be null from the database's perspective. Thus the database wouldn't need to step in to fill it, except in the case of using raw SQL when default doesn't apply anyway.","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":69110131,"CreationDate":"2020-04-01T16:26:00.000","Title":"In a SQLAlchemy Column(), is it necessary to define both default= and server_default?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a complete application intended to be compiled to .exe (via Py2EXE). I would love to use the DAO object for working with an Access database. I would like to have as little dependencies as possible (for the user - no office install, etc).\nMost of the sources I have reviewed claim that the bitness of the Engine\/Driver\/Office Installation, must all align. This confuses me as to what the user will require on their machine if i compile my working code. 
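For the "Read Excel file line by line efficiently" record above, a minimal sketch using openpyxl's read-only mode, which streams rows instead of building the full in-memory model, so the first few rows can be inspected quickly; the file name is a placeholder:

from openpyxl import load_workbook

wb = load_workbook("big_file.xlsx", read_only=True)   # lazy, streaming access
ws = wb.active

for i, row in enumerate(ws.iter_rows(values_only=True)):
    print(row)                                        # tuple of cell values
    if i == 9:                                        # look at only the first 10 rows
        break

wb.close()                                            # read-only mode keeps the file handle open

There is still some fixed cost at open time, but sampling the first rows this way is usually much faster than a full pandas.read_excel of a 600k-row workbook.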
\nWill they just need Access Run-time for the bitness of the engine I develop the app in?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":61005720,"Users Score":0,"Answer":"I wouldn't recommend Acess with Python.\nPython and Access are two completely different \"worlds\", Python does have the necessary drivers to \"speak\" to Access but is not a good match especially when it has built in support for SQLite and i reckon far better support for other Database engines.\nI am an Access expert and i do Python but i wouldn't mix them unless its for convenience like using the report engine of Access.","Q_Score":0,"Tags":"python,windows,ms-access,dao","A_Id":61059022,"CreationDate":"2020-04-03T05:43:00.000","Title":"What is the easiest way to use DAO Engine (Access) in Python for a complete application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"While i run python script to scrape new data and add to database then mysql server is down.\nAny Suggestion","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":61005885,"Users Score":0,"Answer":"I think need more information to solve this for example sql log.\nIn my opinion, your statement is insert a huge data and server has not a correct configuration or your query should be split to solve this issue.","Q_Score":0,"Tags":"mysql,python-3.x","A_Id":61008408,"CreationDate":"2020-04-03T05:58:00.000","Title":"Mysql Server is down When Python Script is run","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"slides=prs1.slides\n for slide in prs1.slides:\n ImageData.insert(1, \"Slide \"+str(slides.index(slide)+1))\n TextData.insert(1, \"Slide \"+str(slides.index(slide)+1))\n for shape in slide.shapes:\n if 'Picture' in shape.name:\n write_image(shape, ImageData)\n elif shape.has_table:\n table_data(shape, TextData)\n elif shape.has_text_frame:\n s=\"\"\n for paragraph in shape.text_frame.paragraphs:\n for run in paragraph.runs:\n s+= \" \"+run.text\n TextData.append(s)\n elif shape.shape_type == MSO_SHAPE_TYPE.GROUP:\n group_data(shape, ImageData, TextData)\n\ndef group_data(group_shape, ImageData, TextData):\n #for group_shape in group_shapes:\n for shape in group_shape.shapes:\n if 'Picture' in shape.name:\n if (ImageData == []):\n ImageData.append(\"Slide \"+str(slides.index(slide)+1))\n write_image(shape, ImageData)\n elif shape.has_table:\n TextData.append(\"Slide \"+str(slides.index(slide)+1))\n table_data(shape, TextData)\n elif shape.has_text_frame:\n if (TextData == []):\n TextData.append(\"Slide \"+str(slides.index(slide)+1))\n s=\"\"\n for paragraph in shape.text_frame.paragraphs:\n for run in paragraph.runs:\n s+= \" \"+run.text\n TextData.append(s)\n elif shape.shape_type == MSO_SHAPE_TYPE.GROUP:\n group_data(shape, ImageData, TextData)\nI'm not able to read SmartArt data from slides. This is the above code through which i'm able to get 80% of pptx file data. I want to fetch 100% data and store it in a csv file. 
Even i want to save ppt file as pptx file using python code.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":449,"Q_Id":61013161,"Users Score":1,"Answer":"python-pptx has no API-support for SmartArt. The schema and semantics for the XML of SmartArt are unpublished (the last time I went looking) so it's not likely to be added anytime soon.\nIf you want to interpret SmartArt objects you'll have to dig into the XML yourself and do the best you can.\nLike a chart or a table, SmartArt is enclosed in a GraphicFrame shape. Like a chart, its contents are stored as a separate part.\nNot really an answer, but at least some background to get you started. I recommend you look for an alternative because this direction is going to be a lot of frustrating work that probably ends up in brittle code because you're reverse-engineering rather that working from a spec.","Q_Score":0,"Tags":"python-pptx","A_Id":61016223,"CreationDate":"2020-04-03T13:36:00.000","Title":"How to read PowerPoint SmartArt Data using python ? Python-PPTX","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My application records status changes in discord. (Online, offline, idle and DND) This is achieved using discord.py and an SQL server (MariaDB).\nSpecifically, if I receive hundreds of of status updates per second (all with the same Unix timestamp since they happened within a second of each other), should I Store each Unix timestamp in a separate table with an ID for each?\nI am asking this question in hopes of saving disk space in the long run.\nMy hypothesis is that if a smaller number is stored with less digits (the ID that points to a timestamp), it would take up less space as the large Unix timestamp wouldn't need to be repeated hundreds of times.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":61014008,"Users Score":2,"Answer":"A few questions\/remarks are raised by your post...\n\nDo you really need to store timestamps with up-to-the-second accuracy ? SToring with a bigger interval would mean less space usage.\nYou could use a smaller int to store timestamps with an epoch closer to the present, so you could use less space overall, without needing a new table.\nYou should maybe consider dropping events that happen too frequently. If a user change status 10 times in a minute, maybe it's not worth it recording them all ?\nFinaly, an enum (smallint) and a timestamp is not that big in a database. They can usualy handle thousand of gigabytes \"just fine\"","Q_Score":3,"Tags":"python,python-3.x,database,mariadb,diskspace","A_Id":61014524,"CreationDate":"2020-04-03T14:22:00.000","Title":"Should I Store Unix Time In A Separate Table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 120,000 csv inside my AWS EC2 instance, each containing 120,000 rows. I can't insert each of them as is into my AWS RDS postgresql DB, that will be 120,000^2 = 14,400,000,000 records. 
Each csv is about 2 MB.\nMy approach is:\n\nPython script that converts 120,000 records into just 1 record (list of dictionary) for each csv (now 5 MB after condensing the data) with the help of pandas library\nThe python script then insert each csv's 1 record into AWS postgresql database via pandas.to_sql (which uses sqlalchemy\nI use python multiprocessing module to fully utilize my AWS EC2 to speed up data insertion\nI did not create additional indexes in order to speed up my data insertion\nI use AWS EC2 instance with up to 25GB of network connection\n\nMy observation is:\n\nAt the beginning, my code will insert 50 csv per min, which is decent speed\nHowever, right now, with 50k csv being inserted, it only insert 1k csv in 5 hours, about 3.33 csv per min\nI tried using psql \\copy and realized that it takes between 30-50 sec to insert 1 csv, that's slower than my script that converts the data and insert into the DB directly\n\nI am not sure how I can speed up things up.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":180,"Q_Id":61028971,"Users Score":2,"Answer":"The reason the database performance drop from 50 to 3.33 csv per min is because of the AWS RDS instance class.\nI am using db.t2.micro class, which I just learnt that it's limited by CPU credits. After I change the instance class to t3, my code is back to 50 csv per min.\n\nAmazon RDS T3 DB instances run in Unlimited mode, which means that you will be charged if your average CPU utilization over a rolling 24-hour period exceeds the baseline of the instance. CPU Credits are charged at $0.075 per vCPU-Hour. The CPU Credit pricing is the same for all T3 instance sizes across all regions and is not covered by Reserved Instances.\n\nConsidering that my code takes <1 sec to execute, and only 1 record to insert each time. pandas.to_sql shouldn't be the bottleneck. Though I do believe using SQLAlchemy will perform better than pandas.to_sql. For the same reason psycopg2 performs better than SQLAlchemy.\nIn short, this is an issue due to hardware rather than software. Fix it by upgrading to a more powerful instance. Thanks.","Q_Score":1,"Tags":"python,database,postgresql,amazon-web-services,multiprocessing","A_Id":61039956,"CreationDate":"2020-04-04T13:40:00.000","Title":"Speed up AWS PostgreSQL insertion via Python script","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing something that essentially refines and reports various strings out of an enormous python dictionary (the source file for the dictionary is XML over a million lines long).\nI found mongodb yesterday and was delighted to see that it accepts python dictionaries easy as you please... until it refused mine because the dict object is larger than the BSON size limit of 16MB.\nI looked at GridFS for a sec, but that won't accept any python object that doesn't have a .read attribute.\nOver time, this program will acquire many of these mega dictionaries; I'd like to dump each into a database so that at some point I can compare values between them.\nWhat's the best way to handle this? I'm awfully new to all of this but that's fine with me :) It seems that a NoSQL approach is best; the structure of these is generally known but can change without notice. 
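For the "Speed up AWS PostgreSQL insertion" record above, and besides the instance-class fix in the accepted answer, a minimal sketch of batching inserts with psycopg2's execute_values, which typically beats row-at-a-time inserts; connection details, table and column names are placeholders:

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect(dbname="mydb", user="me", host="localhost")   # connection details are placeholders

rows = [(1, "2020-04-04", 2.5), (2, "2020-04-04", 3.1)]               # stand-in for one condensed CSV
with conn, conn.cursor() as cur:
    execute_values(
        cur,
        "INSERT INTO measurements (id, day, value) VALUES %s",        # table/columns are placeholders
        rows,
        page_size=1000,                                               # rows folded into each statement
    )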
Schemas would be nightmarish here.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":61149473,"Users Score":0,"Answer":"Have your considered using Pandas? Yes Pandas does not natively accept xmls but if you use ElementTree from xml (standard library) you should be able to read it into a Pandas data frame and do what you need with it including refining strings and adding more data to the data frame as you get it.","Q_Score":2,"Tags":"python,database","A_Id":61149842,"CreationDate":"2020-04-10T22:16:00.000","Title":"What's the best strategy for dumping very large python dictionaries to a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing something that essentially refines and reports various strings out of an enormous python dictionary (the source file for the dictionary is XML over a million lines long).\nI found mongodb yesterday and was delighted to see that it accepts python dictionaries easy as you please... until it refused mine because the dict object is larger than the BSON size limit of 16MB.\nI looked at GridFS for a sec, but that won't accept any python object that doesn't have a .read attribute.\nOver time, this program will acquire many of these mega dictionaries; I'd like to dump each into a database so that at some point I can compare values between them.\nWhat's the best way to handle this? I'm awfully new to all of this but that's fine with me :) It seems that a NoSQL approach is best; the structure of these is generally known but can change without notice. Schemas would be nightmarish here.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":61149473,"Users Score":0,"Answer":"So I've decided that this problem is more of a data design problem than a python situation. I'm trying to load a lot of unstructured data into a database when I probably only need 10% of it. I've decided to save the refined xml dictionary as a pickle on a shared filesystem for cool storage and use mongo to store the refined queries I want from the dictionary. \nThat'll reduce their size from 22MB to 100K.\nThanks for chatting with me about this :)","Q_Score":2,"Tags":"python,database","A_Id":61218273,"CreationDate":"2020-04-10T22:16:00.000","Title":"What's the best strategy for dumping very large python dictionaries to a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've a dataframe in-memory which has certain identifiers, using those identifiers i want to fetch only relevant data from a very large(500M rows) table persisted in a RDBMS(Sql server).\nWhat's the best way to do this? Definitely don't want to bring the entire table in-memory. And can't loop through either. If it was single column key to lookup, I could still think of building a comma-separated string and doing IN clause against that list but I've multiple fields that are identifiers.\nOnly option I see is saving dataframe into db, doing join on db server and bringing in data back. But seems so clunky. 
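For the "dumping very large python dictionaries" record above, a minimal sketch of the pickle-plus-summary approach the asker settled on: the full dictionary goes to a file on the shared filesystem, and only a small refined result is stored in MongoDB. The path, database and collection names are assumptions:

import pickle
from pymongo import MongoClient

def archive_and_index(big_dict, label):
    path = f"/shared/refined/{label}.pkl"                       # shared-filesystem path is a placeholder
    with open(path, "wb") as fh:
        pickle.dump(big_dict, fh, protocol=pickle.HIGHEST_PROTOCOL)

    # Only a small, queryable summary goes to MongoDB, safely under the 16MB BSON limit.
    summary = {"label": label, "pickle_path": path, "n_keys": len(big_dict)}
    MongoClient()["reports"]["summaries"].insert_one(summary)   # db/collection names are assumptions
    return summary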
\nI've read about dask as an option, but not really sure about that one because bringing entire table in-memory\/disk still doesnt seem like an efficient technique to me","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":85,"Q_Id":61157467,"Users Score":1,"Answer":"Only option I see is saving dataframe into db, doing join on db server and bringing in data back. But seems so clunky.\n\nThis sounds like the most efficient option in terms of compute time.\n\nI've read about dask as an option, but not really sure about that one because bringing entire table in-memory\/disk still doesnt seem like an efficient technique to me\n\nIf your dataframe with the entries that you care about is small then Dask Dataframe probably won't read everything into memory at once. It will likely scan through your database intelligently in small space. The real cost of using Dask here is that you need to move data in and out of the database, which will be slow or fast depending on your database connector.\nI would try it out, and see how it performs.","Q_Score":0,"Tags":"python,sql,sql-server,pandas,dask","A_Id":61164672,"CreationDate":"2020-04-11T13:23:00.000","Title":"Joining in-memory dataframe with very large persisted table in a db?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Following is the configuration:\n1.Python - Python 3.8.1 (tags\/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32\n2.MS Access 2016 MSO(16.0.12624.20348) 64 bit\n3.Microsoft Access Driver (*.mdb, *.accdb) 16.00.4513.1000\n4.Installed Microsoft Access Database Engine 2016 Redistributable\nFacing the error while trying to create a connection:\nimport pyodbc\nconn = pyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Users\\tejas\\Documents\\First.accdb;')\ncursor = conn.cursor()\nError:\nTraceback (most recent call last):\n File \"C:\\Users\\tejas\\eclipse-workspace\\HelloWorld\\DB\\Insert.py\", line 3, in \n conn = pyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Users\\tejas\\Documents\\First.accdb;')\npyodbc.Error: ('HY000', '[HY000] [Microsoft][ODBC Microsoft Access Driver] The database you are trying to open requires a newer version of Microsoft Access. (-1073) (SQLDriverConnect); [HY000] [Microsoft][ODBC Microsoft Access Driver] The database you are trying to open requires a newer version of Microsoft Access. (-1073)')\nI have gone through other similar questions and tried various options but no luck so far. Any help would be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":696,"Q_Id":61158715,"Users Score":0,"Answer":"I found this because I added a table to a MS Access database and set one of the fields to \"Large Number\". This broke my connection with pyodbc, and consequently my python scripts which write to this database. I had no way to revert the file.\nI fixed it by creating a new database and importing all the old tables that didn't have any Large Number fields. I then was able to simply copy and paste the queries and forms over to the new database file. It seems to work fine now with the new file. 
I've learned by lesson and won't be assigning \"large number\" to anything soon in MS Access.\nThanks!","Q_Score":0,"Tags":"python,python-3.x,ms-access,odbc,pyodbc","A_Id":62536343,"CreationDate":"2020-04-11T14:52:00.000","Title":"Connect Python to Access 2016: \"database ... requires a newer version of Microsoft Access\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using flask security to login to my admin panel. It was using the email and password just fine, but recently the login is requiring the id instead of the email. As far as I know the default behavior is to use the email column. When investigating I found that the SQL query is trying to use the id in the where clause instead of the email. My models are set up correctly with peewee so I am confused as to what is going on.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":33,"Q_Id":61232849,"Users Score":1,"Answer":"It was a peewee version problem. The newest version breaks flask-security.","Q_Score":2,"Tags":"python,flask-admin,flask-peewee","A_Id":61238305,"CreationDate":"2020-04-15T15:43:00.000","Title":"Flask Security Is using id instead of email in where clause","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have made a web application that uses Google Sheets as a database. I want to upload my project on github. But since it uses google sheets API for fetching data, I was wondering if is it safe to upload because I will also have to upload API credentials as well?\nI have seen a lot of questions like this on Stack Overflow but none of them addressed this question clearly. Also, my application\/database is nothing confidential or anything like that. My only concern is if uploading API credentials can cause any harm to my google account?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":61262038,"Users Score":2,"Answer":"You should never upload private\/secret credentials to a public forum like GitHub.\nThere will always be people looking to exploit free credentials like that and it's impossible to know the damage that could be caused. What if someone deletes data on your account? What if they spam calls and get your account suspended? \nThe best practice is to put some kind of placeholder in the code that you put on GitHub and add a section to the ReadMe explaining where the credentials need to be provided.","Q_Score":1,"Tags":"python,django,github,google-sheets-api","A_Id":61262110,"CreationDate":"2020-04-16T23:56:00.000","Title":"Can we upload google sheets api credentials on github?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can anyone advise the status of the Sql-Alchemy project for Informix?\nI am relatively new to Python. I have worked with Dbi and SQL-Alchemy for postgres and made both of those work.\nI have spend many hours trying to make Informix work. 
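One common way to implement the "placeholder in the code" advice above is to keep the real key file out of the repository (via .gitignore) and point to it with an environment variable. A small sketch; the variable and file names are made up.

```python
import json
import os

# GOOGLE_SHEETS_CREDENTIALS is an assumed variable name; the key file it
# points to is listed in .gitignore and never committed to the repository.
cred_path = os.environ.get("GOOGLE_SHEETS_CREDENTIALS", "credentials.json")

with open(cred_path) as fh:
    credentials = json.load(fh)
```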
I find the instructions on GitHub for sql-alchemy difficult to follow and I am put off by the comment that says it not yet \"ready\".\nI have made the IfxPy module work and I have also made IfxPyDbi work. I would be happy to work with IFxPyDbi if the execute method returned a dictionary (like IfxPy.fetch_assoc) but I can only make it return tuples.\nDoes anyone have any advice about the best approach to start working on a python project with Informix. What is the best place to start? Am I on the right track with these modules? Am I missing something on SQL-Alchemy for Informix?\nAny advice would be appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":317,"Q_Id":61296078,"Users Score":1,"Answer":"The Infomix Python driver (IfxPy and IfxPyDbi) is reasonably well tested and Informix team is happy to help you if you face problem. At the same time the Python SQL Alchemy adapter for Informix database it is work in progress and not ready for use; we still need to complete the metadata mapping for the Informix database to produce right output. Unfortunately, we don\u2019t have an ETA yet for this task.","Q_Score":1,"Tags":"python,sqlalchemy,informix","A_Id":61329161,"CreationDate":"2020-04-18T20:51:00.000","Title":"SqlAlchemy Informix Status","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone advise the status of the Sql-Alchemy project for Informix?\nI am relatively new to Python. I have worked with Dbi and SQL-Alchemy for postgres and made both of those work.\nI have spend many hours trying to make Informix work. I find the instructions on GitHub for sql-alchemy difficult to follow and I am put off by the comment that says it not yet \"ready\".\nI have made the IfxPy module work and I have also made IfxPyDbi work. I would be happy to work with IFxPyDbi if the execute method returned a dictionary (like IfxPy.fetch_assoc) but I can only make it return tuples.\nDoes anyone have any advice about the best approach to start working on a python project with Informix. What is the best place to start? Am I on the right track with these modules? 
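On the complaint above that IfxPyDbi's execute only returns tuples: a DB-API style cursor exposes cursor.description, so the tuples can be zipped into dictionaries generically. A sketch, assuming IfxPyDbi follows the DB-API here and that conn is an already-open connection; the table name is a placeholder.

```python
def fetch_all_as_dicts(cursor):
    """Return every row as a dict keyed by column name (DB-API style)."""
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

cur = conn.cursor()                    # conn: an open IfxPyDbi connection (assumed)
cur.execute("SELECT * FROM customer")  # table name is a placeholder
rows = fetch_all_as_dicts(cur)
```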
Am I missing something on SQL-Alchemy for Informix?\nAny advice would be appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":317,"Q_Id":61296078,"Users Score":0,"Answer":"It should be noted that I\u2019ve found reasonable success using the windows 64-bit informix odbc driver, and pyodbc.\nPython 3.8, informix 12.\nA bit of advice getting it into a data frame.\npd.DataFrame.from_records() is your friend.","Q_Score":1,"Tags":"python,sqlalchemy,informix","A_Id":63759400,"CreationDate":"2020-04-18T20:51:00.000","Title":"SqlAlchemy Informix Status","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to take an excel file, which contains different cell types like dates, currency ETC and parse it with Python including the cell types.\nI have tried using Pandas, but when I open it with Python using pd.read_excel, all of these cell types are disappearing.\nFor Example - a cell containing '50 USD' (Cell containing currency type) will be shown as '50'.\nIs there a method in Python that is able to read these cells with their cell types saved?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":61302777,"Users Score":0,"Answer":"I think you may be confusing cell values and cell formatting. For example, with 50 USD, Excel stores the numeric value and then applies a currency format for display. So it is correct to read it into pandas as an integer if you want to sum, average, or otherwise analyze that column. \nDates should be automatically parsed and, if they aren't, read_excel has a parse_dates parameter that allows you to do so.\nNow, depending on how you want to output the data after you've manipulated it in pandas, you could have a function that outputs a new dataframe that converts all values to string and applies formats to different columns. Or, if you are working in a notebook, you can use the pandas styling API. 
You could also write the file back to excel with pandas and then apply styles programatically with openpyxl.","Q_Score":0,"Tags":"python-3.x,excel,pandas","A_Id":61309931,"CreationDate":"2020-04-19T10:21:00.000","Title":"Save Excel values while parsing in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"error code: \nclient = pymongo.MongoClient(\"mongodb+srv:\/\/********?retryWrites=true&w=majority\")\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/pymongo\/mongo_client.py\", line 621, in init\nconnect_timeout=timeout)\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/pymongo\/uri_parser.py\", line 463, in parse_uri\nnodes = dns_resolver.get_hosts()\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/pymongo\/srv_resolver.py\", line 102, in get_hosts\n_, nodes = self._get_srv_response_and_hosts(True)\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/pymongo\/srv_resolver.py\", line 83, in _get_srv_response_and_hosts\nresults = self._resolve_uri(encapsulate_errors)\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/pymongo\/srv_resolver.py\", line 79, in _resolve_uri\nraise ConfigurationError(str(exc))\npymongo.errors.ConfigurationError: query() got an unexpected keyword argument 'lifetime'\nversions: \npython 3\npymongo: 3.10.1","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":61320226,"Users Score":0,"Answer":"In my case was solution to use older type of URL:\nclient = pymongo.MongoClient(\"mongodb:\/\/:@...\nSW:\n\nUbuntu 18.04\nPython 2.7.17 \/ 2.7.12\nPymongo 3.11.1\nGoogle Cloud SDK 319.0.0\n\nConnection from test file directly for os was OK, but same code run from Google SDK dev appserver2 failed.\nAfter change URL generated by cloud.mongodb.com Atlas\nin section Cluster -> Connect -> Choose a connection method -> Python - 3.4 or later\nIt finally (after 4 days searching) started working.","Q_Score":0,"Tags":"python,pymongo","A_Id":64961308,"CreationDate":"2020-04-20T10:27:00.000","Title":"pymongo error on ec2 but working on VM on laptop","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a tool to update my bigquery tables to reflect my locally defined schemas. Let's say I'm updating the table users. The steps for doing this are:\n\nCreate a new table with the new schema, called users_update.\nCreate a SELECT query where values are cast (e.g. TIMESTAMP to DATETIME), and new columns added (e.g. CURRENT_TIMESTAMP() as date_updated)\nRun a QueryJob (python) to execute the query with the new table as the destination.\nDelete the table users\nCopy users_update to users\nDelete the table users_update\n\nThe problem is that this query converts every field in the new table to NULLABLE. I've stepped through my script, and I've verified that the new table has correct modes before the QueryJob. 
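Following the answer above, if the data is written back out to Excel, openpyxl can reapply a display format such as "50 USD" without touching the underlying numeric value. A minimal sketch; the file name, column choice and format string are assumptions.

```python
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")  # file name is a placeholder
ws = wb.active

# Reapply a currency-style display format to column B (data rows only);
# the cell value stays numeric, so it can still be summed or averaged.
for (cell,) in ws.iter_rows(min_col=2, max_col=2, min_row=2):
    cell.number_format = '0 "USD"'

wb.save("report_formatted.xlsx")
```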
I also provide the new schema (with correct modes) to the QueryJob.\nAre there some parameters to the QueryJob I need to set, or what am I missing here?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":61320473,"Users Score":1,"Answer":"I had set the job_config.write_disposition to WRITE_TRUNCATE. Removing this solved my issue.","Q_Score":1,"Tags":"python,google-bigquery","A_Id":61322170,"CreationDate":"2020-04-20T10:39:00.000","Title":"Google BigQuery: SELECT with destination table overrides field modes to NULLABLE","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My use case is the following: i developed a small python script which does some time-series analysis and then writes the output into a database table where it's used by an Oracle application. The script resides on the server and is run from the Oracle interface. All is working good but i want to be able to retrieve any potential errors from the script into a database table. Is this possible?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":13,"Q_Id":61323099,"Users Score":1,"Answer":"Yes, you can either:\n\nwrap all of the script's main logic in a try: except Exception as exc: block, and then use the exception handler to post the traceback (traceback.format_traceback()) to your database\nuse a separate wrapper script that runs your script, and in case it fails (returncode != 0), post the stderr output to your database","Q_Score":1,"Tags":"python,linux,database,oracle","A_Id":61323158,"CreationDate":"2020-04-20T12:58:00.000","Title":"Retrieving Python errors into an Oracle database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am reading over 100 million records from mongodb and creating nodes and relationships in neo4j. \nwhenever I run this after executing certain records I am getting pymongo.errors.CursorNotFound: cursor id \"...\" not found\nearlier when I was executing it without \"no_cursor_timeout=True\" in the mongodb query then at every 64179 records I was getting the same error but after looking for this on StackOverflow I had tried this adding no_cursor_timeout=True but now also at 2691734 value I am getting the same error. HOW CAN I GET RID OF THIS ERROR I had also tried by defining the batch size.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1292,"Q_Id":61338852,"Users Score":0,"Answer":"Per the ticket Belly Buster mentioned you should try:\n\nmanually specifying the session to use with all your operations, and\nperiodically pinging the server using that session id to keep it alive on the server","Q_Score":0,"Tags":"python,mongodb,pymongo,pymongo-3.x","A_Id":61355353,"CreationDate":"2020-04-21T08:13:00.000","Title":"getting pymongo.errors.CursorNotFound: cursor id \"...\" not found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried to install the mysqlclient-module for python 3.8. 
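For the try/except option described above, the standard-library call that renders the traceback as a string is traceback.format_exc(). A rough sketch of capturing it and writing it to a log table, assuming the script reaches Oracle through cx_Oracle; the table name, columns and connection string are placeholders.

```python
import traceback

import cx_Oracle  # assumes the script talks to Oracle via cx_Oracle

def main():
    ...  # the actual time-series logic

if __name__ == "__main__":
    try:
        main()
    except Exception:
        err_text = traceback.format_exc()  # full traceback as a string
        conn = cx_Oracle.connect("user/password@host/service")  # placeholder DSN
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO script_errors (logged_at, message) "
            "VALUES (SYSTIMESTAMP, :msg)",  # placeholder table and columns
            msg=err_text,
        )
        conn.commit()
        conn.close()
        raise  # keep the non-zero exit code for the caller
```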
I ran into several problems while doing so, which are described in other posts. I then tried to install it on another PC, where it worked. On the first one it downloaded the mysql.tar.gz file and tried to build it by itself (this failed due to another error). But on the other PC it instead downloaded the 1.4.6-cp38-cp38-win_amd64.whl file and installed it correctly. Both machines run Windows 10, and when running platform.architecture() I receive the same result on both. I even tried installing the wheel manually on the first machine, where I got the \"not supported wheel on this platform\" error. The only difference I found in the system versions is the build, which is 17763 on the one where it worked and 18363 on the one where it didn't (although I believe this shouldn't be the cause).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":61363303,"Users Score":0,"Answer":"After trying around a bit I realized this error only occurs inside the virtual environment in PyCharm on the first machine. I was able to install it outside of the virtual environment and then copy the created files manually into the virtual environment.","Q_Score":0,"Tags":"python","A_Id":61365161,"CreationDate":"2020-04-22T10:51:00.000","Title":"Python wheel not supported on this platform","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm building a simple desktop application in Python (PyQt5) and using MongoDB as my database.\nNow I want to package this script into an executable to distribute, and I'm unsure how to package the database with it so the user doesn't need to install a MongoDB server on their machine. Can I use MongoDB, or is there a different approach for a desktop app?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":61395408,"Users Score":0,"Answer":"The license allows you to distribute MongoDB server binaries, but you still need to manage the process lifetime (start\/stop).","Q_Score":0,"Tags":"database,mongodb,desktop-application,python-packaging","A_Id":61395832,"CreationDate":"2020-04-23T19:27:00.000","Title":"Packaging a MongoDB database with a desktop app without installing the MongoDB server on the user's machine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I changed the database from SQLite3 to PostgreSQL in my Django project. Is it possible to store my new database in the GitHub repository so that after cloning and running the command\npython manage.py runserver\nthe project starts with the whole database?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":586,"Q_Id":61426399,"Users Score":1,"Answer":"You cannot save the database as such; you can instead create fixtures to be run. 
Whenever someone will clone the project, he\/she can simple run those fixtures to populate the database.","Q_Score":1,"Tags":"python,django,git,postgresql,github","A_Id":61426459,"CreationDate":"2020-04-25T13:29:00.000","Title":"Storing the Django project with the PostgreSQL database on GitHub","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to load Amazon fine food data set using SQlite3 in juptyer notebooks but when i try to select it throws an error: Execution failed on sql \nSELECT * FROM Reviews WHERE Score != 3 LIMIT 1000\n\nDatabaseError: no such table: Reviews","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":61438055,"Users Score":0,"Answer":"Check your db connection and db file path...","Q_Score":0,"Tags":"python,sql,amazon","A_Id":61439012,"CreationDate":"2020-04-26T08:47:00.000","Title":"Amazon fine food reviews dataset SQL error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large dump file created by mongodump utility, for example \"test.dump\". I want get one exact collection from this dump, and manually read it into memory for further processing as valid BSON documents. I cannot load full dump in memory due to it's size. \nI do not need physically restore anything to mongo instances! I basically even have none of them up and running. So mongorestore utility could be a solution only if can help me to read my collection from a dump file to memory.\nI'm using Python 3 and pymongo, but can import another third-party libs if necessary or launch any CLI utilities with stdout results.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":596,"Q_Id":61458371,"Users Score":1,"Answer":"I am unfamiliar with any off-the-shelf tools that would extract a collection out of a dump file. That said:\n\nAWS offers x1e.32xlarge instance type with almost 4 TB of memory. How big is your dump exactly?\nSurely the easiest solution is to just load the dump into a MongoDB deployment (which doesn't need much memory or other resources, if you are going to dump one collection back). Hardware is very cheap these days.\nThe BSON format is not that complicated. I expect you'd need to write the tooling for this yourself but if the dump is in fact valid BSON you can manually traverse it using BSON reading code that is part of every MongoDB driver.","Q_Score":0,"Tags":"python,mongodb,pymongo,mongodump,mongorestore","A_Id":61465519,"CreationDate":"2020-04-27T11:58:00.000","Title":"Read mongodump manually","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a group of excel files in a directory. I put the list of file names in a list and iterate over them to concatenate certain columns into a single file. Periodically, one of the files does not have the proper sheet name, and the my notebook throws an error. \nI get it that I could first open the file another way and then query the file to see if it contains the sheet_name. 
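The fixtures approach above maps onto Django's dumpdata and loaddata management commands, which can also be driven from Python. A sketch, assuming it runs inside a configured Django project; the app label and fixture path are placeholders.

```python
# Shell equivalents:
#   python manage.py dumpdata myapp --indent 2 --output fixtures/seed.json
#   python manage.py loaddata fixtures/seed.json
from django.core.management import call_command

# Export the current data to a fixture that can be committed to the repo.
call_command("dumpdata", "myapp", indent=2, output="fixtures/seed.json")

# After cloning and migrating, anyone can repopulate the database from it.
call_command("loaddata", "fixtures/seed.json")
```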
I just want to be Pythonic: I am asking if file lacks sheet_name='Unbilled' go to next file. \n...\nfor file in files_to_process:\n df = pd.read_excel(file, usecols=colNames, sheet_name='Unbilled', index=0, header=4)\n...\nI am doing this in a Jupyter notebook, just FYI","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":61511574,"Users Score":0,"Answer":"As I thought about my question while working on my question in stackoverflow and reading not on point questions, an answer came to me. This works and seems Pythonic to me:\n...\nfor file in files_to_process:\n try:\n df = pd.read_excel(file, usecols=colNames, sheet_name='Unbilled', index=0, header=4)\n except:\n print('The following file lacks sheet_name=Unbilled: ', file)\n pass\n...","Q_Score":0,"Tags":"python,excel,pandas,iterator,try-catch","A_Id":61511575,"CreationDate":"2020-04-29T20:59:00.000","Title":"iterating through list of excel files; how to skip opening file if excel file lacks sheet_name criteria","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I spend each month a lot of time extracting numbers from an application into an Excel-spreadsheet where our company saves numbers, prices, etc. This application is not open-source or so, so unfortunately, sharing the link might not help.\nNow, I was wondering whether I could write a Python program that would do this for me instead? But I'm not sure how to do this, particularly the part with extracting the numbers. Once this is done, transfering this to an Excel spreadsheet is particularly trivial.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":61559710,"Users Score":0,"Answer":"1)For this you can create a general function like getApplicationError(),\n2)in this method you can get the text of the Application Error(create xpath of the application error, and check that if that element is visible than get text) and throw an exception to terminate the Script and you can send that got message into Exception constructor and print that message with Exception.\nAs you are creating this method for general use so you need to call this method every posible place from where the chances are to get Application Error. like just after click(Create,Save,Submit,Delet,Edit, also just entering value in mendatory Fields)","Q_Score":0,"Tags":"python,text,extract","A_Id":61560410,"CreationDate":"2020-05-02T12:55:00.000","Title":"Python: Extracting Text from Applications?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A python script I write generates IDs that I need to add to a list, while another script checks, if the ID already exists in the list. There are no other tables, relations or anything, just a huge, growing list of 6-letter strings. I need to add and read values as fast as possible at the same time. What would be the database of choice in that case? 
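As an alternative to the bare try/except shown above, pandas can list a workbook's sheets up front, so only files that actually contain the 'Unbilled' sheet are read. A small sketch reusing the names from the question (files_to_process, colNames) and using index_col, the current read_excel keyword.

```python
import pandas as pd

frames = []
for file in files_to_process:              # list of paths, as in the question
    workbook = pd.ExcelFile(file)
    if "Unbilled" not in workbook.sheet_names:
        print("Skipping (no 'Unbilled' sheet):", file)
        continue
    frames.append(
        pd.read_excel(workbook, usecols=colNames, sheet_name="Unbilled",
                      index_col=0, header=4)
    )
```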
A NoSQL database like redis?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":61567858,"Users Score":0,"Answer":"Yes, MongoDB or Redis, can do the work","Q_Score":0,"Tags":"python,database,redis,nosql","A_Id":61567870,"CreationDate":"2020-05-02T23:51:00.000","Title":"Which database for a huge list of one type of data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have big excel with lot of sheets and formulas interlinked within the sheets. I need to populate input in one sheet using the code and recalculate all the formulas to get my output from another sheet.\nI am able to do this with apache POI using Java but it is too slow in formula recalculation. Looking for libraries in Python to do the same.","AnswerCount":3,"Available Count":2,"Score":0.3215127375,"is_accepted":false,"ViewCount":12509,"Q_Id":61609377,"Users Score":5,"Answer":"The best ones as I worked with them are XlsxWriter and Xlwings.\nBoth of them are working smoothly and efficiently, and they have good compatibility between Python and Excel.\nXlwings has two versions, Free and Pro (paid version). Free version has the complete ability and can do almost anything you need to work with an Excel file. With the paid version, you can get more functionality and support, which developers (not regular users) do not need most of the time.\nOn the other hand, XlsxWriter is also an excellent choice, and its users' community is growing fast recently. It supports all you need to work with an excel file.\nBoth of them can be installed simply with pip and conda.\nThe other libraries, such as xlrd, xlwt are designed in the past for handling the old version (.xls) files. They are not comparable with the other two libraries that I mentioned.\nOpenPyXl also is a decent library that can handle most of your needs. The library needs more support to grow. In my opinion, it is not well mature yet.\nPandas and pyexcel libraries are also suitable for reading and writing data to an Excel file. I prefer Pandas because it is a mature and fast library that can handle big data. pyexcel is a wrapper API that is not capable as Pandas, and working with it is more complicated.\nPyXLL is a professional library that can handle almost everything a user wants in Excel with Python. One of the famous companies working on Python distributions, Enthought, is supporting the library. Unfortunately, there is no free or community version of it, and you can only choose a 30 days trial of the pro version. After that, you must pay at least $29 per month. It is powerful, but it is an expensive choice for a single developer.\nOf course, there are more Libraries, Wrapers and APIs for handling excel files, but I mentioned the most mature and popular libraries.","Q_Score":7,"Tags":"java,python,excel,apache-poi,openpyxl","A_Id":69057019,"CreationDate":"2020-05-05T08:56:00.000","Title":"What is the best library in python to deal with excel files?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have big excel with lot of sheets and formulas interlinked within the sheets. 
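For the grow-only collection of 6-letter IDs discussed above, a Redis set gives constant-time add and membership checks. A minimal sketch with redis-py; the host, port and key name are assumptions.

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # placeholder host/port

def add_id(new_id: str) -> bool:
    """Add an ID; returns True if it was new, False if it already existed."""
    return r.sadd("generated_ids", new_id) == 1

def id_exists(candidate: str) -> bool:
    return bool(r.sismember("generated_ids", candidate))

add_id("ABC123")
print(id_exists("ABC123"))  # True
```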
I need to populate input in one sheet using the code and recalculate all the formulas to get my output from another sheet.\nI am able to do this with apache POI using Java but it is too slow in formula recalculation. Looking for libraries in Python to do the same.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":12509,"Q_Id":61609377,"Users Score":1,"Answer":"I would like to add some more libraries to Mayanks\n\nmatplotlib for data visualisation\nNumpy\nOpenpyXl\nxlrd\nxlwt\nXlsxWriter\n\nYou can go through each and choose what suits best to your needs","Q_Score":7,"Tags":"java,python,excel,apache-poi,openpyxl","A_Id":61609928,"CreationDate":"2020-05-05T08:56:00.000","Title":"What is the best library in python to deal with excel files?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting the error shown below when running Python script from SQL Server. I already installed ML services and reconfigured parameter external scripts enabled to 1.\nThis is a SQL Server Developer edition installed on Windows 10.\n\nMsg 39111, Level 16, State 1, Procedure sp_execute_external_script, Line 1 [Batch Start Line 28]\n The SQL Server Machine Learning Services End-User License Agreement (EULA) has not been accepted.\n\nHow can I accept it? Can't find any information. I've found only accepting EULA on docker containers, but it's not the same within this situation.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1707,"Q_Id":61653555,"Users Score":0,"Answer":"Had the same issue, was solved with a system restart.","Q_Score":2,"Tags":"python,sql-server,sql-server-ml-services","A_Id":65905492,"CreationDate":"2020-05-07T08:46:00.000","Title":"Accept EULA error when running Python script in SQL Server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting the error shown below when running Python script from SQL Server. I already installed ML services and reconfigured parameter external scripts enabled to 1.\nThis is a SQL Server Developer edition installed on Windows 10.\n\nMsg 39111, Level 16, State 1, Procedure sp_execute_external_script, Line 1 [Batch Start Line 28]\n The SQL Server Machine Learning Services End-User License Agreement (EULA) has not been accepted.\n\nHow can I accept it? Can't find any information. I've found only accepting EULA on docker containers, but it's not the same within this situation.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1707,"Q_Id":61653555,"Users Score":1,"Answer":"We can face with this kind of error if we haven't restarted services after installing ML services. \nActually I've clicked on restart after installation in Configuration Manager several times, but I think due to local account permission it didn't restarted and didn't gave me any error messages. 
After restart with administrative account error is gone.","Q_Score":2,"Tags":"python,sql-server,sql-server-ml-services","A_Id":61678764,"CreationDate":"2020-05-07T08:46:00.000","Title":"Accept EULA error when running Python script in SQL Server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some excel files that I want to modify, the modification is just regarding cells. In these excel files are vba macros that I have to preserve after the modification. I was playing with xlwt library, after to make the modifications and save the files I lost the macros. I just wondering if I can do with pyxl. I would like to know if with one of these libraries could preserve this info or I should ise another one.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16,"Q_Id":61698555,"Users Score":0,"Answer":"I found a way using the library pywin32 to use excel trough windows as a com object.","Q_Score":0,"Tags":"python-3.x,openpyxl,xlwt","A_Id":61822530,"CreationDate":"2020-05-09T14:45:00.000","Title":"Is there a way in pyxl to keep existing macros?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've created an API using Flask-RESTFUL package for Python 3.7.\nI'd like to know what the proper approach would be for returning data to a user based on which columns he should have access to.\nFor example, if I have an \"orders\" table with (order_id, order_date, price, ebay_name, revenue), but want User A and User B to have access to different data. Let's say that on route \/get_data, I return all fields, but User A should have access to all data, while User B only can see the revenue field.\nMy current approach:\nWhile building the JWT token when a user logins in and authenticates, would it be acceptable to store the column names of the \"orders\" table in the actual token? Then, when the user goes to the \/get_data route, I would basically check the column names stored in the JWT and build the MySQL query with the column names found in the token (select all_columns_in_jwt from orders). I worry that exposing the table columns in the JWT token is not the best approach.\nAnother idea would be to check within a user permissions table each time the \/get_data route is hit.\nDoes anyone have a suggestion for implementing this in a more efficient way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":300,"Q_Id":61707553,"Users Score":0,"Answer":"I would do it using user permissions but you could also create separate procedures for either each individual user or groups of users based on their roles\/permissions which create\/replace specific views of the tables allowing the user to see only what you want them to see. 
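The pywin32 approach mentioned in the answer above drives Excel itself over COM, so the workbook's VBA project is never rewritten. A rough, Windows-only sketch; the file path, sheet name and cell are placeholders.

```python
import win32com.client as win32

excel = win32.Dispatch("Excel.Application")
excel.Visible = False
wb = excel.Workbooks.Open(r"C:\path\to\book.xlsm")  # placeholder path

ws = wb.Worksheets("Sheet1")   # placeholder sheet name
ws.Range("B2").Value = 42      # edit cells; the VBA project is untouched

wb.Save()
wb.Close()
excel.Quit()
```

As a COM-free alternative, openpyxl's load_workbook also accepts a keep_vba=True flag for .xlsm files, which may be worth testing.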
For example, upon any changes to the table (or alternatively, upon login for a user\/member of a user group) you could run the procedure to generate (or replace if it already exists) a view on the table for either each user or each user group which you want to restrict access to, and then your API would select data from the view rather than directly from the table\nWith user permissions (this is very similar to how I implement it in my own apps) you could create a permissions table and then a table of the permissions which users possess. Then, on using an API, you could query the user's permissions and using a map which you store somewhere in your code base you could look up the columns based on a combination of the relevant user permission and the relevant table and use the resulting column set as what you select in your query","Q_Score":0,"Tags":"python,mysql,flask,jwt,flask-restful","A_Id":61708552,"CreationDate":"2020-05-10T05:22:00.000","Title":"proper way to grant permissions to specific data JWT MySQL Flask-Restful Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to use MariaDb for my flask app. \nSo I try to make connection like in documentation, are any orm which support this db?\nDoes peewee support MariaDb. How can i use peewee orm for connection to MariaDb?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":443,"Q_Id":61716850,"Users Score":3,"Answer":"Yes, it supports MariaDB in the same way that it supports MySQL. MariaDB speaks the mysql protocol and uses the mysql drivers. So just use MySQLDatabase.","Q_Score":0,"Tags":"python,mariadb,peewee","A_Id":61738558,"CreationDate":"2020-05-10T18:27:00.000","Title":"MariaDb and Peewee","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I store a license key database for my paid software in python and then when users has the license key the software key will check the license key in server's database and if it find ones then the software opens and then when one year is past by then the license key will expire\nI have looked at some examples on the net have come to the conclusion that python is not a great programming language to build software such as this so i am going to do that in JAVA.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":405,"Q_Id":61730238,"Users Score":0,"Answer":"Well as far as I know you have one possibilty to do this well with python, and that would be having a server store keys, because python code is very open so you cannot save it locally. especially storing locally that it is valid for one year will pretty easily be 'hacked', so just do so on database on a server.","Q_Score":0,"Tags":"python,tkinter,random,license-key","A_Id":61730897,"CreationDate":"2020-05-11T12:54:00.000","Title":"How can I store licence key in server database and take that in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am trying to create a binary file and save it into my database. 
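A bare-bones illustration of the permissions-map idea above: look up the user's permissions, map them to the columns they may see, and build the SELECT from that whitelist. The permission names, table and columns are all invented for the sketch.

```python
# Hypothetical mapping from (permission, table) to the columns it exposes.
PERMISSION_COLUMNS = {
    ("orders_full", "orders"): ["order_id", "order_date", "price",
                                "ebay_name", "revenue"],
    ("orders_revenue_only", "orders"): ["revenue"],
}

def build_orders_query(user_permissions):
    cols = []
    for perm in user_permissions:
        cols.extend(PERMISSION_COLUMNS.get((perm, "orders"), []))
    if not cols:
        raise PermissionError("no access to the orders table")
    # Column names come from our own whitelist, never from user input,
    # so interpolating them here does not open an injection hole.
    return "SELECT {} FROM orders".format(", ".join(dict.fromkeys(cols)))

print(build_orders_query(["orders_revenue_only"]))
# SELECT revenue FROM orders
```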
I am using Redis and SQLAlchemy as the framework for my database. I can use send_file to send the actual file whenever the user accesses a URL, but how do I make sure that the file is saved in the route and stays there every time a user accesses the URL?\n\nI am sending the file from a Python client; it's not in my directory.\n\nWhat I need, in a nutshell, is to save the file from the Python client to a database and make it downloadable to the browser client so it is actually available there. Is there any way of doing this? Maybe a different way that I didn't think about?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":500,"Q_Id":61735124,"Users Score":0,"Answer":"I had to encode the data with base64, send it to the database and then decode it and send the file as binary data.","Q_Score":0,"Tags":"python,database,flask,redis,sendfile","A_Id":61975147,"CreationDate":"2020-05-11T16:59:00.000","Title":"Python - save BytesIO in database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Each morning we receive 15-20 separate emails from different sites with data attached in Excel format.\nOne person then cleans the data, collates it and inputs it into a single spreadsheet.\nThis is a daily and very time-consuming task.\nIs there a way to automate this process using python\/sql?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":61746439,"Users Score":0,"Answer":"It depends on how the Excel files are formatted. Are they all the same, or does an actual transformation need to happen to get them into a common format? Are they actual .xls(x) files or rather .csv?\nExcel itself should have enough tools to transform the data into the desired format in an automated way, at least if the actions are the same every time.\nFrom what I understand of your question, you don't actually need the data in a database, just to combine the files into a new one? Excel has the option to import data from several different formats under the \"Data\" menu option.","Q_Score":0,"Tags":"python,sql,excel","A_Id":61746698,"CreationDate":"2020-05-12T07:51:00.000","Title":"How to automate data in email\/excel to SQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm connecting to Teradata in Python using the pyodbc module, and I keep getting this error. Does anyone know why? I'm using the below code:\nimport textwrap\nimport pyodbc\nimport teradata\nimport pandas as pd\ncnx_tera = ('DRIVER={\/Library\/Application Support\/teradata\/client\/16.20\/lib\/tdataodbc_sbu.dylib};'\n....)\ncnx = pyodbc.connect(cnx_tera)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":585,"Q_Id":61761307,"Users Score":0,"Answer":"I was facing a similar problem. I resolved it by using the teradatasql package instead of ODBC; it's very simple to connect using teradatasql. 
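The accepted fix above (base64-encode the bytes before storing them, decode on the way out) looks roughly like this; the file name is a placeholder and the actual database write is left out.

```python
import base64
from io import BytesIO

# Client side: turn the raw bytes into text that is safe to store as a string.
raw_bytes = open("upload.bin", "rb").read()            # placeholder file name
encoded = base64.b64encode(raw_bytes).decode("ascii")  # store this string in the DB

# Server side: decode and wrap in BytesIO so Flask's send_file can serve it.
decoded = base64.b64decode(encoded)
buffer = BytesIO(decoded)
# send_file(buffer, as_attachment=True, download_name="upload.bin")  # Flask >= 2.0
```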
Just put in the right parameters while setting up the connection for teradatasql.\nP.S.= Comment for future reference.","Q_Score":0,"Tags":"python-3.x,macos,odbc,teradata,pyodbc","A_Id":71101821,"CreationDate":"2020-05-12T20:27:00.000","Title":"pyodbc.Error: ('HY000', '[HY000] [Teradata][ODBC] (11560) Unable to locate SQLGetPrivateProfileString function. (11560) (SQLDriverConnect)')","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to run multiple Python versions on SQL Sever 2017?\nIt is possible to do on Windows (2 Python folders, 2 shortcuts, 2 environment paths). But how to launch another Python version if I run Python via sp_execute_external_script in SQL Management Studio 18?\nIn SQL server\\Launchpad\\properties\\Binary path there is the parameter -launcher Pythonlauncher. Probably, by changing this, it is possible to run another Python version.\nOther guess: to create multiple Python folders C:\\Program Files\\Microsoft SQL Server\\MSSQL14.MSSQLSERVER\\PYTHON_SERVICES. But how to switch them?\nOther guess: in C:\\Program Files\\Microsoft SQL Server\\MSSQL14.MSSQLSERVER\\MSSQL\\Binn\\pythonlauncher.config - in PYTHONHOME and ENV_ExaMpiCommDllPath parameters substitute the folder C:\\Program Files\\Microsoft SQL Server\\MSSQL14.MSSQLSERVER\\PYTHON_SERVICES\\ with the folder with new Python version.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":250,"Q_Id":61768356,"Users Score":1,"Answer":"The answer is:\n\nCopy in \n\n\nC:\\Program Files\\Microsoft SQL Server\\MSSQL14.MSSQLSERVER\\\n\nfolder as many Python versions as you want (Python version = folder with Python like PYTHON_SERVICES)\n\nStop Launchpad\nChange in \n\n\nC:\\Program Files\\Microsoft SQL\n Server\\MSSQL14.MSSQLSERVER\\MSSQL\\Binn\\pythonlauncher.config\n\nfile: in PYTHONHOME and ENV_ExaMpiCommDllPath parameters substitute the folder \n\nC:\\Program Files\\Microsoft SQL\n Server\\MSSQL14.MSSQLSERVER\\PYTHON_SERVICES\\\n\nwith the folder with new Python version.\n\nStart Launchpad","Q_Score":1,"Tags":"python,sql-server,microsoft-machine-learning-server","A_Id":61793948,"CreationDate":"2020-05-13T07:06:00.000","Title":"Run multiple python version on SQL Server (2017)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the file EnergyIndicators.xls, which I want to read into python so I can manipulate it.\nHow can I obtain the file path so I can read the file in using the:\npd.read_excel() function?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":426,"Q_Id":61773893,"Users Score":0,"Answer":"Firstly you need to know the path to the file in your file system.\nThen you just simply pass the path to xls file to read_excel function.\n\npd.read_excel('path\/to\/direct\/EnergyIndicators.xls')","Q_Score":1,"Tags":"python,pandas,file,filepath,xls","A_Id":61774139,"CreationDate":"2020-05-13T11:51:00.000","Title":"File path for excel file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to 
write a python script using cx_oracle module for perform oracle database connection. But during the execution, I found it needs oracle instant client to establish a connection. Currently, I'm developing the script in ubuntu but there is a chance to run the same in windows. So I'm confused about the implementation. Could someone please suggest the best way to connect oracle database irrespective the platform","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":59,"Q_Id":61778052,"Users Score":1,"Answer":"You will always need an OS-specific library or client of some kind. Either the Oracle Instant Client or a Java JDK\/JDBC library or both. If you want OS-independence then you would need to interact with the DB through REST calls or something like that instead of making a persistent connection. Otherwise you have to interact with the OS networking stack at some point, which requires OS-specific libraries.","Q_Score":0,"Tags":"python,oracle,cx-oracle","A_Id":61778880,"CreationDate":"2020-05-13T15:03:00.000","Title":"How to connect oracle database in python irrespective of operating system","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a python system that needs to handle many files. Each of the file has more than 10 thousand lines of text data. \nBecause DB (like mysql) can not be used in that environment, when file is uploaded by a user, I think I will save all the data of the uploaded file in in-memory-SQLite so that I can use SQL to fetch specific data from there. \nThen, when all operations by program are finished, save the processed data in a file. This is the file users will receive from the system.\nBut some websites say SQLite shouldn't be used in production. But in my case, I just save them temporarily in memory to use SQL for the data. Is there any problem for using SQLite in production even in this scenario?\nEdit:\nThe data in in-memory-DB doesn't need to be shared between processes. It just creates tables, process data, then discard all data and tables after saving the processed data in file. I just think saving everything in list makes search difficult and slow. So using SQLite is still a problem?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":360,"Q_Id":61791427,"Users Score":0,"Answer":"I'm not familiar with the specific context of your system, but if what you're looking for is a SQL database that is \n\nlight\nAccess is from a single process and a single thread. \nIf the system crashes in the middle, you have a good way to recover from it (either backing up the last stable version of the database or just create it from scratch). \n\nIf you meet all these criteria, using SQLite is production is fine. OSX, for example, uses sqlite for a few purposes (e.g. .\/var\/db\/auth.db).","Q_Score":0,"Tags":"python,sqlite","A_Id":61792249,"CreationDate":"2020-05-14T07:14:00.000","Title":"in-memory sqlite in production with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a python system that needs to handle many files. Each of the file has more than 10 thousand lines of text data. 
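For the temporary, single-process use described above, an in-memory SQLite database is a single connect call. A short sketch; the table layout and file path are placeholders.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # lives only as long as this connection
cur = conn.cursor()
cur.execute("CREATE TABLE lines (lineno INTEGER, content TEXT)")

# Load the uploaded file's lines, then query them with SQL.
rows = [(i, line) for i, line in enumerate(open("upload.txt"))]  # placeholder path
cur.executemany("INSERT INTO lines VALUES (?, ?)", rows)

cur.execute("SELECT content FROM lines WHERE content LIKE ?", ("%error%",))
matches = cur.fetchall()
conn.close()  # tables and data are discarded here
```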
\nBecause DB (like mysql) can not be used in that environment, when file is uploaded by a user, I think I will save all the data of the uploaded file in in-memory-SQLite so that I can use SQL to fetch specific data from there. \nThen, when all operations by program are finished, save the processed data in a file. This is the file users will receive from the system.\nBut some websites say SQLite shouldn't be used in production. But in my case, I just save them temporarily in memory to use SQL for the data. Is there any problem for using SQLite in production even in this scenario?\nEdit:\nThe data in in-memory-DB doesn't need to be shared between processes. It just creates tables, process data, then discard all data and tables after saving the processed data in file. I just think saving everything in list makes search difficult and slow. So using SQLite is still a problem?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":360,"Q_Id":61791427,"Users Score":2,"Answer":"SQLite shouldn't be used in production is not a one-for-all rule, it's more of a rule of thumb. Of course there are appliances where one could think of reasonable use of SQLite even in production environments.\nHowever your case doesn't seem to be one of them. While SQLite supports multi-threaded and multi-process environments, it will lock all tables when it opens a write transaction. You need to ask yourself whether this is a problem for your particular case, but if you're uncertain go for \"yes, it's a problem for me\". \nYou'd be probably okay with in-memory structures alone, unless there are some details you haven't uncovered.","Q_Score":0,"Tags":"python,sqlite","A_Id":61791554,"CreationDate":"2020-05-14T07:14:00.000","Title":"in-memory sqlite in production with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my django application, I used to authenticate users exploiting base django rest framework authentication token. Now I've switched to Json Web Token, but browsing my psql database, I've noticed the table authtoken_token, which was used to store the DRF authentication token, is still there. I'm wondering how to get rid of it. I've thought about 2 options:\n\ndeleting it through migration: I think this is the correct and safer way to proceed, but in my migrations directory inside my project folder, I didn't find anything related to the tokens. Only stuff related to my models;\ndeleting it directly from the database could be another option, but I'm afraid of messing with django migrations (although it shoudn't have links with other tables anymore)\n\nI must clarify I've already removed rest_framework.authtoken from my INSTALLED_APPS","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1081,"Q_Id":61817823,"Users Score":1,"Answer":"You can choose the first option. There are 3 steps should you do to complete uninstall authtoken from your Django app\n\nRemove rest_framework.authtoken from INSTALLED_APPS, this action will tell your Django app to do not take any migrations file from that module\nRemove authtoken_token table, if you will\nFind the record with authtoken app name in table django_migrations, you can remove it.\n\nNote: There are several error occurs in your code, because authtoken module is removed from your INSTALLED_APPS. 
My advice, backup your existing database first before you do above step","Q_Score":2,"Tags":"python,django,django-rest-framework,django-migrations,django-database","A_Id":61818577,"CreationDate":"2020-05-15T11:10:00.000","Title":"Django: safely deleting an unused table from database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python to_sql function to insert data in a database table from Pandas dataframe. \nI am able to insert data in database table but I want to know in my code how many records are inserted . \nHow to know record count of inserts ( i do not want to write one more query to access database table to get record count)?\nAlso, is there a way to see logs for this function execution. like what were the queries executed etc.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":816,"Q_Id":61842924,"Users Score":0,"Answer":"There is no way to do this, since python cannot know how many of the records being inserted were already in the table.","Q_Score":4,"Tags":"python,pandas,pandas-to-sql","A_Id":61843712,"CreationDate":"2020-05-16T20:10:00.000","Title":"Pandas :Record count inserted by Python TO_SQL funtion","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible for 32-bit pyodbc and 32-bit Python to talk to 64-bit MS access database? i searched a lot but could be able to find a specific solution. I can neither change the 64 bit version of MS-ACCESS and nor change 32 bit version of Python and Pyodbc. Please help me..","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":204,"Q_Id":61848897,"Users Score":2,"Answer":"Nope.\nYou need to match bitness between the ODBC driver and application.\nA half-alternative would be an alternate driver, such as UCanAccess + JayDeBeApi","Q_Score":0,"Tags":"python,ms-access,pyodbc","A_Id":61849531,"CreationDate":"2020-05-17T08:21:00.000","Title":"Is it possible for 32-bit pyodbc and 32-bit Python to talk to 64-bit MS access database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using RDS client of AWS Data API to insert data into AuroraDB through a lambda function.\nI have included all parameters into the query, so the query is not escaped.\nI know the parameterized query prevents SQL injection, but I cannot upgrade all my code.\nSo I just want to escape the parameters while making the query.\nIs there any method that the RDS client provides for escaping?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":249,"Q_Id":61898551,"Users Score":0,"Answer":"There is no boto3 method (via the RDS client or otherwise) to achieve this that I am aware of. 
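Returning to the authtoken cleanup steps above (and keeping the "back up first" advice), the table drop and the django_migrations row removal can be done from a Django shell with raw SQL. A hedged sketch; verify the table and app names against your own database before running anything like this.

```python
# Run inside the Django project (e.g. python manage.py shell) after a backup.
from django.db import connection

with connection.cursor() as cur:
    cur.execute("DROP TABLE IF EXISTS authtoken_token;")
    cur.execute("DELETE FROM django_migrations WHERE app = 'authtoken';")
```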
There are however ways of achieving an escaped string in python by using native string replace calls, raw strings, etc.","Q_Score":0,"Tags":"python,aws-lambda,boto3,amazon-rds","A_Id":61902128,"CreationDate":"2020-05-19T18:50:00.000","Title":"Is there any function that generate escaped sql query for AWS RDS Data API?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking into the ways for comparing records from same table but on different databases. I just need to compare and find the missing records.\nI tried out a few methods.\nloading the records into a pandas data frame, I used read_sql. But it is taking more time and memory to complete the load and if the records are large, I am getting a memory error.\nTried setting up a standalone cluster of spark and run the comparison, it is also throwing java heap space error. tuning the conf is not working as well.\nPlease let me know if there are other ways to handle this huge record comparison.\n--update\nDo we have a tool readily available for cross data source comparison","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":61906560,"Users Score":0,"Answer":"If your data size is huge you can use cloud services to run your spark job and get the results. Here you can use aws glue which is serverless and is charged as you go.\nOr if your data is not considerably large and is something one time job then you can use google colab which is free and run your comparision over it .","Q_Score":0,"Tags":"python-3.x,database,pyspark,apache-spark-sql","A_Id":61906771,"CreationDate":"2020-05-20T06:33:00.000","Title":"Possible ways of comparing large records from one table on a database and another table on another database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This should be pretty basic; I am not sure what the error I am making is. I am attempting to query a database using a Python variable in the query and I am able to query successfully with this:\nlocationIDSelectQuery = ('SELECT locationId FROM stateTemplate WHERE id = 1')\ncursor.execute(locationIDSelectQuery)\nand unsuccessfully with this:\nstateTableRowId = 1\ncursor.execute(\"SELECT locationId FROM stateTemplate WHERE id=?\", stateTableRowId)\nWhen I try this in the latter it doesn't work either (statetableRowID).\nSame error message in both instances:\nTraceback (most recent call last):\nline 29, in \n cursor.execute(\"SELECT locationId FROM stateTemplate WHERE id=?\", (stateTableRowId))\nValueError: Could not process parameters\nHow can I use a Python variable in my SQL query?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":61924935,"Users Score":0,"Answer":"I combined the recommendations of @zealous and @juanpa.arrivillaga to arrive at a working solution:stateTableRowId = 1\ncursor.execute(\"SELECT locationId FROM stateTemplate WHERE id=%s\", (stateTableRowId,))\nMy understanding is that it may be preferable for security reasons to use ? instead of %s. 
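The root cause in the traceback above is worth spelling out: DB-API drivers expect the parameters as a sequence (or mapping), so a single value must be wrapped in a one-element tuple, and the placeholder token depends on the driver's paramstyle. A short illustration; the cursor object is assumed to already exist.

```python
state_table_row_id = 1

# mysql.connector / PyMySQL use the "format" paramstyle (%s):
cursor.execute(
    "SELECT locationId FROM stateTemplate WHERE id = %s",
    (state_table_row_id,),  # trailing comma: a one-element tuple, not a bare int
)

# pyodbc and sqlite3 use the "qmark" paramstyle (?):
# cursor.execute("SELECT locationId FROM stateTemplate WHERE id = ?",
#                (state_table_row_id,))
```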
I have yet to get that to work with the database to which I'm connecting.","Q_Score":0,"Tags":"python,sql,variables,parameters","A_Id":61944382,"CreationDate":"2020-05-21T00:01:00.000","Title":"How Can I Use A Python Variable In A SQL Query So That Parameters Are Processed?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want write code fetch data from teradata using python. The code should work while running using spark on cluster as well as local. While running using spark I don't want to open connections on executors. So the plan is to run code on driver using teradatasql package. Since teradatasql packages so library I thought I don't have install teradata library on cluster. \nI packaged the dependencies i.e. teradatasql as egg file and passed it as --py-files. But while running on code teradatasql is not able to read library from egg file. \nOs error: teradatasql.so cannot open shared object file. Not a directory.\nI followed the below steps to package the egg file.\n1. pip install teradatasql --target.\/src # note all my code is in src folder. Doing this step will install teradatasql package in my src folder. it contains teradatasql.so library\n2. In setup.py packages=find_packages('src'), package_data={'teradatasql':['teradatasql.so']}\n3. python setup.py bdist_eggg","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":61931878,"Users Score":0,"Answer":"The teradatasql driver uses a shared library to communicate with the Teradata Database.\nThe teradatasql driver will not work if you repackage just the Python file portion of the driver. The error will occur that you got.\nThe intended use of the teradatasql driver is that you install it into your Python environment with the command: pip install teradatasql\nWe will not be able to effectively support you if you take apart and repackage the teradatasql driver. That is not a supported use case.","Q_Score":0,"Tags":"python,apache-spark,teradata","A_Id":61960405,"CreationDate":"2020-05-21T09:58:00.000","Title":"Running Teradatasql driver for python code using spark","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My table is sqlite3 is created with the following:-\n'CREATE TABLE IF NOT EXISTS gig_program ( gig_program_id VARCHAR(20) PRIMARY KEY );'\nWhen I try to insert data into the table using python 3.8 with the following:-\nsql = 'INSERT INTO gig_program ( gig_program_id ) VALUES ( \"20200524120727\" );'\ncur.execute(sql)\nthe following exception was thrown:-\nnear \"gig_program\": syntax error\nWhen I cut and past the insert command to the sqlite3 console, it works.\nI have also tried using another editor for the program (thinking that there may be hidden characters) but the result is the same.\nI would appreciate help. I have used similar methods in other parts of the program to insert data and they work without issue.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":61982167,"Users Score":0,"Answer":"Thank you for looking into my questions.\nI found that it was actually my mistake. 
The exception was actually for a second sql statement which I missed out the \"FROM\" word.\nThank you everyone for your time.\nHope everyone is doing well.","Q_Score":0,"Tags":"python-3.x,sqlite,syntax","A_Id":61983985,"CreationDate":"2020-05-24T05:56:00.000","Title":"Why does sqlite throws a syntax error in the python program?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"using pyodbc to query a MySQL database with SELECT. I need to determine if the query returned anything or not, the way I found that people were using is the rowcount, however this always returns -1 for me after some testing. I found this on the github wiki for cursor which I think describes my problem.\n\nrowcount\nThe number of rows modified by the last SQL statement.\nThis is -1 if no SQL has been executed or if the number of rows is unknown. Note that it is not uncommon for databases to report -1 immediately after a SQL select statement for performance reasons. (The exact number may not be known before the first records are returned to the application.)\n\nI am wondering if either there is a way around this or if there is another way to do it, thanks.","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":3298,"Q_Id":62029789,"Users Score":4,"Answer":"I always check the length of the return results\nres=newcursor.fetchall()\nif len(res)==0:##means no results","Q_Score":2,"Tags":"python,pyodbc","A_Id":62134714,"CreationDate":"2020-05-26T19:22:00.000","Title":"How to check if pyodbc cursor is empty?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"TLDR: This is not a question about how to change the way a date is converted to a string, but how to convert between the two format types - This being \"%Y\" and \"YYYY\", the first having a % and the second having 4 x Y.\nI have the following date format \"%Y-%M-%D\" that is used throughout an app. I now need to use this within a openpyxl NamedStyle as the number_format option. I cant use it directly as it doesn't like the format, it needs to be in \"YYYY-MM-DD\" (Excel) format.\n\nDo these two formats have names? (so I can Google a little more)\nShort of creating a lookup table for each combination of %Y or %M to Y and M is there a conversion method? Maybe in openpyxl? I'd prefer not to use an additional library just for this!\n\nTIA!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":226,"Q_Id":62042852,"Users Score":1,"Answer":"Sounds like you are looking for a mapping between printf-style and Excel formatting. Individual date formats don't have names. And, due to the way Excel implements number formats I can't think of an easy way of covering all the possibilities. 
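To illustrate the kind of lookup that would be needed, here is a rough, incomplete sketch that maps a few common printf/strftime-style tokens to Excel number-format tokens; the mapping and names below are assumptions rather than an openpyxl feature, and it deliberately ignores the many Excel edge cases mentioned above.

    from openpyxl.styles import NamedStyle

    # Partial map of printf/strftime-style date tokens to Excel number-format tokens.
    TOKEN_MAP = {
        "%Y": "YYYY",  # 4-digit year
        "%y": "YY",    # 2-digit year
        "%m": "MM",    # zero-padded month
        "%d": "DD",    # zero-padded day
        # Time tokens (%H, %M, %S) need more care: Excel reuses "MM"/"mm" for both
        # months and minutes and disambiguates by context.
    }

    def to_excel_number_format(fmt: str) -> str:
        for token, excel_token in TOKEN_MAP.items():
            fmt = fmt.replace(token, excel_token)
        return fmt

    date_style = NamedStyle(name="iso_date",
                            number_format=to_excel_number_format("%Y-%m-%d"))  # -> "YYYY-MM-DD"
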
NamedStyles generally refer to a collection of formatting options such as font, border and not just number format.","Q_Score":0,"Tags":"python,datetime,openpyxl","A_Id":62064090,"CreationDate":"2020-05-27T12:11:00.000","Title":"Convert the string \"%Y-%M-%D\" to \"YYYY-MM-DD\" for use in openpyxl NamedStyle number_format","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small Python project site where a visitor can practice writing SQL code. This code actually runs and returns values. I know that I need to prevent SQL injection, but I'm not sure the best approach since the purpose of the site is that users should be able to write and execute arbitrary SQL code against a real database. \nWhat should I look to do to prevent malicious behavior? I want to prevent statements such as DROP xyz;, but users should still be able to execute code. I think maybe the ideal solution is that users can only \"read\" from the database, ie. they can only run SELECT statements (or variations). But I'm not sure if \"read only\" captures all edge cases of malicious behavior.\n\nNeed to prevent malicious SQL querying, but also need to allow users to execute code\nUsing SQLite now but will probably move to postgres\nI'm strictly using SQL at this point but may want to add Python and other languages in the future\nThe site is built with Python (Flask) \n\nAny ideas or suggestions would be helpful","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":62077249,"Users Score":1,"Answer":"There is no way to prevent SQL injection for a site that takes SQL statements as user input and runs them verbatim. The purpose of the site is SQL injection. The only way you can prevent SQL injection is to not develop this site.\nIf you do develop the site, how can you prevent malicious SQL? Answer: don't let malicious users have access to this site. Only allow trusted users to use it.\nOkay, I assume you do want to develop the site and you do want to allow all users, without doing a lot of work to screen them.\nThen it becomes a task of limiting the potential damage they can do. Restrict their privileges carefully, so they only have access to create objects and run queries in a specific schema.\nOr better yet, launch a Docker container for each individual to have their own private database instance, and restrict the CPU and memory the container can use.","Q_Score":1,"Tags":"python,flask,sqlalchemy,sql-injection","A_Id":62090109,"CreationDate":"2020-05-29T01:34:00.000","Title":"Preventing SQL Injection for online SQL querying","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have had scores of servers running a Python script via Apache and mod_wsgi for years now. 
I recently am building on RHEL7 and have run into an issue where my Python script calling R procedures are bombing out only via Apache stating it cannot find my pip installed Python modules in my Apache log.\nimport pandas as pd\nModuleNotFoundError: No module named 'pandas'\nThis seems to only affect modules getting installed in \/usr\/local\/lib64\/python3.6\/site-packages which is where my custom modules are being installed with pip.\nEven if I append it, it ignores it.\nsys.path.append(r'\/usr\/local\/lib64\/python3.6\/site-packages')\nI manually built mod_wsgi from source.\nI'm ready to abandon mod_wsgi because I have to get my application deployed for my users.\nAny help is greatly appreciated.\nThanks,\nLou","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":169,"Q_Id":62078449,"Users Score":0,"Answer":"It was a file priv issue with \/usr\/lib64\/python3.6 and \/usr\/lib\/python3.6 directories and their child directories. Root ran fine, but running as Apache had no access. You had to chmod-R 755 on both directory trees. Worked fine with Apache after that. Sometimes it\u2019s the simple things we forget to check first.","Q_Score":0,"Tags":"python,python-3.x,apache,mod-wsgi","A_Id":62274064,"CreationDate":"2020-05-29T03:56:00.000","Title":"Running Python in mod_wsgi in Apache Cannot See Python Modules in \/usr\/local\/lib64\/python3.6\/site-packages","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a small .py file that contains some classes to be imported later repetitively into other work. The mentioned .py file uses some small database (in order of KB) in the form of tables. Do you think it is better to store this data in the same .py file or is it better to keep it in a separate .csv file? What are the differences in terms of performance and convenience for later use? A point that I might have to mention is that this data is numerical and is not prone to any possible change in the future.\nSingle items are to be accessed from this data at a time.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":62122636,"Users Score":0,"Answer":"It's better practice to write into a csv file as it improves readability of the entire code. It won't hurt you performance and will really benefit you.","Q_Score":0,"Tags":"python,csv","A_Id":62122677,"CreationDate":"2020-05-31T21:19:00.000","Title":"Should I be saving my data in a .csv file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a small .py file that contains some classes to be imported later repetitively into other work. The mentioned .py file uses some small database (in order of KB) in the form of tables. Do you think it is better to store this data in the same .py file or is it better to keep it in a separate .csv file? What are the differences in terms of performance and convenience for later use? 
A point that I might have to mention is that this data is numerical and is not prone to any possible change in the future.\nSingle items are to be accessed from this data at a time.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":71,"Q_Id":62122636,"Users Score":1,"Answer":"If readability of the python class is not a concern (although it's best practice to keep code as readable as possible), the big question here will be 'how much' data are we talking about? If you have gigabytes of data then you don't want to have this all sitting in memory at the same time (i.e. what would happen if you just kept it as a constant in a .py file). Instead, for such large data, you want to read it from disk (maybe as a csv) as and when you need it.\nOf course storing it in disk has a performance hit because reading from disk is slower than reading from memory. Whether or not the performance hit is okay for your application is up to you to decide.\nA good in-between to ensure readability and good performance might be (assuming you have enough memory) to store the data in a csv, read it all at once on start up and keep it in memory for repeated calls.","Q_Score":0,"Tags":"python,csv","A_Id":62122715,"CreationDate":"2020-05-31T21:19:00.000","Title":"Should I be saving my data in a .csv file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a small .py file that contains some classes to be imported later repetitively into other work. The mentioned .py file uses some small database (in order of KB) in the form of tables. Do you think it is better to store this data in the same .py file or is it better to keep it in a separate .csv file? What are the differences in terms of performance and convenience for later use? A point that I might have to mention is that this data is numerical and is not prone to any possible change in the future.\nSingle items are to be accessed from this data at a time.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":62122636,"Users Score":0,"Answer":"My Suggestion is to store data in Json format. Like in MongoDb, by adapting No-Sql format. This will help you in easily manage and control over the data.","Q_Score":0,"Tags":"python,csv","A_Id":62122740,"CreationDate":"2020-05-31T21:19:00.000","Title":"Should I be saving my data in a .csv file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Getting below error while running this code in Python, If anyone could advise me on this that would be appreciated. Thanks\ndataframe = pandas.read_sql(sql,cnxn)\nDatabaseError: Execution failed on sql 'SELECT * FROM train_data': ('HY000', \"[HY000] [Dremio][Connector] (1040) Dremio failed to execute the query: SELECT * FROM train_data\\n[30038]Query execution error. 
Details:[ \\nVALIDATION ERROR: Table 'train_data' not found\\n\\nSQL Query SELECT * FROM train_data\\nstartLine 1\\nstartColumn 15\\nendLine 1\\nendColumn 24\\n\\n[Error Id: 24c7de0e-6e23-44c6-8cb6-b0a110bbd2fd on user:31010]\\n\\n (org.apache.calcite.runtime.CalciteContextException) From line 1, column 15 to line 1, column 24: ...[see log] (1040) (SQLExecDirectW)\")","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":644,"Q_Id":62131393,"Users Score":0,"Answer":"this is being solved, it says that table does not exist, should give a valid table, in dremio it can be inside a specific space","Q_Score":0,"Tags":"python,sql,pyodbc,dremio","A_Id":62146385,"CreationDate":"2020-06-01T11:43:00.000","Title":"Dremio ODBC with Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Django 3.0 and I was wondering how to create a new database table linked to the creation of each user. In a practical sense: I want an app that lets users add certain stuff to a list but each user to have a different list where they can add their stuff. How should I approach this as I can't seem to find the right documentation... Thanks a lot !!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":62159202,"Users Score":0,"Answer":"This is too long for a comment.\nCreating a new table for each user is almost never the right way to solve a problem. Instead, you just have a userStuff table that maintains the lists. It would have columns like:\n\nuserId\nstuffId\n\nAnd, if you want the stuff for a given user, just use a where clause.","Q_Score":0,"Tags":"python,sql,django,database","A_Id":62159631,"CreationDate":"2020-06-02T18:46:00.000","Title":"A new table for each user created","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I get the below error while trying to fetch rows from Excel using as a data frame. Some of the columns have very big values like 1405668170987000000, while others are time stamp columns having values like 11:46:00.180630.\nI did convert the format of the above columns to text. However, I'm still getting the below error for a simple select statement (select * from df limit 5):\n\nOverflow Error: Python int too large to convert to SQLite INTEGER","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":62176746,"Users Score":0,"Answer":"SQLite INTEGERS are 64-bit, meaning the maximum value is 9,223,372,036,854,775,807.\nIt looks like some of your values are larger than that so they will not fit into the SQLite INTEGER type. You could try converting them to text in order to extract them.","Q_Score":0,"Tags":"python,python-3.x","A_Id":62176901,"CreationDate":"2020-06-03T15:32:00.000","Title":"overflow error in Python using dataframes","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I get the below error while trying to fetch rows from Excel using as a data frame. 
Some of the columns have very big values like 1405668170987000000, while others are time stamp columns having values like 11:46:00.180630.\nI did convert the format of the above columns to text. However, I'm still getting the below error for a simple select statement (select * from df limit 5):\n\nOverflow Error: Python int too large to convert to SQLite INTEGER","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":62176746,"Users Score":0,"Answer":"SQL integer values have a upper bound of 2**63 - 1. And the value provided in your case 1405668170987000000 is simply too large for SQL.\n\nTry converting them into string and then perform the required operation","Q_Score":0,"Tags":"python,python-3.x","A_Id":62176934,"CreationDate":"2020-06-03T15:32:00.000","Title":"overflow error in Python using dataframes","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a automation script, in which openpyxl writes some data into Excel file. \nAnd that Excel file has some Formulas.\nOn next step i want to fetch that formulated cell value in python using openpyxl or Pandas, but OpenpyXl return as None and pandas return as Nan .\nI know about Xlwings, but unfortunately xlwings doesn't work in Linux. \nIf there are any other workaround and working in Linux, please let me know. Thanks in Advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":379,"Q_Id":62201973,"Users Score":0,"Answer":"You probably need to save the document first and then reopen it. You could try using xlwings or the win32 module to save as.","Q_Score":0,"Tags":"python,pandas,excel-formula,openpyxl,xlwings","A_Id":62523683,"CreationDate":"2020-06-04T18:44:00.000","Title":"How to fetch Value from Excel cell with formula ? Openpyxl data_Only flag doesn't work properly","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"MongoEngine is good for defining a document for data validation. Raw pymongo is lacking in this area. Can I use MongoEngine to define a document first, then use pymongo to insertMany documents into an empty collection? If yes, will pymongo's insertMany() do data validation based on the document definition set by mongoengine?\nCan pymongo and mongoengine code be mixed together in the same python script?\nI am using python 3.7, mongodb v4.2.7.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":373,"Q_Id":62228701,"Users Score":1,"Answer":"Your three questions:\n'Can I use MongoEngine to define a document first, then use pymongo to insertMany documents into an empty collection? '\nYes.\n'If yes, will pymongo's insertMany() do data validation based on the document definition set by mongoengine?'\nNo.\n'Can pymongo and mongoengine code be mixed together in the same python script?'\nYes.\nMongoengine based on pymongo. If you do not need very complex function that mongoengine not had. I suggest you just use mongoengine to complete your work. 
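A minimal sketch of the mixed approach described above, with hypothetical model and field names: the mongoengine class only enforces validation when you go through it (for example save()), while a pymongo insert_many() on the same collection bypasses that validation entirely.

    from mongoengine import Document, StringField, IntField, connect
    from pymongo import MongoClient

    class Item(Document):                      # schema definition via mongoengine
        name = StringField(required=True)
        qty = IntField(min_value=0)
        meta = {"collection": "items"}

    connect("exampledb")                       # mongoengine connection
    Item(name="widget", qty=3).save()          # validated insert through mongoengine

    client = MongoClient()                     # raw pymongo on the same collection
    client["exampledb"]["items"].insert_many([
        {"name": "gadget", "qty": 5},
        {"name": "broken", "qty": -1},         # violates Item's rules, but pymongo accepts it
    ])
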
This will save your time most of the time.","Q_Score":1,"Tags":"python,python-3.x,mongodb,pymongo,mongoengine","A_Id":62297013,"CreationDate":"2020-06-06T07:32:00.000","Title":"Can one use mongoengine to define a document first, then use pymongo to insert many documents?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to add a new table in existing database without affecting existing tables and its datas. After adding new model class into models.py, what are steps needs to follow to add new table?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1407,"Q_Id":62237970,"Users Score":1,"Answer":"Once you've added the new model in your models.py, you just need to run db.create_all(). You can add this in your code somewhere (probably directly) after you initialize your app\/database. Be sure to include with app.app_context() if you aren't calling create_all in a view. Hope this helps!","Q_Score":2,"Tags":"python,flask,flask-sqlalchemy","A_Id":62240624,"CreationDate":"2020-06-06T20:59:00.000","Title":"How to add new table in existing database of flask app using sqlalchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Anyone know how you get a dataframe from Quantopian to excel - I try - results.to_excel\nresults are the name of my dataframe","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":62246095,"Users Score":0,"Answer":"Try this :\nName of your DataFrame: Result.to_csv(\"result.csv\")\nhere Result is your DataFrame Name , while to_csv() is a function","Q_Score":0,"Tags":"python,quantopian","A_Id":65324001,"CreationDate":"2020-06-07T13:31:00.000","Title":"How to transfer data from Quantopian to Excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When installing the same Odoo app for multiple databases (tenants), say Sales App, does this mean Odoo will load the same App multiple times in memory, or the App will be loaded once in memory and shared across tenants\/DBs?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":79,"Q_Id":62271693,"Users Score":1,"Answer":"Odoo loads the app in a worker (assuming you are using workers) so the app is loaded once per worker. A worker can handle multiple databases if configured. 
But if you have multiple workers the app might get loaded in each worker as requests are reaching the different workers.\nSome part of the memory consumtion of odoo is the ORM caches and those are per database per worker (and some per user) so that part of the App will be in memory multiple times per worker as you have multiple databases.","Q_Score":1,"Tags":"python,python-3.x,odoo,odoo-13","A_Id":62276211,"CreationDate":"2020-06-08T21:47:00.000","Title":"Odoo - Apps in-memory footprint for multiple databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a dataflow job that reads from pubsub, transforms the PubsubMessage into a TableRow and writes this row to BQ using the FILE_LOAD-method (each 10 minutes, 1 shard). The job sometimes throws a ByteString would be too long-exception. This exception should be thrown when it concats the rows to the Google Cloud Storage (GCS) temp file as you cannot append to a GCS file. If I understand it correctly, it is ok to let this exception happen as the 'large' temp file will be used for loaded to BQ later on and appending will happen to a new file which should succeeded. I would however like to prevent this error from happening without increasing the number of load jobs as I'm getting close to my daily load jobs quota on the project.\nCan I:\n\nincrease the number of shards to 2? Or will that cause the writer to always use 2 shards even if it only needs to write a small number of rows?\nuse setMaxFileSize() along with the number of shards? Or will the writer still use 2 shards even if it doesn't really have too? \n\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":113,"Q_Id":62284178,"Users Score":1,"Answer":"Setting the number of shards to 2 will always use 2 shards.\nHowever, I don't think the \"ByteString would be too long\" error is coming from GCS. That error usually happens when the total output size of a bundle in Dataflow is too large (>2GB), which can happen when a DoFn's output is much larger than its input.\nOne option to work around this would be to break apart the bundles coming in from Pubsub with a GroupByKey. 
You can use a hash of the input or a random number as the key, and set your trigger to AfterPane.elementCountAtLeast(1) to allow elements to be output as soon as they arrive.","Q_Score":0,"Tags":"python,google-cloud-dataflow,apache-beam","A_Id":62288982,"CreationDate":"2020-06-09T13:47:00.000","Title":"BigqueryIO file loads: only use additional shard if required","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Adding UUID as an id column to the DataFrame and push to BigQuery using to_gbq maintains the uniqueness?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":240,"Q_Id":62298118,"Users Score":1,"Answer":"it is the same - UUID in Python generate such unique Id like UUID in BQ","Q_Score":0,"Tags":"python,google-bigquery","A_Id":62299551,"CreationDate":"2020-06-10T07:34:00.000","Title":"How safe to use Python UUID instead of BigQuery GENERATE_UUID() when inserting data?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to ask if it is possible to add more integerfields and charfields to a model if it has already been migrated to an SQLite database.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":62299206,"Users Score":0,"Answer":"Yes after every new addition\/deletion of fields to\/from your model you need to do makemigrations and migrate and Django will modify your sqlite database accordingly.\nIf there is existing data and you add a new field, it has to be either nullable or you need to provide a sensible default for existing data for that new column.\nIf you remove a field, that field will be removed but your data will remain intact without the removed column.","Q_Score":0,"Tags":"python,django","A_Id":62299258,"CreationDate":"2020-06-10T08:34:00.000","Title":"Is it possible to create more Data Fields after Migrations have taken place?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm new to gremlin, I'm trying to use python library gremlinpython to connect to Janus Graph\nand need to know if it is possible to rollback transaction.\nI've found that single traversal is equivalent to single transaction (tinkerPop docs), traversal is created after connecting to gremlin server:\ng = traversal().withRemote(...)\nand all operations with g are executed in a single transaction.\nBut I can't find what will happen if error occurs in any of operations.\nIs it possible to rollback all operations made with g?\ngremlin server allows to do smth like g.tx().rollback() or g.tx().commit() - to rollback or approve transaction, but is it possible to do this using gremlinpython?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":499,"Q_Id":62303567,"Users Score":3,"Answer":"If an error occurs then Gremlin Server will automatically rollback the transaction for you. If it is successful it will automatically commit for you. The semantics of \"rollback\" or \"commit\" are graph database dependent (i.e. 
some graphs may commit partial transactions even in the face of rollback) and in the case of JanusGraph will be further dependent upon the underlying storage engine (e.g. Cassandra, Hbase, etc).","Q_Score":1,"Tags":"transactions,gremlin,tinkerpop,janusgraph,gremlinpython","A_Id":62304384,"CreationDate":"2020-06-10T12:27:00.000","Title":"How to rollback gremlin transaction using gremlinpython?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'am pondering to make use of MariaDB as my new database (because MySQL is starting to ask money for some of the features). I will mainly use MariaDB to automatically update multiple tables, keep track of them, grab\/insert new info in them. I will most likely combine it with multiple other programs such as Python and SAS. \nNow I want to know. Which tools are really vital in using MariaDB? What GUI should I use for example, does MariaDB have her own workbench or should I use the MySQL workbench? I've read that the MySQL workbench is not 100% compatible with MariaDB. \nAny advice's please?\nI appreciate your help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":299,"Q_Id":62305103,"Users Score":0,"Answer":"MySQL Workbench is mostly compatible with MariaDB, as are most tools. There is no separate MariaDB Workbench.\nFor more recent versions, you will need mariabackup instead of xtrabackup.\nJust out of interest, what features of MySQL do you need that require subscription but are free in community edition of MariaDB?","Q_Score":1,"Tags":"python,mysql,database,sas,mariadb","A_Id":62305759,"CreationDate":"2020-06-10T13:41:00.000","Title":"Vital tools for using MariaDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to create a price table by date, I tried to google this for python and django, but still have no idea for this. I don't want to create one to one relationship object like an options. but I would like to create the database associating date and price. Sorry that it may be simple question..\nWould it be solution to create a database by using PostgreSQL, and read by django? or any resource \/ reference can help get me in right direction to access this problem? \nThanks so much","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":62341966,"Users Score":0,"Answer":"Well there is more to it then assigning a price to a date. You will need one or more tables that hold the establishment(hotels) data. These would include the room information as all rooms will not have the same price. Also the price will probably change over time, so you will need to track that. Then there is the reservation information to track. This is just some of the basics. It is not a simple task by any means. 
I would try a simpler project to start with to learn Django and how to get data in and out of it.","Q_Score":0,"Tags":"python,django,database,postgresql","A_Id":62349333,"CreationDate":"2020-06-12T10:06:00.000","Title":"Any idea to create the price table which associate with date in Django(Python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Python with cx_Oracle, and I'm trying to do an INSERT....SELECT. Some of the items in the SELECT portion are variable values. I'm not quite sure how to accomplish this. Do I bind those variables in the SELECT part, or just concatenate a string?\n\n v_insert = (\"\"\"\\\n INSERT INTO editor_trades\n SELECT \" + v_sequence + \", \" + issuer_id, UPPER(\" + p_name + \"), \" + p_quarter + \", \" + p_year +\n \", date_traded, action, action_xref, SYSDATE\n FROM \" + p_broker.lower() + \"_tmp\") \"\"\")\n\nMany thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":62351723,"Users Score":0,"Answer":"With Oracle DB, binding only works for data, not for SQL statement text (like column names) so you have to do concatenation. Make sure to allow-list or filter the variables (v_sequence etc) so there is no possibility of SQL injection security attacks. You probably don't need to use lower() on the table name, but that's not 100% clear to me since your quoting currently isn't valid.","Q_Score":1,"Tags":"python,dynamic,insert","A_Id":62366734,"CreationDate":"2020-06-12T20:05:00.000","Title":"Dynamic Select Statement In Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I had a PostgreSQL table with 5-6 columns and a few hundred rows. Would it be more effective to use psycopg2 to load the entire table into my Python program and use Python to select the rows I want and order the rows as I desire? Or would it be more effective to use SQL to select the required rows, order them, and only load those specific rows into my Python program.\nBy 'effective' I mean in terms of:\n\nMemory Usage.\nSpeed.\n\nAdditionally, how would these factors start to vary as the size of the table increases? Say, the table now has a few million rows?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":546,"Q_Id":62370940,"Users Score":3,"Answer":"Actually, if you are comparing data that is already loaded into memory to data being retrieved from a database, then the in-memory operations are often going to be faster. 
Databases have overhead:\n\nThey are in separate processes on the same server or on a different server, so data and commands needs to move between them.\nQueries need to be parsed and optimized.\nDatabases support multiple users, so other work may be going on using up resources.\nDatabases maintain ACID properties and data integrity, which can add additional overhead.\n\nThe first two of these in particular add overhead compared to equivalent in-memory operations for every query.\nThat doesn't mean that databases do not have advantages, particularly for complex queries:\n\nThey implement multiple different algorithms and have an optimizer to choose the best one.\nThey can take advantage of more resources -- particularly by running in parallel.\nThey can (sometimes) cache results saving lots of time.\n\nThe advantage of databases is not that they provide the best performance all the time. The advantage is that they provide good performance across a very wide range of requests with a simple interface (even if you don't like SQL, I think you need to admit that it is simpler, more concise, and more flexible that writing code in a 3rd generation language).\nIn addition, databases protect data, via ACID properties and other mechanisms to support data integrity.","Q_Score":4,"Tags":"python,sql,postgresql,psycopg2","A_Id":62371705,"CreationDate":"2020-06-14T09:54:00.000","Title":"Is it faster and more memory efficient to manipulate data in Python or PostgreSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to append something on entity while it exists and do not append the entity if it doesn't exist. How will I be able to achieve it? I tried following but it doesn't work the way I want.\ntask = table_service.get_entity('datas', '..com','asss','Hello')\ntable_service.insert_or_replace_entity('tasktable', task)\nIf the entity exists :\nI want to append that hello with something as:\n('datas', '..com','asss','Hello;123')\nIf the entity doesnt exist :\nI want to insert as:\n('datas', '..com','asss','Hello')","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":211,"Q_Id":62375424,"Users Score":0,"Answer":"If you want to append something to one existing entity property's value, it is impossible. Because Azure table storage does not provide any operation to do that. Azure Table storage just provides operations to manage entity and it does not provide operations to manage Azure table entity properties. So you just can set a new value for one existing entity property. \nRegarding how to do that, we can use update or merge operation. But please note that the two operations will cause different result. The update operation replaces the entire entity. Those properties from the previous entity will be removed if the request does not define or include them. The merge operation does not replace the existing entity. 
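(For the merge route, a rough sketch of the read-modify-write pattern this implies follows; the Table service itself has no append primitive. The property name, credentials and error handling below are assumptions based on the older azure-cosmosdb-table TableService API.)

    from azure.cosmosdb.table.tableservice import TableService
    from azure.cosmosdb.table.models import Entity
    from azure.common import AzureMissingResourceHttpError

    table_service = TableService(account_name="myaccount", account_key="...")  # placeholders

    def append_or_insert(table, pk, rk, value):
        try:
            entity = table_service.get_entity(table, pk, rk)
            entity.Data = entity.Data + ";" + value           # client-side "append" to a property
            table_service.insert_or_merge_entity(table, entity)
        except AzureMissingResourceHttpError:                 # entity does not exist yet
            entity = Entity()
            entity.PartitionKey, entity.RowKey, entity.Data = pk, rk, value
            table_service.insert_entity(table, entity)

    append_or_insert("datas", "..com", "asss", "123")
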
Those properties from the previous entity will be retained if the request does not define or include them.","Q_Score":0,"Tags":"python-3.x,azure,azure-cosmosdb,azure-table-storage","A_Id":62441229,"CreationDate":"2020-06-14T16:39:00.000","Title":"Append Entity on Microsoft Azure Table Storage Insert or Replace","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"This might be a simple one, but I couldn't figure it out. I couldn't connect to the database that I created 'clearview' in mysql through.\nPlease advise. Thanks!\nNameError Traceback (most recent call last)\n in \n 3 try:\n----> 4 connection = mysql.connector.connect(host='localhost',\n 5 database='clearview',\nNameError: name 'mysql' is not defined","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":3168,"Q_Id":62390013,"Users Score":2,"Answer":"Have you installed the mysql-connector package? Try installing it first with pip install mysql-connector and then import the package import mysql.connector before running your script.","Q_Score":0,"Tags":"python,mysql,connect,nameerror,connector","A_Id":62390161,"CreationDate":"2020-06-15T14:11:00.000","Title":"how to connect using python to mysql server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This might be a simple one, but I couldn't figure it out. I couldn't connect to the database that I created 'clearview' in mysql through.\nPlease advise. Thanks!\nNameError Traceback (most recent call last)\n in \n 3 try:\n----> 4 connection = mysql.connector.connect(host='localhost',\n 5 database='clearview',\nNameError: name 'mysql' is not defined","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3168,"Q_Id":62390013,"Users Score":0,"Answer":"Which version of python you are using???\na)first check you have import mysql.connector. If it is there and not working then can follow following steps.\nb) It may be that mysql is not properly installed .Check via pip search mysql-connector | grep --color mysql-connector-python .\nc) Install via : pip install mysql-connector-python-rf\nHope it will help.","Q_Score":0,"Tags":"python,mysql,connect,nameerror,connector","A_Id":62390227,"CreationDate":"2020-06-15T14:11:00.000","Title":"how to connect using python to mysql server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This might be a simple one, but I couldn't figure it out. I couldn't connect to the database that I created 'clearview' in mysql through.\nPlease advise. Thanks!\nNameError Traceback (most recent call last)\n in \n 3 try:\n----> 4 connection = mysql.connector.connect(host='localhost',\n 5 database='clearview',\nNameError: name 'mysql' is not defined","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3168,"Q_Id":62390013,"Users Score":0,"Answer":"All comments & suggestions appreciated. I did everything suggested here. And yes, I basically have to install the mysql.connector to every environment I created to be sure. 
I think perhaps, that's why it indicated that mysql is not defined. Lesson learned>..\nAs follow up question... is there such thing as installing every Python extensions globally so I would not reinstall it in several environments. never tried it before, as I perhaps tinkering with different environments not knowing which one I'm currently using. :)\nThanks guys!!!","Q_Score":0,"Tags":"python,mysql,connect,nameerror,connector","A_Id":62398489,"CreationDate":"2020-06-15T14:11:00.000","Title":"how to connect using python to mysql server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running a python script on Heroku which runs every 10 minutes using the Heroku-Scheduler add-on. The script needs to be able to access the last time it was run. On my local machine I simply used a .txt file which I had update whenever the program was run with the \"last run time\". The issue is that Heroku doesn't save any file changes when a program is run so the file doesn't update on Heroku. I have looked into alternatives like Amazon S3 and Postgresql, but these seem like major overkill for storing one line of text. Are there any simpler alternatives out there?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":62394659,"Users Score":0,"Answer":"If anyone has a similar problem, I ended up finding out about Heroku-redis which allows you to make key-value pairs and then access them. (Like a Cloud Based Python Dictionary)","Q_Score":0,"Tags":"python,heroku,amazon-s3,heroku-postgres","A_Id":62411106,"CreationDate":"2020-06-15T18:28:00.000","Title":"Way to store a single line of changing text on Heroku","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm importing data from CSV file into Postgres table using copy from command like this\ncopy tbl_name(col1, col2, col3) from '\/sample.csv' delimiter ',';\nthe command is executed in a transaction(read-write).\nwhile this command is executing, I'm opening a new SQL session in the new terminal, but in this new session, I'm not able to perform select command. It will be stuck until the transaction is committed in the first session. \nThe same issue is happening when in python program I'm copying a file using copy_expert command of Psycopg2, even tho I have created connection_engine with pooling.\nIs it possible to prevent Postgres from blocking the er sessions while copy-ing data into the table?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":333,"Q_Id":62398551,"Users Score":0,"Answer":"That is impossible, unless you use SELECT ... 
FOR UPDATE, which tries to put a row lock on rows that are already locked by the COPY.\nIt is a principle in PostgreSQL that readers don't block writers and vice versa.","Q_Score":0,"Tags":"python,sql,postgresql,sqlalchemy,psycopg2","A_Id":62401805,"CreationDate":"2020-06-15T23:20:00.000","Title":"copy command blocks other sessions in postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm importing data from CSV file into Postgres table using copy from command like this\ncopy tbl_name(col1, col2, col3) from '\/sample.csv' delimiter ',';\nthe command is executed in a transaction(read-write).\nwhile this command is executing, I'm opening a new SQL session in the new terminal, but in this new session, I'm not able to perform select command. It will be stuck until the transaction is committed in the first session. \nThe same issue is happening when in python program I'm copying a file using copy_expert command of Psycopg2, even tho I have created connection_engine with pooling.\nIs it possible to prevent Postgres from blocking the er sessions while copy-ing data into the table?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":333,"Q_Id":62398551,"Users Score":1,"Answer":"This is not a general phenomenon. There is more going on here than you are telling us.\nMaybe the COPY is happening inside the same transaction as something else which acquires a strong lock (like TRUNCATE) on the same table that is being SELECTed from. Or maybe your SELECT is invoking some user-defined-function (perhaps directly, perhaps through a trigger or something) which is acquiring a stronger lock than SELECT usually requires.\nIn the absence of special conditions such as those, I have no problem running COPY and SELECT at the same time.","Q_Score":0,"Tags":"python,sql,postgresql,sqlalchemy,psycopg2","A_Id":62409233,"CreationDate":"2020-06-15T23:20:00.000","Title":"copy command blocks other sessions in postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Apparently it is supposed to be part of Python3 but it says \"bash: sqlite3: command not found\". \nI'm so new to all of this and I'm just trying to follow along with a tutorial on youtube. Any help would be much appreciated.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":5263,"Q_Id":62400753,"Users Score":3,"Answer":"On Linux (or in a Docker container running a Linux OS variant), you can install the command line interface via sudo apt-get install sqlite3 and then run sqlite3.","Q_Score":5,"Tags":"python,python-3.x,sqlite","A_Id":69657658,"CreationDate":"2020-06-16T04:05:00.000","Title":"bash: sqlite3: command not found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"all!\nWe currently use MTPuTTY to SSH into a Red Hat Linux server to run Python programs on a compute cluster. I was wanting to implement read-only access to a PostgreSQL database into these programs using the current credentials used for SSH. 
I know you can't access the password via Linux and it would be (very) bad practice to store the passwords in plaintext (does this still count for folders only accessible by the users?, ie dedicated network storage folders? I suppose that would depend on if admins have access to those folders as well).\nI was hoping there would be some way to use the current SSH session credentials to authenticate with the database and fetch records into the Python program, but can't seem to find a way to do that. Is there a way? Is there an alternative?\nOther options I've thought of:\n\nCreate server to take requests using my credentials, fetch records,\nreturn records to program.\nRun service\/scheduled task to fetch values every [x] minutes, store\nin a file accessible only by my group (though not sure how to do\nthis without storing my password in an accessible manner).\n\nJust really not seeing a safe way to automate this access without exposing passwords... Any help is much appreciated! Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":62417506,"Users Score":0,"Answer":"You should be able to make both Linux and PostgreSQL use PAM.\nBut why not just keep it simple and use peer authentication? The fact that you logged on to the linux account shows you knew the password to do that log on. Isn't that enough?","Q_Score":0,"Tags":"python,linux,postgresql,security,passwords","A_Id":62419280,"CreationDate":"2020-06-16T20:59:00.000","Title":"Using current Linux SSH Credentials to authenticate for database access in python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to save all data which will get from model-derivative API. The problem is every time I export data from model-derivative-API, I will get updated data from the database. I want to predict changes in data where later that will get connect to powerBi to see the changes in the model","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":176,"Q_Id":62433173,"Users Score":0,"Answer":"You used the tag autodesk-bim360, so I assume your files are stored in BIM 360 Docs.\nIn that case you could get the properties for each file version and then compare them yourself to find out what changed between the versions.","Q_Score":1,"Tags":"sql-server,sqlalchemy,python-requests,flask-sqlalchemy,autodesk-bim360","A_Id":62521612,"CreationDate":"2020-06-17T15:54:00.000","Title":"how to save updated data into sql server from model derivative api of autodesk forge","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My use case is fetching data from an external source. After fetching data.I'm mapping with multiple tables in MongoDb which has huge data and generating results. For this use-case which is faster Pymongo or MongoEngine?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":332,"Q_Id":62435850,"Users Score":0,"Answer":"pymongo is a driver. mongoengine is an ODM and it exists on top of the driver.\nAny operation going through mongoengine also goes through the driver. 
Therefore, execution time in pymongo is always going to be less than execution time in pymongo+mongoengine.\nWith that said:\n\nmongoengine provides functionality that pymongo does not implement (the object-data mapping). If you implement the equivalent functionality in your own application that uses pymongo directly, the result could be slower than using mongoengine.\nif a query you are sending is slow for the database to execute, the extra time that mongoengine spends doing its thing can be so small as to be irrelevant.","Q_Score":0,"Tags":"python,mongodb,pymongo,mongoengine","A_Id":62439621,"CreationDate":"2020-06-17T18:22:00.000","Title":"MongoDB : To run faster queries which is better Pymongo or MongoEngine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The situation is:\nI developped a webapp using django (and especially \"django-simple-history\").\nI have a postgres database \"db01\" with a history model \"db01_history\" which is generated\/filled using \"django-simple-history\".\nI accidentally deleted everything from \"db01\"and, sadly, I don't have any db backup.\nMy question is:\nIs there some way to replay all historical records \"db01_history\" (up to a specific ID) onto original database \"db01\" ?\n(In other words, is there a way to restore a db using its historical model up to a specific date\/ID ?)\nGiving db0_history -> db01","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":242,"Q_Id":62436082,"Users Score":1,"Answer":"Fortunately, django-simple-history keeps using your own model's field names and types (but does not keep some constraints). \nThe difference is that there are multiple historical objects for each of your deleted objects. If you use Django default primary key (id) it would be easy for you to group your tables by id and use the latest record as of history_date (the time of recorded history).\nAn exception is that if you use more direct database operations like updates or bulk_creates from model managers you don't have their histories.\nSo you can just configure your project to use a copy of the historical database only having the latest record for each object and then try to do python manage.py dumpdata > dump.json and then revert the database settings to the new database you like and do python manage.py loaddata dump.json.\nTo be concise, yes you may have all your data in your historical database.","Q_Score":0,"Tags":"python,django,database,postgresql,django-models","A_Id":62436912,"CreationDate":"2020-06-17T18:37:00.000","Title":"Is it possible to replay django-simple-history up to a specific ID to restore a delete database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two tables with same columns, and I concatenated them vertically. 
I want to remove duplicates based on col1 but keep records that have latest time_stamp","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17,"Q_Id":62436315,"Users Score":1,"Answer":"sort the dataframe on the value of time stamp descending and the default behaviour of the pandas drop_duplicates method will keep the latest","Q_Score":0,"Tags":"python-3.x,pandas,dataframe,join,duplicates","A_Id":62436554,"CreationDate":"2020-06-17T18:52:00.000","Title":"drop dupliactes but keep records based on a condtion","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a function within a project that could drop staging databases. I already use peewee throughout the project so it would make things easier to not have use pymysql . Is it possible? I've seen it i believe for dropping tables but not a db. \nJust double checking\nI did see a ticket in github from 2014 regarding this issue but wanted to see if there was any new info on this as a possibility.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":303,"Q_Id":62439627,"Users Score":0,"Answer":"Peewee has no provisions for either creating or deleting databases, no will it ever be likely to support that. Check your db vendor for the appropriate methods for doing this.","Q_Score":0,"Tags":"python,peewee","A_Id":62456709,"CreationDate":"2020-06-17T22:51:00.000","Title":"Can peewee drop a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background:\nI have multiple asset tables stored in a redshift database for each city, 8 cities in total. These asset tables display status updates on an hourly basis. 8 SQL tables and about 500 mil rows of data in a year.\n(I also have access to the server that updates this data every minute.)\n\nExample: One market can have 20k assets displaying 480k (20k*24 hrs) status updates a day.\n\nThese status updates are in a raw format and need to undergo a transformation process that is currently written in a SQL view. The end state is going into our BI tool (Tableau) for external stakeholders to look at.\nProblem:\nThe current way the data is processed is slow and inefficient, and probably not realistic to run this job on an hourly basis in Tableau. The status transformation requires that I look back at 30 days of data, so I do need to look back at the history throughout the query.\nPossible Solutions:\nHere are some solutions that I think might work, I would like to get feedback on what makes the most sense in my situation.\n\nRun a python script that looks at the most recent update and query the large history table 30 days as a cron job and send the result to a table in the redshift database.\nMaterialize the SQL view and run an incremental refresh every hour\nPut the view in Tableau as a datasource and run an incremental refresh every hour\n\nPlease let me know how you would approach this problem. 
My knowledge is in SQL, limited Data Engineering experience, Tableau (Prep & Desktop) and scripting in Python or R.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":215,"Q_Id":62441546,"Users Score":2,"Answer":"So first things first - you say that the data processing is \"slow and inefficient\" and ask how to efficiently query a large database. First I'd look at how to improve this process. You indicate that the process is based on the past 30 days of data - are the large tables time sorted, vacuumed and analyzed? It is important to take maximum advantage of metadata when working with large tables. Make sure your where clauses are effective at eliminating fact table blocks - don't rely on dimension table where clauses to select the date range.\nNext look at your distribution keys and how these are impacting the need for your critical query to move large amounts of data across the network. The internode network has the lowest bandwidth in a Redshift cluster and needlessly pushing lots of data across it will make things slow and inefficient. Using EVEN distribution can be a performance killer depending on your query pattern. \nNow let me get to your question and let me paraphrase - \"is it better to use summary tables, materialized views, or external storage (tableau datasource) to store summary data updated hourly?\" All 3 work and each has its own pros and cons. \n\nSummary tables are good because you can select the distribution of the data storage and if this data needs to be combined with other database tables it can be done most efficiently. However, there is more data management to be performed to keep this data up to date and in sync.\nMaterialized views are nice as there is a lot less management action to worry about - when the data changes, just refresh the view. The data is still in the database so it is easy to combine with other data tables but since you don't have control over storage of the data these actions may not be the most efficient.\nExternal storage is good in that the data is in your BI tool so if you need to refetch the results during the hour the data is local. However, it is now locked into your BI tool and far less efficient to combine with other database tables. \n\nSummary data usually isn't that large so how it is stored isn't a huge concern and I'm a bit lazy so I'd go with a materialized view. Like I said at the beginning, I'd first look at the \"slow and inefficient\" queries I'm running every hour.\nHope this helps","Q_Score":1,"Tags":"python,r,amazon-redshift,database-administration","A_Id":62478268,"CreationDate":"2020-06-18T02:49:00.000","Title":"How to efficiently query a large database on a hourly basis?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question. I would like to create an app to run some sql queries (I'll use sqlite3) and then show the data with tkinter, but I'd like to add 1 more column to the results where the user can input some data and then save it to xlsx. What is the problem? I can't figure out a method to add that column for user input. I'll show query results as a data frame.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":62485477,"Users Score":0,"Answer":"It will be better if you provide a minimum reproducible example. 
You can create Entry widget to add extra column to tkinter. Get the data from .get() method and add it to sqlite database using Update. Then you can read whole database as pandas dataframe and export is to excel using to_excel method.","Q_Score":0,"Tags":"python,sql,excel,sqlite,tkinter","A_Id":62498724,"CreationDate":"2020-06-20T11:54:00.000","Title":"Python sqlite3 tkinter. Run query, add column and save to xlsx","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to flask. I am developing a web application using flask and postgres. I have already designed database. I know sqlalchemy is orm. Do I still need to use sqlalchemy in flask if my database was already designed without sqlalchemy. I have to use that database for fetching and updating values. While going further will sqlalchemy is useful to me? or can I simply use db connector and proceed?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":658,"Q_Id":62509725,"Users Score":1,"Answer":"Using sqlalchemy will help you fetching\/inserting data very easy, no matter you designed your db manually, you just need to define your design and then instead of writing multiple lines, you write a line and everything done.\nIt also help you handle errors and a lot more.\nStrongly suggest you to use it.","Q_Score":1,"Tags":"python,postgresql,flask,sqlalchemy","A_Id":62509888,"CreationDate":"2020-06-22T07:50:00.000","Title":"Do I need to use sqlalchemy if my database was already designed manually?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm implementing python an application to capture change on DocumentDB using Change Stream feature my design is watching change on all collections in the target database and publish to some message queue to do some processing.\nMy question is currently DocumentDB support MongoDB API version 3.6 which not support watch change on DB level. Is there a way to watch the change stream on the DB level on the current DocumentDB version.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":334,"Q_Id":62542385,"Users Score":2,"Answer":"You can enable change streams at the collection, database, and cluster level.\nHowever, at the moment to seek for the changes it happens at the collection level.\nYou need to setup your code to seek for changes in every collection that is being watched.","Q_Score":0,"Tags":"java,python,mongodb,aws-documentdb,changestream","A_Id":62543244,"CreationDate":"2020-06-23T19:29:00.000","Title":"Using DocumentDB change stream on multiple collections?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem with a python code executed from Excel using call from vba xlwing. Problem is that script takes a long time to execute the tasks (but it's normal due to amount of data).\nafter 90 secs excel shows a popup with error: excel is waiting for python complete ole action, and if click ok, after 10 secs message come back again. 
Is there any way to handle this error and fixed it? python code is running correctly, but it takes a long time to do all tasks.\nthanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":345,"Q_Id":62550707,"Users Score":0,"Answer":"My suggestion: read all data from excel into python, treat it, and create a new excel with the result, or overwrite the existing file. It will be much more smooth.","Q_Score":0,"Tags":"python-3.x,excel,vba,xlwings","A_Id":62550892,"CreationDate":"2020-06-24T08:22:00.000","Title":"vba call from excel to python code returns excel error waiting ole action","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"im new with using openpyxl.\nI am trying to get value from cell, however value in that cell is linked from another sheet and value what i get is\n\n*=t!U2:U1000*\n\ninstead of expected value\n\n*1000.2\u20ac*\n\nmy approach method to value is\nws1.cell(row = i, column = j).value\nthanks for advice","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":78,"Q_Id":62556007,"Users Score":-1,"Answer":"Answer:\nwhile loading file is needed add to parameter data_only\n\nwb1 = xl.load_workbook(filename, data_only=True)\n\nThis solved my problem.","Q_Score":0,"Tags":"python,python-3.x,excel,excel-formula,openpyxl","A_Id":62556891,"CreationDate":"2020-06-24T13:17:00.000","Title":"How to get value from cell which is linked from another sheet, openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a query with respect to using external libraries like delta-core over AWS EMR notebooks. Currently there isn\u2019t any mechanism of installing the delta-core libraries through pypi packages. The available options include.\n\nLaunching out pyspark kernel with --packages option\nThe other option is to change the packages option in the python script through os configuration, but I don\u2019t see that it is able to download the packages and I still get import error on import delta.tables library.\nThird option is to download the JARs manually but it appears that there isn\u2019t any option on EMR notebooks.\n\nHas anyone tried this out before?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":516,"Q_Id":62574687,"Users Score":0,"Answer":"You can download the jars while creating EMR using bootstrap scripts.\nYou can place the jars in s3 and pass it to pyspark with --jars option","Q_Score":0,"Tags":"python,amazon-web-services,amazon-emr","A_Id":62586305,"CreationDate":"2020-06-25T11:52:00.000","Title":"Accessing delta lake through Pyspark on EMR notebooks","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been looking across the internet to find some sort of library that will connect an Oracle database to a Python script, but so far I have been unsuccessful. If anyone has found a great library for Oracle, preferably first party, then please give me documentation. I'm working on a project now that particularly needs this integration. 
I've already seen lots of documentation for MySQL, which is maintained by Oracle, but that's about it.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":165,"Q_Id":62612782,"Users Score":0,"Answer":"Oracle provides the python library cx_Oracle. Like everything else produced by Oracle, the documentation leaves some to be desired and the drivers are an absolute pain in the ass, but it works.","Q_Score":0,"Tags":"python,sql,oracle","A_Id":62613581,"CreationDate":"2020-06-27T16:41:00.000","Title":"Is there a Python library for Oracle SQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create something which keeps track of which tenants live in what apartment and does some other things like calculating ROI and keeping track of costs.\nWhat is recommended to do in this case? Use a class which would be called Apartments and then have tenants that live within a certain instance \/ apartment? Or would it be better to keep track of something like that with a database? Or perhaps a combination of those two?\nI don't really know what the possibilities are and I tried googling \/ stackoverflowing it but I couldn't really find an answer to my question.\nPS. the programming language I'll be using is Python.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":295,"Q_Id":62719650,"Users Score":1,"Answer":"The first question you should be asking yourself is whether you want your data to be persistent meaning - do you want your data to be gone once you stop your application or not?\nIf you don't need persistence, then you don't need database.\nIf the question is - yes, I want persistence - then we can dig deeper. Do you need the data to be accessible by different users, possibly doing modifications to the data at the same time? If no, then don't use a database. Store the data in some file in a format that is easy to parse.\nIf - again - we say yes, then databases are one way to solve the issue. Next question would be - which database do you want? Relational, key-value, document, graph? (Usually if you don't know the answer to this question mean you want a relational one...)\nAs Sushant mentioned a minute ago - you will also need some ways to represent the data in your application. You will have some structures (let's call it a model), and you will load the data into your model and store the data from into the database. ORM tools do that for you, but if it is the first time for you to see such things, you might also be good without.\nHaving said that, you are up to a long journey. 
Remember to have fun.","Q_Score":0,"Tags":"python,database,class","A_Id":62719842,"CreationDate":"2020-07-03T16:50:00.000","Title":"Should I use a database or a class?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python application with flask but I can't connect to the mysql database that is on Azure.\nMy config.py\nSQLALCHEMY_DATABASE_URI = 'mysql:\/\/user@mysqlsvr:pass1234@mysqlsvr.mysql.database.azure.com:3306\/flask_db'\nAny suggestion?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":477,"Q_Id":62723170,"Users Score":1,"Answer":"I downloaded the certificate and it worked\nFollow\nSQLALCHEMY_DATABASE_URI = 'mysql:\/\/user@mysqlsvr:pass1234@mysqlsvr.mysql.database.azure.com:3306\/flask_db?ssl_ca=BaltimoreCyberTrustRoot.crt.pem'","Q_Score":2,"Tags":"python,flask-sqlalchemy,azure-mysql-database","A_Id":62727819,"CreationDate":"2020-07-03T22:02:00.000","Title":"SQLALCHEMY_DATABASE_URI with Azure Mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm running a Python script in an AWS Lambda function. It is triggered by SQS messages that tell the script certain objects to load from an S3 bucket for further processing.\nThe permissions seem to be set up correctly, with a bucket policy that allows the Lambda's execution role to do any action on any object in the bucket. And the Lambda can access everything most of the time. The objects are being loaded via pandas and s3fs: pandas.read_csv(f's3:\/\/{s3_bucket}\/{object_key}').\nHowever, when a new object is uploaded to the S3 bucket, the Lambda can't access it at first. The botocore SDK throws An error occurred (403) when calling the HeadObject operation: Forbidden when trying to access the object. Repeated invocations (even 50+) of the Lambda over several minutes (via SQS) give the same error. However, when invoking the Lambda with a different SQS message (that loads different objects from S3), and then re-invoking with the original message, the Lambda can suddenly access the S3 object (that previously failed every time). All subsequent attempts to access this object from the Lambda then succeed.\nI'm at a loss for what could cause this. This repeatable 3-step process (1) fail on newly-uploaded object, 2) run with other objects 3) succeed on the original objects) can happen all on one Lambda container (they're all in one CloudWatch log stream, which seems to correlate with Lambda containers). So, it doesn't seem to be from needing a fresh Lambda container\/instance.\nThoughts or ideas on how to further debug this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":192,"Q_Id":62725013,"Users Score":2,"Answer":"Amazon S3 is an object storage system, not a filesystem. It is accessible via API calls that perform actions like GetObject, PutObject and ListBucket.\nUtilities like s3fs allow an Amazon S3 bucket to be 'mounted' as a file system. However, behind the scenes s3fs makes normal API calls like any other program would.\nThis can sometimes (often?) lead to problems, especially where files are being quickly created, updated and deleted. 
It can take some time for s3fs to update S3 to match what is expected from a local filesystem.\nTherefore, it is not recommended to use tools like s3fs to 'mount' S3 as a filesystem, especially for Production use. It is better to call the AWS API directly.","Q_Score":0,"Tags":"python,amazon-s3,aws-lambda,python-s3fs","A_Id":62764794,"CreationDate":"2020-07-04T03:40:00.000","Title":"How to diagnose inconsistent S3 permission errors","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have three mongo nodes which I specify in the url which I use in the MongoClient. It looks something like this \"mongodb:\/\/A,B,C,D\". I need the read preference as Secondary, for which I used SECONDARY_PREFERRED. Everything works as expected, I am able to connect to the secondary node without any problem. I get the connection using -\nmongo_con = MongoClient(db_url, read_Preference=ReadPreference.SECONDARY_PREFERRED)\nI was wondering what the impact is if I don't send the 'replicaSet' optional parameter while getting the connection. What will be the difference in the connection, in case I use the repicaSet and send the correct replica set name, what if I send an incorrect replica set name? How will it impact my connection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":62751643,"Users Score":0,"Answer":"I was wondering what the impact is if I don't send the 'replicaSet' optional parameter while getting the connection.\n\nThis depends on the driver (pymongo\/node etc.) you are using and how many seeds you specify in the connection string.\nWhen given a single seed, some drivers assume you want a direct connection to that server only and some drivers discover the topology of the deployment and, if it is a replica set, would discover all of the other nodes and \"connect to the replica set\".\nThe behavior is currently being standardized across drivers via the directConnection URI option.\nWhen you provide replicaSet URI option, you are forcing the second behavior of discovering the replica set in all drivers. Providing multiple seeds also works as long as you are not in a single-node replica set.\n\nWhat will be the difference in the connection, in case I use the repicaSet and send the correct replica set name, what if I send an incorrect replica set name? How will it impact my connection?\n\nIf you specify replicaSet and give an incorrect set name, the driver will filter out all servers that have the correct set name (or from the driver's perspective, your instructed replicaSet is the correct one and the one reported by servers would be wrong). 
Your application will fail to find any usable servers.","Q_Score":0,"Tags":"python,mongodb,pymongo","A_Id":62760451,"CreationDate":"2020-07-06T07:58:00.000","Title":"pymongo MongoClient without replicaSet parameter","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm adding a new column to a table of type JSONB and I can not find the formatting necissary to set the default value to anything other than a empty object.\nHow would I accomplish this?\nCurrently my code looks akin to:\nnew_column = db.Column(JSONB, server_default=db.text(\"'{}'\"), nullable=False)\nI've tried a few ways I thought might be the intuitive way of handeling it. But so far they just cause an error when being run.\nExample 1:\nnew_column = db.Column(JSONB, server_default=db.text(\"'{'enabled': True}'\"), nullable=False)\nExample 2:\nnew_column = db.Column(JSONB, server_default=db.text(\"'{enabled: True}'\"), nullable=False)","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":515,"Q_Id":62786493,"Users Score":0,"Answer":"Ilja Everil\u00e4's answer was the answer. In python I had to format the line of code in the style of:\nserver_default=db.text(\"'{\\\"enabled\\\": \\\"true\\\"}'\")\nTripple quotes work too if you don't want escape characters, but this looks nicer to my eyes\nStill unsure how I would pass the booleen value of true instead of a string. But that is another question for another time.","Q_Score":0,"Tags":"python,postgresql,sqlalchemy","A_Id":62835465,"CreationDate":"2020-07-08T02:09:00.000","Title":"How to set a more complex default value for a JSONB column in sqlAlchemy (postgreSQL)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Edit : My database is now connected I uninstalled and reinstalled the mysql-connector using anaconda prompt\nPreviously it was Bad Handshake\nand then later ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)\ni reinstalled a different version of sql and now it says\nAttributeError: module 'mysql.connector' has no attribute 'connect'","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":62828882,"Users Score":0,"Answer":"Attribute error is coming from an out of date package missing an attribute.\nTry running:\npip install mysql-connector-python","Q_Score":0,"Tags":"python,mysql,database,mysql-python","A_Id":62829050,"CreationDate":"2020-07-10T06:59:00.000","Title":"I'm having different errors while trying to connect python to mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Edit : My database is now connected I uninstalled and reinstalled the mysql-connector using anaconda prompt\nPreviously it was Bad Handshake\nand then later ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)\ni reinstalled a different version of sql and now it says\nAttributeError: module 'mysql.connector' has no attribute 'connect'","AnswerCount":2,"Available 
Count":2,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":62828882,"Users Score":0,"Answer":"The firewall of your server may not be turned off\n1.1 If it is CentOS 7, you can use systemctl stop firewalld to temporarily turn off the firewall to solve the problem\nHave you authorized a mysql user who can log in remotely\n2.1 You can also use the authorization statement grant all on . to \"USERNAME\"@\"%\" identified by \"PASSWORD\" in mysql\n\n\u5e0c\u671b\u4f60\u80fd\u770b\u61c2\uff0c\u6211\u4e5f\u4e0d\u77e5\u9053\u8fd9\u4e2a\u7ffb\u8bd1\u51c6\u786e\u4e0d\u51c6\u786e\uff0c\u795d\u4f60\u597d\u8fd0","Q_Score":0,"Tags":"python,mysql,database,mysql-python","A_Id":62829039,"CreationDate":"2020-07-10T06:59:00.000","Title":"I'm having different errors while trying to connect python to mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"SQLalchemy is two packages in to one; Core & ORM. ORM is built on top of SQLalchemy.\nFor example, I\u2019m receiving data from an API and I\u2019m inserting it in to a SQL database via SQLalchemy. Should I use Core or ORM?\nWhen is it best to use one over the other?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":583,"Q_Id":62830345,"Users Score":0,"Answer":"Its mostly preference, but its possible that SQLAlchemy might (or already did?) deprecate Core at some time in the future. (They are deliberately ambiguous about it I guess)\nIf you're mostly using raw sql, it seems that SQLAlchemy is simpler and more straight-forward to implement.\nIf you wanna fiddle around all day with Models, Schemas, Classes, ModelSerializers and abstract everything and never deal with raw query statements again, you should probably go with ORM. But I think its too much busywork, especially if you are proficient at the sql language anyway.","Q_Score":2,"Tags":"python,sql","A_Id":62837453,"CreationDate":"2020-07-10T08:35:00.000","Title":"When to use SQLalchemy core vs SQLalchemy ORM?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that scans a stock market for transactions and saves them in the SQL database. The script works on its own if I run it directly python3 fetchTradesLuno24Hours.py and this updates the database. However, if I run it as a service it stops updating the database. If I run systemctl status lunoDatabase.service it shows that service successfully run. The service is triggered by lunoDatabase.timer that runs it every several hours. If I run systemctl status lunoDatabase.timer or systemctl list-timers I see that the timer works and the script is triggered successfully. The service reports that the python script run-time is as expected (around 6 minutes) and the code exited successfully.\nBefore I tried running python script in the infinite loop and that worked fine and the database was updated correctly from the service. When I added timer it stopped updating the database. I would like the service to update the SQL database and to be trigger by the timer. How can I fix that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":62832806,"Users Score":0,"Answer":"The problem was in the python script. 
Since I address the python file from the root folder, I should have specified the absolute path the database in database.py.\ndb = sqlite3.connect('home\/user\/bot\/transactions.db')\nand not\ndb = sqlite3.connect('transactions.db')\nThank you, everyone!","Q_Score":0,"Tags":"python,sql,linux,sqlite","A_Id":62834647,"CreationDate":"2020-07-10T10:55:00.000","Title":"Linux Service running Python script does not update SQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use PostgreSQL as my database in my Django project(my os is windows). I can't run PostgreSQL's commands in the command prompt while venv is activated and out of venv I have access to those commands because I've added PostgreSQL bin directory path to the PATH environmental variable. How can I do this with virtualenv so I can run those commands inside virtualenv?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":200,"Q_Id":62849692,"Users Score":0,"Answer":"you can open a second cmd\nand use it for Postgre","Q_Score":0,"Tags":"django,windows,postgresql,python-venv","A_Id":62849993,"CreationDate":"2020-07-11T13:20:00.000","Title":"PostgreSQL commands not working in virtualenv","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using the teradatasql python package and i'm unable to find any documentation on how to pass named parameters into the cursor.execute()\nfor example:\nselect ?month_end_dt , ?month_begin_dt\nI'd like to be able to pass variables from a dictionary into the named parameters.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":269,"Q_Id":62875697,"Users Score":0,"Answer":"The teradatasql package does not support named parameters. It only supports unnamed question-mark parameter markers.","Q_Score":0,"Tags":"python,parameters,teradata","A_Id":62880653,"CreationDate":"2020-07-13T12:18:00.000","Title":"Teradatasql Python: Unable to pass Named Parameters into script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to understand how the bind works when using %(variable)s\nIn my case, my query is: engine.execute(\"DELETE FROM testing WHERE test_id in %(ids)s, ids=tuple([1,2,3])))\nIf I remove the (ids) leaving only %s then I get a not all arguments converted during string formatting\nWhy is this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":62885749,"Users Score":0,"Answer":"couple ways of using 'named parameter'. 
one of them is using dictionary\nengine.execute(\"SELECT * FROM testing WHERE test_id = %(ids)s, {\"ids\": ids_value,})","Q_Score":1,"Tags":"python,mysql,python-3.x,sqlalchemy,mysql-python","A_Id":62886066,"CreationDate":"2020-07-13T23:17:00.000","Title":"python mysql understanding parameter bind","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This sounds counter intuitive, but what would be the pros and cons of updating the airflow database by deploying a job to airflow?\nI am considering this as an option to set up role based accesses by directly making updates to the database, and because Airflow is a scheduler, it would make sense to do schedule this process on Airflow.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":62886697,"Users Score":0,"Answer":"We actually do this to purge down the logs table periodically along with some other general Airflow housekeeping. The downsides aren't too bad assuming you tested your code elsewhere first and you're not running the process on an extremely small schedule.\nI would recommend that you read the airflow.models module and classes, and how they're used, and that you leverage them as examples for your process; it'll help to make sure you're doing things correctly and save you from needless duplication.","Q_Score":0,"Tags":"python-3.x,airflow,rbac","A_Id":63004798,"CreationDate":"2020-07-14T01:25:00.000","Title":"Updating the airflow database by scheduling a job to Airflow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a aggregated data table in bigquery that has millions of rows. This table is growing everyday.\nI need a way to get 1 row from this aggregate table in milliseconds to append data in real time event.\nWhat is the best way to tackle this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":62947589,"Users Score":0,"Answer":"BigQuery is not build to respond in miliseconds, so you need an other solution in between. It is perfectly fine to use BigQuery to do the large aggregration calculation. But you should never serve directly from BQ where response time is an issue of miliseconds.\nAlso be aware, that, if this is an web application for example, many reloads of a page, could cost you lots of money.. as you pay per Query.\nThere are many architectual solution to fix such issues, but what you should use is hard to tell without any project context and objectives.\nFor realtime data we often use PubSub to connect somewhere in between, but that might be an issue if the (near) realtime demand is an aggregrate.\nYou could also use materialized views concept, by exporting the aggregrated data to a sub component. For example cloud storage -> pubsub , or a SQL Instance \/ Memory store.. 
or any other kind of microservice.","Q_Score":0,"Tags":"python,jenkins,google-bigquery,real-time,data-dump","A_Id":62949198,"CreationDate":"2020-07-17T05:23:00.000","Title":"How to get individual row from bigquery table less then a second?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have query generated by SQLAlchemy that is craeting long aliases. Is there a way to solve \"ORA-00972: identifier is too long\" from Python side?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":302,"Q_Id":62958462,"Users Score":0,"Answer":"What is the Oracle DB that you are using, can you please send us the sample of the the query that you are using, maybe in the query you can create an alias which is less than the maximum length for the column name.","Q_Score":0,"Tags":"python,oracle,sqlalchemy","A_Id":63005612,"CreationDate":"2020-07-17T16:50:00.000","Title":"SQLAlchemy Oracle: ORA-00972: identifier is too long","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been reading about in-memory databases and how they use RAM instead of disk-storage.\nI'm trying to understand the pros and cons of building an in-memory database with different programming languages, particularly Java and Python. What would each implementation offer in terms of speed, efficiency, memory management and garbage collection?\nI think I could write a program in Python faster, but I'm not sure what additional benefits it would generate.\nI would imagine the language with a faster or more efficient memory management \/ garbage collection algorithm would be a better system to use because that would free up resources for my in-memory database. From my basic understanding I think Java's algorithm might be more efficient that Python's at freeing up memory. Would this be a correct assumption?\nCheers","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":62979966,"Users Score":0,"Answer":"You choose an in-memory database for performance, right? An in-memory database written in C\/C++ and that provides an API for Java and\/or Python won't have GC issues. Many (most?) financial systems are sensitive to latency and 'jitter'. GC exacerbates jitter.","Q_Score":0,"Tags":"java,python,database,in-memory-database","A_Id":63001365,"CreationDate":"2020-07-19T11:57:00.000","Title":"In-memory database and programming language memory management \/ garbage collection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Even after installing xlrd module, I am not able to read excel files using pandas, every time it's showing file directory not found. Please help!\nI am using \" import Pandas as pd\"\n\" data=pd.read_excel(\"notebook.xlsx\")\nIt shows error as file not found","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":62988750,"Users Score":0,"Answer":"Pandas is not finding the excel file. 
Try to put the complete path on the read_excel function like read_excel(\"C:\/documents\/notebook.xlsx\").","Q_Score":0,"Tags":"excel,pandas,python-3.8","A_Id":62989065,"CreationDate":"2020-07-20T04:37:00.000","Title":"Query regarding pandas","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a field in mysql which stores the value as \"Gold Area (<90 g\/m\u00b2)\". I wrote a code to get the same using python code. However when I return the data the value is converted to \"Gold Area (<90 g\/m\\u00b2)\".\nI understand we can use the subscript utility while printing. But here my requirement is to send the value as a json response. How can i change the code to not convert the Superscript to \\u00b2 and keep it is from db which is m\u00b2\nany help is greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":83,"Q_Id":63055743,"Users Score":1,"Answer":"if you are using flask then you might look at an option\nJSON_AS_ASCII = False\nin the configuration","Q_Score":1,"Tags":"python,mysql,python-3.x,superscript","A_Id":64459055,"CreationDate":"2020-07-23T13:43:00.000","Title":"Superscript from database converted to \\u00b2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with flask(sqlalchemy) and postgres, i already declared a model with a datetime column that defaults to datetime.utcnow() courtesy of the datetime module in python. however i noticed on new row insertions the time never changes, i did a few digging and found i shouldn't be calling the function but rather passing it thus: datetime.utcnow\nSo, i now wish to alter the column to reflect this change without having to drop the table\/column.\nI already tried ALTER TABLE mytable ALTER COLUMN trans_time SET DEFAULT datetime.utcnow and i get the following error: ERROR: cannot use column reference in DEFAULT expression\nNote: I don't have migrations set up for this project so that would not help for now. i only need to do this via sql commands.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":197,"Q_Id":63079751,"Users Score":0,"Answer":"I would\n\nRename the original table\nCreate a new table with the original table name with the default value as you now want it\nPopulate the new table with data from the original one\nDrop the old table if everything looks OK.\n\nHope that works for you. 
Good luck!","Q_Score":0,"Tags":"python,sql,postgresql,datetime,flask","A_Id":63080384,"CreationDate":"2020-07-24T18:46:00.000","Title":"How to alter a postgres datetime column to default to datetime.utcnow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create some Python3 Azure MySQL jobs in azure functions using SQL Alchemy.\nFunction runs locally with func start without a problem.\nIt's in fresh venv, on fresh Linux VM to exclude any possible package dependencies.\nDeploying to Azure via func azure functionapp publish {app_name} --build remote without any problems.\nBut upon calling the function I'm getting:\n\"name '_mysql' is not defined\"\nIt seems like the MySQLdb module is not installed, but my requirements.txt contains\nmysqlclient==2.0.1 and it's installing properly. Even weirder, it works great when I'm running the function locally.\nThis is the full error, sorry for the formatting:\nResult: Failure Exception: NameError: name '_mysql' is not defined Stack: File \"\/azure-functions-host\/workers\/python\/3.8\/LINUX\/X64\/azure_functions_worker\/dispatcher.py\", line 262, in _handle__function_load_request func = loader.load_function( File \"\/azure-functions-host\/workers\/python\/3.8\/LINUX\/X64\/azure_functions_worker\/utils\/wrappers.py\", line 32, in call return func(*args, **kwargs) File \"\/azure-functions-host\/workers\/python\/3.8\/LINUX\/X64\/azure_functions_worker\/loader.py\", line 76, in load_function mod = importlib.import_module(fullmodname) File \"\/usr\/local\/lib\/python3.8\/importlib\/__init__.py\", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 1014, in _gcd_import File \"\", line 991, in _find_and_load File \"\", line 961, in _find_and_load_unlocked File \"\", line 219, in _call_with_frames_removed File \"\", line 1014, in _gcd_import File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 671, in _load_unlocked File \"\", line 783, in exec_module File \"\", line 219, in _call_with_frames_removed File \"\/home\/site\/wwwroot\/vm\/__init__.py\", line 4, in from __app__.vm.get_vm import insert_vms File \"\/home\/site\/wwwroot\/vm\/get_vm.py\", line 3, in from __app__.shared.db.models import VM File \"\/home\/site\/wwwroot\/shared\/db\/models.py\", line 3, in from __app__.shared.db.base import Base, engine File \"\/home\/site\/wwwroot\/shared\/db\/base.py\", line 12, in if not database_exists(url): File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/sqlalchemy_utils\/functions\/database.py\", line 462, in database_exists engine = sa.create_engine(url) File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/sqlalchemy\/engine\/__init__.py\", line 500, in create_engine return strategy.create(*args, **kwargs) File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/sqlalchemy\/engine\/strategies.py\", line 87, in create dbapi = dialect_cls.dbapi(**dbapi_args) File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/sqlalchemy\/dialects\/mysql\/mysqldb.py\", line 118, in dbapi return __import__(\"MySQLdb\") File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/MySQLdb\/__init__.py\", line 24, in version_info, _mysql.version_info, _mysql.__file__","AnswerCount":2,"Available 
Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1297,"Q_Id":63083246,"Users Score":1,"Answer":"Ok, this was my error from the beginning - I've forgot to put the driver in the connection string for the DB - ex. mysql+pymysql:\/\/mysqladmin(...)","Q_Score":1,"Tags":"python,python-3.x,sqlalchemy,azure-functions","A_Id":63129640,"CreationDate":"2020-07-25T01:06:00.000","Title":"Python3 SQLAlchemy Azure Function '_mysql' is not defined","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have some data in Django database and i want to select them and copy them to an other external database in use the structure of tables in Django database and in external database is same\ni hope find any help please for make this logic ,thank you","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":202,"Q_Id":63086763,"Users Score":0,"Answer":"If you want to start the process via a button, I would recommend that you use Aayush's code within a django view that is called by your button. If the transfer takes too much time to wait for the site to refresh afterwarda, you can call the view via ajax or you can call the script as a parallel process via the view.","Q_Score":0,"Tags":"python,django,database","A_Id":63091540,"CreationDate":"2020-07-25T09:38:00.000","Title":"how to Select data from django tables and insert them to an another external database in use?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m using the GoogleCloudStorageToBigQueryOperator to create a small table in BigQuery using a csv file in GCS. I have the airflow in a VirtualBox on my local machine. Every time, this simple operation takes 15 minutes exactly to complete. I\u2019ve tried changing from Local to Celery Executor but it still takes 15 minutes. Any suggestions please to improve the performance?\nThanks a lot,\nSri.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":63088405,"Users Score":0,"Answer":"I've created a VM on the GCP and installed airflow on it. Moved the DAG on to it and the data got loaded within seconds.","Q_Score":0,"Tags":"python,google-cloud-platform,celery,airflow","A_Id":63123131,"CreationDate":"2020-07-25T12:38:00.000","Title":"Airflow - GCP Operator: GoogleCloudStorageToBigQueryOperator - Takes 15 minutes to complete","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a script that does some webscraping for news and then uploads the info I scrape into a PostgreSQL DB in RDS. My question is regarding the preferred method for ensuring that the same news article is not recorded more than once.\nEach time I scrape, the web scraper returns 40 news articles. I have it set so that each article + headline is added to a table where the headline column has a UNIQUE key constraint. 
So I have 2 options in order to make sure that each article is only recorded once:\n\nUse a simple try and except to try to insert every article + headline into the table -- error is returned if the headline already exists but it is ignored.\nOr, I can query for the 40 most recently added articles in the database, compare their headlines to the ones I pulled, and then only insert those that aren't already in the database.\n\nMy question is: which one would be better performance-wise? My guess is that with a low number of articles number 1 would be better but as the number of articles increases it would be better to use number 2, is that correct?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":172,"Q_Id":63099289,"Users Score":0,"Answer":"Identifying the duplicates locally to the scraper will be faster than making a round-trip to the database to do so, provided it isn't done in a dumb way. But the difference is very unlikely to be meaningful, compared to the overhead of doing the scraping in the first place.\nBut if the scraper has a limited memory for headlines, you will need to have a catch-and-ignore capability anyway.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":63101299,"CreationDate":"2020-07-26T11:08:00.000","Title":"PostgreSQL Unique Index performance","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 4 python scripts that I've mostly run from command-line. I've been trying to schedule them, but so far I haven't found a good way to do this. I have some requirements on how this all should work.\nMy scripts and what they do:\nscript number 1; Scans big number of records from database and does some processing.\nscript number 2; Does more processing, should run only after script number 1 is finished\nscripts number 3&4; these scripts are not related to 1 or 2, but they should be run hourly.\nAny recommendations what would the best approach for scheduling these scripts in Python?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":245,"Q_Id":63111353,"Users Score":0,"Answer":"I understand that the required job needs to schedule.\nFor scheduling jobs, It's good to use CI tools like Jenkins.\n\nmake a job for script 1 and 2 and run script 2 after completing script 1.\nmake two jobs separately for script 3 & 4 that running every hour.","Q_Score":2,"Tags":"python,python-3.x,scheduled-tasks","A_Id":63111492,"CreationDate":"2020-07-27T08:22:00.000","Title":"scheduling events in Python3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have over 200 tables which need to be migrated to S3 from RDBMS with no transformations So we are planning to migrate using Glue Job. So I want to create AWS Glue Job which can be re-usable and executed using parameter values so that i can run for multiple tables at a time(Multi-threading). 
Is this possible in any way in AWS Glue?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":562,"Q_Id":63237180,"Users Score":1,"Answer":"The quick answer is yes.\n\nYou can reuse a single glue job, where you can pass your source location and target database table name as job arguments to the glue job.\nYour glue job supports concurrency, which can be set in the glue job (which means you can have multiple invocations of the same job). This would be an easier option than implementing multi-threading in your job. But multi-threading would certainly be possible as long as we use only default or pure python modules. There are certain account level limits (which can be increased) that you will need to keep in mind.\nYou can pass the arguments to the glue job when you invoke the glue job, by using whatever mechanism you want (eg: step functions\/lambdas...)","Q_Score":1,"Tags":"python,pyspark,aws-glue","A_Id":63238219,"CreationDate":"2020-08-03T21:28:00.000","Title":"Reusable AWS Glue Job","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to drop some columns and uppercase the data in snowflake tables.\nFor this I need to loop through all the catalogs\/dbs, their respective schemas and then the tables.\nI need this to be in python, to list the catalogs, schemas and then the tables, after which I will be executing the SQL queries to do the manipulations.\nHow to proceed with this?\n1. List all the catalog names\n2. List all the schema names\n3. List all the table names\nI have established a connection using the python snowflake connector","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1405,"Q_Id":63242243,"Users Score":0,"Answer":"Your best source for this information is the SNOWFLAKE.ACCOUNT_USAGE share that Snowflake provides. You'll need to grant privileges to whatever role you are using to connect with Python. From there, though, there are the following views: DATABASES, SCHEMATA, TABLES, and more.","Q_Score":1,"Tags":"python,sql,database,snowflake-cloud-data-platform,snowflake-schema","A_Id":63247222,"CreationDate":"2020-08-04T07:41:00.000","Title":"How to retrieve all the catalog names , schema names and the table names in a database like snowflake or any such database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Pandas and SQLAlchemy to push data from CSVs into a MySQL database. When calling df.to_sql() thousands of lines of logs clutter the command line. Is there a way to turn off\/stop the logging?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":264,"Q_Id":63273105,"Users Score":1,"Answer":"Don't think that is the standard behaviour of to_sql() but rather the parameter echo=True set in your sqlalchemy engine. 
Changing it back to echo=False or removing it since it is false as a default should stop it from printing out the logs.","Q_Score":0,"Tags":"python,pandas,sqlalchemy","A_Id":63273278,"CreationDate":"2020-08-05T20:33:00.000","Title":"Is there a way to turn off df.to_sql logs?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have just started using Avro and I'm using fastavro library in Python.\n\nI prepared a schema and saved data with this one.\nNow, I need to append new data (JSON response from an API call ) and save it with a non-existent schema to the same avro file.\nHow shall I proceed to add the JSON response with no predefined schema and save it to the same Avro file?\n\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":63301046,"Users Score":0,"Answer":"Avro files, by definition, already have a schema within them.\nYou could read that schema first, then continue to append data, or you can read entire file into memory, then append your data, then overwrite the file.\nEach option require you to convert the JSON into Avro (or at least a Python dict), though.","Q_Score":0,"Tags":"python-3.x,avro,fastavro","A_Id":63349410,"CreationDate":"2020-08-07T11:38:00.000","Title":"Avro append a record with non-existent schema and save as an avro file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have customized the save() method of my Django model to write some data into a file. I want to write the data into a file in STATIC_ROOT so nginx can serve it. When I write data into a file inside Django project root everything is OK but when I try to write to STATIC_ROOT I get \"Database is locked error\".\nWhat is the problem with that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":63383050,"Users Score":0,"Answer":"I found the solution to this problem but I didn't get why this happens!\nTo solve the problem when I was overriding the save method of Django model, I first called the save method of parent model and then saved my file! 
Doing it in the reverse order caused the error.","Q_Score":0,"Tags":"python-3.x,django,database,sqlite,django-models","A_Id":63398914,"CreationDate":"2020-08-12T18:53:00.000","Title":"Django sqlite \"Database is locked\" when writing to STATIC_ROOT","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a CSV file encoded in UTF-8 (filled with information from a website through scraping with Python code, with str(data_scraped.encode('utf-8')) at the end for the content).\nWhen I import it into Excel (even if I pick 65001: Unicode UTF8 in the options), it doesn't display the special characters.\nFor example, it would show \\xc3\\xa4 instead of \u00e4\nAny ideas of what is going on?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":63399685,"Users Score":0,"Answer":"I solved the problem.\nThe reason is that in the original code, I removed items such as \\t \\n that were \"polluting\" the output with the replace function. I guess I removed too much and it was not readable for Excel afterwards.\nIn the final version, I didn't use\nstr(data_scrapped.encode('utf-8')) but\ndata_scrapped.encode('utf-8','ignore').decode('utf-8')\nthen I used split and join to remove the \"polluting terms\":\nstring_split=data_scrapped.split()\ndata_scrapped=\" \".join(string_split)","Q_Score":0,"Tags":"python,excel,web-scraping,encoding","A_Id":63502540,"CreationDate":"2020-08-13T16:45:00.000","Title":"Cannot read imported csv file with excel in UTF-8 format after python scraping","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Python. I would like to create an executable installer of my Django project, so that my client can double-click to run the program and open his browser with my project running on his local server, and so that he can access the MySQL database. Thanks.\nI've searched on the internet about Python-to-exe scripts and PyInstaller, but I can't find how to run the server and connect to the database in an executable.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":357,"Q_Id":63402826,"Users Score":0,"Answer":"First of all, you need to prepare your application and embed a production-ready web server such as uvicorn. Afterward, you must bundle this with a tool such as pyinstaller.
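As a rough sketch (the project and module names are illustrative, and this assumes your Django project exposes an ASGI application):\n# run_server.py\nimport webbrowser\nimport uvicorn\n\nif __name__ == '__main__':\n    webbrowser.open('http:\/\/127.0.0.1:8000\/')  # open the browser first, since uvicorn.run() blocks\n    uvicorn.run('myproject.asgi:application', host='127.0.0.1', port=8000)\nYou would then bundle it with something like pyinstaller --onefile run_server.py.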
This can be quite a hassle due to the large number of dependencies.","Q_Score":0,"Tags":"python,mysql,django","A_Id":63404018,"CreationDate":"2020-08-13T20:34:00.000","Title":"Is it possible to create an executable from my django project with connection to a mysql database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"A little bit of background about my situation:\nI'm currently using an OVH Advance 5 Server (AMD Epyc 7451 - 24 c \/ 48 t - 128 GB Memory - 450x2 GB SSD) and I was wondering what settings I should be using for PostgreSQL.\nI'll be using multiprocessing to run 24 Python scripts with 24 different pools (using asyncpg to connect), and I usually use up about 40 GB of RAM or so - that means I have around 88 GB to work with.\nI've never really touched any of the settings for Postgres before; what kind of values should I be using for:\nShared Memory \/ Max Connections \/ Random Page Cost?\nReading up on it, some sources recommend that shared memory should generally take up about 25% of the free RAM - but other sources say 2 - 4 GB is generally the sweet spot, so any insight would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":206,"Q_Id":63430833,"Users Score":3,"Answer":"shared_buffers: start with 25% of the available RAM or 8GB, whichever is lower.\nYou can run performance tests to see if other settings work better in your case.\n\nmax_connections: leave the default 100. If you think that you need more than 50 connections, use the connection pooler pgBouncer.\n\nrandom_page_cost: if your storage is as fast with random I\/O as with sequential I\/O, use a setting of 1.1. Otherwise, stick with the default 4.","Q_Score":2,"Tags":"python,postgresql,asyncpg","A_Id":63445765,"CreationDate":"2020-08-15T21:08:00.000","Title":"Shared Memory Buffer Postgresql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to connect ElasticSearch to Superset for visualization.
When I checked in Superset under Sources > Databases, it mentioned using an SQLAlchemy URI and Database for testing the connection.\nIn our case, ElasticSearch is connected with the Python library and not using SQLAlchemy.\nIs there any way to connect ElasticSearch with Superset using the Python library, and if so, could you please help by mentioning the way to connect?\nThanks in advance.\nRegards,\nNaveen.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2033,"Q_Id":63464392,"Users Score":2,"Answer":"You can connect in the way mentioned in the docs. You need to add the DB-API Python package mentioned in the Superset installation document, and that will in turn let you connect to the ELK stack using the URL elasticsearch+http:\/\/{user}:{password}@{host}:9200\/\npip install elasticsearch-dbapi","Q_Score":1,"Tags":"python,elasticsearch,superset","A_Id":63639134,"CreationDate":"2020-08-18T08:08:00.000","Title":"Connecting ElasticSearch to Superset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scenario: a Buildozer-packaged Python APK works fine on the Android emulator and shows the login screen. On hitting the Login button I fetch details of the logged-in user from a MySQL database.\nThe MySQL database server is an Ubuntu Chromebook. The Android emulator is on a Windows machine.\nI can access the database from the Windows machine using HeidiSQL - i.e. IP address and user name \/ password @ port 3306.\nHowever, the app running on the emulator gives a permission denied error.\nPlease advise how I can find the root cause of the issue and rectify it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":157,"Q_Id":63486031,"Users Score":0,"Answer":"The issue was happening because the buildozer spec file was missing the option\nandroid.permissions = INTERNET\nAfter adding this, the SQL queries started working.","Q_Score":0,"Tags":"android,mysql,python-3.x,kivy","A_Id":63639709,"CreationDate":"2020-08-19T11:29:00.000","Title":"Kivy+Python apk Mysql Remote Database access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a tabular dataset of fixed dimension (N x M). I receive a stream of updates from Kafka updating entries in this table. Ultimately, I'd like to have a pandas dataframe with a recent version of the table, and I'm considering a few options for doing that:\n\nMaintain it in memory as a table \/ dataframe. My concern here is that I don't know if I can avoid multithreading, since one process will perpetually be in a for loop receiving messages.\n\nMaintain it in an external structure, and have a separate process independently read from it. Choices of external data stores:\na) SQLite - Might have concurrency issues, and updates for arbitrary rows are probably a bit messy.\nb) Redis - Easy to maintain, but hard to query \/ read the whole table at once (which is how I would generally be accessing the data).\n\n\nI'm a bit of a Kafka beginner, so any advice here would be appreciated. How would you approach this problem?
Thanks!\nEDIT: I guess I could also just maintain it in memory and then just push the whole thing to SQLite?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":339,"Q_Id":63495473,"Users Score":1,"Answer":"My initial approach would be to ask: can I create a \"good enough\" solution to start with, and optimize it later if needed?\nUnless you need to worry about very sensitive information (like healthcare or finance data), or data that is definitely going to scale up very quickly, I would suggest trying a simple solution first and then seeing if you hit any problems. You may not!\nUltimately, I would probably go with the SQLite solution to start with, as it's relatively simple to set up and it's a good fit for the use case (i.e. \"transactional\" situations).\nHere are some considerations I would think about:\nPros\/cons of a single process\nUnless your data is high-velocity \/ high-volume, your suggestion of consuming and processing the data in the same process is probably fine. Processing data locally is much faster than receiving it over the network (assuming your Kafka feed isn't on your local computer), so your data ingest from Kafka would probably be the bottleneck.\nBut it could be expensive to have a Python process spinning indefinitely, and you would need to make sure to store your data out to a file or database in order to keep it from being lost if your process shuts down.\nRelational database (e.g. SQLite)\nUsing a relational database like SQLite is probably your best bet, once again depending on the velocity of the data you're receiving. But relational databases are used all the time for transactional purposes (in fact that's one of their primary intended purposes), meaning high volume and velocity of writes\u2014so it would definitely make sense to persist your data in SQLite and make your updates there as well. You could see about breaking your data into separate tables if it made sense (e.g. third normal form), or you could keep it all in one table if that was a better fit.\nMaintain the table in memory\nYou could also keep the table in memory, like you suggested, as long as you're persisting it to disk in some fashion (CSV, SQLite, etc.) after updates. For example, you could:\n\nHave your copy in memory.\nWhen you get an update, make the update to your in-memory table.\nWrite the table to disk.\nIf your process stops or restarts, read the table from disk to start.\n\nPandas can be slower for accessing and updating individual values in rows, though, so it might actually make more sense to keep your table in memory as a dictionary or something and write it to disk without using pandas. But if you can get away with doing it all in pandas (re: velocity and volume), that could be a fine way to start too.","Q_Score":0,"Tags":"python,apache-kafka,redis,producer-consumer","A_Id":63496093,"CreationDate":"2020-08-19T21:55:00.000","Title":"Best data structure to maintain a table from a stream of Kafka update messages in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a web application in which users can upload Excel files.
I know I can use the OPENROWSET function to read data from Excel into SQL Server, but I am refraining from doing so because this function requires a file path.\nIt seems kind of indirect, as I am uploading a file to a directory and then telling SQL Server to go look in that directory for the file instead of just giving SQL Server the file.\nThe other option would be to read the Excel file into a pandas dataframe and then use the to_sql function, but pandas' read_excel function is quite slow and I am sure the other method would be faster.\nWhich of these two methods is \"correct\" when handling file uploads from a web application?\nIf the first method is not frowned upon or \"incorrect\", then I am almost certain it is faster and will use that. I just want an experienced developer's thoughts or opinions. The webapp's backend is Python and Flask.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":63542212,"Users Score":0,"Answer":"If I am understanding your question correctly, you are trying to load the contents of an xls(x) file into a SQL Server database. This is actually not trivial to do, as depending on what is in the Excel file you might want to have one table, or more probably multiple tables based on the data. So I would step back for a bit and ask three questions:\n\nWhat is the data I need to save and how should that data be structured in my SQL tables? Forget about Excel at this point -- maybe just examine the first row of data and see how you need to save it.\nHow do I get the file into my web application? For example, when the user uploads a file you would want to use a POST form and send the file data to your server, and have your server save that file (for example, either on S3, or in a \/tmp folder, or into memory for temporary processing).\nNow that you know what your input is (the xls(x) file and its location) and how you need to save your data (the SQL schema), it's time to decide what the best tool for the job is. Pandas is probably not going to be a good tool, unless you literally just want to load the file and dump it as-is with minimal (if any) changes to a single table. At this point I would suggest using something like xlrd if you only have xls files, or openpyxl for xls and xlsx files. This way you can shape your data any way you want and handle cases where, for example, the user enters malformed dates, empty cells (should they default to something?), mismatched types, etc.\n\nIn other words, the task you're describing is not trivial at all. It will take quite a bit of planning and designing, and then quite a good deal of Python code once you have your design decided. Feel free to ask more specific questions here if you need to (for example, how to capture the POST data in a file upload or whatever you need help with).","Q_Score":0,"Tags":"python,sql-server,excel,pandas,flask","A_Id":63542301,"CreationDate":"2020-08-23T00:05:00.000","Title":"What is the correct way to upload files to a SQL Server inside my web application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are building cloud-based billing software. This software is web-based and should function like desktop software (at least). We will have 5000+ users billing at the same time. For now, we have just 250 users. We are in need of scaling now.
We are using Angular as the frontend, Python for the backend and React Native for the mobile app. A PostgreSQL DB is used as the database. I have a few doubts to clarify before we scale.\n\nWill using PostgreSQL for the DB show any issues in the future?\n\nInstead of an integer primary key, we are using UUIDs (for easy data migrations, but they use more space). Will that introduce any problems in the future?\n\nDo we have to consider any DB methods for this kind of scaling? (Currently we use a single DB for all users.)\n\nWe are planning to use one server with a huge spec (for all users). Will that be good, or do we have to plan for anything else?\n\nIs a separate application server and DB server needed for this scenario?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":41,"Q_Id":63554864,"Users Score":1,"Answer":"I'll try to answer the questions. Feel free to judge it.\nSo, you are building cloud-based billing software. Now you have 250+ users and are expected to have at least 5000 users in the future.\nNow answering the questions you asked:\n\nWill using PostgreSQL for the DB show any issues in the future?\n\nans: PostgreSQL is great for production. It is the safe way to go. It shouldn't show any issues in the future, but that depends highly on the DB design.\n\nInstead of an integer primary key, we are using UUIDs (for easy data migrations, but they use more space). Will that introduce any problems in the future?\n\nans: Using UUIDs has its own advantages and disadvantages. If you think scaling is a problem, then you should consider updating your revenue model.\n\nDo we have to consider any DB methods for this kind of scaling? (Currently we use a single DB for all users.)\n\nans: A single DB for a production app is good at the initial stage. When scaling, especially in the case of 5000 concurrent users, it is good to think about moving to microservices.\n\nWe are planning to use one server with a huge spec (for all users). Will that be good, or do we have to plan for anything else?\n\nans: Like I said, 5k concurrent users will require a mighty server (it depends highly on the operations, though; I'm assuming moderate-to-heavy calculations), therefore it's recommended to plan for a microservices architecture. That way you can scale up heavily used services and scale down the others. But keep in mind that microservices may sound great, but in practice they are a pain to set up. If you have a strong backend team, you can proceed with this idea; otherwise, just don't.\n\nIs a separate application server and DB server needed for this scenario?\n\nans: The short answer is yes. The long answer: why would you want to stress a single server machine when you have that many users?","Q_Score":1,"Tags":"python,sql,angular,database,postgresql","A_Id":63555334,"CreationDate":"2020-08-24T04:47:00.000","Title":"DB structure for Cloud based billing software, planning issues","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a newbie in the dev world and I made a big mistake this morning with my 5 database projects made in Python. Basically, I accidentally deleted the folder with my templates and my code. All of them are deployed on gcloud datastore, but I was wondering if there is any way to recover them or to have a backup folder with all the files.
Thank you so much and I hope there is a solution","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":63599695,"Users Score":0,"Answer":"After deletion it is not possible to recover Datastore data unless a backup is available. Usually backups are not necessary as data is replicated across multiple data centers.","Q_Score":0,"Tags":"python,web-deployment,gcloud,datastore","A_Id":63602954,"CreationDate":"2020-08-26T14:20:00.000","Title":"Accidentally deleted folder with projects deployed on gcloud datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to save data from an ESP32 in a network database. Therefore I need the module MySQLdb in MicroPython on the ESP32. How can I install it there?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1336,"Q_Id":63660769,"Users Score":0,"Answer":"If your server has PHP on it then you can send your data to PHP and write code to store the received data in the MySQL database.\nTry it and let me know.","Q_Score":1,"Tags":"mysql-python,esp32,micropython","A_Id":64249047,"CreationDate":"2020-08-30T18:25:00.000","Title":"How to install Micropython MySQLdb on ESP32","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to save data from an ESP32 in a network database. Therefore I need the module MySQLdb in MicroPython on the ESP32. How can I install it there?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1336,"Q_Id":63660769,"Users Score":1,"Answer":"Running just about any database client from an ESP32 against a hosted database somewhere would be a challenge (in processing power, security and maintaining state).\nI would suggest using the ESP32 to send data to an MQTT broker, and then have something else (the server running the database, a serverless function, etc.) subscribe to that topic and write any changes to the database.","Q_Score":1,"Tags":"mysql-python,esp32,micropython","A_Id":64306013,"CreationDate":"2020-08-30T18:25:00.000","Title":"How to install Micropython MySQLdb on ESP32","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting a CharConversionException in Python Jaydebeapi when trying to pull data, and I am not sure where or how to set the db2.jcc.charsetDecoderEncoder property in Python to resolve this.\nExact error - com.ibm.db2.jcc.am.SqlException: com.ibm.db2.jcc.am.SqlException: [jcc][t4][1065][12306][4.25.13] Caught java.io.CharConversionException. See attached Throwable for details. ERRORCODE=-4220, SQLSTATE=null\nPlease help in setting the charsetDecoderEncoder property in Python.
Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":63693218,"Users Score":0,"Answer":"Since you've tagged this with WebSphere, I assume you're trying to use wsadmin to invoke a Jython script. If so, do something like wsadmin -D= -f myJythonScript.py.","Q_Score":0,"Tags":"python,db2,jpype,jaydebeapi","A_Id":63696179,"CreationDate":"2020-09-01T18:10:00.000","Title":"Jaydebeapi giving error in python for CharConversionException","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm developing a computerCheck program; it's Python based (for now).\nThe program basically checks some Windows OS status, e.g. if the correct AV is running, if BitLocker is activated and so on....\nThe result of each check (OK or NOT OK) is reported to the database. However, since it's about 10 checks... I would like to report back to the database in a smart way. I don't want to have an entry for every check in the record, because this would be a problem when the number of checks changes.\nSo I would like to send a \"smart\" kind of checksum...\nThe checksum should indicate which of the checks are NOT OK (e.g. check nr.1 is false, check nr.4 is false) and preferably a reason... like nr.1 status 2 (2 represents e.g. service not running..)\nNow, the big question is: is it possible to do it that way, e.g. always sending an x-character-long code to the database, and when reading the code back, you can \"unpack\" it into something human readable again....\nI hope it's clear what I'm looking for...\nThanks in advance!\n\/Jasper","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":63721560,"Users Score":1,"Answer":"You could create a string where every index represents one check. You will have more than enough chars to use as states. For example:\n\"0120\" -> check0 ok, check1 error state 1, check2 error state 2...\nNew checks can simply be appended to the string, and removed checks need to be marked as no longer existing:\n\"0X200\" -> check1 doesn't exist anymore and one new check is appended at the end.","Q_Score":2,"Tags":"python,python-3.x,checksum","A_Id":63721863,"CreationDate":"2020-09-03T10:17:00.000","Title":"Smart way of error reporting into database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently looking for the optimal way to obtain a random data sample from a table (for instance in Hive). I know that Presto provides either the RANDOM() function or TABLESAMPLE BERNOULLI\/SYSTEM. The problem is that when querying a table with a significant number of records it takes a lot of time, which does not work well with JayDeBeApi, which might close the connection after waiting too long for the response.\nI would prefer to use TABLESAMPLE BERNOULLI\/SYSTEM, which takes as an argument the percentage of records to be fetched. In contrast to Oracle, SAP or MSSQL databases, which let you pass a precise percentage, i.e. 0.003123412%, Presto does not allow this, even though the functions are quite similar, and everything is converted into the range 1-100%.\nDoes anyone know a workaround to solve this?
I would prefer to avoid the LIMIT clause in combination with TABLESAMPLE BERNOULLI\/SYSTEM, which might not work as expected.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3569,"Q_Id":63726874,"Users Score":0,"Answer":"This can be obtained by passing numbers in scientific notation.","Q_Score":2,"Tags":"python,database,presto","A_Id":63727058,"CreationDate":"2020-09-03T15:27:00.000","Title":"SELECT random sample of data via PRESTO connector","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database using PostgreSQL, but the site is published on Vercel, and when deploying for production it gives the error below. The error doesn't happen on Heroku.\ndjango.db.utils.OperationalError: FATAL: no pg_hba.conf entry for host \"123.456.789.102\", user \"example_user\", database \"example_db\", SSL off\nHow do I solve this error? Thank you. I have checked Google but the results aren't really helpful.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":206,"Q_Id":63797999,"Users Score":0,"Answer":"I found the answer. It was simply not to use Heroku; instead I used a service called ElephantSQL, used its connection details, and it worked perfectly well.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":65493860,"CreationDate":"2020-09-08T16:27:00.000","Title":"How to solve error with syncing PostgreSQL with Vercel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using the Cassandra Python driver to work with a table containing 38500000 rows. I'm benchmarking various databases with a product we're developing. To benchmark selection I use a filter that should return around half the rows with 3 selection columns (then I vary the selection columns to see which performs best). However, when working with Cassandra I only get 57016 results per query each time (which must be some cap). I've tried using SimpleStatement and setting fetch_size = 38500000 but this didn't work. I was wondering if there's some parameter I should be changing, as I can't find anything in the documentation apart from how Cassandra breaks down pages that go beyond the fetch_size.\n[EDIT]\nTo add some more information, the query returns:\ncassandra.ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message=\"Operation failed - received 0 responses and 1 failures\" info={'consistency': 'LOCAL_ONE', 'required_responses': 1, 'received_responses': 0, 'failures': 1}\nafter I used session.default_fetch_size = None. Then after increasing the tombstone limit to tombstone_failure_threshold: 10000000 and setting gc_grace_seconds = 0 I still get the same error. I've also gone into Cluster.py and set _default_timeout = 1000 but that didn't lead to anything.\n[EDIT 2]\nGoing through the debug console the error is:\n