[{"Question":"I am a kivy n00b, using python, and am not sure if this is the right place to ask.\nCan someone please explain how a user can input data in an Android app, and how\/where it is stored (SQL table, csv, xml?). I am also confused as to how it can be extended\/used for further analysis.\nI think it should be held as a SQL table, but do not understand how to save\/set up a SQL table in an android app, nor how to access it. Similarly, how to save\/append\/access a csv\/xml document, nor how if these are made, how they are secure from accidental deletion, overwriting, etc\nIn essence, I want to save only the timestamp a user enters some data, and the corresponding values (max 4).\nUser input would consist of 4 variables, x1, x2, x3, x4, and I would write a SQL statement along the lines: insert into data.table timestamp, x1, x2, x3, x4, and then to access the data something along the lines of select * from data.table and then do\/show stuff.\nCan someone offer suggestions on what resources to read? How to set up a SQL Server table in an android app?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":124,"Q_Id":65525770,"Users Score":1,"Answer":"This works basically the same way on android as on the desktop: you have access to the local filesystem to create\/edit files (at least within the app directory), so you can read and write whatever data storage format you like.\nIf you want to use a database, sqlite is the simplest and most obvious option.","Q_Score":0,"Tags":"python,kivy","A_Id":65525777,"CreationDate":"2020-12-31T21:45:00.000","Title":"save user input data in kivy and store for later use\/analysis python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how can I make a login form that will remember the user so that he does not have to log in next time.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":65554149,"Users Score":0,"Answer":"Some more information would be nice but if you want to use a database for this then you would have to create a entry for the user information last entered.\nAnd then on reopening the programm you would check if there are any entrys and if yes load it.\nBut I think that writing the login information to a file on you pc would be a lot easier. So you run the steps from above just writing to a file instead of a database.\nI am not sure how you would make this secure because you can't really encrypt the password because you would need a password or key of some type and that password or key would be easy to find in the source code especially in python. It would be harder to find in other compiler based programming languages but also somewhere. 
And if you would use a database you would have a password for that but that would also lay on the hardrive if not encrypted otherwise but there we are where we started.\nSo as mentioned above a database would be quite useless for a task like this because it doesn't improve anything and is a hassle for beginners to setup.","Q_Score":0,"Tags":"python-3.x,tkinter","A_Id":65554208,"CreationDate":"2021-01-03T19:40:00.000","Title":"How to do auto login in python with sql database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need your advice on something that I'm working on as a part of my work.\nI'm working on automating the Aurora Dump to S3 bucket every midnight. As a part of it, I have created a ec2 instance that generates the dump and I have written a python script using boto3 which moves the dump to S3 bucket every night.\nI need to intimate a list of developers if the data dump doesn't take place for some reason.\nAs of now, I'm posting a message to SNS topic which notifies the developers if the backup doesn't happen. But I need to do this with Cloudwatch and I'm not sure how to do it.\nYour help will be much appreciated. ! Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":65559529,"Users Score":0,"Answer":"I have created a custom metric to which I have attached a Cloudwatch alarm and it gets triggered if there's an issue in data backup process.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,automation,cloudwatch-alarms","A_Id":65574562,"CreationDate":"2021-01-04T08:15:00.000","Title":"Cloudwatch Alarm for Aurora Data Dump Automation to S3 Bucket","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"According to the documentation\nAWS_S3_MAX_MEMORY_SIZE(optional; default is 0 do not roll over)\nThe maximum amount of memory (in bytes) a file can take up before being rolled over into a temporary file on disk.\nCan someone explain this a bit more? Is this a way I could throttle upload sizes? What does \"being rolled over\" refer to?\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":151,"Q_Id":65602282,"Users Score":2,"Answer":"System memory is considered limited, while disk space is usually not (practically speaking). Storing a file in memory is a trade-off where you get better speed of access, but use up more of your memory to do so. Say you have a large file of 1GB, is it really worth using up so much memory just to access that file faster? Maybe if you have a lot of memory and that file is accessed very frequently, but maybe not. That is why there are configurable limits like this. 
At some point, the trade-off is not worth it.\n\"Rolling over\" would refer to when the in-memory file has gone over the set limit, and then gets moved into file-on-disk.","Q_Score":0,"Tags":"python,django,amazon-s3","A_Id":65602604,"CreationDate":"2021-01-06T19:33:00.000","Title":"What does AWS_S3_MAX_MEMORY_SIZE do in django-storages","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying execute Python script from RDS SQL Server 15 version but I didn't find any documentation around this in AWS Will it be possible to do this?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":253,"Q_Id":65602999,"Users Score":2,"Answer":"Unfortunately that is not possible as of now. RDS for SQL Server is just Relational Database Service and it does not allow you to execute any program on the RDS instance, except for T-SQL programmability stored within your SQL Server database (triggers, stored procedures, etc).","Q_Score":1,"Tags":"python,boto3,amazon-rds","A_Id":65636688,"CreationDate":"2021-01-06T20:32:00.000","Title":"Does RDS SQL Server support running python script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using sqlalchemy currently but I can't store multiple values in a column. The only values I can put in a db are strings, int, etc. but not lists. I was thinking what if I wanted a list of integers and I just made it a string in this format: \"1|10|91\" and then split it afterwards. Would that work or would I run out of memory or something?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":65690067,"Users Score":0,"Answer":"In python, you can convert a dictionary easy to a JSON file. A dictionary consists of a bunch of variables. You can then convert the JSON file easy to SQL.\nJSON files are often used to convert variables from one programming language to another.","Q_Score":0,"Tags":"python,string,list","A_Id":65690207,"CreationDate":"2021-01-12T18:43:00.000","Title":"Is using a string instead of a list to store multiple values going to work?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: mysqlclient-1.4.6-cp38-cp38-win32.whl is not a supported wheel on this platform.\n! Push rejected, failed to compile Python app.\n! 
Push failed","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":65699181,"Users Score":0,"Answer":"Try google, \"mysql build from wheel\".\nFind your operating system.\nDownload.\nTry again.","Q_Score":0,"Tags":"python,django,mysql-cli","A_Id":65702391,"CreationDate":"2021-01-13T09:36:00.000","Title":"Not Able To Deploy Django App Because of This Error How To Fix It","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to replace \"-inf\" with \"NA\" in my collection containing around 500 fields and ~10 million documents. The reason why I have mentioned string in my query is because I have changed the datatype of dataframe to string before saving in db.\nCan someone please suggest an efficient solution to do so?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":65703051,"Users Score":0,"Answer":"Your choices are:\n\nProgram something that will iterate every record and then every field in that record, perform any updates and save them back into the database.\n\nExport all the data in human readable format, performed a search-and-replace (using sed or similar), then load the data back in.\n\n\nThere's no magic command that will do what you want quickly and easily.","Q_Score":0,"Tags":"python-3.x,mongodb,pymongo","A_Id":65797073,"CreationDate":"2021-01-13T13:35:00.000","Title":"Replacing a string value with another string in entire Mongo collection using pymongo?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an db.sql file and I tried inserting the data from it to my local MYSQL using the command:\nmysql -u root -p chitfund < db.sql . It doesn't show any error and also doesn't print anything. But when I try to see the tables in my db, it shows now tables. I have the data in the form of .csv also. I tried inserting using mysql.connector , but it is not installing and throws an error. Is there any other way to insert the data using the sql or csv files.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":65758287,"Users Score":0,"Answer":"The problem was with my .sql file. It didn't have any insert or create statements. I assumed it has the statements and the data is not being inserted. 
After having all the insert statements for inserting the data, it worked properly","Q_Score":2,"Tags":"python,mysql,csv,mysql-connector-python","A_Id":65759059,"CreationDate":"2021-01-17T07:34:00.000","Title":"Insert data into mysql from sql file or using csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a query for SCD2 in MySQL using:\nDATA STRUCTURE:\nid, data, start_date, end_date\nif the record_id exist:\n-update the record's end_date,\n-and create a new record with the new data\nelse:\n-insert a new record.\nCan I use MySQL CASE to this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":33,"Q_Id":65790397,"Users Score":1,"Answer":"You dont need an end-date to store historic data;\nWhenever you want to store a new address, you simply add it with the start-date.\nYou retrieve history by\nSELECT * FROM table ORDER BY start-date\nOr just the last address by\nSELECT * FROM table ORDER BY start-date DESC LIMIT 1","Q_Score":0,"Tags":"mysql,python-3.x","A_Id":65803143,"CreationDate":"2021-01-19T11:13:00.000","Title":"Mysql do a query if case 1, do another query if case 2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically what I want to achieve is that,\nI have a Django project. and I want to store the db of the project on my Server(CPanel). and access it from the Laptop. I tried searching about Remote MySql on Django but couldnt find anything,\nI am not using Google Cloud or PythonAnywhere, or heroku,\nis there a way? Please.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":65856647,"Users Score":0,"Answer":"Usually, when you want to access DB remotely on the CPanel of a Virtual server, remote access should be enabled. In my experience, providers close remote connections on VServers unless you ask them to open them. 
if they open it, you can use putty or similar software to connect to the server and run DB commands.","Q_Score":0,"Tags":"python,mysql,django","A_Id":65856710,"CreationDate":"2021-01-23T07:04:00.000","Title":"I am looking for a way to connect to my database from my laptop, this database is on CPANEL, and i am making a Django Project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I installed django cookiecutter in Ubuntu 20.4\nwith postgresql when I try to make migrate to the database I get this error:\n\npython manage.py migrate\nTraceback (most recent call last): File \"manage.py\", line 10, in\n\nexecute_from_command_line(sys.argv) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/init.py\",\nline 381, in execute_from_command_line\nutility.execute() File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/init.py\",\nline 375, in execute\nself.fetch_command(subcommand).run_from_argv(self.argv) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 323, in run_from_argv\nself.execute(*args, **cmd_options) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 361, in execute\nself.check() File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 387, in check\nall_issues = self._run_checks( File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/commands\/migrate.py\",\nline 64, in _run_checks\nissues = run_checks(tags=[Tags.database]) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/checks\/registry.py\",\nline 72, in run_checks\nnew_errors = check(app_configs=app_configs) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/checks\/database.py\",\nline 9, in check_database_backends\nfor conn in connections.all(): File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 216, in all\nreturn [self[alias] for alias in self] File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 213, in iter\nreturn iter(self.databases) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/utils\/functional.py\",\nline 80, in get\nres = instance.dict[self.name] = self.func(instance) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 147, in databases\nself._databases = settings.DATABASES File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 79, in getattr\nself._setup(name) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 66, in _setup\nself._wrapped = Settings(settings_module) File 
\"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 176, in init\nraise ImproperlyConfigured(\"The SECRET_KEY setting must not be empty.\") django.core.exceptions.ImproperlyConfigured: The SECRET_KEY\nsetting must not be empty.\n\nI did the whole instructions in cookiecutter docs and createdb what is the wrong?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":65897801,"Users Score":0,"Answer":"Your main problem is very clear in the logs.\nYou need to set your environment SECRET_KEY give it a value, and it should skip this error message, it might throw another error if there are some other configurations that are not set properly.","Q_Score":0,"Tags":"python-3.x,django,django-rest-framework","A_Id":65898014,"CreationDate":"2021-01-26T08:12:00.000","Title":"Django cookiecutter with postgresql setup on Ubuntu 20.4 can't migrate","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Situation: A csv lands into AWS S3 every month. The vendor adds\/removes\/modifies columns from the file as they please. So the schema is not known ahead of time. The requirement is to create a table on-the-fly in Snowflake and load the data into said table. Matillion is our ELT tool.\nThis is what I have done so far.\n\nSetup a Lambda to detect the arrival of the file, convert it to JSON, upload to another S3 dir and adds filename to SQS.\nMatillion detects SQS message and loads the file with the JSON Data into Variant column in a SF table.\nSF Stored proc takes the variant column and generates a table based on the number of fields in the JSON data. The VARIANT column in SF only works in this way if its JSON data. CSV is sadly not supported.\n\nThis works with 10,000 rows. The problem arises when I run this with a full file which is over 1GB, which is over 10M rows. It crashes the lambda job with an out of disk space error at runtime.\nThese are the alternatives I have thought of so far:\n\nAttach an EFS volume to the lambda and use it to store the JSON file prior to the upload to S3. JSON data files are so much larger than their CSV counterparts, I expect the json file to be around 10-20GB since the file has over 10M rows.\nMatillion has an Excel Query component where it can take the headers and create a table on the fly and load the file. I was thinking I can convert the header row from the CSV into a XLX file within the Lambda, pass it to over to Matillion, have it create the structures and then load the csv file once the structure is created.\n\nWhat are my other options here? Considerations include a nice repeatable design pattern to be used for future large CSVs or similar requirements, costs of the EFS, am I making the best use of the tools that I are avaialable to me? Thanks!!!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":859,"Q_Id":65909077,"Users Score":0,"Answer":"Why are you converting CSV into JSON; CSV is directly being loaded into table without doing any data transformation specifically required in case of JSON, the lateral flatten to convert json into relational data rows; and why not use Snowflake Snowpipe feature to load data directly into Snowflake without use of Matallion. 
You can split large csv files into smaller chunks before loading into Snowflake ; this will help in distributing the data processing loads across SF Warehouses.","Q_Score":1,"Tags":"python,amazon-web-services,lambda,snowflake-cloud-data-platform,matillion","A_Id":65913376,"CreationDate":"2021-01-26T20:49:00.000","Title":"Data Ingestion: Load Dynamic Files from S3 to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Situation: A csv lands into AWS S3 every month. The vendor adds\/removes\/modifies columns from the file as they please. So the schema is not known ahead of time. The requirement is to create a table on-the-fly in Snowflake and load the data into said table. Matillion is our ELT tool.\nThis is what I have done so far.\n\nSetup a Lambda to detect the arrival of the file, convert it to JSON, upload to another S3 dir and adds filename to SQS.\nMatillion detects SQS message and loads the file with the JSON Data into Variant column in a SF table.\nSF Stored proc takes the variant column and generates a table based on the number of fields in the JSON data. The VARIANT column in SF only works in this way if its JSON data. CSV is sadly not supported.\n\nThis works with 10,000 rows. The problem arises when I run this with a full file which is over 1GB, which is over 10M rows. It crashes the lambda job with an out of disk space error at runtime.\nThese are the alternatives I have thought of so far:\n\nAttach an EFS volume to the lambda and use it to store the JSON file prior to the upload to S3. JSON data files are so much larger than their CSV counterparts, I expect the json file to be around 10-20GB since the file has over 10M rows.\nMatillion has an Excel Query component where it can take the headers and create a table on the fly and load the file. I was thinking I can convert the header row from the CSV into a XLX file within the Lambda, pass it to over to Matillion, have it create the structures and then load the csv file once the structure is created.\n\nWhat are my other options here? Considerations include a nice repeatable design pattern to be used for future large CSVs or similar requirements, costs of the EFS, am I making the best use of the tools that I are avaialable to me? Thanks!!!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":859,"Q_Id":65909077,"Users Score":0,"Answer":"I also load CSV files from SFTP into Snowflake, using Matillion, with no idea of the schema.\nIn my process, I create a \"temp\" table in Snowflake, with 50 VARCHAR columns (Our files should never exceed 50 columns). Our data always contains text, dates or numbers, so VARCHAR isn't a problem. I can then load the .csv file into the temp table. I believe this should work for files coming from S3 as well.\nThat will at least get the data into Snowflake. How to create the \"final\" table however, given your scenario, I'm not sure.\nI can imagine being able to use the header row, and\/or doing some analysis on the 'type' of data contained in each column, to determine the column type needed.\nBut if you can get the 'final' table created, you could move the data over from temp. 
Or alter the temp table itself.","Q_Score":1,"Tags":"python,amazon-web-services,lambda,snowflake-cloud-data-platform,matillion","A_Id":66094723,"CreationDate":"2021-01-26T20:49:00.000","Title":"Data Ingestion: Load Dynamic Files from S3 to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Situation: A csv lands into AWS S3 every month. The vendor adds\/removes\/modifies columns from the file as they please. So the schema is not known ahead of time. The requirement is to create a table on-the-fly in Snowflake and load the data into said table. Matillion is our ELT tool.\nThis is what I have done so far.\n\nSetup a Lambda to detect the arrival of the file, convert it to JSON, upload to another S3 dir and adds filename to SQS.\nMatillion detects SQS message and loads the file with the JSON Data into Variant column in a SF table.\nSF Stored proc takes the variant column and generates a table based on the number of fields in the JSON data. The VARIANT column in SF only works in this way if its JSON data. CSV is sadly not supported.\n\nThis works with 10,000 rows. The problem arises when I run this with a full file which is over 1GB, which is over 10M rows. It crashes the lambda job with an out of disk space error at runtime.\nThese are the alternatives I have thought of so far:\n\nAttach an EFS volume to the lambda and use it to store the JSON file prior to the upload to S3. JSON data files are so much larger than their CSV counterparts, I expect the json file to be around 10-20GB since the file has over 10M rows.\nMatillion has an Excel Query component where it can take the headers and create a table on the fly and load the file. I was thinking I can convert the header row from the CSV into a XLX file within the Lambda, pass it to over to Matillion, have it create the structures and then load the csv file once the structure is created.\n\nWhat are my other options here? Considerations include a nice repeatable design pattern to be used for future large CSVs or similar requirements, costs of the EFS, am I making the best use of the tools that I are avaialable to me? Thanks!!!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":859,"Q_Id":65909077,"Users Score":0,"Answer":"Why not split the initial csv file into multiple files and then process each file in the same way you currently are?","Q_Score":1,"Tags":"python,amazon-web-services,lambda,snowflake-cloud-data-platform,matillion","A_Id":65910045,"CreationDate":"2021-01-26T20:49:00.000","Title":"Data Ingestion: Load Dynamic Files from S3 to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a SQL Operator that creates a simple json. The end goal is that json being sent to a rest API. I'm finding the process of sending a HTTP POST in SQL code complicated, so if I can get the json kicked back to airflow I can handle it from there. Any help on either approach would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":65922499,"Users Score":0,"Answer":"So thanks to a coworker I was able to figure this out. 
The built in MsSqlHook has a get_first method that receives the first row from the results of the SQL code you give it. So in my case my SQL code returns a JSON in a single row with a single field, so using get_first I can retrieve that JSON and use a HTTPHook to send it to the rest API","Q_Score":2,"Tags":"python,json,sql-server,airflow","A_Id":65925867,"CreationDate":"2021-01-27T15:42:00.000","Title":"Using Airflow, does the MsSqlOperator accept responses from SQL Server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am faced with a challenge to move json files from AWS s3 bucket to sharepoint.\nIf anyone can share inputs on if this is doable and what's the simplest of approach to accomplish this(thinking python script in AWS lambda).\nThanks in Advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1549,"Q_Id":65922580,"Users Score":0,"Answer":"Using S3 boto API, you can download the files from any bucket to local drive.\nThen using Office365 API, you can upload that local file to Share point. Make sure you check the disk space in local before doing the download option.","Q_Score":0,"Tags":"python,amazon-s3,sharepoint","A_Id":70940346,"CreationDate":"2021-01-27T15:47:00.000","Title":"Approach to move files from s3 to sharepoint","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Python and I just wanted to know if there is any connector for Python 3.9\nI've looked at MySQL page but the last Python connector on the page (version 8.0.22) isn't compatible with the 3.9 version.\nAny help? Am I not finding it or does it not exist for now?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":4679,"Q_Id":65925851,"Users Score":-2,"Answer":"Don't call your python script \"mysql.py\", rename it and it will works.","Q_Score":1,"Tags":"python,python-3.x,mysql-python,mysql-connector","A_Id":68149729,"CreationDate":"2021-01-27T19:14:00.000","Title":"Is there a version of MySQL connector for Python 3.9?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've just gotten started with Datasette and found that while I have hundreds of .sqlite databases, only one was able to load (it was empty). Every other one has had this sort of error:\nError: Invalid value for '[FILES]...': Path '\/Users\/mercury\/Pictures\/Photos' does not exist.\nAny suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":989,"Q_Id":65949007,"Users Score":0,"Answer":"It turns out this was a simple error. 
The file path is required to be in quotes.","Q_Score":0,"Tags":"python,datasette","A_Id":65949008,"CreationDate":"2021-01-29T04:57:00.000","Title":"Error: Invalid value for '[FILES]...': Path '{path\/to\/data}' does not exist","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using a Excel template which have 6 tabs (All unprotected) and writing the data on each worksheet using openpyxl module.\nOnce the excel file is created and when tried to open the generated file, its not showing all data untill and unless I click \"Enable editing\" pop up.\nIs there any attribute to disable in openpyxl.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":470,"Q_Id":65950058,"Users Score":1,"Answer":"This sounds like Windows has quarantined files received over a network. As this is done when the files are received, there is no way to avoid this when creating the files.","Q_Score":0,"Tags":"python,python-3.x,openpyxl","A_Id":65952722,"CreationDate":"2021-01-29T06:56:00.000","Title":"How to ignore \"Enable Editing\" in excel after writing data using openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to loop through a table that contains covid-19 data. My table has 4 columns: month, day, location, and cases. The values of each column in the table is stored in its own list, so each list has the same length. (Ie. there is a month list, day list, location list, and cases list). There are 12 months, with up to 31 days in a month. Cases are recorded for many locations around the world. I would like to figure out what day of the year had the most total combined global cases. I'm not sure how to structure my loops appropriately. An oversimplified sample version of the table represented by the lists is shown below.\nIn this small example, the result would be month 1, day 3 with 709 cases (257 + 452).\n\n\n\n\nMonth\nDay\nLocation\nCases\n\n\n\n\n1\n1\nCAN\n124\n\n\n1\n1\nUSA\n563\n\n\n1\n2\nCAN\n242\n\n\n1\n2\nUSA\n156\n\n\n1\n3\nCAN\n257\n\n\n1\n3\nUSA\n452\n\n\n.\n.\n...\n...\n\n\n12\n31\n...\n...","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":232,"Q_Id":65972079,"Users Score":0,"Answer":"you can check the max value in your cases list first. then map the max case's index with other three lists and obtain their values.\nex: caseList = [1,2,3,52,1,0]\nthe maximum is 52. its index is 3. in your case you can get the monthList[3], dayList[3],\nlocationList[3] respectively. then you get the relevant day, month and country which is having the most total global cases.\ncheck whether this will help in your scenario.","Q_Score":0,"Tags":"python,list,nested-loops","A_Id":65972201,"CreationDate":"2021-01-30T19:10:00.000","Title":"Looping through multiple columns in a table in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using bulk_create to upload some data from excel to django db. Since the data is huge I had to use bulk_create instead of .create and .save. 
But the problem is that I need to show the user how many duplicate data has been found and has not been uploaded due to integrity error. Is there a way to get the number of errors or duplicate data while using bulk upload?","AnswerCount":1,"Available Count":1,"Score":-0.537049567,"is_accepted":false,"ViewCount":117,"Q_Id":65989590,"Users Score":-3,"Answer":"After, Reading data from csv file.\nFirst create a list before inserting data to system.\nThen convert that list to set after then again sort the data which is in set.\nHere , you gets every data exactly one time in sorted manner.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":65990514,"CreationDate":"2021-02-01T08:48:00.000","Title":"is there a way to get the count of conflicts while using Django ...bulk_create(.., ignore_conflicts=True)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python code that uses watchdog and pandas to automatically upload a newly added excel file once it has been pasted on a given path.\nThe code works well on my local machine but when I run it to access files on windows server 2012 r 2, I am getting a file permission error. what can be the best solution?\nNB: I am able to access the same files using pandas read_excel() without using the watchdog but I want to automate the process so that it auto reads the files every time files are being uploaded","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":66021822,"Users Score":0,"Answer":"Few possible reasons that you get a permission deny\n\nThe file has been lock because someone is opening it.\nYour account doesn't have the permission to read\/write\/execute","Q_Score":0,"Tags":"python,watchdog,python-watchdog","A_Id":66021964,"CreationDate":"2021-02-03T05:32:00.000","Title":"windows server file permission error when using watchdog","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im using AWS SageMaker Notebooks.\nWhat is the best way to execute notebook from sagemaker?\n\nMy idea is to have an S3 bucket.\nWhen a new file is putted there i want to execute a notebook that reads from S3 and puts the output in other bucket.\n\nThe only way i have from now is to start an S3 event, execute a lambda function that starts a sagemaker instance and execute the notebook. But is getting too much time to start and it doesnt work yet for me with a big notebook.\nMaybe is better to export the notebook and execute it from another place in aws (in order to be faster), but i dont know where.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":66061721,"Users Score":0,"Answer":"I have limited experience, but..If you are talking about training jobs, the only other way to launch one is to create you own container, push to ECR and launch directly the training job without dealing with notebooks. I am working on a similar project where an S3 upload triggers a lambda function which start a container (it's not sagemaker but the concept is more or less the same). The problem with this approach is that however AWS takes time to launch an instance, minutes I would say. 
Another approach could be to have a permanent running endpoint and trigger in some way the elaboration.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,aws-lambda,amazon-sagemaker","A_Id":66139522,"CreationDate":"2021-02-05T10:37:00.000","Title":"In AWS, Execute notebook from sagemaker","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a glue job that parses csv file uploaded to S3 and persist data to rds instance. It was working fine. But one day there occurred an error\n\nAn error occurred while calling\nz:com.amazonaws.services.glue.util.Job.commit. Not initialized.\n\nHow can I resolve this? I haven't made any changes in the script or anywhere. The python version used is 3, glue version 2. Somebody please help.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":668,"Q_Id":66153577,"Users Score":1,"Answer":"Resetting the Job Bookmark seemed to have fixed this for me. In the Console, select the Glue Job -> Action -> Reset job bookmark.","Q_Score":1,"Tags":"python-3.x,amazon-s3,amazon-rds,aws-glue","A_Id":69762043,"CreationDate":"2021-02-11T11:09:00.000","Title":"An error occurred while calling z:com.amazonaws.services.glue.util.Job.commit. Not initialized","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Please note that I am using pure Python and not something like Anaconda. Running on a new, updated Windows 10 machine with no virtual environment. After removing all previous Python installations, rebooting and performing a fresh install if Python 3.9.1. I run the python console and type:\nimport sqlite3\nI receive the following error:\nPython 3.9.1 (tags\/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\nimport sqlite3\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"C:\\Program Files\\Python39\\lib\\sqlite3_init_.py\", line 23, in \nfrom sqlite3.dbapi2 import *\nFile \"C:\\Program Files\\Python39\\lib\\sqlite3\\dbapi2.py\", line 27, in \nfrom _sqlite3 import *\nImportError: DLL load failed while importing _sqlite3: The specified module could not be found.\n\n\n\nI verified that the sqlite3.dll file does exist in C:\\Program Files\\Python39\\DLLs","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":493,"Q_Id":66162281,"Users Score":0,"Answer":"It was an environment issue. Previous builds (using Kivy) had left .pyd and .pyc files in the application directory and when we ran the application, python would try to load those and the files they reference, rather than using the proper files in the Python39 directory. 
As soon as we deleted those artifacts, the app ran fine (and sqlite loaded fine).","Q_Score":0,"Tags":"python,windows,sqlite","A_Id":66162703,"CreationDate":"2021-02-11T20:25:00.000","Title":"Python 3.9.1 errors when attempting to import sqlite3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to call cnxn.commit() whenever a cursor has data however when the cursor is empty it throws cnxn.commit() mariadb.InterfaceError: Commands out of sync; you can't run this command now\nusing\ncursor.execute(\"call getNames\")","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":378,"Q_Id":66169037,"Users Score":0,"Answer":"The solution is that I should not call stored procedure but rather pass the query manually like cursor.execute(\"SELECT * FROM names\")","Q_Score":0,"Tags":"python,mariadb,database-cursor","A_Id":66169755,"CreationDate":"2021-02-12T09:09:00.000","Title":"mariadb.InterfaceError: Commands out of sync; you can't run this command now","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to import postgresql database (.sql) file from AmazonS3 to AWS RDS?\nI am very new to AWS, and Postgresql.\nI have created a database using PgAdmin4 and added my data to the database.\nI have created a backup file of my database i.e. .SQL file.\nI have created a database instance on AWS RDS.\nI have uploaded my database file and several documents s3 bucket.\nI tried to integrate AWS S3 and RDS database using AWS Glue, but nothing is working for me. I am not able to figure out how to integrate S3 and RDS for importing and exporting datafrom S3 to RDS and vice versa.\nCan you please tell me how can I set up RDS and S3?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":510,"Q_Id":66192696,"Users Score":0,"Answer":"What you can do is install a pure python library to interact with rds and run the commands via that library just like you would do with any normal python program. It is possible for you to add libraries like this to run in your glue job. 
In your case pg8000 would work like a charm","Q_Score":0,"Tags":"python-3.x,postgresql,amazon-s3,amazon-rds,aws-glue","A_Id":66200464,"CreationDate":"2021-02-14T05:24:00.000","Title":"How to import postgresql database (.sql) file from AmazonS3 to AWS RDS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My configuration (very basic):\n\n\n settings.py\n AWS_S3_REGION_NAME = 'eu-west-3'\n AWS_S3_FILE_OVERWRITE = False\n # S3_USE_SIGV4 = True # if used, nothing changes\n # AWS_S3_SIGNATURE_VERSION = \"s3v4\" # if used, nothing changes\n AWS_ACCESS_KEY_ID = \"xxx\"\n AWS_SECRET_ACCESS_KEY = \"xxx\"\n AWS_STORAGE_BUCKET_NAME = 'xxx'\n # AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com' # if used, no pre-signed urls\n AWS_DEFAULT_ACL = 'private'\n AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}\n AWS_LOCATION = 'xxx'\n DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'\n \n INSTALLED_APPS = [\n ...,\n 'storages'\n ]\n \n models.py\n class ProcessStep(models.Model):\n icon = models.FileField(upload_to=\"photos\/process_icons\/\")\n\n\nWhat I get:\n\nPre-signed url is generated (both in icon.url and automatically on admin page)\nPre-signed url response status code = 403 (Forbidden)\nIf opened, SignatureDoesNotMatch error. With text: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n\nTried:\n\nchanging access keys (both root and IAM)\nchanging bucket region\ncreating separate storage object for icon field (same error SignatureDoesNotMatch)\nchanging django-storages package version (currently using the latest 1.11.1)\n\nOpinion:\n\nboto3 client generate_presigned_url returns url with invalid signature\n\nQuestions:\n\nWhat should I do?\nWhy do I get the error?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":313,"Q_Id":66200605,"Users Score":-2,"Answer":"Patience is a virtue!\nOne might wait for 1 day for everything to work","Q_Score":0,"Tags":"django,amazon-s3,acl,python-django-storages","A_Id":66224777,"CreationDate":"2021-02-14T22:09:00.000","Title":"django storages AWS S3 SigVer4: SignatureDoesNotMatch","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u00b4m trying to load a .csv in to my rocksdb database, but it fails and show me this error:\n Got error 10 'Operation aborted:Failed to acquire lock due to rocksdb_max_row_locks limit' from ROCKSDB\nI've tried with SET SESSION rocksdb_max_row_locks=1073741824; but same error always.\nAnyone can help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":540,"Q_Id":66241696,"Users Score":6,"Answer":"This should do the trick (before starting the insert)\n\nSET session rocksdb_bulk_load=1;","Q_Score":3,"Tags":"python,mariadb,rocksdb","A_Id":66831678,"CreationDate":"2021-02-17T12:10:00.000","Title":"ROCKSDB Failed to acquire lock due to rocksdb_max_row_locks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run\nsudo apt install 
mysql-workbench-community\nI get the error\nThe following packages have unmet dependencies:\nmysql-workbench-community : Depends: libpython3.7 (>= 3.7.0) but it is not installable\nE: Unable to correct problems, you have held broken packages.\nI then ran\nsudo dpkg --get-selections | grep hold\nWhich did not return anything\ntyping\npython3 -v\nProduces an error\nif I type\npython3 --version\nI get\nPython 3.8.5\nIf I try to run\nsudo apt install libpython3.7\nI get the error\nE: Package 'libpython3.7' has no installation candidate\nI cannot come up with a way to fix this I have recently updated from 19\nHelp much appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":507,"Q_Id":66260846,"Users Score":0,"Answer":"This was caused due to running an older version of MYSQL.\nfix was to remove the mysql repository for tools and install the work bench via snap.","Q_Score":0,"Tags":"python-3.x,mysql-workbench","A_Id":66271946,"CreationDate":"2021-02-18T13:22:00.000","Title":"cannot install mysql-workbench-community on ubuntu 20.04","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm having a hard time connecting my facial recognition system (realtime) to the database.\nI am using python language here. Try to imagine, when the system is doing REAL-TIME face detection and recognition, it will certainly form frame by frame during the process (looping logic), and I want if that face is recognized then the system will write 'known face' in the database. But this is the problem, because what if the upload to the database is done repeatedly because the same frame is continuously formed?\nthe question is, how do you make the system only upload 1 data to the database and if the other frames have the same image, the system doesn't need to upload data to the database?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":66298575,"Users Score":0,"Answer":"you dont show any code, but to do what you're asking you want to have a flag that detects when a face is found and sets the variable. Then clear the variable once the flag leaves the frame. to account for false positives you can wait 4-5 frames before clear the flags and see if the face is still in the frame (i.e someone turns their head and the tracking looses the face)","Q_Score":0,"Tags":"python,face-recognition,yolo","A_Id":66298603,"CreationDate":"2021-02-21T02:28:00.000","Title":"Connect face recognition model to database efficiently","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that runs a data migration script through a transaction interfacing with a MySQL DB. 
I am the process of moving this script over to NodeJS which is accessible through an API endpoint.\nThe problem I am having is that, since my Python data migration is wrapped in a transaction, then my Node process cannot interact with the new data.\nI have started to collect relevant information in my Python script and then send in over the POST body to my Node script for now, but this strategy has it's own complications with keep data in sync and then responding with the new information that I need to make sure to insert back in my Python process.\nIs there a better way that I can share the transaction data between my Python and my Node process?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":47,"Q_Id":66334825,"Users Score":1,"Answer":"Here are some ideas:\n\nStore data in a cache server that both the Python app and the Node.js app have access to. Popular software for this is Memcached or Redis.\n\nUse a message queue to send data back and forth between the apps. Some examples are RabbitMQ or ActiveMQ.\n\nCommit the data in the database using your Python app. Then make an http POST request to the Node.js app, to signal the Node.js app the data is ready (the POST request doesn't need to contain the data). The Node.js app does what it's going to do with the data before sending the http response. So the Python app knows that once it receives the response, the data has been updated by Node.js.","Q_Score":1,"Tags":"python,mysql,node.js,api","A_Id":66335631,"CreationDate":"2021-02-23T14:13:00.000","Title":"Sharing transaction between processes MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have set up a telegram bot to fetch data from my mysql db.\nIts running well until like after 1 day..And It just cannot connect:\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/mysql\/connector\/connection.py\", line 809, in cursor\nraise errors.OperationalError(\"MySQL Connection not available.\")\nI have checked that the script is perfect and I can even run it perfectly on the server, while at the same time if it run through the bot , it throws the above errors.\nEven so ,it will resume to normal after I reboot the apache server. Can anyone help??? Thanks first.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":349,"Q_Id":66344843,"Users Score":0,"Answer":"It turns out that its not related to my bot. But sql connection called by my django server (not orm, but mysql.connector.)\nI didn't close the connection properly (I closed cursor). After I closed the connection conn.close()immediately after the fetch, the problem vanished.\nYet I still dun understand why it doesn't cause any problem when i run the script manually. I feel its something about connection time. I am no expert of mysql, in fact I am just a amateur of programming. see anyone can give further solution. 
(I have changed the title in order to make my problem more relevant.)","Q_Score":0,"Tags":"python,mysql,telegram-bot","A_Id":66410549,"CreationDate":"2021-02-24T04:44:00.000","Title":"OperationalError(\"MySQL Connection not available.\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Ive tryed to print a value using this code.\n\nprint(sh1.cell(2,1).value)\n\nbeing sh1, a valid worksheet of the workbook, and the cell (2,1) containing a string\nbtw i'm using pycharm","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":66368595,"Users Score":0,"Answer":"I figure it out i was trying to retrive a value thas was added to the workbook after the last time i save the file, once i save it again, python retr","Q_Score":0,"Tags":"python-3.x,excel,cell,return-value,openpyxl","A_Id":66374330,"CreationDate":"2021-02-25T12:35:00.000","Title":"How do i retrieve a value from an excel cell through python using the Cell method?, using python3.9 and openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two applications that access the same DB. One application inserts data into a table. The other sits in a loop and waits for the data to be available. If I add a new connection and close the connection before I run the SELECT query I find the data in the table without issues. I am trying to reduce the number of connections. I tried to leave the connection open then just loop through and send the query. When I do this, I do not get any of the updated data that was inserted into the table since the original connection was made. I get I can just re-connect and close, but this is a lot of overhead if I am connecting and closing every second or 2. Any ideas how to get data that was added to a DB from an external source with a SELECT query without having to connect and close every time in a loop?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":66372325,"Users Score":0,"Answer":"Do you commit your insert?\nnormally the best way is you close your connection, and it is not generating very overhead if you open a connection for the select query.","Q_Score":0,"Tags":"python,mysql,sql,pymysql","A_Id":66372679,"CreationDate":"2021-02-25T16:17:00.000","Title":"pymysql SELECT * only detecting changes made externally after instantiating a new connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find information on how to populate database in sql server express using python and pyodbc. Most searches describe methods using sql server and NOT the express version. Any suggestions would be appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":66394387,"Users Score":0,"Answer":"There is no difference. 
SQL Express is just SQL Server and has exactly the same connection methods as any other SQL Server instance.\nYou should check that remote connections are enabled though -\n\nEnsure your local Windows Firewall allows Port 1433 inbound\nOpen SQL Configuration Manager on your computer and under Network Configuration, click on protocols for SQLEXPRESS and ensure that TCP\/IP is enabled.\nRight-click on TCP\/IP and selecvt Properties. On the IP Address tab under IPAll, set TCP Dynamic Ports to 0 and TCP Port to 1433 (assumes that you don't have any other SQL Server instance on your computer)\n\nYour instance will need a restart and you can check in the error log to ensure that it is listening on TCP\/IP and Port 1433","Q_Score":0,"Tags":"python,sql-server-express","A_Id":66395286,"CreationDate":"2021-02-27T00:21:00.000","Title":"python and sql server express data storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For speed of upload, I have a multiprocessing Python program that splits a CSV into many parts, and uploads each in a different process. Also for speed, I'm putting 3000 inserts together into each insert_many.\nThe trick is that I have some bad data in some rows, and I haven't yet figured out where it is. So what I did was a Try\/Except around the insert_many, then I try to insert again the 3000 documents, but one at a time, inside another Try\/Except. Then I can do a pprint.pprint on just the rows that have errors.\nHowever, I'm wondering if when the update of 3000 documents fails because of an error, in for example the 1000th row, does the entire 3000 fail? Or do the first 999 rows get stored and the rest fail? Or do the 2999 rows get stored, and only the one bad-data row fails?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":138,"Q_Id":66414955,"Users Score":2,"Answer":"When you do inserts via a bulk write, you can set the ordered option.\nWhen ordered is true, inserts stop at the first error.\nWhen ordered is false, inserts continue past each document that failed for any reason.\nIn either case, bulk writes are not transactional in the sense that a failure doesn't remove any previously performed writes (inserts or otherwise).\nIf you would like a transaction, MongoDB provides those as an explicit feature.","Q_Score":0,"Tags":"python-3.x,mongodb,pymongo","A_Id":66415007,"CreationDate":"2021-02-28T23:36:00.000","Title":"is PyMongo \/ MongoDB insert_many transactional?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a python script that is designed to process some data, create a table if not exists, and truncate the table before inserting a refreshed dataset. I am using a role that has usage, read, write, create table permissions, as well stage permissions set as follows:\ngrant usage, read, write on future stages in schema to role \nI am using the write_pandas function in python via the snowflake connector. 
The documentation says that this function uses the PUT and COPY INTO commands:\nTo write the data to the table, the function saves the data to Parquet files, uses the PUT command to upload these files to a temporary stage, and uses the COPY INTO command to copy the data from the files to the table. You can use some of the function parameters to control how the PUT and COPY INTO
statements are executed.\nI still get the error message that I am unable to operate on the schema, and I am not sure what else I need to add. Does someone have the list of permissions that are required to run the write_pandas command?","AnswerCount":5,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":7614,"Q_Id":66431601,"Users Score":5,"Answer":"write_pandas() does not create the table automatically. You need to create the table by yourself if the table does not exist beforehand. For each time you run write_pandas(), it will just append the dataframe to the table you specified.\nOn the other hand, if you use df.to_sql(..., method=pd_writer) to write pandas dataframe into snowflake, it will create the table automatically for you, and you can use if_exists in to_sql() to specify different behaviors - append, replace, or fail - if the table already exists.","Q_Score":3,"Tags":"python,permissions,snowflake-cloud-data-platform,database-schema,connector","A_Id":67380436,"CreationDate":"2021-03-02T00:27:00.000","Title":"write_pandas snowflake connector function is not able to operate on table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I manually uploaded a CSV to S3 and then copied it into redshift and ran the queries. I want to build a website where you can enter data and have it automatically run the queries when the data is entered and show the results of the queries.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":16,"Q_Id":66462945,"Users Score":1,"Answer":"Amazon Redshift does not have Triggers. Therefore, it is not possible to 'trigger' an action when data is loaded into Redshift.\nInstead, whatever process you use to load the data will also need to run the queries.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-redshift","A_Id":66464775,"CreationDate":"2021-03-03T18:38:00.000","Title":"Is it possible to upload a CSV to redshift and have it automatically run and export the saved queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a feasible package that helps encrypt an xlsx\/xls file without having to call win32 COM API (such as pywin32 and xlwings)?\nGoal is to achieve protecting data from viewing without the password.\nReason not to use pywin32 is that it'll trigger an excel instance to manipulate excel files. For my use cases, all scripts are centrally executed on server and server has issue with excel instance or is very slow when opening an excel.\nPreviously stuck with reading excel with pwd, but this has been resolved by msoffcrypto-tool package which doesn't depend on win32 COM api.\nPackages like openpyxl only provide workbook\/worksheet protection, which doesn't really stop others from viewing the data, so unfortunately this is no go.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":135,"Q_Id":66480173,"Users Score":0,"Answer":"Basically there's no effective workaroud now. 
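For reference, the COM route that does work is driving Excel itself through pywin32. A rough sketch, assuming Excel is installed on the machine (which is exactly the server-side constraint the question is trying to avoid); the file paths and password are placeholders:

```python
import win32com.client

# EnsureDispatch generates early-bound wrappers so keyword arguments work reliably.
excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
excel.DisplayAlerts = False
try:
    wb = excel.Workbooks.Open(r"C:\data\report.xlsx")
    # Setting Password on SaveAs applies an open password, i.e. real file-level encryption.
    wb.SaveAs(r"C:\data\report_protected.xlsx", FileFormat=51, Password="s3cret")  # 51 = .xlsx
    wb.Close(SaveChanges=False)
finally:
    excel.Quit()
```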
You still have to go through the win32 COM API to make it happen.","Q_Score":0,"Tags":"python-3.x,excel,pywin32,win32com,password-encryption","A_Id":67614510,"CreationDate":"2021-03-04T17:35:00.000","Title":"Python Encrypt xlsx without calling win32 COM API","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating an ERP web application using Django. How can I connect multiple apps inside a project to one database? I am using the PostgreSQL database. Also, how can I centralize the database for all modules of the ERP, and how can I perform operations in another module and check whether the user is authenticated or not?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":220,"Q_Id":66504952,"Users Score":0,"Answer":"Your apps use only the database(s) set up in your settings.py file.","Q_Score":3,"Tags":"python-3.x,django,postgresql","A_Id":66511623,"CreationDate":"2021-03-06T11:01:00.000","Title":"Multiple applications inside a Django project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two tables like:\nUser:\nId | Name | Age |\n1 | Pankhuri | 24\n2 | Neha | 23\n3 | Mona | 25\nAnd another\nPrice log:\nId | type | user_id | price | created_at|\n1 | credit | 1 | 100 | 2021-03-05 12:39:43.895345\n2 | credit | 2 | 50 | 2021-03-05 12:39:43.895345\n3 | debit | 1 | 100 | 2021-03-04 12:39:43.895345\n4 | credit | 1 | 100 | 2021-03-05 12:39:43.895345\nFrom these two tables I need to get the user with the highest credit price, together with their price total, grouped by day over the last week.\nFor example, if I query for user id 1, the result should look like:\nPankhuri on date 04\/03, price 100\nPankhuri on date 05\/03, price 200\nI only want the highest-price user returned, with their price total on a per-date basis.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":66576629,"Users Score":0,"Answer":"You can use GROUP BY together with ORDER BY ... DESC\/ASC and MAX, or use a subquery.","Q_Score":0,"Tags":"python,django,lis","A_Id":66577080,"CreationDate":"2021-03-11T04:47:00.000","Title":"Python query to get highest price user with count","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 databases on different servers, DB1 and DB2. A DBlink is created on DB1 for DB2.\nOnly one table I need is present on DB1, and it is the reason I have to use the dblink; otherwise I could hit DB2 directly. Is there any way to avoid the DB1 dblink and still get the DB1 table data too?\nAlso, I don't have the right to create anything on DB2.\ne.g.\nselect * from tb1\njoin tb2\non tb1.col = tb2.col\nInstead of going through DB1, where the dblink for DB2 is present, I want to connect directly to DB2 and still get that one table from DB1 by some other means, perhaps using Python sqlite or SQLAlchemy.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":130,"Q_Id":66644363,"Users Score":0,"Answer":"Is there any way to avoid the DB1 dblink and still get the DB1 table data too?\n\nNo, there's not. 
Especially as you can't create anything on DB2 where all other tables reside. You have to access DB1, and the only way - as they are in different databases - is to use a database link.\nHowever! Some people say \"database\" while - in Oracle - it is a \"schema\" (which resides in the same database). I'd say that you don't have that \"problem\", i.e. that you know which is which. Though, if you don't then: if they are really just two different schemas in the same database, then you don't need a database link and can access DB1 data by other means (DB1 grants you select privilege; then reference its table by preceding its name with the owner name).","Q_Score":0,"Tags":"python,sql,oracle,sqlite","A_Id":66644530,"CreationDate":"2021-03-15T19:20:00.000","Title":"How to exclude DBLINK","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I am using AWS athena where I have Data Source set to AwsDataCatalog, database set to test_db, under which I have a table named debaprc.\nNow, I have superset installed on an EC2 instance (in virtual environment). On the Instance, I have installed PyAthenaJDBC and PyAthena. Now, when I launch Superset and try to add a database, the syntax given is this:\nawsathena+rest:\/\/{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com\/{schema_name}?s3_staging_dir={s3_staging_dir}\nNow I have 2 questions -\n\nWhat do I provide for schema_name?\nI tried putting test_db as schema_name but it couldn't connect for some reason. Am I doing this right or do I need to do stuff differently?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":481,"Q_Id":66660946,"Users Score":1,"Answer":"It worked for me adding port 443 to the connection string as below and you can use test_db as schema_name:\nawsathena+rest:\/\/{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443\/{schema_name}?s3_staging_dir={s3_staging_dir}","Q_Score":0,"Tags":"python,amazon-web-services,sqlalchemy,amazon-athena,apache-superset","A_Id":66715212,"CreationDate":"2021-03-16T18:09:00.000","Title":"Connecting athena to superset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a GUI app that will be used supposedly by mutliple users. In my app, I use QAbstractTableModel to display a MS Access Database (stored on a local server, accessed by several PCs) in a QTableView. I developped everything I needed for unique user interaction. But now I'm moving to the step where I need to think about multi-user interaction.\nFor exemple, if user A changes a specific line, the instance of the app on user's B PC needs to update the changed line. Another example, if user A is modifying a specific line, and user B also wants to modify it, it needs to be notified as \"already being modified, wait please\", and once the modification from user A is done, the user B needs to see this modification updated before he has any interaction.\nToday, because of the local nature of the MS Access database, I have to update the table view a lot of time, based on user interaction, in order to not miss any database modification from other potential users. 
It is kinda greedy in terms of performance and resources.\nI was thinking about using Django in order make the different app instances communicate with each other, but maybe I'm overthingking it and may be there is other solutions.\nDunno if it's clear, I'm available for more informations !","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":66688187,"Users Score":0,"Answer":"Perhaps you could simply store a \"lastUpdated\" timestamp on the row. With each update, you update that timestamp.\nNow, when you submit an update, you include that timestamp, and if the timestamps don't match, you let the user know, and handle the conflict on the frontend (Perhaps a simple \"overwrite local copy, or force update and overwrite server copy\" option).\nThat's a simple and robust solution, but if you don't want users wasting time writing updates for old rows, you could use WebSockets to communicate from a server to any clients with that row open for editing, and let them know that the row has been updated.\nIf you want to \"lock\" rows while the row is already being edited, you could simply store a \"inUse\" boolean and have users check the value before continuing.","Q_Score":0,"Tags":"python,database,ms-access,pyqt5,multi-user","A_Id":66688336,"CreationDate":"2021-03-18T09:26:00.000","Title":"How to handle multi-user database interaction with PyQt5","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I efficiently transfer newly arrived documents from Azure CosmosDb with MongoDb api to Postgres at regular intervals?\nI am thinking of using a python script to query MongoDB based on timedate, but I am open to other suggestions.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":66695615,"Users Score":0,"Answer":"You can use Azure Data Factory to achieve this. Use Azure Cosmos DB MongoDB API as source and Postgres as sink. Then use watermark to record your last modify time. 
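If you stay with the Python-script approach mentioned in the question instead, the same watermark idea looks roughly like the sketch below; the connection strings, the lastModified field, and the target table are illustrative assumptions:

```python
from datetime import datetime

import psycopg2
from pymongo import MongoClient

# Illustrative names only: point these at your Cosmos DB (Mongo API) account and Postgres server.
coll = MongoClient("mongodb://user:pass@cosmos-host:10255/?ssl=true")["appdb"]["documents"]
pg = psycopg2.connect("dbname=target user=loader password=secret host=pg-host")

def copy_since(watermark: datetime) -> datetime:
    """Copy documents modified after `watermark` into Postgres; return the new watermark."""
    newest = watermark
    with pg, pg.cursor() as cur:  # the connection context manager commits on success
        for doc in coll.find({"lastModified": {"$gt": watermark}}).sort("lastModified", 1):
            cur.execute(
                "INSERT INTO documents_copy (id, payload, last_modified) "
                "VALUES (%s, %s, %s) ON CONFLICT (id) DO NOTHING",
                (str(doc["_id"]), str(doc), doc["lastModified"]),
            )
            newest = max(newest, doc["lastModified"])
    return newest

# Whatever schedules this (cron, an Azure Functions timer, ...) persists and passes the watermark.
# Naive UTC datetime, matching pymongo's default datetime decoding.
new_watermark = copy_since(datetime(2021, 3, 18))
```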
Finally, create a schedule trigger to execute it.","Q_Score":0,"Tags":"python,mongodb,postgresql,azure","A_Id":66965222,"CreationDate":"2021-03-18T16:57:00.000","Title":"Using python I need to transfer documents from Azure CosmosDB with MongoDB api to Postrgres on a daily basis probably using Azure functions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Previous version of code wrote fine with Python 2.7 to AWS MySQL Version 8 with the following:\n\n\n\"\"\"INSERT INTO test_data(test_instance_testid,\nmeas_time,\ndata_type_name,\nvalue,\ncorner_case,\nxmit,\nstring_value)\nVALUES('15063', '2021-03-19 20:36:00', 'DL_chamber_temp', '23.4',\n'None', 'None', 'None')\"\"\"\n\nBut now, porting to Python 3.7 to the same server I get this:\n\npymysql.err.InternalError: (1366, \"Incorrect integer value: 'None' for column 'xmit' at row 1\")\n\nThis makes sense since it is a str value 'None' and not Python type None (although it used to work).\nIt is legal to fill these columns as NULL values--that is their default in the test_data table.\nIf I change the code and set the values to Python None, I get a different error which I don't understand at all:\n\n\"\"\"INSERT INTO test_data(test_instance_testid,\nmeas_time,\ndata_type_name,\nvalue,\ncorner_case,\nxmit,\nstring_value)\nVALUES('15063', '2021-03-19 20:36:00', 'DL_chamber_temp', '23.4',\nNone, None, None)\"\"\"\n\n\npymysql.err.InternalError: (1054, \"Unknown column 'None' in 'field list'\")\n\nI really appreciate any help or suggestions.\nThanks, Mike\nThanks for the help! Yes, NULL does work, but I'm stuck on how to handle value types on the fly within my method. Depending on the call I need to write a quoted value in one case and non-quoted NULL on others. For some reason my old code (illogically!) worked. I've tried everything I can think of without any luck.\nAny thoughts on how to do this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":788,"Q_Id":66718431,"Users Score":0,"Answer":"make default value as NULL at PhpMyAdmin","Q_Score":0,"Tags":"python,mysql","A_Id":66739265,"CreationDate":"2021-03-20T05:09:00.000","Title":"MySQL Version 8 Python 3.7 Cannot write None value as NULL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm moving an app that was previously running on windows 10 to a docker container with python3.6 linux base image. One of the necessary changes was changing the driver used in sql connection string from \"SQL Server\" to ODBC Driver 17 for SQL Server, because I have to use unixodbc-dev. I installed msodbcsql17 and mssql-tools via my Dockerfile, and I execute a query via an sqlalchemy engine that retrieves values from a column of sql type \"date\". With the SQL Server driver, these dates get converted to strings (which is what the code expects), but with ODBC Driver 17 for SQL Server, they are returned as dates. 
I'm using pyodbc==4.0.25 and SQLAlchemy==1.3.5.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":204,"Q_Id":66761418,"Users Score":1,"Answer":"The legacy \"SQL Server\" ODBC driver hasn't been enhanced since SQL Server 2000, long before the newer date data type (and other temporal types) was introduced with SQL Server 2008. The driver will return unrecognized types as strings instead of the native type.\nIf the native type is a breaking change for the app code, the correct solution is to use proper types in app code and the newer driver. About all you can do is use the legacy driver in the interim.","Q_Score":0,"Tags":"sql-server,linux,python-3.6,unixodbc","A_Id":66761847,"CreationDate":"2021-03-23T10:32:00.000","Title":"Why is ODBC Driver 17 for SQL Server converting strings to dates automatically and how can I stop this?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm contributing to a project that is using sqlalchemy. This project has a model.py file where they define all their classes, for example Foobar(BASE). Now, I've created another module mymodel.py where I need to extend some of those classes. So for example, in mymodule.py I have Foobar(model.Foobar) which I use to extend the parent class with new properties and functions. The problem is that when using either of those classes, I get this error from sqlalchemy: sqlalchemy.exc.InvalidRequestError: Multiple classes found for path \"Foobar\" in the registry of this declarative base. Please use a fully module-qualified path..\nMy question then is, how can I fix this error without renaming my classes? Since they are defined in a different module, how do I use a \"fully module-qualified path\"?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":737,"Q_Id":66795228,"Users Score":0,"Answer":"As stated by SQLAlchemy there a two classes found named Foobar. One is model.Foobar and the second one is mymodel.Foobar. You need to use the fully module-qualified path which is mymodel.Foobar to reference the new class.","Q_Score":0,"Tags":"python-3.x,sqlalchemy,orm","A_Id":66800246,"CreationDate":"2021-03-25T07:52:00.000","Title":"SQLAlchemy how to use a \"fully module-qualified path\"?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It's probably a dumb question but i cant find the answer anywhere. So, I have made a simple site with Flask and it have a database in SQL(SQLite3). I'have never uploaded a site before and I don't know how to get the data after deployed. 
Please help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":66803446,"Users Score":0,"Answer":"If your code already works locally, make sure it uses a relative path like '.\/data\/sqllitedb.db'.\nPut it in a folder that isn't accessible from your website.\nWhen you deploy your website, it should use the same relative path.\nSQLite is great because it is just a local file, and as long as you use paths relative to your main site, you should be able to access it.","Q_Score":2,"Tags":"python,sql,sqlite,flask","A_Id":66803891,"CreationDate":"2021-03-25T16:10:00.000","Title":"How will I access the data in my database after uploading my site to the web?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to print almost 100 rows using PrettyTable to a Slack channel.\nBefore sending it to the Slack channel, I am converting the table to a string and sending:\nFinaltable = '```' + table.get_string() + '```'\nBut the data is very misaligned. It works fine when there are 20-30 rows.\nIs there any other module that can help me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":66849567,"Users Score":0,"Answer":"The issue is that we can send a maximum of 30 rows (including headers) to Slack; anything more than that causes an error.\nSo I am splitting the entire table into sets of 30 rows (headers + data). Works fine for me.\nThanks","Q_Score":0,"Tags":"python,prettytable","A_Id":66850151,"CreationDate":"2021-03-29T06:28:00.000","Title":"Prettytable module doesn't render properly when the number of rows is huge","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The OCI Python SDK has functions, like GenerateAutonomousDatabaseWalletDetails and generate_autonomous_database_wallet, to generate the database wallet.\nIs there any function that allows adding user credentials to the wallet for the available service names? Something similar to what can be done with mkstore and the createCredentials option.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":104,"Q_Id":66878065,"Users Score":0,"Answer":"No, there is no function for adding user credentials. Perhaps somewhat aside, but the 'user' and 'password' would be specific to the database instance, would have to be pre-created in the database and given access privileges, and are not tied to a cloud user identity in any way. The exception is the ADMIN user, who is both an IAM user and a database user.\nThe underlying connection APIs expect the 'user' and 'password' of these database users in addition to the TLS credentials and the encryption mechanics it provides.","Q_Score":0,"Tags":"oci-python-sdk","A_Id":66912928,"CreationDate":"2021-03-30T20:31:00.000","Title":"Is it possible to store user credentials on generated Oracle wallet?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to pgsql and cannot figure out this error while I'm trying to run makemigrations in the Django REST app.\nWhat should I install? 
I've installed the requirements.txt which consists of :\nPyJWT==1.7.1 pytz==2021.1 sqlparse==0.4.1 psycopg2>=2.8 psycopg2-binary>=2.8 python_dotenv==0.16\nthe error:\n\ncould not connect to server: Connection refused (0x0000274D\/10061)\nIs the server running on host \"localhost\" (127.0.0.1) and accepting\nTCP\/IP connections on port 5432?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":69,"Q_Id":66899741,"Users Score":1,"Answer":"The Db server is offline so you'll need to enable this. for windows do the following:\nStart -> Control panel -> Administration -> Services -> postgresql-x64-12 - start or restartc\nYou should be fine to make the migrations once the server is running. It is possible you may need to check the config in your settings but after that you should be good to go :)","Q_Score":0,"Tags":"python,django,database,postgresql","A_Id":67177630,"CreationDate":"2021-04-01T06:55:00.000","Title":"Unable to makemigrations while using postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to run a django project on an EC2 server, however, when I run python3 manage.py runserver, it returns this error, django.core.exceptions.ImproperlyConfigured: SQLite 3.9.0 or later is required (found 3.7.17).. I then check to see what version of SQLite3 is running on my python installation on my EC2 server by running sqlite3.sqlite_version, and it returns 3.7.17. So I then try to update SQLite3 using the default AWS EC2 Amazon Linux package manager, yum, by running yum install sqlite. It then returns this, Package sqlite-3.7.17-8.amzn2.1.1.x86_64 already installed and latest version, even though it is not the latest version. How can I install the latest version of SQLite3 to fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":567,"Q_Id":66976802,"Users Score":0,"Answer":"I had the same problem. Since my app is very small with little dependency, I was able to quickly switch to EC2 sever running Ubuntu. It is necessary to learn how to use Ubuntu (apt).\nYou can find right now in the installation:\nPackage: sqlite3\nVersion: 3.31.1-4ubuntu0.2","Q_Score":2,"Tags":"python,django,amazon-web-services,sqlite,yum","A_Id":67312816,"CreationDate":"2021-04-06T21:39:00.000","Title":"How can I fix Django SQLite3 error on AWS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have documents in different MongoDB databases referencing each other (mongoengine's LazyRefereneceField), so each time I need to get the field's value, I need to connect and disconnect from the field's relevant database, which I find very inefficient.\nI've read about connection pooling, but I can't find a solution on how to implement it using MongoEngine. 
How can I create a connection pool and reuse connections from it every time I need to the value for a LazyReferenceField?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":366,"Q_Id":66989859,"Users Score":0,"Answer":"MongoEngine is managing the connection globally (i.e once connected, it auto-magically re-use that connection), usually you call connect just once, when the application\/script starts and then you are good to go, and don't need to interfere with the connection.\nLazyReferenceField is not different from any other field (ReferenceField, StringField, etc) in that context. The only difference is that it's not doing the de-referencing immediatly but only when you explicitly request it with the .fetch method","Q_Score":0,"Tags":"python-3.x,mongodb,connection-pooling,mongoengine","A_Id":66993289,"CreationDate":"2021-04-07T16:04:00.000","Title":"Use connection pool with MongoEngine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install mysqlclient in a offline CentOS7 server, so that I can connect my Django site to a MariaDB\nWhat I did was to download .wheel package \"mysqlclient-2.0.3-cp37-cp37m-win_amd64.whl\" from PyPI.\nThen I run the code\n pip install mysqlclient-2.0.3-cp37-cp37m-win_amd64.whl\nBut received the following message\nmysqlclient-2.0.3-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform.\n[error message][1]\n[1]: https:\/\/i.stack.imgur.com\/bhqUD.png\nI looked through all available answers and internet questions but did not able to find a similar problem. Could someone give me help?\nThank you very much","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":66996950,"Users Score":0,"Answer":"After following @Brain comment, I have solved the problem.\nI went to PyPI and downloaded the .tar.gz file;\nUploaded the file to the offline server. Unzipped the file and followed the INSTALL.rst\nAlthough building from the .tar.gz source code required some more efforts. Thanks","Q_Score":0,"Tags":"python,mysql,django,centos7,python-wheel","A_Id":67017824,"CreationDate":"2021-04-08T03:28:00.000","Title":"Can not install mysqlclient in a OFFLINE CentOS7 server with ERROR message \"not a supported wheel on this platform\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"After parsing Excel file to Python and evaluating the workbook using pycel, can the pycel object be saved as an Excel file maintaining all original formatting, etc? I.e. only values need to be updated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":157,"Q_Id":66998366,"Users Score":0,"Answer":"TL;DR\nNo, you cannot save a pycel object back into Excel.\nWhy not?\nThe basic problem is that pycel is based on openpyxl. Openpyxl is used to read (and write if needed) Excel spreadsheets. However, while openpyxl has the computed values available for formula cells for a workbook it read in, it does not really allow those computed values to be saved back into a workbook it writes. 
It doesn't really make sense to save a different computed value for a formula cell, since the cell's value will be recomputed once it is opened back up in Excel.\nWhile it is true that pycel has the information available to properly populate a new value when the workbook is written, it evidently is not a use case that was important to the openpyxl authors or contributors.\n Please note that the openpyxl maintainers gladly took pull requests to make it run better with pycel. It seems likely they would be open to discussing a PR for writing values into workbooks.","Q_Score":0,"Tags":"python,pycel","A_Id":67003765,"CreationDate":"2021-04-08T06:25:00.000","Title":"Can a pycel object be saved as an Excel Workbook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an existing collection which contains 30m of data.\nAt the moment, its primary key is default ObjectId but now I'd like to add another primary key to it due to performance and identification issues.\nMy research comes out with a solution of removing and inserting data all over again with a new primary key.\nI couldn't find any other info about simply adding a new primary key to existing database, wondering if this is not available feature in pymongo?\nI'm worried that this whole operation will cause a issues as the database is quite big and will be hard to recover.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":67014893,"Users Score":1,"Answer":"MongoDB does not have the concept of \"primary key\".\nEach document must have the _id field set and the _id values must be unique in a collection. You can't change this behavior.\nYou can add additional unique indexes but they wouldn't replace the _id requirements just outlined.\nYou are also misusing the \"primary key\" concept even in relational sense. There can be only one primary key in a table, hence a primary key cannot be \"added\".","Q_Score":0,"Tags":"python,database,mongodb,pymongo","A_Id":67014976,"CreationDate":"2021-04-09T03:58:00.000","Title":"MongoDB pymongo - update primary key","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"ImportError: Missing optional dependency 'xlrd'. Install xlrd >= 1.0.0 for Excel support Use pip or conda to install xlrd.\n$ pip3 install xlrd\nRequirement already satisfied: xlrd in \/usr\/local\/lib\/python3.9\/site-packages (2.0.1)\nI have installed xlrd but again asking me to install it. I am running in an empty circle here!\nUsing Python3, Mac terminal.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":67028012,"Users Score":0,"Answer":"Go into the settings (CTRL + ALT + s) and search for project interpreter you will see all of the installed packages. 
Click the + button at the top right and search for xlrd, then click install package at the bottom left.","Q_Score":0,"Tags":"python-3.x","A_Id":67160977,"CreationDate":"2021-04-09T20:42:00.000","Title":"\"install xlrd\" satisfied and at the same time not","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried many contortions on this problem to try to figure out what's going on.\nMy SQLAlchemy code specified tables as schema.table. I have a special connection object that connects using the specified connect string if the database is PostgreSQL or Oracle, but if the database is SQLite, it connects to a :memory: database, then attaches the SQLite file-based database using the schema name. This allows me to use schema names throughout my SQLAlchemy code without a problem.\nBut when I try to set up Alembic to see my database, it fails completely. What am I doing wrong?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":923,"Q_Id":67083616,"Users Score":1,"Answer":"I ran into several issues that had to be worked through before I got this working.\nInitially, Alembic didn't see my database at all. If I tried to specify it in the alembic.ini file, it would load the SQLite database using the default schema, but my model code specified a schema, so that didn't work. I had to change alembic\/env.py in run_migrations_online() to call my connection method from my code instead of using engine_from_config. In my case, I created a database object that had a connect() method that would return the engine and the metadata. I called that as connectable, meta = db.connect(). I would return the schema name with schema=db.schema(). I had to import the db class from my SQLAlchemy code to get access to these.\nNow I was getting a migration that would build up the entire database from scratch, but I couldn't run that migration because my database already had those changes. So apparently Alembic wasn't seeing my database. Alembic also kept telling me that my database was out of date. The problem there was that the alembic table alembic_version was being written to my :memory: database, and as soon as the connection was dropped, so was the database. So to get Alembic to remember the migration, I needed that table to be created in my database. I added more code to env.py to pass the schema to context.configure using the version_table_schema=my_schema.\nWhen I went to generate the migration again, I still got the migration that would build the database from scratch, so Alembic STILL wasn't seeing my database. After lots more Googling, I found that I needed to pass include_schemas=True to context.configure in env.py. But after I added that, I started getting tracebacks from Alembic.\nFortunately, my configuration was set up to provide both the connection and the metadata. By changing the target_metadata=target_metadata line to target_metadata=meta (my local metadata returned from the connection), I got around these tracebacks as well, and Alembic started to behave properly.\nSo to recap, to get Alembic working with a SQLite database attached as a schema name, I had to import the connection script I use for my Flask code. That connection script properly attaches the SQLite database, then reflects the metadata. It returns both the engine and the metadata. 
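Put together, the relevant part of env.py ends up looking roughly like the sketch below, where db stands for the connection helper described here (so its exact function names are assumptions):

```python
# alembic/env.py (excerpt) -- a sketch of the configuration described above
from alembic import context

import db  # the project's helper that attaches the SQLite file as a named schema


def run_migrations_online():
    connectable, meta = db.connect()  # engine plus the metadata reflected by the helper
    schema = db.schema()              # the name the SQLite database is attached under

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=meta,         # the helper's metadata, not the module-level default
            version_table_schema=schema,  # keep alembic_version out of the throwaway :memory: db
            include_schemas=True,         # let Alembic look beyond the default schema
        )
        with context.begin_transaction():
            context.run_migrations()
```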
I return the engine to the \"connectable\" variable in env.py, and return the metadata to the new local variable meta. I also return the schema name to the local variable schema.\nIn the with connectable.connect() as connection: block, I then pass to context.configure additional arguments target_metadata=meta, version_table_schema=schema, and include_schemas=True where meta and schema are my new local variables set above.\nWith all of these changes, I thought I was able to work with SQLite databases attached as schemas. Unfortunately, I continued to run into problems with this, and eventually decided that I simply wouldn't work with SQLite with Alembic. Our rule now is that Alembic migrations are only for non-SQLite databases, and SQLite data has to be migrated to another database before attempting an Alembic migration of the data.\nI'm documenting this so that anyone else facing this may be able to follow what I've done and possibly get Alembic working for SQLite.","Q_Score":0,"Tags":"python,database,sqlite,sqlalchemy,alembic","A_Id":67083617,"CreationDate":"2021-04-13T23:30:00.000","Title":"How do I set up Alembic for a SQLite database attached as a schema?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python list of about 200 values being generated every 5 seconds that need to be stored in a database with the timestamp. Because of the time factor, I think it is better stored in a column but I have not been able to figure out how. I can't pass the list variable directly into the database because MySQL does not support it. Does anyone have suggestions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":737,"Q_Id":67106934,"Users Score":0,"Answer":"First, import time and mysql libraries. Generate your list and put\nit in an array.\nThere after, use a for each statement to push every listed item to\nmysql table.\nYou will invoke sql INSERT INTO function and define db destination\nand authentication within the python script.","Q_Score":0,"Tags":"python,mysql,arraylist,mariadb,iot","A_Id":67108481,"CreationDate":"2021-04-15T10:39:00.000","Title":"How do I write elements of a list into mysql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a Python application where the desired functionality is that the webcam is used to take in a live video feed and based on whether a condition is true, an image is clicked and uploaded to a database.\nThe database I am using is MongoDB. As far as I can understand, uploading images straight-up to a database is not the correct method. So, what I wanted to do is the following:\n\nan image is clicked from the webcam\nthe image is uploaded to an S3 bucket (from the same Python script, so using boto3 perhaps)\na URL of the uploaded image is retrieved (this seems to be the tricky part)\nand then this URL along with some other details is uploaded to the database. 
(this is the easy part)\n\nMy ideal workflow would be a way to take that image and upload it to an S3 bucket, retrieve the URL and then upload this URL to the database all in one .py script.\nMy question is: how do I upload an image to an S3 bucket and then retrieve its public URL all through boto3 in a Python script?\nI also welcome any suggestions for a better approach\/strategy for storing images into MongoDB. I saw on some pages that GridFS could be a good method but that it is not recommended for the image uploads happening frequently (and really that using AWS is the more preferable way).","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":302,"Q_Id":67120896,"Users Score":1,"Answer":"You don't need to 'retrieve' the public url, you get to specify the bucket and name of the s3 object when you upload it, so you already have the information you need to know what the public url will be once uploaded, its not like s3 assigns a new unique name to your object once uploaded.","Q_Score":0,"Tags":"python,mongodb,amazon-web-services,amazon-s3,boto3","A_Id":67125119,"CreationDate":"2021-04-16T07:29:00.000","Title":"How to upload an image to MongoDB using an S3 bucket and Boto3 in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can list the files in the source bucket but when I try to download them I am getting \"Client error 403\" , the source team has server side encryption AES256 enabled.\nSo when I try :\nclient.download_fileobj(bucket, file, f, ExtraArgs={\"ServerSideEncryption\": \"AES256\"})\nI am getting ValueError: Invalid extra_args key 'ServerSideEncryption', must be one of: VersionId, SSECustomerAlgorithm, SSECustomerKey, SSECustomerKeyMD5, RequestPayer\nHow can I fix this issue?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":465,"Q_Id":67129240,"Users Score":1,"Answer":"It should work without mentioning ExtraArgs={\"ServerSideEncryption\": \"AES256\"}.\nWhen SSE algorithm is AES256, you don't need to mention that while downloading object, only while uploading it.\nWhile downloading it, you need to make sure that the credentials, you are using to download the object, have access to the key that is used to encrypt the object.","Q_Score":0,"Tags":"python,amazon-s3,boto3,amazon-kms","A_Id":67163065,"CreationDate":"2021-04-16T16:49:00.000","Title":"Download files with server side encryption SSE AES256 using boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am tinkering with redis and mysql to see how caching can improve performance. Accesing data from Cache is\/should be faster than accessing it from database.\nI calculated the time required for both the case in my program and found out that accesing from cache was much slower than accesing from the database . 
I was\/am wondering what might be the cause(s).\nSome points to consider:\n\nI am using Azure Redis Cache.\nThe main application is on VM instance.\nI hosted MYSQL server on another VM instance.\nThe table is very small with 200-300 records.\nThere is no error in the time calculation logic.\n\nEDIT:\nLoad time for cache=about 1.2s\nLoad time for mysql= about 15ms\nTurns out my application and MySQL server were in a same region while the redis cache was in a different region across the globe causing much higher latency.\nBut I would still want someone to explain why the fetch time for sql was much more smaller.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":218,"Q_Id":67141393,"Users Score":0,"Answer":"If the table of 200-300 rows is fully cached in MySQL's \"buffer_pool\", then it won't take much time to fetch all of them and send them back to the client. 15ms is reasonable (though it depends on too many things to be more specific).\nIf you are fetching 1 row, and you have an index (esp, the PRIMARY KEY) to locate that one row, I would expect it to be even faster than 15ms.\nI'm summarizing a 40K-row table; it is taking under 2ms. But note: client and server are on the same machine. 15ms could represent the client and server being a few hundred miles apart.\nHow long does a simple SELECT 1 take? That will give you a clue of the latency, below which you cannot go without changing the physical location of machines.","Q_Score":0,"Tags":"mysql,redis,mysql-python,mysql-connector","A_Id":67152749,"CreationDate":"2021-04-17T18:02:00.000","Title":"Accessing Cache Slower than Accessing Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I always use SQL or NoSQL databases in my project and at my job, but now I am asked to use an object-oriented DB. I don't even know for what reason I should do that. Despite this fact, I google for OODBMS in python and can't see any easy way to use this approach. Now I think, that django ORM (and flask sql alchemy) are the simplest way to construct databases.\nSo, I have two questions:\n\nWhat are the main benefits of using OODBMS instead of, e.x., Django ORM?\n\nIs there a simple way to use OODBMS in flask and django?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":169,"Q_Id":67146040,"Users Score":2,"Answer":"For question 1: OODBMS offers many benefits and to mention a few:\n\nIt provides greater consistency between the database and the programming language.\n\nDoesn\u2019t bother you with object\u2013relational impedance mismatch.\n\nIt is a more expressive query language and it supports long\ndurations\/transactions.\n\nIt is also suitable for advanced database applications.\n\n\nFor question 2: ZODB is easier and simpler to use, Django is mostly good with ORM only.","Q_Score":0,"Tags":"python,django,flask,object-oriented-database","A_Id":67737819,"CreationDate":"2021-04-18T07:14:00.000","Title":"Python flask\/django object-oriented databases usage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Am working in ADF i need to export data from sql source to Excel destination, Is there way to use Excel(.xlsx) as destination in ADF ? 
Notebook ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":91,"Q_Id":67189420,"Users Score":0,"Answer":"Just taking a guess - use CSV as target and in properties - give .xlsx as file suffix -- it might work but you'll have to download the file from blob storage using Logic App etc.","Q_Score":0,"Tags":"python,sql,postgresql,scala,azure-data-factory","A_Id":67189882,"CreationDate":"2021-04-21T04:45:00.000","Title":"Am working in ADF i need to export data from sql source to Excel destination, Is there way to use Excel(.xlsx) as destination in ADF ? Notebook?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am identifying the face for attendance system and planning to store the info in MongoDB. I am not able to verify is it possible or not neither I am getting any close to it. Currently I am storing it in excel sheet and then transferring it into database but for real time feeding it won\u2019t be too good of a method I guess. If anyone knows in this can you please help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":67193457,"Users Score":1,"Answer":"yeah u can. i recommande u to work with Mongoose.\nMongoose is an Object Data Modeling (ODM) library for MongoDB and Node.js. It manages relationships between data, provides schema validation, and is used to translate between objects in code and the representation of those objects in MongoDB.","Q_Score":0,"Tags":"node.js,python-3.x,mongodb","A_Id":67193636,"CreationDate":"2021-04-21T09:56:00.000","Title":"Real time data feed using NodeJS and MongoDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So in my table there are a number and a timestamp row, there are multiple numbers per day\nits like:\n\n\n\n\nnumber\ntimestamp\n\n\n\n\n3\n20.02.2021 16:05:00\n\n\n7\n20.02.2021 16:10:00\n\n\n20\n20.02.2021 16:15:00\n\n\n5\n21.02.2021 16:00:00\n\n\n\n\nnow i want the average of the numbers of the day of 20.02.2021 but i don't know how i should do that with SQLAlchemy\nany suggestions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":67194732,"Users Score":0,"Answer":"Not sure if this will fully answer your question, but to get just the date portion of a timestamp you can recast the timestamp as date by:\nselect your_timestamp_column::date","Q_Score":0,"Tags":"python,postgresql,flask,sqlalchemy","A_Id":67195487,"CreationDate":"2021-04-21T11:13:00.000","Title":"How to query only the date in a timestamp","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python: how to get unique ID and remove duplicates from column 1 (ID), and column 3 (Description), Then get the median for column 
2 (Value).\nID | Value | Description\n123456 | 116 | xx\n123456 | 117 | xx\n123456 | 113 | xx\n123456 | 109 | xz\n123456 | 108 | xz\n123456 | 98 | xz\n121214 | 115 | abc\n121214 | 110 | abc\n121214 | 103 | abc\n121214 | 117 | abz\n121214 | 120 | abz\n121214 | 125 | abz\n151416 | 114 | zxc\n151416 | 135 | zxc\n151416 | 127 | zxc\n151416 | 145 | zxm\n151416 | 125 | zxm\n151416 | 121 | zxm\nThe processed table should look like:\nID | xx | xz | abc | abz | zxc | zxm\n123456 | 110 | 151 | 0 | 0 | 0 | 0\n121214 | 0 | 0 | 132 | 113 | 0 | 0\n151416 | 0 | 0 | 0 | 0 | 124 | 115","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":180,"Q_Id":67243543,"Users Score":0,"Answer":"Well, you have e.g. 6 'ID' rows with value '123456'. If you only want unique 'ID' values, you need to remove 5 of those rows, and by doing this you will not have duplicate 'Description' values anymore. The question is: do you want unique ID values or unique Description values (or a unique combination of both)?","Q_Score":0,"Tags":"python","A_Id":67244109,"CreationDate":"2021-04-24T13:54:00.000","Title":"Python: how to get unique ID and remove duplicates from column 1 (ID), and column 3 (Description), Then get the median for column 2 (Value) in Pandas","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a .db file from SQLite3. I want to use it in a Django project. With inspectdb, I created a model from this db file. But I also want to get the data from this .db file. Is that possible? And how can I retrieve data from the db file? Thanks.\nInspectdb only creates model.py; it does not retrieve data.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":67258335,"Users Score":0,"Answer":"Create the Django project.\nMake migrations with a similar schema.\nReplace the new DB file with this one.","Q_Score":2,"Tags":"python,python-3.x,django-models,django-rest-framework","A_Id":67258531,"CreationDate":"2021-04-25T21:13:00.000","Title":"How to use .db file in django for getting data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I accidentally deleted the table of a model in db.sqlite. How can I recreate it?\nWhen I run the command 'python manage.py makemigrations' it works, but when I run 'python manage.py migrate' it says 'No migrations to apply'.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":67282506,"Users Score":0,"Answer":"Delete the migration files and re-run python manage.py makemigrations.","Q_Score":1,"Tags":"python,python-3.x,django,django-models,django-migrations","A_Id":67282651,"CreationDate":"2021-04-27T11:50:00.000","Title":"I accidentally, deleted table of a model in db.sqlite (manually). How can I recreate it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are running an API server where users submit jobs for calculation, which take between 1 second and 1 hour. 
They then make requests to check the status and get their results, which could be (much) later, or even never.\nCurrently jobs are added to a pub\/sub queue, and processed by various worker processes. These workers then send pub\/sub messages back to a listener, which stores the status\/results in a postgres database.\nI am looking into using Celery to simplify things and allow for easier scaling.\nSubmitting jobs and getting results isn't a problem in Celery, using celery_app.send_task. However, I am not sure how to best ensure the results are stored when, particularly for long-running or possibly abandoned jobs.\nSome solutions I considered include:\n\nGive all workers access to the database and let them handle updates. The main limitation to this seems to be the db connection pool limit, as worker processes can scale to 50 replicas in some cases.\n\nListen to celery events in a separate pod, and write changes based on this to the jobs db. Only 1 connection needed, but as far as I understand, this would miss out on events while this pod is redeploying.\n\nOnly check job results when the user asks for them. It seems this could lead to lost results when the user takes too long, or slowly clog the results cache.\n\nAs in (3), but periodically check on all jobs not marked completed in the db. A tad complicated, but doable?\n\n\nIs there a standard pattern for this, or am I trying to do something unusual with Celery? Any advice on how to tackle this is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":121,"Q_Id":67303047,"Users Score":2,"Answer":"In the past I solved similar problem by modifying tasks to not only return result of the computation, but also store it into a cache server (Redis) right before it returns. I had a task that periodically (every 5min) collects these results and writes data (in bulk, so quite effective) to a relational database. It was quite effective until we started filling the cache with hundreds of thousands of results, so we implemented a tiny service that does this instead of task that runs periodically.","Q_Score":1,"Tags":"python,celery","A_Id":67314568,"CreationDate":"2021-04-28T15:18:00.000","Title":"Persisting all job results in a separate db in Celery","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a collection that holds metadata for my navigation for a multi-tenant application so it's quite large (8mb).\nThis needs to be updated regularly and I'm concerned as to what strategy is the best to avoid having my pages breaking because of missing data due to update\/drop\/recreate operation on a collection?\nI'm new to Mongo Atlas and I'm unsure if it's better to drop and recreate or update the collection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":67310278,"Users Score":0,"Answer":"8MB is small, so update should be no problem. Dropping the collection could delay your application, because typically you also have to create indexes.\nImportant note, if a Mongo client does not find the collection for insert, then a new collection is created automatically. 
This may conflict with your recreate procedure.","Q_Score":1,"Tags":"python,sql,mongodb,pymongo,mongodb-atlas","A_Id":67312552,"CreationDate":"2021-04-29T02:33:00.000","Title":"Is it better to drop or update a Mongodb collection to avoid downtime?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a postgres database for the first time. I am using python 3 in miniconda in Windows 10 and Lubuntu.\nI want to start my database server from my python script (on the cron). When it starts, nothing else get executed in my script. Do I need multi-threading or it's something else?\nthanks everyone","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":67384917,"Users Score":0,"Answer":"I tried subprocess.run() instead of os.popen() and it works","Q_Score":0,"Tags":"python-3.x,postgresql,multithreading","A_Id":67973111,"CreationDate":"2021-05-04T12:36:00.000","Title":"Do I need multi-threading when running a database server from a python script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using pyodbc, I wrote a Python program to extract data from Oracle and loads into SQL Server. The extraction from Oracle was instant, but there are some tables taking very long time to load, especially the tables with many columns (over 100+ columns) with a few of those columns at VARCHAR(4000) size (I am running pyodbc's executemany for the INSERT).\nTurning fast_executemany = True seem to make the INSERT even slower. When turned off, loading a table of 40k rows took about 3minutes; and when turned on, loading the same amount of rows took about 15minutes.\nNot sure if this means anything, but I did turned on SQL Profiler during each try and here is what I found: When it is turned off, the backend is doing a bunch of \"sp_prepexec\" and \"sp_unprepare\" for each inserts; and when it is turned on, the backend just did one time of \"sp_prepare\" and then a bunch of \"sp_execute\".\nAny idea why the fast_executemany is not speeding up the INSERT, and in fact is even much longer?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":67394803,"Users Score":0,"Answer":"Update: I was able to resolve my problem by limiting how many rows get inserted each time. I set my batch size for each INSERT operation to only 1000 rows at a time, and now the same INSERT operation of 40k rows took about 40secs, comparing to without setting a batch size of which took 15minutes.\nI am guessing fast_executemany puts everything into memory before executing that INSERT, but if my column sizes are huge and if there are many rows to be inserted for each operation, it will put lot of burden on the memory and hence get much slower (?).","Q_Score":0,"Tags":"python,sql-server,pyodbc","A_Id":67403290,"CreationDate":"2021-05-05T03:24:00.000","Title":"Turning fast_executemany on is even slower than when it is off","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have my data tables in the Glue Metadata catalog. 
I need to use this data in my Glue job's Python shell script. When I create the Glue job, it gives me the option to select the connection type at the last step. Is it essential to add a connection? If the tables are in the Glue catalog, what would be the connection type?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":67437393,"Users Score":1,"Answer":"If the tables are in the Glue Catalog you don't need any connections. As long as your data sources \/ data sinks are Glue \/ S3, you don't need a connection apart from a VPC S3 endpoint.\nIf you want to connect to, let's say, Redshift or a MySQL database, you would need a connection.","Q_Score":0,"Tags":"python,amazon-web-services,aws-glue,aws-glue-data-catalog","A_Id":67437554,"CreationDate":"2021-05-07T15:05:00.000","Title":"Is it essential to have a connection in an AWS Glue job?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a collection c1 with a value like {'Race': 'blck'} and I want to use another collection c2 with fields {'raw': 'blck', 'mapped_race': 'black'} to update the document in c1 with a new field like {'Race_Standardized': 'black'}. This would be accomplished by matching the value of Race in c1 to the document in c2 on the raw value.\nThe update would make the c1 document have fields {'Race': 'blck', 'Race_Standardized': 'black'}.\nHow do I go about doing this in an aggregation pipeline? (I'm working in PyMongo.)","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":67480283,"Users Score":0,"Answer":"Since Mongo is a NoSQL DB, there is no join like we have in a relational DB. However, this is overcome by the $lookup stage within the aggregation pipeline. I have yet to try this out within a PyMongo framework, but in Mongo you will have to use a combination of $lookup, $unwind and $out to update the field. The $lookup works like a left outer join in the SQL world and returns an array - we have to use $unwind to get the specific field and then $out to write the result back or to a new collection.
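To make that concrete, here is a minimal PyMongo sketch of such a pipeline. It follows the $lookup/$unwind idea above but uses $merge (MongoDB 4.2+) instead of $out so the existing c1 documents are updated in place; the connection string and database name are placeholders.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
db = client["mydb"]  # placeholder database name

pipeline = [
    # Left-outer-join c1.Race against c2.raw.
    {"$lookup": {
        "from": "c2",
        "localField": "Race",
        "foreignField": "raw",
        "as": "race_map",
    }},
    # $lookup returns an array; unwind it and keep documents with no match.
    {"$unwind": {"path": "$race_map", "preserveNullAndEmptyArrays": True}},
    # Copy the mapped value into the new field, then drop the helper array.
    {"$set": {"Race_Standardized": "$race_map.mapped_race"}},
    {"$unset": "race_map"},
    # Write the result back onto the matching c1 documents.
    {"$merge": {"into": "c1", "on": "_id",
                "whenMatched": "merge", "whenNotMatched": "discard"}},
]

db.c1.aggregate(pipeline)
```

If a separate output collection is preferred instead, the last stage can be swapped for {"$out": "c1_standardized"}.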
I found this link helpful [https:\/\/developer.mongodb.com\/community\/forums\/t\/update-a-collection-field-based-on-another-collection\/4875]","Q_Score":1,"Tags":"python,mongodb,mongodb-query,aggregation-framework,pymongo","A_Id":67480521,"CreationDate":"2021-05-11T03:44:00.000","Title":"Mongo create new field by mapping value from one field to value in a field in another collection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an Avro schema (avsc file) and data published in kafka topics in Avro format which I need to convert to a flat SQL table.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":67493277,"Users Score":0,"Answer":"If the data was produced by one of the Confluent Avro serializers, you can use their Kafka Connect JDBC Sink to write to a SQL database of your choice (such as sqlite if you literally want files)\nOtherwise, you're going to need to write your own Python code for this use-case","Q_Score":0,"Tags":"python,sql,apache-kafka,avro","A_Id":67495244,"CreationDate":"2021-05-11T19:35:00.000","Title":"Is there a way to convert an Avro file published to a Kafka topic to a flat SQL table in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I changed my conda system from anaconda to miniconda. Everything was working normally with anaconda but after the change xlwings stopped working.\nNow I am getting the following error Run-time error '53': File not found.\nOn debugging through the VBA interface, I found that the routine was searching in the folder C:\\Users\\USERNAME\\AppData\\LocalTemp\\ (not the project's folder) and coming up with a very long .log filename which changes each time I attempt to run it (eg. xlwings-374ABEE7-4C51-8622-AB5B-D42C5294C2B8.log)\nIs this a bug which needs to be corrected? 
or have I done something incorrectly?\n\nSystem: Windows 10; MS Office 365; xlwings ver: 0.23.2","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":79,"Q_Id":67551250,"Users Score":1,"Answer":"I had a similar issue that drove me crazy.\nAfter hours of debug and testing, I randomly click on the RunPython: Use UDF server checkbox and it worked","Q_Score":1,"Tags":"python,xlwings","A_Id":69653177,"CreationDate":"2021-05-15T21:11:00.000","Title":"xlwings 0.23.2 - Run-time error '53': File not found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using VirtualBox as my VM, and on it I have SQL server, and my python script runs on local host.\nMy connection string looks like this engine = create_engine('mssql+pyodbc:\/\/'+username+':'+password+'@127.0.0.1:1433\/'+database+'?driver=SQL+Server+Native+Client+11.0')\nI'm getting \"Data source name not found and default driver not specified\" error.\nI've tried a lot of stuff, and I can't make it work still.\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":67586606,"Users Score":0,"Answer":"If you running Python script on System\nThe pythons script gets connected to localhost server of the system not Virtual Box","Q_Score":0,"Tags":"python,sqlalchemy,pyodbc","A_Id":67586676,"CreationDate":"2021-05-18T12:53:00.000","Title":"Can't connect my python script to SQL database that's on VM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using VirtualBox as my VM, and on it I have SQL server, and my python script runs on local host.\nMy connection string looks like this engine = create_engine('mssql+pyodbc:\/\/'+username+':'+password+'@127.0.0.1:1433\/'+database+'?driver=SQL+Server+Native+Client+11.0')\nI'm getting \"Data source name not found and default driver not specified\" error.\nI've tried a lot of stuff, and I can't make it work still.\nThanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":79,"Q_Id":67586606,"Users Score":0,"Answer":"The problem was in database drivers. 
I worked on a pc with SQL Server Native Client 11.0 driver, but I didn't have it on the pc where I was deploying the script and on which the VM was on.","Q_Score":0,"Tags":"python,sqlalchemy,pyodbc","A_Id":67848754,"CreationDate":"2021-05-18T12:53:00.000","Title":"Can't connect my python script to SQL database that's on VM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i need to create an sqlite table with the name of a global string variable, but i cant seem to find a way to insert the variable to the CREATE TABLE command.\nis there a way to do so, or after creating a table with a placeholder name rename it to the variable?\nthe variable is an user input so i cannot name it in advance.\nim coding in python.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":67590934,"Users Score":0,"Answer":"F-strings like the one in Stefan's answers are great, and you can find a lot of use cases when working with SQL queries. Just be aware f-strings are available starting from Python 3.6 - if you are on an older version you will need to use the old %-formatting or str.format() methods, or simple string concatenation as already noted","Q_Score":0,"Tags":"python,sqlite","A_Id":67591329,"CreationDate":"2021-05-18T17:10:00.000","Title":"string variable as sqlite table name","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an excel sheet(.xlsx file) with the following data:\n\n\n\n\nDate 1\nDate 2\n\n\n\n\n03\/26\/2010\n3\/31\/2011\n\n\nNULL\nNULL\n\n\n03\/26\/2010\n3\/31\/2011\n\n\nNULL\nNULL\n\n\n03\/26\/2010\n3\/31\/2011\n\n\nNULL\nNULL\n\n\n01\/01\/2010\n6\/30\/2010\n\n\n01\/01\/2010\n6\/30\/2010\n\n\n01\/12\/2011\n4\/15\/2012\n\n\n\n\nWhen I convert it to dataframe using\npd.read_excel(\"file.xlsx\",header=0,dtype=str,engine='openpyxl')\nIt is reading all data properly except for the row items 3,4,5,6 which are being read as below:\n\n\n\n\nDate 1\nDate 2\n\n\n\n\n03\/26\/2010\n3\/31\/2011\n\n\nNULL\nNULL\n\n\n01\/01\/2010\n6\/30\/2010\n\n\n01\/01\/2010\n6\/30\/2010\n\n\n01\/12\/2011\n4\/15\/2012\n\n\nNULL\nNULL\n\n\n\n\nIt is causing an unnecessary data shift and hence affecting my furthur steps. Any reasons why only at this place it is happening and nowhere else in the data?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":543,"Q_Id":67666768,"Users Score":0,"Answer":"The problem is now resolved.\nIt was the issue with the index given by pandas to the Dataframe.\nMy table had headers, but the pandas' index starts from 0 for the first row data.\nSo I was being shown the next index number's data, which deceived me into thinking that read_excel has a bug.\nThanks for your support.","Q_Score":0,"Tags":"python,excel,pandas","A_Id":67668432,"CreationDate":"2021-05-24T04:42:00.000","Title":"Pandas read_excel function incorrectly reading data from excel file","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using sqlalchemy core to execute string based queries. 
I have set charset to utf8mb4 on the connection string like this:\n\"mysql+mysqldb:\/\/{user}:{password}@{host}:{port}\/{db}?charset=utf8mb4\"\nFor some simple select queries (e.g, select name from users where id=XXX limit 1), when the resultset has some unicode characters (e.g, ', \u00ec, etc), it errors out with the following error:\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x9a in position 11: invalid start byte\nBut the error itself is not reproducible. When I run the same query from a python shell, it works without errors. But it errors out on a web request or background job.\nI'm using Python 3.8 and sqlalchemy 1.3.24.\nI have also tried explicitly specifying charset: utf8mb4 as a connect_args property with create_engine().\nThe underlying database is mysql 5.7 and all the unicode columns have utf8mb4 explicitly set as the characters set in the schema.\nUpdate: The database is actually AWS RDS Aurora MySQL.\nAppreciate any insights on the error or how to reproduce it reliably.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":124,"Q_Id":67680277,"Users Score":1,"Answer":"Can you try with use_unicode=true parameter in the url?","Q_Score":3,"Tags":"python,mysql,sqlalchemy,python-unicode","A_Id":67689186,"CreationDate":"2021-05-25T00:28:00.000","Title":"UnicodeDecodeError on sqlalchemy connection.execute() for select queries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some help with this problem:\nI have two tables and I want to filter the rows on the second table (Table B) so that it only shows the one's that have matching 'names' with the Table A.\nAn exemple:\nTable A\n\n\n\n\nA\nb\nc\n\n\n\n\nAnne\nTwo\nThree\n\n\nAnne\nFour\nFive\n\n\nJhon\nFour\nFive\n\n\nOlivia\nFour\nFive\n\n\n\n\nTable. B\n\n\n\n\nA\nMoney\nRent\n\n\n\n\nAnne\nTwo\nThree\n\n\nAnne\nFour\nFive\n\n\nAnne\nFour\nFive\n\n\nKristian\nFour\nFive\n\n\nPaul\nFour\nFive\n\n\nOlivia\nFour\nFive\n\n\nOlivia\nFour\nFive\n\n\n\n\nThe result that I want to achieve is\n\n\n\n\nA\nMoney\nRent\n\n\n\n\nAnne\nTwo\nThree\n\n\nAnne\nFour\nFive\n\n\nAnne\nFour\nFive\n\n\nJhon\nNan\nNan\n\n\nOlivia\nFour\nFive\n\n\nOlivia\nFour\nFive","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":67725254,"Users Score":0,"Answer":"Assuming you are using pandas, you could try this: dfC = dfB[dfB['A'].isin(dfA['A'].unique())]\nThis way, you will be filtering table B, based on it's A column, looking for values that are in the column A of table A.","Q_Score":3,"Tags":"python,jupyter-notebook","A_Id":67725441,"CreationDate":"2021-05-27T15:23:00.000","Title":"keeping table rows that match the other table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some help with this problem:\nI have two tables and I want to filter the rows on the second table (Table B) so that it only shows the one's that have matching 'names' with the Table A.\nAn exemple:\nTable A\n\n\n\n\nA\nb\nc\n\n\n\n\nAnne\nTwo\nThree\n\n\nAnne\nFour\nFive\n\n\nJhon\nFour\nFive\n\n\nOlivia\nFour\nFive\n\n\n\n\nTable. 
B\n\n\n\n\nA\nMoney\nRent\n\n\n\n\nAnne\nTwo\nThree\n\n\nAnne\nFour\nFive\n\n\nAnne\nFour\nFive\n\n\nKristian\nFour\nFive\n\n\nPaul\nFour\nFive\n\n\nOlivia\nFour\nFive\n\n\nOlivia\nFour\nFive\n\n\n\n\nThe result that I want to achieve is\n\n\n\n\nA\nMoney\nRent\n\n\n\n\nAnne\nTwo\nThree\n\n\nAnne\nFour\nFive\n\n\nAnne\nFour\nFive\n\n\nJhon\nNan\nNan\n\n\nOlivia\nFour\nFive\n\n\nOlivia\nFour\nFive","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":67725254,"Users Score":0,"Answer":"SELECT * FROM table_B WHERE A IN (SELECT A FROM table_A)","Q_Score":3,"Tags":"python,jupyter-notebook","A_Id":67725455,"CreationDate":"2021-05-27T15:23:00.000","Title":"keeping table rows that match the other table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got Pycharm installed on my chromebook by enabling linux apps. I've started to learn Python using a tutorial by 'Programming by Mosh'. In one of the projects that he does in the tutorial, he adds an .xlsx file to a project in Pycharm. Mosh (he uses a Mac) did this by right clicking on project and then clicking 'Reveal in Finder' and then pasting the file onto the window that opens. Could you explain how I can do this on my chromebook, because I can't seem to find the 'Reveal in Finder' option.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":67748429,"Users Score":0,"Answer":"Both the operating systems are quite different. If you want to create an Excel file(.xlsx is used by Microsoft Excel), you can use Office Online and then download it to your project directory in Chromebook.\nI would actually prefer you to use Google sheets instead if you're using Chrome OS.","Q_Score":0,"Tags":"python,excel,pycharm,google-chrome-os","A_Id":67748653,"CreationDate":"2021-05-29T06:28:00.000","Title":"Adding a .xlsx file to Pycharm on a chromebook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got Pycharm installed on my chromebook by enabling linux apps. I've started to learn Python using a tutorial by 'Programming by Mosh'. In one of the projects that he does in the tutorial, he adds an .xlsx file to a project in Pycharm. Mosh (he uses a Mac) did this by right clicking on project and then clicking 'Reveal in Finder' and then pasting the file onto the window that opens. Could you explain how I can do this on my chromebook, because I can't seem to find the 'Reveal in Finder' option.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":67748429,"Users Score":0,"Answer":"Go to the dropdown menu of 'Project' at the top and then click 'Project files'. Then just paste your file into it.","Q_Score":0,"Tags":"python,excel,pycharm,google-chrome-os","A_Id":67748811,"CreationDate":"2021-05-29T06:28:00.000","Title":"Adding a .xlsx file to Pycharm on a chromebook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some excel files with several workbooks in each. 
All of the columns are defined and I now need several users to go in and add data. Each user will make edits and send back the doc with their info added. Then someone will copy and paste all the changes into the master workbooks. Very time consuming process and I was wondering if I can make this quicker with python. My idea is this load the master and the edited spreadsheet into a dataframe and loop through each cell in both do the following.\n\nIf a cell in the master is blank but has data in the edited file, then copy the info in that cell to the master.\nIf the cell in the master has data but it is not the same as the edited file then overwrite the cell in the master with that info.\nIf the cell in the master has the exact same info as the edited file then do nothing to the cell.\nIf the cell in the master and the edited file are both blank do nothing to the cell.\n\nIs this a possible solution or is my logic way off and there is an easier to do this? These excel docs are huge so the copy paste method takes a very long time and I would like to speed up this process for my team. So basically I want to add each users edits into the master without overwriting any other users edits in the process. Thanks in advance for your help with this.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":67791826,"Users Score":1,"Answer":"This is certainly possible within Python but if you are looking for the continued ease of use for your team, building it as a macro in VBA may be better. User could go into the submitted document, click a button on the ribbon, and it would work through the process with a relatively nice UI. While Python may be nominally faster, you could be asking your team to interact with a terminal interface (using Py2exe or pyinstaller) or you may need to install Python on their workstations (to use xlwings, for example).\nI'm not sure of the exact layout of your data, but here's an example of how the VBA process could work:\n\nUser opens edited file and clicks macro button on ribbon\nMacro asks for user to select Master Document (assuming Master Document is not always the same)\nBased on Sequence\/ID column (usually column A), insert a helper column to the right of the data that validates whether their entered data is the same or different than the Master Document (could use CONCAT() to create one cell of data for comparison if there is multiple columns they're editing)\nFor items that are different (would include both where they're blank or changed), copy those respective ranges over\n\nThere are many ways to break down the logic into executable code, so that is just an example of one possibility. Python is a powerful language but for automating Excel data entry or administrative tasks like this request, VBA is often the easiest way forward.","Q_Score":0,"Tags":"python,excel,automation","A_Id":67794218,"CreationDate":"2021-06-01T15:31:00.000","Title":"Merging excel edits from several users into a master excel workbook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to databricks and sql and want to add some data there.\nI am using python notebook in azure databricks. I have created a very big empty delta table. Columns here:\nId| A| B| C| D| E| F| G| H| I| J| K (A,B,C.... 
are column names)\nI will parse log files as they will appear them in blob and create dataframes. The dataframes could like this.\nDF1\nA| B| C| D| (A,B,C.... are column names)\nDF2\nA| B| D| E| (A,B,C.... are column names)\nDF3\nA| B| D| F| (A,B,C.... are column names)\nI want to insert all of these data frames in the delta table. In addition I will also need to add Id(log_file_id). Is there a way to insert data in this manner to the tables?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":67871278,"Users Score":0,"Answer":"Create an empty dataframe having all the columns let say X. Then you can concatenate all the other dataframes into X. Then save X directly.","Q_Score":0,"Tags":"python,pandas,azure,databricks","A_Id":67879315,"CreationDate":"2021-06-07T12:02:00.000","Title":"databricks: Add a column and insert rest of the data in a table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am a beginner in python, and I have the following (embarassing) issue:\nI am trying to import an excel file using xlrd but I have the \"FilenotfoundError\".\nAnyone is able to help a newbie?\nThanks a lot,\nMatteo\n--\nimport xlrd\n#load the data file\npath =(r\"C:\\Users\\MCECCHI\\Desktop\\oil_exxon.xls\")\nwb = xlrd.open_workbook(path)\nsheet = wb.sheet_by_index(0)\nprint(sheet.nrows)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":67874917,"Users Score":0,"Answer":"You should try to pip install xlrd and It can be done like this\nProgram to extract number\nof rows using Python\nimport xlrd\nGive the location of the file\nloc = (r\"C:\\Users\\MCECCHI\\Desktop\\oil_exxon.xls\")\nwb = xlrd.open_workbook(loc)\nsheet = wb.sheet_by_index(0)\nsheet.cell_value(0, 0)\nExtracting number of rows\nprint(sheet.nrows)","Q_Score":0,"Tags":"python,excel,xlrd","A_Id":67875247,"CreationDate":"2021-06-07T15:57:00.000","Title":"Python (replit): Import excel file with xlrd error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can dolphindb write and query data at the same time by invoking python api?\nBy invoking dolphindb's python api, Can it write and query data at the same time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":67885313,"Users Score":0,"Answer":"The python api of dolphindb supports multiple threads to write data and check data at the same time, but you must avoid writing data to the same partition at the same time, otherwise an error will be reported.","Q_Score":0,"Tags":"python,sql,dolphindb","A_Id":69732093,"CreationDate":"2021-06-08T09:49:00.000","Title":"Can dolphindb write and query data at the same time by invoking python api?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to create a function that will accept a dataframe and will parse that dataframe into a sql server table. 
I am stuck as to what needs go in the select statement below the insert query.\ndf- dataframe\ndesttable - destination table that needs to be parsed.\ntablecols - An array of the table columns for the table\n\n # Insert DataFrame to Table\n def InsertintoDb(self, df, desttable, tablecols):\n tablecolnames = ','.join(tablecols)\n qmark = ['?' for s in tablecols]\n allqmarks = ','.join(qmark)\n #rowappendcolname = ','.join(['row.' + s for s in tablecols])\n for index, row in df.iterrows():\n cursor.execute(\n '''INSERT INTO [Py_Test].[dbo].''' + desttable + ''' ( ''' + tablecolnames + ''')VALUES (''' + allqmarks + ''')''',\n )\n self.conn.commit()\n\n Any help is much appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":67896137,"Users Score":0,"Answer":"As suggested by the gentleman in the comment, I was able to do it using df.to_sql . Here is the working code -\n\nclass DbOps:\n def __init__(self):\n self.username = ''\n self.password = ''\n self.ipaddress = 'localhost'\n # self.port = 5439\n self.dbname = ''\n\n # A long string that contains the necessary Postgres login information\n self.engine = sqlalchemy.create_engine(\n f\"mssql+pyodbc:\/\/{self.username}:%s@{self.ipaddress}\/{self.dbname}?driver=SQL+Server+Native+Client+11.0\" % urlquote(f'\n {self.password }'))\n\n def InsertintoDb(self, df, desttable, tablecols):\n df.to_sql(desttable, self.engine, index=False, if_exists='append')","Q_Score":0,"Tags":"python-3.x,pandas,dataframe,pyodbc","A_Id":67906973,"CreationDate":"2021-06-09T00:50:00.000","Title":"Function to parse a dataframe into a SQL table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to do a daily ingesting job that takes a CSV file from blob storage and put it integrate it into a PostgreSQL database. I have the constraint to use python. Which solution do you recommend me to use for building\/hosting my ETL solution ?\nHave a nice day :)\nAdditional information:\nThe size and shape of the CSV file are 1.35 GB, (1292532, 54).\nI will push to the database only 12 columns out of 54.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":67900171,"Users Score":0,"Answer":"You can try to use Azure Data Factory to achieve this. New a Copy Data activity, source is your csv and sink is PostgreSQL database. In the Mapping setting, just select the columns you need. Finally, create a schedule trigger to run it.","Q_Score":0,"Tags":"python,postgresql,azure,csv","A_Id":68157073,"CreationDate":"2021-06-09T08:15:00.000","Title":"Where to host a data ingestion ETL ? input data (csv file) automatically from Azure blob storage to Azure Posgresql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"May I know how to merge 2 excel into 1 like this in python.\nI've tried Pandas to merge it by \"name\" and keep the sequence \"index\", but no luck. as there are more than 1 location. 
so, the result should have 2or more location in row.\nMany thanks\n\n\n\n\nindex\nname\nprice\n\n\n\n\n1\napple\n2\n\n\n2\norange\n3\n\n\n3\ngrape\n7\n\n\n4\nbanana\n1\n\n\n5\nkiwi\n2.5\n\n\n6\nlemon\n1\n\n\n\n\n\n\n\nindex\nname\nlocation\n\n\n\n\n1\napple\nUS\n\n\n2\napple\nUK\n\n\n3\nbanana\nColumbia\n\n\n4\nbanana\nCosta Rica\n\n\n5\nkiwi\nItaly\n\n\n6\nlemon\nUS\n\n\n\n\n\n\n\nindex\nname\nprice\nlocation_1\nlocation_2\n\n\n\n\n1\napple\n2\nUS\nUK\n\n\n2\norange\n3\nN\/A\nN\/A\n\n\n3\ngrape\n7\nN\/A\nN\/A\n\n\n4\nbanana\n1\nColumbia\nCosta Rica\n\n\n5\nkiwi\n2.5\nItaly\n\n\n\n6\nlemon\n1\nUS","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":67901251,"Users Score":0,"Answer":"you can try pd.concat to combine them.","Q_Score":0,"Tags":"python,excel","A_Id":67901292,"CreationDate":"2021-06-09T09:23:00.000","Title":"Python Merge 2 excel table in row, with not unique key","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 buckets on the S3 service. I have a lambda function \"create-thumbnail\" that triggered when an object is created into an original bucket, if it is an image, then resize it and upload it into the resized bucket.\nEverything is working fine, but the function doesn't trigger when I upload files more than 4MB on the original bucket.\nFunction configurations are as follow,\n\nTimeout Limit: 2mins\nMemory 10240\nTrigger Event type: ObjectCreated (that covers create, put, post, copy and multipart upload complete)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":368,"Q_Id":67917878,"Users Score":0,"Answer":"Instead of using the lambda function, I have used some packages on the server and resize the file accordingly and then upload those files on the S3 bucket.\nI know this is not a solution to this question, but that's the only solution I found\nThanks to everyone who took their time to investigate this.","Q_Score":2,"Tags":"python-3.x,amazon-web-services,amazon-s3,aws-lambda","A_Id":67985482,"CreationDate":"2021-06-10T08:53:00.000","Title":"AWS S3 lambda function doesn't trigger when upload large file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How to fetch the table data along with column names using snowflake connector cursor.\nWell I am able to get it using dictcursor but it becomes complex to consolidate the result set as it gives all data as key pair.\nI wonder if there is any straight forward way.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1085,"Q_Id":67945837,"Users Score":0,"Answer":"I had the same question, using the python snowflake connector in Jupyter notebooks. 
I work with dataframes, so working from @SimonD's answer above I adapted the section with cursor.description to:\nhdrs = pd.DataFrame(cursor.description)\ndf = pd.DataFrame(sql_data)\nFrom my data, the resulting hdrs dataframe has an attribute 'name' that I can use to set column names for the df dataframe, like so:\ndf.columns = hdrs['name']","Q_Score":0,"Tags":"python,snowflake-cloud-data-platform","A_Id":68761182,"CreationDate":"2021-06-12T05:16:00.000","Title":"Getting column header from snowflake table using python snowflake connector","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a Python CLI app that has to manage some data on a sqlite db (creating, updating and deleting records). I want the users to be able to install the app and use it right away. So my question is, can I just upload an empty sqlite db to GitHub? Or should I just upload a schema file and during installation build the db in a build step? I suppose if going the second way, users should have sqlite pre-installed or else the installation will fail. What I want is for them to just install the app, without worrying about dependencies and such.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":111,"Q_Id":67955165,"Users Score":0,"Answer":"If your sqlite db have some pre tables and records, you should upload it to vc in order to be used by the users. but if you need a clean db for each instance of your project I suggest creating db during the initialization process of your app.\nAlso if your app needs some pre-data inside the db, one of the best practices is to put the data into a file like predata.json and during initialization, create db and import it into the db.","Q_Score":0,"Tags":"python,git,sqlite,github,python-poetry","A_Id":67955222,"CreationDate":"2021-06-13T05:29:00.000","Title":"Should an embedded SQLite DB used by CLI app be uploaded to version-control (Git)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I ran a program and got an error saying cx_Oracle.DatabaseError: DPI-1050: Oracle Client library is at version 11.1 but version 11.2 or higher is needed\nThe issue is I have other programs running on version 11.1 but I need to update to 11.2 or higher to run specific code.\nIf I upgrade my cx_Oracle will this break any other instances of code running with respect to version 11.1?\nIf you cannot do it, or if there is a better way, what would be the best way to deal with this?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":67976057,"Users Score":2,"Answer":"I don't think anyone can give you assurances that upgrading your Oracle Client library won't break any of your code! It is going to depend highly on what kind of code you have and whether you have done anything unexpected in that code. So you need to perform the upgrade and test your applications yourself. A great deal of effort is made to ensure a seamless upgrade experience but a seamless upgrade cannot be guaranteed!\nWith the instant client you have the option of easily installing a separate version for your new application. 
You just need to make sure that you select the correct configuration for each different application. This gives you the option of testing each application independently.","Q_Score":0,"Tags":"python,cx-oracle","A_Id":67977200,"CreationDate":"2021-06-14T19:19:00.000","Title":"Will updating cx_Oracle break any running code?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have old django project and new django project. I created dump file from database of old django. And also I made changes in tables and created new tables.\nNow I want to load that dump file to my new django app. I am facing errors when I firstly migrate then restore data or firstly restore then migrate..\nWhen I do migration first, it says tables already exist.\nWhen I do restore first , it says django.db.utils.ProgrammingError: relation \"django_content_type\" already exists\nI use migrate --fake error goes but new tables are not created in database.\nI spent 3-4 days but could not succeed.\nPlease, help me if you can.\nPS: my database is postgresql","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":668,"Q_Id":67980803,"Users Score":0,"Answer":"This is not straightforward and will need some manual interventions and it depends on what do you want to do in the future\n\nIf the tables that already exist in the database have a stable design and won't be changed or you can do the changes manually using SQL statements then set managed = False to the models' meta, this will make Django skip making migrations for those models\n\nIf you want to keep the power of migration in the new project for all models then this will more complex\n\nDelete all your migrations\nYou need to make your models equivalent to your database, you can set managed=False for new models like Users\nRun python manage.py makemigrations, this will create the structure of the initial database.\nFake running the migrations python manage.py migrate --fake\nDump the records of django_migrations table\nCreate a new empty migration (with --empty) and add the SQL statements of the django_migrations table to it using migrations.RunSQL()\nnow fake again so you skip that new migration.\nNow you are ready to use migrations as usual.\n\n\n\nWhen installing new database, you will just need to run python manage.py migrate","Q_Score":1,"Tags":"python-3.x,django,postgresql,migration,django-3.0","A_Id":67981473,"CreationDate":"2021-06-15T06:01:00.000","Title":"How to transfer data from old database to new modified database in django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using python3 to learn flask. When i connected it to a Mysql database using xampp, it shows the above mentioned error. is it a version problem or something else?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":67986927,"Users Score":0,"Answer":"You need to use one of the following commands. 
Which one depends on what OS and software you have and use.\neasy_install mysql-python (mix os)\npip install mysql-python (mix os\/ python 2)\npip install mysqlclient (mix os\/ python 3)\napt-get install python-mysqldb (Linux Ubuntu, ...)\ncd \/usr\/ports\/databases\/py-MySQLdb && make install clean (FreeBSD)\nyum install MySQL-python (Linux Fedora, CentOS ...)\nFor Windows, see this answer: Install mysql-python (Windows)","Q_Score":0,"Tags":"python-3.x,flask-sqlalchemy","A_Id":67987048,"CreationDate":"2021-06-15T13:11:00.000","Title":"'ModuleNotFoundError: No module named 'MySQLdb' '","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I assume the entire thing \"Sending data to a webhook when a row is updated in XYZ table\" wouldn't be possible via just MySQL query. However, I am trying to figure out a way to automate the process. Can anyone share some example ways this can be done?\nHere's what I intend to automate:\n\nWhenever any row is updated in table XYZ\nThe script should send out data from table ABC, DEF (tables that are connected to the earilier XYZ table) to a webhook URL.\n\nThe biggest issue I have is that this MySQL database is locally stored so I have to run the script locally, otherwise, I'd used Zapier for this.\nJust need some light on what programs or scripts I should be using.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":775,"Q_Id":68066609,"Users Score":1,"Answer":"You are correct that you cannot do this with pure MySQL: unless you add a somewhat dodgy extension to your MySQL server it has no way to originate any operation.\nYou could create a trigger on UPDATE (and perhaps another on INSERT) on that XYZ table. The trigger would INSERT a row into a new table called, maybe webhook_queue.\nA separate webhook program would, running every few seconds, SELECT, then DELETE all rows from that webhook_queue table, then send each webhook. There's obviously a latency problem with this approach: webhooks won't be sent until the webhook program wakes up and does its work.\nIf that won't work for you, you probably have to modify your application code to invoke the webhook as it UPDATEs each row.","Q_Score":1,"Tags":"python,mysql,database,windows,automation","A_Id":68067013,"CreationDate":"2021-06-21T10:50:00.000","Title":"MySQL function\/script to send data to a webhook when table XYZ is updated","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am looking to call an excel file located within a docker container into python. How would i go about doing this? I can't seem to find the correct file path.\nWhat I have done is copied the excel files from a local directory into a existing docker container. I have done this because airflow cannot find files in my local directory. 
I now need a means for python to find these files.\nAny help would be greatly appreciated.\nSteven.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":68072903,"Users Score":0,"Answer":"Try using volumes in docker so that you should be able to access the file","Q_Score":1,"Tags":"python,excel,docker,airflow","A_Id":68072954,"CreationDate":"2021-06-21T18:19:00.000","Title":"Is there a way to call an excel file located within a docker container into python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to python and was trying to create a python bot, I wanted a optimized way to modify and access my bot configs per server. I had 2 ideas on how\/when to fetch configs from the database for optimization.\n\nthis is what you would normally do - just fetch data variables(fetch a variable at a time) for each command, this would keep the bot simple and minimize unused recources.\n\nIn this one, whenever the user uses a command for the first time, it fetches the entire config table and stores it in a loaded dict from which you can access the config from. you can also update the config in the dict and every 30m-1hr it will log the values in the table and empty the dict. The benefit of this one is less sql calls but potentially less scalability because of unused objects in the dict.\n\n\nCan someone help me decide which one is better, i dont know normally how you would make discord bots or the convention.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":264,"Q_Id":68077892,"Users Score":0,"Answer":"Your second approach is called caching the data. You're basically creating a cached database in your application (the dictionary) and save a bunch of usually necessary data to access them quickly. It is what every (almost every) major service (like Steam) does in order to minimize the main database calls.\nI think this is the better practice however it has its drawbacks.\nFirst, from time to time, you have to compare the cached data with what you have in the original database because your bot will not have a single user and while the cached data is available to one user, another user might alter the data in the original database.\nSecond, it is harder to implement than the first approach. You need to determine which data to store, which data to update rapidly and also you need to implement an alarm system for the users to update their cache whenever the main data is altered in the database.\nIf I were you and I just wanted to mess around with bots, I would go with just fetching the data each time from the database. 
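A minimal sketch of that simpler per-command approach, assuming a local SQLite file and a hypothetical guild_config(guild_id, key, value) table with a UNIQUE(guild_id, key) constraint (discord.py 2.x style; all names are illustrative):

```python
import sqlite3

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # needed for prefix commands on discord.py 2.x
bot = commands.Bot(command_prefix="!", intents=intents)

def get_config(guild_id: int, key: str, default=None):
    # One small query per command; simple and fine at modest scale.
    with sqlite3.connect("bot.db") as conn:
        row = conn.execute(
            "SELECT value FROM guild_config WHERE guild_id = ? AND key = ?",
            (guild_id, key),
        ).fetchone()
    return row[0] if row else default

def set_config(guild_id: int, key: str, value: str):
    # Relies on the assumed UNIQUE(guild_id, key) constraint for the upsert.
    with sqlite3.connect("bot.db") as conn:
        conn.execute(
            "INSERT INTO guild_config (guild_id, key, value) VALUES (?, ?, ?) "
            "ON CONFLICT(guild_id, key) DO UPDATE SET value = excluded.value",
            (guild_id, key, value),
        )

@bot.command()
async def greeting(ctx):
    await ctx.send(get_config(ctx.guild.id, "greeting", "Hello!"))
```

For heavier traffic, the cached approach described above, or an async driver such as aiosqlite, avoids blocking the event loop on every command.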
It's easier and it is good enough for most applications.","Q_Score":0,"Tags":"python,discord.py","A_Id":68078074,"CreationDate":"2021-06-22T05:16:00.000","Title":"How to implement sql databases in discord.py bot","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python and gspread to upload local .csv data to a google SpreadsheetA.\nI have a separate google SpreadsheetB that uses =IMPORTRANGE to import the data from SpreadsheetA and create a pivot table and corresponding chart (both located on SpreadsheetB).\nIf I were to manually adjust any data in SpreadsheetA (e.g., alter value of any cell, add a value to an empty cell, etc), then the data in SpreadsheetB\u2014with its corresponding pivot table and chart\u2014update dynamically with the new data from SpreadsheetA.\nHowever, when SpreadsheetA is updated with new data programmatically via Python, IMPORTRANGE in SpreadsheetB does not capture the new data.\nAny ideas as to why this happens and how I might be able to fix?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":110,"Q_Id":68085843,"Users Score":0,"Answer":"Although probably not the ideal, my solution to this was to use gspread to add a new worksheet to spreadsheetA, which somehow manages to kickstart importrange() in SpreadsheetB.\nI would still love to see a cleaner solution, if anyone knows of one\u2014but this has continued to work since implementing a week ago.","Q_Score":0,"Tags":"python,python-3.x,google-sheets,google-sheets-formula,google-sheets-api","A_Id":68170377,"CreationDate":"2021-06-22T14:51:00.000","Title":"Google Sheets IMPORTRANGE not working dynamically when worksheet is Programmatically updated via Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have pandas data frame with int64 , object , and datetime64[ns] data types. How to preserve those data types when exporting pandas DataFrame.to_Excel option?\nI want exported Excel file columns looks like this:\nint64 Number format in Excel\nobject Text format in Excel\ndatetime64[ns] Date format in Excel\nRight now all of my Excel column format shows as General","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":614,"Q_Id":68111864,"Users Score":0,"Answer":"I have pandas data frame with int64 , object , and datetime64[ns] data types. How to preserve those data types when exporting pandas DataFrame.to_Excel option?\n\nThe short answer is that you can't.\nExcel does't have as many datatypes as Python and far fewer than Pandas. For example the only numeric type it has is a IEEE 754 64bit double. Therefore you won't be able be able to store a int64 without losing information (unless the integer values are <= ~15 digits). Dates and times are are also stored in the same double format and only with millisecond resolution. 
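While the underlying cell types can't be changed, the display formats can at least be controlled at export time so the columns don't show up as General. A minimal sketch, assuming the xlsxwriter engine is installed; the example frame, sheet name, column letters and formats are all illustrative:

```python
import pandas as pd

# Assumed example frame; the point is the writer configuration below.
df = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["a", "b", "c"],
    "when": pd.to_datetime(["2021-06-01", "2021-06-02", "2021-06-03"]),
})

with pd.ExcelWriter(
    "out.xlsx",
    engine="xlsxwriter",
    datetime_format="yyyy-mm-dd hh:mm:ss",
    date_format="yyyy-mm-dd",
) as writer:
    df.to_excel(writer, sheet_name="Sheet1", index=False)
    workbook = writer.book
    worksheet = writer.sheets["Sheet1"]
    # Integer display format for the id column (column A).
    int_fmt = workbook.add_format({"num_format": "0"})
    worksheet.set_column("A:A", 12, int_fmt)
    # Text display format for the name column (column B).
    text_fmt = workbook.add_format({"num_format": "@"})
    worksheet.set_column("B:B", 20, text_fmt)
```

That only changes how Excel displays the values; the storage limitations described here still apply.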
So you won't be able to store datetime64[ns].\nYou could store them in string format but you won't be able to use them for calculations and Excel will complain about \"Numbers stored as strings\".","Q_Score":1,"Tags":"python,excel,pandas","A_Id":68128988,"CreationDate":"2021-06-24T08:06:00.000","Title":"How to keep data frame data types when exporting to Excel file?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a pipeline with a python script on Azure Web Service.\nMy script uses psycopg2 to connect to the postgres database\nbut I am getting an error trying to import psycopg2 saying\nfrom psycopg2._psycopg import ( # noqa\nImportError: \/home\/site\/wwwroot\/antenv\/lib\/python3.7\/site-packages\/psycopg2\/_psycopg.cpython-37m-x86_64-linux-gnu.so: undefined symbol: PQencryptPasswordConn\nany help would be apprciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":68117531,"Users Score":0,"Answer":"PQencryptPasswordConn was introduced in PostgreSQL v10. So you must be trying to use psycopg2 with a libpq that is older than that.\nThe solution is to upgrade your PostgreSQL client.","Q_Score":0,"Tags":"python,postgresql,azure,psycopg2","A_Id":68117655,"CreationDate":"2021-06-24T14:18:00.000","Title":"Can't Install psycopg2 on Azure","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Basically I am writing a script to reset a django webapp completely. In this script, I want to reset the database, and there is a command to do it from django extensions. Unfortunately, I haven't been able to run it programatically. It works fine when I run it via command line, but it just won't execute when I try programatically.\nI have tried using os.system and subprocess.\nI have also tried using management.call_command('reset_db'), but it keeps saying that there isn't a command called reset_db. I have checked to make sure the django_extensions is in my installed apps, so I have no idea why that isn't working.\nDoes anyone know how I could fix this? Thank you!\nAlso I am using python3, the most recent version of django I believe, and it is a MYSQL server that I am trying to delete.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":68168855,"Users Score":0,"Answer":"I can't know without seeing your way of invocation directly, but my guess is the script's not running in the virtualenv. Here are some debug notes:\n.\/manage.py --help | grep reset_db: Does this output anything?\n.\/manage.py shell_plus\nThen try:\n\nfrom django.core.management import call_command\ncall_command('reset_db', '--help')\n\nAnything then?\nAlso within .\/manage.py shell_plus, try import django_extensions\nOutside of the shell, try this: pip show django, pip django-extensions.\nIf it doesn't show those (e.g. WARNING: Package(s) not found: django-extension) and you think they're already installed, try this:\nwhich python, which pip. Are you using venv, virtualenv, virtualenvwrapper, pipenvorpoetry`?\nTry env | grep VIRT, do you see a VIRTUAL_ENV? 
If not you may need to make one.\nWhen you run the script, you need to have your environmental variables set so you hook in to your site packages. In poetry we can do poetry run .\/manage.py ourscript or poetry run .\/ourscript.py without needing to be sourced. But we can also easily drop into virtualenv via poetry shell.\nIf you created an environment like virtualenv -ppython3.8 .venv, you can either do:\nsource .venv\/bin\/activate, .\/myscript.py, rr you can try .venv\/bin\/python .\/myscript.py","Q_Score":0,"Tags":"python-3.x,django,django-extensions","A_Id":69920136,"CreationDate":"2021-06-28T19:10:00.000","Title":"Having trouble running the django extensions command reset_db programatically","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a simple excel file I am trying to figure out how I can get my app to read the excel file, so I can use the data to display in a template. I have looked into xlrd but does anyone have any suggestions on the best way to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":153,"Q_Id":68183801,"Users Score":2,"Answer":"Using pandas's read_excel is very easy and efficient and will also give you liberty to manipulate the columns(Fields)","Q_Score":1,"Tags":"python,django,python-2.7,django-models","A_Id":68183833,"CreationDate":"2021-06-29T18:17:00.000","Title":"Django read excel (xlsx) file to display in a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've a problem where one API implemented in a django (3.2) web app running with gunicorn (gevent) has to fetch different prices from multiple APIs and store those prices in the database (Postgres 13) before returning to the client.\nI'd like to put the inserts in the same transaction, so if something unexpected happens, nothing will be inserted.\nI am now going forward by first calling all apis, each one inside a green thread (gevent) and after all of them return, I bulk insert the results.\nBut turns out I got really curious if I it is possible for different threads ( green or not) to share the same transaction. I saw that psycopg2 can execute in a non blocking way.\nThe issue now is everytime I start thread in django the new thread is inside a new transaction. I will dig into the django db backend source to understand what is happening, but maybe someone can clear this out.\nTldr; is possible to different threads execute queries inside the same transaction?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":209,"Q_Id":68186732,"Users Score":1,"Answer":"You definitely do not want to attempt to share a single transaction\/postgres connection between multiple threads without some locking mechanism to make sure they don't interleave activity on the connection in some nasty way that causes errors.\nInstead, a simpler and safer solution is to start your green threads from the main request thread and then gevent.join([, ...]) all of them from that same main request thread. 
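A short sketch of that pattern, where the model class and the API-client objects are hypothetical stand-ins for whatever the view already uses:

```python
import gevent
from django.db import transaction

from myapp.models import PriceQuote  # hypothetical model

def fetch_price(client, item_id):
    # Runs inside a green thread: network I/O only, no database work here.
    return client.get_price(item_id)  # hypothetical API-client call

def collect_prices(clients, item_id):
    jobs = [gevent.spawn(fetch_price, c, item_id) for c in clients]
    gevent.joinall(jobs, timeout=15)
    results = [j.value for j in jobs if j.successful()]

    # Back on the request thread: one connection, one transaction.
    with transaction.atomic():
        PriceQuote.objects.bulk_create(
            [PriceQuote(item_id=item_id, source=r["source"], price=r["price"])
             for r in results]
        )
    return results
```

Greenlet.get(), as described next, works just as well; using value together with successful() simply skips failed fetches instead of re-raising their exceptions.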
Each green thread would go get the data from the API and just return it as the exit of each thread.\nThen have the main request thread go through each exited green thread object (greenlet) and get the return value for each via Greenlet.get(). Then do the inserts on the main request thread using its normal transaction\/connection.\nUPDATE\nIf you want to get even more sophisticated to achieve better performance, you could use a Pool and have each greenlet put its result on a Queue that's read from the main thread. That way you start saving results to the database as they become available rather than waiting until they all complete.","Q_Score":4,"Tags":"python,django,postgresql,multithreading,gevent","A_Id":68200527,"CreationDate":"2021-06-29T23:26:00.000","Title":"Share django transaction across threads","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have credentials ('aws access key', 'aws secret key', and a path) for a dataset stored on AWS S3. I can access the data by using CyberDuck or FileZilla Pro.\nI would like to automate the data fetch stage and using Python\/Anaconda, which comes with boto2, for this purpose.\nI do not have a \"bucket\" name, just a path in the form of \/folder1\/folder2\/folder3 and I could not find a way to access the data without a \"bucket name\" with the API.\nIs there a way to access S3 programatically without having a \"bucket name\", i.e. with a path instead?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":167,"Q_Id":68188131,"Users Score":1,"Answer":"s3 does not have a typical native directory\/folder structure, instead, it is defined with keys. If the URL starts with s3:\/\/dir_name\/folder_name\/file_name, it means dir_name is nothing but a bucket name. If you are not sure about bucket name but have s3 access parameters and path, then you can\n\nList all the s3_buckets available -\ns3 = boto3.client('s3')\nresponse = s3.list_buckets()\n\nUse s3.client.head_object() method recursively for each bucket with your path as key.","Q_Score":0,"Tags":"python,amazon-s3,path,boto,bucket","A_Id":70870783,"CreationDate":"2021-06-30T03:38:00.000","Title":"python boto - AWS S3 access without a bucket name","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Created Pythonshell with simple script, like just requests.get(). Elasticsearch cluster is in VPC.\nI tried using self-referencing groups, endpoints but nothing worked. Also custom connection with JDBC fails Could not find S3 endpoint or NAT gateway for subnetId (but it exists).\nI see that for Spark jobs ESConnector is available but can not find any working way to make it with Pythonshell jobs. 
Is there any way to allow such connection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":68210056,"Users Score":0,"Answer":"Solved, I was missing route to NAT gateway in private subnet.","Q_Score":0,"Tags":"python,amazon-web-services,elasticsearch,aws-glue,amazon-vpc","A_Id":68223839,"CreationDate":"2021-07-01T12:19:00.000","Title":"AWS Glue pythonshell job - how to connect to elasticsearch in VPC?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm starting to use python now and I want to automatize one process on some engineering reports I have to do. To do the reports I use MSWord and Excel, using Excel as database and copying manually the graphics and tables to Word to generate the actual report file.\nUsing pyautogui I've already been able to locate the areas I want to copy, and copy them from Excel. My problem now is that I don't know a way I can make python alternate from Excel to Word and paste, then go back to the Excel file and copy the new info, go back to Word and paste, etc until all infos are compiled so I can save a .pdf file.\nIs there any other library I should be using or a way to do it on pyautogui?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":68214032,"Users Score":0,"Answer":"I suggest using something like openpyxl to open the Excel file directly rather than trying to manipulate it through the Excel UI. Similary, use a library like pyPDF or something similar to write directly to a PDF file.\nIf you insist on manipulating Excel and Word directly. You don't have to switch back and forth. First get all the data you need from Excel and store it somewhere. Then open Word and do what you need to with the stored data.","Q_Score":0,"Tags":"python","A_Id":68214165,"CreationDate":"2021-07-01T16:45:00.000","Title":"How to exchange between apps using Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"there.I am moving forward to use google cloud services to store Django media files. But one thing that stops me is about the Google and Amazon free tier. I had read the google cloud docs but I am confuse about many things. For free tiers, New customers also get $300 in free credits to run, test, and deploy workloads. What I want to know is if they are gonna automatically charge me for using the cloud-storage after 3 months of trial is over because I am gonna put my bank account. This case is same on Aws bucket which allows to store mediafiles for 1 year after then what's gonna happen. Are they auto gonna charge me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":68214391,"Users Score":0,"Answer":"I have never used google cloud before. For AWS free tier you can use the storage with the limited features they allow to free tier. Regarding charges, you can definitely setup a cloudwatch alert in AWS which will alert you if your usage is beyond the free tier limit or you are about to get charged. So you can set that up and be assured you won't get surprise before you get alerted for the same.\nHope this helps. 
Good luck with your free tier experience.","Q_Score":0,"Tags":"python,django,amazon-web-services","A_Id":68215549,"CreationDate":"2021-07-01T17:14:00.000","Title":"Queries related to Google cloud storage and Aws bucket for file storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a table in SQL Server where I need to insert data on regular base. Each day I perform same task importing data manually, it makes me feel tedious so I need your help. Is it possible to send data from CSV file to SQL Server's existing table without doing manual procedure.\nOr using python to create a scrip that send data from CSV file to SQL Server at fixed time automatically.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":128,"Q_Id":68265568,"Users Score":2,"Answer":"First you have to create a python script that inserts data into SQL server after reading CSV file. Then you should create a CRON job on your server that runs this script regularly. This might be a possible solution for your problem.","Q_Score":1,"Tags":"sql,sql-server,python-3.x,database","A_Id":68265757,"CreationDate":"2021-07-06T06:28:00.000","Title":"Send data from CSV file to SQL Server automatically?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python connector to deploy a sql create table script to snowflake using snowchange and my arguments are passed correctly in CLI.\nNot sure why but I receive this error while running the command. because the variables are properly declared.\nsql script:\nCREATE OR REPLACE TABLE {{ db_raw }}.schemaname.TEST1 (\nTABLENAME VARCHAR(100),\nSOURCE_SYS VARCHAR(100),\nSCHEMA_NAME VARCHAR(100)\n);\nmy script looks something like below.\npip install --upgrade snowflake-connector-python\npython $(System.DefaultWorkingDirectory)\/snowchange\/snowchange\/cli.py -a $(SNOWFLAKE_ACCOUNT_NAME) -u $(SNOWFLAKE_DEVOPS_USERNAME) -r $(SNOWFLAKE_ROLENAME) -w $(SNOWFLAKE_WAREHOUSE) -c TST_ENT_RAW.SNOWCHANGE.CHANGE_HISTORY --vars '{\"DB_CURATED\": \"$(SNOWFLAKE_DB_CURATED)\", \"DB_RAW\": \"$(SNOWFLAKE_DB_RAW)\", \"db_curated\": \"$(SNOWFLAKE_DB_CURATED)\", \"db_raw\": \"$(SNOWFLAKE_DB_RAW)\"}' -v -ac\nerror:\nusage:\n2021-07-06T12:24 snowchange: error: argument --vars: invalid loads value: \"'{DB_RAW:\"","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":36,"Q_Id":68270782,"Users Score":1,"Answer":"You seem to have both uppercase and lowercase var names listed. (\"DB_RAW\" and \"db_raw\") Your script seems to refer to db_raw. JSON is case sensitive. 
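A minimal sketch of the script described in the CSV-to-SQL-Server answer above, assuming pyodbc with a hypothetical connection string, table and column names; a cron entry (or Windows Task Scheduler) can then run it at a fixed time.

import csv
import pyodbc

# Hypothetical connection string and table/column names.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()
cur.fast_executemany = True   # speeds up bulk inserts considerably

with open("daily_data.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)              # skip the header row
    cur.executemany(
        "INSERT INTO dbo.MyTable (col1, col2, col3) VALUES (?, ?, ?)",
        [tuple(row) for row in reader],
    )
conn.commit()
conn.close()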
Have you tried removing \"DB_RAW\"?\n--vars '{\"DB_CURATED\": \"$(SNOWFLAKE_DB_CURATED)\", \"db_curated\": \"$(SNOWFLAKE_DB_CURATED)\", \"db_raw\": \"$(SNOWFLAKE_DB_RAW)\"}'","Q_Score":0,"Tags":"python,snowflake-cloud-data-platform","A_Id":68271509,"CreationDate":"2021-07-06T12:42:00.000","Title":"snwochange error while running an sql statement","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to insert a picture inside a cell using pptx python?\nI'm also thinking of finding the coordinate of the cell and adjust the numbers for inserting the picture, but can not find anything.\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":117,"Q_Id":68289163,"Users Score":1,"Answer":"No, unfortunately not. Note that this is not a limitation of python-pptx, it is a limitation of PowerPoint in general. Only text can be placed in a table cell.\nThere is nothing stopping you from placing a picture shape above (in z-order) a table cell, which will look like the picture is inside. This is a common approach but unfortunately is somewhat brittle. In particular, the row height is not automatically adjusted to \"fit\" the picture and changes in the content of cells in prior rows can cause lower rows to \"move down\" and no longer be aligned with the picture. So this approach has some drawbacks.\nAnother possible approach is to use a picture as the background for a cell (like you might use colored shading or a texture). There is no API support for this in python-pptx and it's not without its own problems, but might be an approach worth considering.","Q_Score":0,"Tags":"python,python-pptx","A_Id":68291203,"CreationDate":"2021-07-07T15:46:00.000","Title":"Python pptx insert a picture in a table cell","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The title is my question. I can't think of anything that is useful to store jobs to external database.\nCan you guys provide some use cases?\nthank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":68384355,"Users Score":0,"Answer":"If you schedule jobs dynamically (at run time instead of adding them all at application startup), you don't want to lose them when you restart the application.\nOne such example would be scheduling notification emails to be sent, in response to user actions.","Q_Score":0,"Tags":"python,mongodb,redis,jobs,apscheduler","A_Id":68391755,"CreationDate":"2021-07-14T19:57:00.000","Title":"Why would i want to store apscheduler jobs to JobStore (Redis, Mongo, etc.)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am writing data into a CSV file. file contains data related to student marks like 6\/10,\nwhich means 6 out of 10. here the issue is when I open this file with Microsoft excel 6\/10 becomes 6-Oct. 
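For the python-pptx answer above about floating a picture over a table cell, here is a rough sketch that estimates the cell's position from the table frame's origin plus the preceding column widths and row heights. The file names, shape index and cell indices are hypothetical, and the picture is simply drawn above the table in z-order.

from pptx import Presentation
from pptx.util import Inches

prs = Presentation("deck.pptx")            # hypothetical file
slide = prs.slides[0]
frame = slide.shapes[0]                    # assumed to be the graphic frame holding the table
table = frame.table

# Approximate the top-left corner of cell (row 1, col 2) from the frame origin
# plus the widths/heights of the preceding columns/rows.
row_idx, col_idx = 1, 2
left = frame.left + sum(table.columns[i].width for i in range(col_idx))
top = frame.top + sum(table.rows[i].height for i in range(row_idx))

# Added last, so the picture sits above the table and appears to be "inside" the cell.
slide.shapes.add_picture("logo.png", left, top, width=Inches(1.0))
prs.save("deck_with_picture.pptx")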
if anyone have an idea how can we stop to converting a string to date.\nneed solution from code side not an excel side","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":68407855,"Users Score":1,"Answer":"That's not really a Python string issue, but an issue with Excel trying to be smarter than it should be.\nIf you open Excel and write 6\/10 into a cell, it will convert it to a date by default.\nOne solution (both in Excel and from other XLS generating software) is to explicitely set the cell format to TEXT. Not sure how to do this in your current attempt, but e.g. openpyxl offers such an option.\nOther solutions include to prefix\/wrap your content:\n\nuse ' (single quote) prefix\n'6\/10 will display as 6\/10.\n\nuse (single space) prefix\n 6\/10 will display as 6\/10.\nSame as above, however the space will be part of the displayed cell data, you probably don't want this.\n\nWrap in =\"your_content_here\"\n=\"6\/10\" will display as 6\/10\n\nPrefix with 0 (zero and space)\n0 6\/10 will display as 3\/5 (which mathematically is the same as 6\/10)\nNow this one works only for fractions, such as your 6\/10. In theory, this should change the cell format to number\/fraction, while internally it's the numeric value 0.6.","Q_Score":0,"Tags":"python,python-3.x,django","A_Id":68407954,"CreationDate":"2021-07-16T10:55:00.000","Title":"Why Python String auto convert into Date in microsoft excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file having 100000 rows. COPY_FROM() successfully inserted all the rows into the database table within seconds. But the row order found in database table is not similar to the row order found in the CSV file. Some of the rows in between seems to be shuffled. Did not find any solutions. Please help me out.\nCSV file\n\nR1\nR2\nR3\nR4\n\nPG table\n\nR1\nR3\nR2\nR4","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":68407946,"Users Score":3,"Answer":"That is normal and is to be expected. PostgreSQL inserts the rows wherever it finds room in the table, and when you query the table without ORDER BY, you won't necessarily always get the rows in the same order.\nIf you want your rows ordered, query them with an ORDER BY clause.","Q_Score":0,"Tags":"python,postgresql","A_Id":68408072,"CreationDate":"2021-07-16T11:03:00.000","Title":"Psycopg2 - COPY_FROM does not maintain row order","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently trying to use pg_dump and pg_restore to be able to dump select rows from a production server to a testing server. The goal is to have a testing server and database that contains the subset of data selected, moreover through a python script, I want the ability to restore the database that original subset after testing and potentially modifying the contents of the database.\nFrom my understanding of pg_dump and pg_restore, the databases that they interact with must be of the same dbname. Moreover, a selection criteria should be made with a the COPY command. Hence, my idea is to have 2 databases in my production server, one with the large set of data and one with the selected set. 
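CSV itself carries no cell formatting, so one code-side option from the answer above is to write an .xlsx file instead and mark the cell as Text explicitly. A minimal openpyxl sketch, with a hypothetical file name:

from openpyxl import Workbook

wb = Workbook()
ws = wb.active

mark = "6/10"                 # the value Excel would otherwise turn into 6-Oct
cell = ws["A1"]
cell.value = mark
cell.number_format = "@"      # "@" is Excel's Text format, so the value is not reinterpreted
wb.save("marks.xlsx")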
Then, name the smaller set db 'test' and restore it to the 'test' db in the test server.\nIs there a better way to do this considering I don't want to keep the secondary db in my production server and will need to potentially make changes to the selected subset in the future.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":68414169,"Users Score":0,"Answer":"From my understanding of pg_dump and pg_restore, the databases that they interact with must be of the same dbname.\n\nThe databases being worked with only have to have the same name if you are using --create. Otherwise each programs operates in whatever database was specified when it was invoked, which can be different.\nThe rest of your question is too vague to be addressable. Maybe pg_dump\/pg_restore are the wrong tools for this, and just using COPY...TO and COPY...FROM would be more suitable.","Q_Score":0,"Tags":"python,database,postgresql,schema,psql","A_Id":68414553,"CreationDate":"2021-07-16T19:06:00.000","Title":"pg_dump and pg_restore between different servers with a selection criteria on the data to be dumped","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am forced to ask this question\nMy mentor has given me a task to extract data from files with pure python, there were some txt file which were easy but there is a file with xlsx extension and I can't find any where if it is possible to extract the data from it with pure python (I have been searching for more than 3 weeks now).\nPlease if it is not possible tell me so that I can show this to her with confidence because my mentor keeps insisting that it is possible and I should do it with pure python but she refuses to give me any clues and tips.\nAnd If it is possible tell me how to do it or where to read more about it.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":545,"Q_Id":68466534,"Users Score":1,"Answer":"Previous answers regarding unpacking\/unzipping the XLSX file is the correct starting point. Thereafter you'll need to know how the extracted files work together. 
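A minimal sketch of the COPY...TO / COPY...FROM suggestion from the pg_dump answer above, streaming a filtered subset from the production server into the test server with psycopg2. The DSNs, table name and selection criteria are hypothetical, and the target table is assumed to already exist on the test server with matching columns.

import io
import psycopg2

# Hypothetical DSNs, table and selection criteria.
prod = psycopg2.connect("dbname=prod host=prod-host user=me")
test = psycopg2.connect("dbname=test host=test-host user=me")

buf = io.StringIO()
with prod.cursor() as cur:
    cur.copy_expert(
        "COPY (SELECT * FROM big_table WHERE created_at >= '2021-01-01') TO STDOUT WITH CSV",
        buf,
    )

buf.seek(0)
with test.cursor() as cur:
    cur.copy_expert("COPY big_table FROM STDIN WITH CSV", buf)
test.commit()

prod.close()
test.close()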
It's rather convoluted.\nThe best thing to do is to be specific about exactly what data you want to extract; then I'm sure you'll get some sample code that shows how to achieve your objective.","Q_Score":0,"Tags":"python,python-3.x,excel","A_Id":68466771,"CreationDate":"2021-07-21T08:45:00.000","Title":"Is it possible to read information from an xlsx file without any libraries in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am forced to ask this question\nMy mentor has given me a task to extract data from files with pure python, there were some txt file which were easy but there is a file with xlsx extension and I can't find any where if it is possible to extract the data from it with pure python (I have been searching for more than 3 weeks now).\nPlease if it is not possible tell me so that I can show this to her with confidence because my mentor keeps insisting that it is possible and I should do it with pure python but she refuses to give me any clues and tips.\nAnd If it is possible tell me how to do it or where to read more about it.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":545,"Q_Id":68466534,"Users Score":2,"Answer":"The short answer is no; the long answer is that you can unpack the .xlsx file (it is a zip archive) and iterate through the resulting .xml files \"by hand\".","Q_Score":0,"Tags":"python,python-3.x,excel","A_Id":68466747,"CreationDate":"2021-07-21T08:45:00.000","Title":"Is it possible to read information from an xlsx file without any libraries in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I came up with a situation where I need to create a table on a database (Connection 1) and need to fill it up with data from another connection (Connection 2).\nRight now I am creating these tables and filling them up with a select query in the same server using this example: CREATE TABLE table1 AS SELECT * FROM database.dataTable. However, I am stuck on creating table1 in a database on (Connection 1) and filling it up from (Connection 2).\nTo explain myself better, I need to do something like this: CREATE TABLE table1 (Connection 1) AS SELECT * FROM database.dataTable (Connection 2). I am using python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":68545903,"Users Score":0,"Answer":"Create table as select is done as a single operation. You can't do that with 2 connections. If you still need to do it by using 2 connections, you will have to create the table in one, then, once the table exists, fill it with the second one. 
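A minimal pure-standard-library sketch of the unpack-and-parse approach from the xlsx answers above, using zipfile and xml.etree. The file name is hypothetical, and a robust version would map sheet names to paths via xl/workbook.xml and its relationships instead of assuming sheet1.xml.

import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

with zipfile.ZipFile("data.xlsx") as z:               # hypothetical file name
    # Shared strings: text cells only store an index into this list.
    shared = []
    if "xl/sharedStrings.xml" in z.namelist():
        root = ET.fromstring(z.read("xl/sharedStrings.xml"))
        shared = ["".join(t.text or "" for t in si.iter(f"{NS}t")) for si in root.iter(f"{NS}si")]

    # First worksheet; the exact path can differ between files.
    sheet = ET.fromstring(z.read("xl/worksheets/sheet1.xml"))
    for row in sheet.iter(f"{NS}row"):
        values = []
        for c in row.iter(f"{NS}c"):
            v = c.find(f"{NS}v")
            raw = v.text if v is not None else ""
            # t="s" marks a shared-string cell, whose value is an index, not the text itself.
            values.append(shared[int(raw)] if c.get("t") == "s" else raw)
        print(values)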
It doesn't look optimal.","Q_Score":0,"Tags":"python,mysql,python-3.x,database","A_Id":68546008,"CreationDate":"2021-07-27T13:35:00.000","Title":"Creating a table in a database of a connection and retrieving data from another connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to connect MySQL zip archive via path variable with datagrip?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":31,"Q_Id":68562114,"Users Score":1,"Answer":"It is impossible because you need the running server","Q_Score":0,"Tags":"mysql,python-3.x,datagrip","A_Id":68564984,"CreationDate":"2021-07-28T14:17:00.000","Title":"Is it possible to connect MySQL zip archive with datagrip?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have PL\/SQL function defined in oracle database. When I call it in \"Toad For Oracle\" using following statement\n\"select ccl_bal(1,2,0) from dual\"\nit take hardly 2 seconds. But when I call it from Django 3.2 it takes lot of time, almost 5 minuts. I am using cx_oracle oracle library 8.1.0 and Here is my code.\nresult=connection.cursor().callfunc(\"ccl_bal\",int,[1, 2, 0])\nAny help??","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":147,"Q_Id":68631958,"Users Score":0,"Answer":"This can very well be caused by different optimizer settings in the session. You can compare by checking:\nselect value from v$parameter where name='optimizer_mode';","Q_Score":0,"Tags":"python,django,oracle,cx-oracle","A_Id":68645616,"CreationDate":"2021-08-03T07:28:00.000","Title":"Django with cx_Oracle, calling PL\/SQL function is taking lot of time","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We have our infrastructure up in AWS, which includes a database.\nOur transfer of data occurs in Python using SQLAlchemy ORM, which we use to mimic the database schema. 
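A minimal sketch of the earlier two-connection approach (create the table on Connection 1, then read from Connection 2 and insert), assuming mysql-connector-python with hypothetical hosts, credentials and columns.

import mysql.connector

# Hypothetical connection parameters and table names.
src = mysql.connector.connect(host="server2", user="me", password="pw", database="database")
dst = mysql.connector.connect(host="server1", user="me", password="pw", database="target_db")

src_cur = src.cursor()
dst_cur = dst.cursor()

# 1. Create the table on Connection 1; the column list must match the source query.
dst_cur.execute("CREATE TABLE IF NOT EXISTS table1 (id INT, name VARCHAR(255))")

# 2. Read from Connection 2 and insert into Connection 1 in chunks.
src_cur.execute("SELECT id, name FROM dataTable")
while True:
    rows = src_cur.fetchmany(1000)
    if not rows:
        break
    dst_cur.executemany("INSERT INTO table1 (id, name) VALUES (%s, %s)", rows)
dst.commit()

src.close()
dst.close()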
At this point it's very simple so it's no big deal.\nBut if the schema changes\/grows, then a manual change needs to be done in the code as well each time.\nI was wondering: what is the proper way to centralize the schema of the database, so that there is one source of truth for it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":68640226,"Users Score":0,"Answer":"Check out the Glue Schema Registry - this is pretty much what it's made for.","Q_Score":0,"Tags":"python,database,amazon-web-services,terraform,schema","A_Id":68642621,"CreationDate":"2021-08-03T17:11:00.000","Title":"Proper way to centralize schema fore databas","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So long story short: I'm trying to create a program that that loops through excel files stored locally on my computer and adds the sheets from each excel file into a master workbook.\nNOTE: I don't want to use pandas dataframe.append because each sheet has unique information and I want to retain every sheet separately.\nAny help would be greatly appreciated!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":349,"Q_Id":68641948,"Users Score":1,"Answer":"I think using openpyxl would be the easiest solution. It'll let you read and write excel sheets, and even gives you the option of creating various sheets. Personally it's my favorite route whenever I'm working with excel.","Q_Score":0,"Tags":"python,excel,pandas","A_Id":68642181,"CreationDate":"2021-08-03T19:42:00.000","Title":"Can I use python to loop through multiple excel files and add individual sheets to a master workbook?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 32 GB table in BigQuery that I need to do some adjustments through Jupyter Notebook (using Pandas) and export to Cloud Storage as a .txt file.\nHow to do this ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":68670434,"Users Score":0,"Answer":"You can use Google Cloud Platform Console to do that.\n\nGo to Bigquery in the Cloud Console.\nSelect the table you want to export and select \"Export\" then Export to GCS.\nAs your table is bigger than 1GB, be sure to put a wildcard in the filename so Bigquery exports the data in 1GB approx chunks (i.e export-*.csv).\n\nYou cannot export nested and repeated data in CSV format. Nested and repeated data are supported for Avro, JSON, and Parquet (Preview) exports. Anyway, the form will tell you if you can when you try to select the file format.","Q_Score":0,"Tags":"python,pandas,google-cloud-platform,google-bigquery,jupyter-notebook","A_Id":68670925,"CreationDate":"2021-08-05T16:39:00.000","Title":"How to load a BigQuery table to Cloud Storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a custom user model extending AbstractUser. 
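A minimal openpyxl sketch of the loop described above that collects every sheet from a folder of workbooks into one master workbook. The folder name is hypothetical, and only cell values are copied, not formatting or charts.

import glob
from openpyxl import Workbook, load_workbook

master = Workbook()
master.remove(master.active)                  # drop the default empty sheet

for path in glob.glob("reports/*.xlsx"):      # hypothetical folder
    wb = load_workbook(path, data_only=True)
    for sheet in wb.worksheets:
        # Sheet names must be unique in the master workbook, so suffix with a counter.
        target = master.create_sheet(f"{sheet.title}_{len(master.worksheets)}")
        for row in sheet.iter_rows(values_only=True):
            target.append(row)                # copies values only

master.save("master.xlsx")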
But when making migrations, many tables were added to PostgreSQL, I have only two models.\nI named my customuser : CustomUser and the other model Offers.\nThese tables were found in postgresql database:\napi_customuser api_customuser_groups api_customuser_user_permissions api_offers, auth_group, auth_group_permissions, auth_permission, django_admin.log\nand others\nNote that my django app is named : api.\nIs it normal?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":68681562,"Users Score":0,"Answer":"Yes, this is normal. These tables are a part of Django's authentication and authorization feature.","Q_Score":0,"Tags":"python,django,postgresql,django-models,django-rest-framework","A_Id":68768456,"CreationDate":"2021-08-06T12:25:00.000","Title":"tables automatically added to database when creating custom user model in django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In my company, we have an ingestion service written in Go whose job is to take messages from a HTTP end point and store them in Postgres. It receives a peak throughput of 50,000 messages\/second. However, our database can handle a maximum of 30,000 messages\/second.\nIs it possible to write a middleware in Python to optimize this? If so please explain.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":68685808,"Users Score":0,"Answer":"It seems to be pretty unrelated to Python or any particular programming language.\nThese are typical questions to be asked and answers to be given:\n\nAre there duplicates? If yes, don't save every message immediately but rather wait for duplicates (for what some kind of RAM-originated cache is required, the simplest one is hashtable).\nBatch your message into large enough packs and then dump them into PostgreSQL all-at-once. You have to determine what is \"large enough\" based on load tests.\nCan you drop some of those messages? If your data is not of critical importance, or at least not all of it, then you may detect overload by tracking number of pending messages and start to throw incoming stuff away until load becomes acceptable.","Q_Score":0,"Tags":"python,middleware","A_Id":68685927,"CreationDate":"2021-08-06T17:59:00.000","Title":"Middleware to optimize postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed Cassandra database on my CentOs system. after that, I tried to install the Cqlsh package using this command sudo yum install cqlsh and it has been installed successfully. but when I tried to run cqlsh from the terminal, the following error appears:\n\nImportError: cannot import name ensure_str\n\nsomewhere in the code, it tries to load a library named six that contains ensure_str. 
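A minimal sketch of the batching idea from the ingestion answer above, using psycopg2's execute_values. The DSN, table and message shape are hypothetical, and a real ingestion service would also flush on a timer and handle failures and retries.

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=ingest user=me")   # hypothetical DSN
buffer = []
BATCH_SIZE = 5000

def handle_message(msg):
    """Accumulate messages and write them in one round trip per batch."""
    buffer.append((msg["id"], msg["payload"]))
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    if not buffer:
        return
    with conn.cursor() as cur:
        execute_values(cur, "INSERT INTO messages (id, payload) VALUES %s", buffer)
    conn.commit()
    buffer.clear()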
the error does not say that it can not find a module named six, the python interpreter can find the library but can not import it!\nI have tried googling but none of the solutions worked for me.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1027,"Q_Id":68692044,"Users Score":0,"Answer":"Used pip3 to install, and found this issue as well.\nFor me, removing six dependencies from \/usr\/lib\/python3\/dist-packages was the only thing that worked.\nrm six-1.11.0.egg-info and rm -r six-1.11.0.egg-info\nI couldn't uninstall it with pip3, so manual removal was the way to go, followed by a pip3 install six\nOnce that was back in place, cqlsh ran without issue.","Q_Score":1,"Tags":"python,cassandra,centos,cqlsh","A_Id":71485217,"CreationDate":"2021-08-07T11:39:00.000","Title":"CQLSH ImportError: cannot import name ensure_str","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to export large (10 million rows) table to a semicolon separated .csv file. I am currently using build in tool (Import\/Export Wizard) in Microsoft SQL Server Management Studio v17 and the export takes approximately 5 hours.\nIs there a simple way to speed up this process?\nI am limited by my company to use only R\/python solution, beside of course SQL Server itself.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":531,"Q_Id":68763007,"Users Score":-1,"Answer":"What is the size in memory of your table? I have a ~2Giga table turned into a csv in a couple of minutes.\nCheck your data source connection, I use OLEDB.","Q_Score":0,"Tags":"python,sql,r,sql-server","A_Id":71006367,"CreationDate":"2021-08-12T19:12:00.000","Title":"How to export fast large tables from SQL Server to csv?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using PostgresSQL and Django, I dropped two tables from postgresql manually.\ndrop table ... \nThen I did make migrations and migrate.\nBut my tables are not recreated. Can not access from shell_plus. Also in this page, django return relation 'table name' does not exists.\nI want to makemigrations and migrate to create tables again.\nHow can I solve my issue ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":320,"Q_Id":68771072,"Users Score":1,"Answer":"It depends on your current migrations files tree structure.\nIf your migration file containing CreateModel for those tables you deleted directly from PostgreSQL are the leaf nodes, i.e, there was no other migration file after that, you can simply delete the entry of the migration file in the django_migrations table and run migrate.\nFor example,\napp\/migrations\/0002_20210813_122.py is the file having commands for the creation of your tables, and this is the last node ( how do we know if this is the last file? so you just check if there's any other migration file in your project which has this filename 0002_20210813_122 under its dependencies field, if no then this file is the leaf node ). 
If it's a leaf node, go to django_migrations table in your database and delete an entry with value 0002_20210813_122 under column name and column app should be your app_name.\nNow run python manage.py migrate, the tables will be recreated.\nIf your migration file isn't a leaf node, then kindly share the tree structure of your migrations file, for us to help you out.","Q_Score":2,"Tags":"python,python-3.x,django,django-models","A_Id":68771842,"CreationDate":"2021-08-13T10:54:00.000","Title":"How to create table with migrate after drop table from postgresql in django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've a program which needs to have two processes doing mutually exclusive reads and writes to a mongodb document.\nOne part (lets call it \"process_a\") reads and updates (adds) to the document. The other (lets call it \"process_b\") reads and updates (deletes) all the values to the document.\nSo in an ideal scenario, process_a's read and writes never overlaps with process_b's read and writes.\nOtherwise, if the process_b's reads right before the process_a updates the document with new values. Process_b would delete (set it to zero) the document without realizing the update from process_a every happened. Thus failing to record the transaction.\nIs there any way to lock the document\/collection while one process performs its read and update task.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":68885391,"Users Score":0,"Answer":"I found no way of doing it with 100% certainty. So I ended up modifying code somewhere else to prevent it from happening.","Q_Score":0,"Tags":"python,mongodb","A_Id":68885826,"CreationDate":"2021-08-22T21:43:00.000","Title":"Perform mutually exclusive Read & Write operations on a mongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am creating a glue job(Python shell) to export data from redshift and store it in S3. But how would I automate\/trigger the file in S3 to download to the local network drive so the 3rd party vendor will pick it up.\nWithout using the glue, I can create a python utility that runs on local server to extract data from redshift as a file and save it in local network drive, but I wanted to implement this framework on cloud to avoid having dependency on local server.\nAWS cli sync function wont help as once vendor picks up the file, I should not put it again in the local folder.\nPlease suggest me the good alternatives.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":421,"Q_Id":68897012,"Users Score":1,"Answer":"If the interface team can use S3 API or CLI to get objects from S3 to put on the SFTP server, granting them S3 access through an IAM user or role would probably be the simplest solution. The interface team could write a script that periodically gets the list of S3 objects created after a specified date and copies them to the SFTP server.\nIf they can't use S3 API or CLI, you could use signed URLs. You'd still need to communicate the S3 object URLs to the interface team. A queue would be a good solution for that. 
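A minimal sketch of the signed-URL-plus-queue idea mentioned above, assuming boto3 with hypothetical bucket, key and queue names.

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

bucket = "redshift-export-bucket"          # hypothetical names
key = "exports/2021-08-23/data.csv"
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/export-notifications"

# URL the interface team can download from for 24 hours without AWS credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": key},
    ExpiresIn=24 * 3600,
)
sqs.send_message(QueueUrl=queue_url, MessageBody=url)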
But if they can use an AWS SQS client, I think it's likely they could just use the S3 API to find new objects and retrieve them.\nIt's not clear to me who controls the SFTP server, whether it's your interface team or the 3rd party vendor. If you can push files to the SFTP server yourself, you could create a S3 event notification that runs a Lambda function to copy the object to the SFTP server every time a new object is created in the S3 bucket.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,aws-glue","A_Id":68915684,"CreationDate":"2021-08-23T17:54:00.000","Title":"download file from s3 to local automatically","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The source collection has 10 million records, I need to fetch the records from the source collection in batch and insert those records in destination collection.\nThe question is - In python, How can I fetch the documents from the source collection in batch and insert those documents in the destination collection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":68957747,"Users Score":0,"Answer":"Use $out or $merge. Add additional aggregation pipeline stages to exclude documents present in the destination already.","Q_Score":0,"Tags":"python-3.x,mongodb,pymongo","A_Id":68958158,"CreationDate":"2021-08-27T18:07:00.000","Title":"Find and move huge records from one collection to another in mongodb using pymongo","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am very new to programming and this site too... An online course that I follow told me that it is not possible to manage bigger databases with db.sqlite3, what does it mean anyway?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":68958302,"Users Score":0,"Answer":"Selecting a database for your project is like selecting any other technology. It depends on your use case.\nSize isn't the issue, complexity is. SQLite3 databases can grow as big as 281 terabytes. 
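The $out/$merge suggestion above keeps the copy server-side, which is usually faster; if you do need an explicit client-side batch loop as the question asks, here is a minimal pymongo sketch with a hypothetical connection string and collection names.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")      # hypothetical connection string
src = client["mydb"]["source_collection"]
dst = client["mydb"]["destination_collection"]

BATCH_SIZE = 1000
batch = []

# batch_size controls how many documents the cursor pulls per round trip.
for doc in src.find({}, batch_size=BATCH_SIZE):
    batch.append(doc)
    if len(batch) == BATCH_SIZE:
        dst.insert_many(batch)
        batch = []

if batch:                      # flush the final partial batch
    dst.insert_many(batch)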
Limits on number of tables, columns & rows are also pretty decent.\nIf your application logic requires SQL operations like:\n\nRIGHT OUTER JOIN, FULL OUTER JOIN\nALTER TABLE, ADD CONSTRAINT, etc..\nDELETE, INSERT, or UPDATE on a VIEW\nCustom user permissions to read\/write\n\nThen SQLite3 should not be your choice of database as these SQL features are not implemented in SQLite3.","Q_Score":1,"Tags":"python,django","A_Id":68958830,"CreationDate":"2021-08-27T19:10:00.000","Title":"What did my teacher mean by 'db.sqlite3' will fall short in bigger real word problems?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"im downloading an xls file from the web using selenium.\nnow,i want to use the data in it with pandas, but 1 column includes numbers in scientific notation.\nis there a way to change them to numbers?\nim trying to use the data in the excel, and transfer some of it into google sheet for my team to use, but if i cant figure out how to send the numbers and not the scientific notations, it wont work.\nthanks,\nAvi","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":68962190,"Users Score":0,"Answer":"What you can do is this:\n\nyour data is imported as string, so we need to split it at e to get our power of 10 out:\ndf = pd.DataFrame({'sci_num' :['6.2345e20'] })\ndf[['num','exp10']] = df['sci_num'].str.split('e', expand=True)\n\nThen you need to use these two columns to get your results:\ndf['sci_num_corrected'] = df['num'].astype(float) * (10 ** df['exp10'].astype(int))","Q_Score":0,"Tags":"python-3.x,pandas","A_Id":68966220,"CreationDate":"2021-08-28T07:22:00.000","Title":"Sceintific notation to number in xls file - python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have this table\n\n\n\n\nuser_id\ntitle\n\n\n\n\n1\nABCD\n\n\n1\nnull\n\n\n2\nEFGH\n\n\n\n\nI'am trying to get all the titles of every user id and convert null to an empty string.\nI tried using this\nSELECT IFNULL(title, '') FROM table WHERE user_id = 1\nBut it says that multiple rows returned, when I try 2 it returns a result.\nIs there a way to convert all null relut to empty string if ther more than 1 result? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":68962195,"Users Score":0,"Answer":"You can use COALESCE() to replace NULL with an empty string.","Q_Score":2,"Tags":"python,mysql","A_Id":68962230,"CreationDate":"2021-08-28T07:23:00.000","Title":"MySQL Replace Null with an Empty String","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have hundreds of PDFs and all of them have exact same format (mostly tabular). I have an excelsheet where I pick values from tables (PDF) and paste them to specific location in excel (also in table) to analyse the data. I have been through hell : powershell, itextsharp, acrobat forms, export data etc etc but so far I have been unlucky. Is there anyway I can automate this manual process of picking data from PDF and putting them into excel. 
Again all PDFs are of exact same format (only value differs).\nEdit: To add more details these PDFs are tax returns. I have to consolidate tax returns which are filed monthly. Therefore the table heads remain same in excel only value changes for each month because different return for different month. Right now I am opening individual PDF and copying values and pasting them to excel sheet. I want to automate this process.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1042,"Q_Id":68972743,"Users Score":0,"Answer":"You can export multiple tables from pdf using tabula and pandas\nsample code\nimport pandas as pd\nimport tabula\ndf = tabula.read_pdf('path to file', pages='all')\nfor i in range(len(df)):\ndf[i].to_excel('file_' + str(i) + '.xlsx')","Q_Score":0,"Tags":"python,excel,pdf,automation,data-extraction","A_Id":68973339,"CreationDate":"2021-08-29T12:08:00.000","Title":"I want to automate the manual process of extracting data from PDF to excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have thousands of files stored in MongoDB which I need to fetch and process.\nProcessing consists of a few steps which should be done sequentially. The whole process takes around ~2 mins per file from start to end.\nMy question is how to do that as fast as possible while being scalable in future? Should I do it in pure python or should I maybe use Airflow + Celery (or even Celery by itself)? Are there any other ways\/suggestions I could give a try?\nAny suggestion is appreciated.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":75,"Q_Id":68976526,"Users Score":2,"Answer":"Celery alone is precisely made to do what you need - no need to reinvent the wheel.","Q_Score":0,"Tags":"python,multiprocessing,celery,airflow,etl","A_Id":68982629,"CreationDate":"2021-08-29T20:08:00.000","Title":"Processing thousands of files in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a getIamPolicy for google cloud sql service? If there is how to use it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":69012554,"Users Score":2,"Answer":"You don't have Cloud IAM role at the Cloud SQL instance level. You only have project level permission with access to all the Cloud SQL instance of the project. You can perform a getIamPolicy on the project to get all the policies and find which one give access to Cloud SQL\nWith Cloud SQL, you have users per instance, but there isn't getIamPolicy for this API","Q_Score":0,"Tags":"python-3.x,google-cloud-platform,google-iam","A_Id":69015631,"CreationDate":"2021-09-01T11:04:00.000","Title":"GCP python API. Accessing getiampolicy for gcp cloud sql instance","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i made a python flask web application that lists database entries.\nThere is a form to add a new entry. 
It uses db.session.add() and then db.session.commit()\nAfter adding a new entry, it doesn't appear in the web applications list, while all other entries are listed.\nWhen I look for the entry in the MySQL Database via SELECT ..., I can see that the entry exists.\nAfter I restart MySQL via sudo service mysql restart, the new entry appears in the web application.\nDo You have a Idea why this happens?\nThank You.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":69072108,"Users Score":0,"Answer":"db.session.add() only creates the db request in memory.\nYou need to call db.session.commit() to persist the changes.","Q_Score":0,"Tags":"python,mysql,flask,sqlalchemy","A_Id":69072488,"CreationDate":"2021-09-06T09:17:00.000","Title":"SQLAlchemy does not find new entries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am facing this error when I start my flask python3 application on mac.\nOSError: cannot load library 'gobject-2.0-0': dlopen(gobject-2.0-0, 2): image not found. Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0-0'\nI am using weasyprint in my project which is causing this issue.\nI tried to install glib and it is installed in my system","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3879,"Q_Id":69097224,"Users Score":0,"Answer":"I had the same issue after the homebrew update. Turned out the issue was because of the older pango lib version.\nI did brew install pango\nThis upgraded pango lib from 1.48.2 -> 1.50.4 which internally installed gobject's latest version as dep. And my issue got resolved.","Q_Score":8,"Tags":"python-3.x,flask,glib,weasyprint,macbookpro-touch-bar","A_Id":71291557,"CreationDate":"2021-09-08T04:58:00.000","Title":"gobject-2.0-0 not able to load on macbook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to update records for OME_Contract_Target__C in salesforce using sf.bulk.ome_contract_target__C and and it is throwing the below error\n'errors': [{'statusCode': 'INVALID_FIELD_FOR_INSERT_UPDATE',\n'message': 'Unable to create\/update fields: OME_Contract__c. Please check the security settings of this field and verify that it is read\/write for your profile or permission set.',\n'fields': ['OME_Contract__c']}]}]\nThis issue is happening only for OME_Contract__c column. Other columns are getting updated without any issues. 
Any suggestions to resolve this will be helpful.m","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":69112357,"Users Score":0,"Answer":"Does the user that is performing the update (the one that is being used for authenticating the request) have edit permissions on that field ('OME_Contract__c')?","Q_Score":0,"Tags":"python,salesforce,bulk,veeva","A_Id":69123813,"CreationDate":"2021-09-09T04:45:00.000","Title":"Salesforce Bulk API - (Unable to update column ome_Contract__C for object \"OME_Contract_Target__C\" )","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to update records for OME_Contract_Target__C in salesforce using sf.bulk.ome_contract_target__C and and it is throwing the below error\n'errors': [{'statusCode': 'INVALID_FIELD_FOR_INSERT_UPDATE',\n'message': 'Unable to create\/update fields: OME_Contract__c. Please check the security settings of this field and verify that it is read\/write for your profile or permission set.',\n'fields': ['OME_Contract__c']}]}]\nThis issue is happening only for OME_Contract__c column. Other columns are getting updated without any issues. Any suggestions to resolve this will be helpful.m","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":69112357,"Users Score":0,"Answer":"First you need to identify which profile that the user is using.\nThen go to the object manager and look for that object and field. View the field level security for that specific field.\nmake sure the profile of that user having a write access to that field.","Q_Score":0,"Tags":"python,salesforce,bulk,veeva","A_Id":69143023,"CreationDate":"2021-09-09T04:45:00.000","Title":"Salesforce Bulk API - (Unable to update column ome_Contract__C for object \"OME_Contract_Target__C\" )","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a S3 bucket and some Python code, the code read all the available files for the current day and download them to s3 (it reads the files from FTP in an ascending order, based on the datetime in the filename when the file gets uploaded to FTP), so for example I have downloaded file 1 and file 2 in the last run and uploaded them to S3, now I know FTP has a new file file 3 available, then a new run will download files in the following order: file1 file2 and file3 and upload all the files again in the same order to the same S3 path (file1 and file2 gets overwritten, and new file file 3 will also be uploaded to s3).\nMy question is what's the easiest way to identify the newly-uploaded file file3 in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":69116144,"Users Score":0,"Answer":"The easiest way I can think of to see the difference between 'updated' files and newly created files is simply doing a try\/except GetObject before the PutObject. 
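A minimal boto3 sketch of the existence check described in the answer above; it uses head_object rather than get_object (a lighter call for the same idea), and the bucket, key and file names are hypothetical.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"                        # hypothetical bucket and key
key = "ftp-mirror/file3.csv"

def is_new(bucket, key):
    """Return True if the key does not exist in S3 yet."""
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return False
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            return True
        raise                               # permission or other errors should surface

if is_new(bucket, key):
    print("file3.csv was not in S3 before this run")
s3.upload_file("local/file3.csv", bucket, key)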
This is preferred over first doing the PutObject then trying to figure out what changed since S3 has no easy way of retrieing objects by 'Modified date' or simular.\nSo if your question was about checking which files were already present in S3 before uploading, try doing the GetObject first :).","Q_Score":0,"Tags":"python,python-3.x,amazon-web-services,amazon-s3,ftp","A_Id":69116592,"CreationDate":"2021-09-09T10:01:00.000","Title":"What's the easieat way to get the latest uploaded file in S3 (when other existing files get overwritten) - Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you grab information from SQL queries in python and then put that python code into PowerBi? I currently use PowerBi at work and I am doing a python course so I am curious if I could do a machine learning model with python in PowerBi.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":69134528,"Users Score":0,"Answer":"Yes, Whatever you have in SQL put it in table\nUse Python cursors to get data from SQL table and put it in dataframe then truncate\/drop SQL table.\nCheck PowerBI global options and locate Python Scripting, fill in proper directory\nGo to Power BI data sources and find Python script. Use it to create whatever dataset you need. Maybe do ML inside.","Q_Score":0,"Tags":"python,sql,powerbi","A_Id":69138187,"CreationDate":"2021-09-10T15:25:00.000","Title":"Python SQL Query into PBI","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find a way, either in R or Python, to use a dataframe as a table in an Oracle SQL statement.\nIt is impractical, for my objective, to:\n\nCreate a string out of a column and use that as a criteria (more than a 1k, which is the limit)\nCreate a new table in the database and use that (don't have access)\nDownload the entire contents of the table and merge in pandas (millions of records in the database and would bog down the db and my system)\n\nI have found packages that will allow you to \"register\" a dataframe and have it act as a \"table\/view\" to allow queries against it, but it will not allow them to be used in a query with a different connection string. Can anyone point me in the right direction? Either to allow two different connections in the same SQL statement (to Oracle and a package like DuckDB) to permit an inner join or direct link to the dataframe and allow that to be used as a table in a join?\nSAS does this so effortlessly and I don't want to go back to SAS because the other functionality is not as good as Python \/ R, but this is a dealbreaker if I can't do database extractions.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":116,"Q_Id":69181208,"Users Score":0,"Answer":"Answering my own question here -- after much research.\nIn short, this cannot be done. A series of criteria, outside of a list or concat, you cannot create a dataframe in python or R and pass it through a query into a SQL Server or Oracle database. 
It's unfortunate, but if you don't have permissions to write to temporary tables in the Oracle database, you're out of options.","Q_Score":0,"Tags":"python,r,pandas,oracle,join","A_Id":69894161,"CreationDate":"2021-09-14T16:12:00.000","Title":"Python or R -- create a SQL join using a dataframe","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an use case where I need to do the followings:\n1.Check if redshift_db.redhsift_tbl is present or not? If table is present then perform operation A else perform operation B.\nI was checking in boto3 documentation to get an api which will tell whether or not the table is present but no luck. What is the best way to check whether a redshift table is present or not?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":69198161,"Users Score":1,"Answer":"There are 2 ways you can go. If you are using an external orchestration tool then this tool can query the catalog tables (pg_table_def or stv_tbl_perm) and issue the appropriate next commands based on the result. Or if you need Redshift to do this then you will need create a stored procedure to take the correct action based on examining the catalog tables.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-redshift","A_Id":69200302,"CreationDate":"2021-09-15T18:25:00.000","Title":"How to check if a aws redshift table is present or not","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I upgrade to oracle 21 and I need an rpm cx_oracle compatible oracle 21 for my script python.\nps: I can't use python -m pip install cx_Oracle --upgrade --user\nif you have another rpm instead of rpm cx_oracle to connect to db it will be useful also\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":69206357,"Users Score":0,"Answer":"If you execute ldd on this driver you will find that this driver does not have link time dependency on libclntsh.so. 
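A minimal sketch of the catalog-query approach from the Redshift answer above, here using information_schema.tables via psycopg2 (pg_table_def or stv_tbl_perm can be queried the same way); the cluster endpoint and credentials are hypothetical.

import psycopg2

# Hypothetical cluster endpoint and credentials.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="redshift_db", user="me", password="secret",
)

def table_exists(schema, table):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT 1 FROM information_schema.tables WHERE table_schema = %s AND table_name = %s",
            (schema, table),
        )
        return cur.fetchone() is not None

if table_exists("public", "redhsift_tbl"):
    pass  # perform operation A
else:
    pass  # perform operation B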
It uses dlopen to loacate and dynamically load Oracle's client library (potentially of any version).\nOracle OCI C interface is very strict about backward compatibility.\nSo unless you really demand some 21c OCI feature you really do not need to upgrade cx_Oracle drivers.\n\"Any\" version of cx_Oracle should be able to load latest libclntsh.so from Oracle 21c client\/database installation.","Q_Score":0,"Tags":"python,oracle","A_Id":69296543,"CreationDate":"2021-09-16T10:00:00.000","Title":"rpm cx_oracle compatible oracle 21","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a mongoDB document [A] containing a reference object id of another document [B].\nNow when object [B] is deleted what is the best way to delete document [A] also b'coz it contains object id of document [B] which does not exist.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":69221562,"Users Score":0,"Answer":"I think the best solution is at where place object[B] has been deleted, after that update object[A] or delete object[A] ,","Q_Score":0,"Tags":"python,mongodb,fastapi","A_Id":69221776,"CreationDate":"2021-09-17T10:07:00.000","Title":"When the referenced object is deleted then how we can delete the objects that have references to it in fastapi and mongoDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is a typical request:\nI built a DAG which updates daily from 2020-01-01. It runs an INSERT SQL query using {execution_date} as a parameter. Now I need to update the query and rerun for the past 6 months.\nI found out that I have to pause Airflow process, DELETE historical data, INSERT manually and then re-activate Airflow process because Airflow catch-up does not remove historical data when I clear a run.\nI'm wondering if it's possible to script the clear part so that every time I click a run, clear it from UI, Airflow runs a clear script in the background?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":148,"Q_Id":69290067,"Users Score":0,"Answer":"After some thought, I think here is a viable solution:\nInstead of INSERT data in a DAG, use a DELETE query and then INSERT query.\nFor example, if I want to INSERT for {execution_date} - 1 (yesterday), instead of creating a DAG that just runs the INSERT query, I should first run a DELETE query that removes data of yesterday, and then INSERT the data.\nBy using this DELETE-INSERT method, both of my scenarios work automatically:\n\nIf it's just a normal run (i.e. no data of yesterday has been inserted yet and this is the first run of this DAG for {execution_date}), the DELETE part does nothing and INSERT inserts the data properly.\n\nIf it's a re-run, the DELETE part will purge the data already inserted, and INSERT will insert the data based on the updated script. 
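A minimal sketch of the DELETE-then-INSERT pattern above expressed as an Airflow task, assuming Airflow 2.x with the postgres provider installed; the connection id, table and column names are hypothetical.

from datetime import datetime
from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="daily_load",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=True,
) as dag:
    # Delete-then-insert makes the task idempotent: clearing a run in the UI
    # simply replays both statements for that execution date.
    load = PostgresOperator(
        task_id="delete_insert",
        postgres_conn_id="my_postgres",          # hypothetical connection id
        sql="""
            DELETE FROM daily_table WHERE load_date = '{{ ds }}';
            INSERT INTO daily_table
            SELECT '{{ ds }}' AS load_date, *
            FROM staging_table
            WHERE event_date = '{{ ds }}';
        """,
    )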
No duplication is created.","Q_Score":1,"Tags":"python,airflow","A_Id":69290614,"CreationDate":"2021-09-22T19:16:00.000","Title":"How to purge historical data when clearing a run from Airflow dashboard?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a simple way to download all tables from a postgresql database into pandas? For example can pandas just load from the .sql file? All the solutions I found on the line suggest connecting to the database and using select from commands, which seems far more complicated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":69369652,"Users Score":0,"Answer":"You have to connect to a database anyway. You can find out table names from odbc cursor and then use pandas.read_table for names ex. pypyodbc finding names:\nallnames=cursor.tables( schema='your_schema').fetchall()\n-- without view and indexes below\ntabnames=[el[2] for el in allnames if el[3]=='TABLE']","Q_Score":0,"Tags":"python,pandas,postgresql","A_Id":69372262,"CreationDate":"2021-09-29T00:06:00.000","Title":"Download all postgresql tables to pandas","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a lambda function where, after computation is finished, some calls are made to store metadata on S3 and DynamoDB.\nThe S3 upload step is the biggest bottleneck in the function, so I'm wondering if there is a way to \"fire-and-forget\" these calls so I don't have do wait for them before the function returns.\nCurrently I'm running all the upload calls in parallel using asyncio, but the boto3\/S3 put_object call is still a big bottle neck.\nI tried using asyncio.create_task to run coroutines without waiting for them to finish, but as expected, I get a bunch of Task was destroyed but it is pending! errors and the uploads don't actually go through.\nIf there was a way to do this, we could save a lot on billing since as I said S3 is the biggest bottleneck. Is this possible or do I have to deal with the S3 upload times?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":69399774,"Users Score":1,"Answer":"If there was a way to do this,\n\nSadly there is not, unless you are going to use other lambda function to do the upload for you. This way your main function would delegate time consuming file processing and upload to a second function in an asynchronous way. Your main function can then return immediately to the caller, and the second function does that heavy work in the background.\nEither way, you will have to pay for the first or second function's execution time.","Q_Score":1,"Tags":"python,amazon-s3,aws-lambda,boto3,python-asyncio","A_Id":69399858,"CreationDate":"2021-09-30T23:27:00.000","Title":"Fire-and-forget upload to S3 from a Lambda function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"newRow = [\"some_data\", \"some_data\", \"=MULTIPLY(A1;BA)\"]\nWhen I'm trying this, in my google sheet, the cell in filled by : '=MULTIPLY(O33;M33) as a string. 
How can I make my equation usable ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":69421786,"Users Score":0,"Answer":"Remove the \"\" in your code will do the trick","Q_Score":2,"Tags":"python,google-sheets,google-sheets-formula","A_Id":69423101,"CreationDate":"2021-10-03T03:14:00.000","Title":"Insert equation in google sheet using python and the google sheet API","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie in Power BI\/Power query and python. I hope to ask this question succinctly.\nI have a \"primary\" query in PBI but need to change the values of one column (categories) based on the values in the (description) column. I feel there is a better solution than a new conditional if\/else column, or ReplaceReplacer.text in M Code.\nAn I idea I had was to create a list or query of all values in (description) that need to have their category changed , and somehow use python to iterate through the (description) list and when it finds a value in (description), it knows to drop the new value into category.\nI've googled extensively but can't find that kind of \"loop\" that I can drop a python script into Power Query\/Power BI.\nWhat direction should I be heading in, or am I asking the right questions? I'd appreciate any advice!\nJohn","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":218,"Q_Id":69458212,"Users Score":0,"Answer":"You are having a rather simple ETL task at hand that clearly doesn't justify incorporating another language like Python\/Pandas.\nGiven the limited information you are sharing I would imagine to use a separate mapping table for your categories and then merge that one with your original table. And eventually you only keep the columns you are interested in.\nE.g. this mapping or translation table has 2 columns: OLD and NEW. Then you merge this mapping table with your data table such that OLD equals your Description column (the GUI will help you with that) and then expand the newly generated column. Finally rename the columns you want to keep and remove all the rest. This is way more efficient than 100 replacements.","Q_Score":0,"Tags":"python,powerbi,powerquery","A_Id":69460681,"CreationDate":"2021-10-05T23:16:00.000","Title":"Replace values in power query column using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I haven't been able to find anything that covers this and I'm not sure it's possible since I can't find anything in the cx_Oracle docs. But, is there a way to turn off the sql statment echo or output? I'm looking for something kind of like the paramiko-expect \"display=False\" option. Typically, I set the paramiko-expect display option depending on my logging level. That option just seems to be eluding me in cx_Oracle.\nThis is running Python 2.7\/3.7\nOracle version is anything between 12c to 19c\nPlatform is RH7.9\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":69484768,"Users Score":0,"Answer":"I don't know what happened, but I put in my logging architecture place of print statements and they stopped. 
I must have had an unexpected print somewhere in my code that was printing those out. Thanks for the responses... stupid mistake on my part.","Q_Score":0,"Tags":"python,oracle,cx-oracle","A_Id":69487906,"CreationDate":"2021-10-07T16:33:00.000","Title":"cx_oracle disable statement echo from console","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find a library (any language, but preferably C# or Python) which will let me open an XLSX file, iterate through the chart objects, and find data about the chart - ideally including the data backing the chart.\nThe Pandas Python package, or ExcelDataReader NuGet package have useful functionality for opening the file and reading a grid of numbers, as well as ways to add charts, but I don't find any way to read the charts.\nCurious to hear from anyone who has ideas\/solutions.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":69490094,"Users Score":1,"Answer":"Hey I have a good solution for C#. In C# you can use OLEDB, this allows you to connect a C# code to a excel or access database (so long the database is in the C# code files). You don't need to get any addons for this is you have C# on Visual Studio.","Q_Score":0,"Tags":"python,c#,excel,pandas,charts","A_Id":69490121,"CreationDate":"2021-10-08T03:19:00.000","Title":"Reading chart data from an Excel file","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to push my code to the heroku and when I migrate my manage.py, it causes this error: django.db.utils.DataError: length for type varchar cannot exceed 10485760 .\nFirst, my length has been set to 100000000 and I change it back to 1000 and make migrations. But even after that I still got this Error. I try to search my whole files and everything is set to 1000. Help me to solve the problem!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":69557904,"Users Score":0,"Answer":"The error takes place when you run the migration file that was constructed when you have set it to 100'000'000. You need to look to the migration files and remove the migration file (and the ones that depend on that one, given you made more migration files). 
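For example, assuming the app is called app_name and the offending migration is 0012_alter_length.py (both hypothetical names), the cleanup could look like:

rm app_name/migrations/0012_alter_length.py
python manage.py makemigrations
python manage.py migrate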
You can find these in the app_name\/migations\/ directory (with app_name the name of the module of your app).\nThen you can run makemigrations again, which will construct a CharField with length 1'000, and will update the database accordingly.","Q_Score":1,"Tags":"python,django","A_Id":69557947,"CreationDate":"2021-10-13T15:05:00.000","Title":"django.db.utils.DataError: length for type varchar cannot exceed 10485760","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I just need some advice about what database should I use, and how should I store my data.\nNamely I need to store big chunk of data per user, I was thinking about storing everything in JSON data, but I thought that I could ask you first.\nSo I am using Django, and for now MySql, I need to store like 1000-2000 table rows per user, with columns like First Name, Last Name, Contact info, and also relate it somehow to the user that created that list. Also I need this to be able to efficiently get data from database.\nIs there any way of storing this big data per user?\nThank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":69,"Q_Id":69609348,"Users Score":0,"Answer":"I know pandas is a library that works very well for storing data. So maybe look into that and see what file formats are well documented with it.","Q_Score":0,"Tags":"python,mysql,json,django,database","A_Id":69609521,"CreationDate":"2021-10-17T23:29:00.000","Title":"What is the best way to store big data per user?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to deploy a Django app from a development server to a production server\nI have set up a virtualenv with python 3.8.10, created the mysql database, I am running in the virtualenv. I get no errors from python manage.py check, get \"no changes detected\" when running python manage.py makemigrations, but when I run ```python manage.py migrate`` I get the following:\nOperations to perform:\nApply all migrations: admin, auth, contenttypes, sessions\nRunning migrations:\nApplying contenttypes.0001_initial... OK\nApplying admin.0001_initial...Traceback (most recent call last):...\nfinal line of the traceback:\nDjango.db.utils.OperationalError: (1824, \"Failed to open the referenced table 'classroom_user'\")\n(\"classroom\" is the name of the app within the project \"codex\") I just recently rebuilt all of the tables in this database on my development server with no issues.\nThe database on the production server is empty. models.py is in place and complete. I have tried it both with an empty migrations folder and the migration folder removed. 
The migration does create django_admin_log, django_content_types, django_migrations, but no other tables.\nAll of the other posts I have seen on this have been about have foreign key constraints, but in my models.py all of the tables that have foreign keys are specified after the tables where the keys are.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":854,"Q_Id":69653388,"Users Score":0,"Answer":"This worked for me!\nTry to check your table ENGINE with this command:\n\n\nMySQL: SHOW TABLE STATUS WHERE NAME='XXX';\n\n\nThen find its Engine (It's either MyISAM on InnoDB)\nAfter that, change the engine of your newly created table (which throw the error) and match it with your classroom_user table with this command:\n\n\nMySQL: ALTER TABLE classroom_user ENGINE='XXXX';\n\n\nThen run migrate again and you're good to go!","Q_Score":3,"Tags":"python,mysql,django,django-migrations","A_Id":71194549,"CreationDate":"2021-10-20T21:44:00.000","Title":"Django migration: django.db.utils.OperationalError: (1824, \"Failed to open the referenced table 'classroom_user'\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to deploy a Django app from a development server to a production server\nI have set up a virtualenv with python 3.8.10, created the mysql database, I am running in the virtualenv. I get no errors from python manage.py check, get \"no changes detected\" when running python manage.py makemigrations, but when I run ```python manage.py migrate`` I get the following:\nOperations to perform:\nApply all migrations: admin, auth, contenttypes, sessions\nRunning migrations:\nApplying contenttypes.0001_initial... OK\nApplying admin.0001_initial...Traceback (most recent call last):...\nfinal line of the traceback:\nDjango.db.utils.OperationalError: (1824, \"Failed to open the referenced table 'classroom_user'\")\n(\"classroom\" is the name of the app within the project \"codex\") I just recently rebuilt all of the tables in this database on my development server with no issues.\nThe database on the production server is empty. models.py is in place and complete. I have tried it both with an empty migrations folder and the migration folder removed. The migration does create django_admin_log, django_content_types, django_migrations, but no other tables.\nAll of the other posts I have seen on this have been about have foreign key constraints, but in my models.py all of the tables that have foreign keys are specified after the tables where the keys are.","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":854,"Q_Id":69653388,"Users Score":6,"Answer":"OK, solved.\nI was able to get a different error with a slightly older version of Django (3.2.6 instead of 3.2.8) and on a Windows server instead of Linux. 
This gave me an error regarding foreign key restraints that I have seen in other posts, but was not an error I had seen before.\nI had to perform the migrations for my app first (where classroom is the app within the project.):\npython manage.py makemigrations classroom\npython manage.py migrate","Q_Score":3,"Tags":"python,mysql,django,django-migrations","A_Id":69662414,"CreationDate":"2021-10-20T21:44:00.000","Title":"Django migration: django.db.utils.OperationalError: (1824, \"Failed to open the referenced table 'classroom_user'\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to use the db.sql 3 but when I open it is not reading the file. Moreover, I also downloaded SQLite extension but when I again click on db.SQLite 3 is nothing showing there. So please help me regarding this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":281,"Q_Id":69734499,"Users Score":0,"Answer":"download extension called sqlite viewer on your vs code","Q_Score":0,"Tags":"python,database,visual-studio","A_Id":71902239,"CreationDate":"2021-10-27T07:31:00.000","Title":"How to open db.sqlite3 in Visual studio for django project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a TSQL procedure in an Azure Database Instance. The procedure has logic that determines the state of an ETL process and is sensitive to the current time to determine whether an event is within bound or not within bound. This procedure has complex behaviour and is a core requirement.\nI must validate the behaviour of the procedure. I am writing unit tests in Python, I am using unittest to manage tests and pyodbc to make the calls to the database. These unit tests must validate the behaviour of the procedure irrespective of the time that the procedure is called. I need the database to behave as if it is a certain time like '05:30:00' for example.\nI am familiar with the concept of mocking objects to run tests at any time and to remove external dependencies. I do not think this applies to Microsoft databases, more like a REST API for example. Another consideration is that I would not want to target a copy of the object that I want to test as the copy might not be the same as the original.\nThe only solutions that come to mind (bad solutions) are:\n\nImplement a variable in the procedure to hold DATETIME and override this using a parameter with a DEFAULT specification instead of GETDATE().\nUse some conditional logic that is only executed if the session was authenticated by the application account that runs the tests, using a default value for the time.\n\nOutcomes of 1 would be;\n\nAll calls to the procedure would need to be updated\nA parameter that shouldn't be there would now be included\nRisk of calling the procedure with the wrong arguments would be increased and could cause havoc silently\n\nOutcomes of 2 would be;\n\nTesting is dependent on the user account not changing and if we segregate responsibility for security implementation and that person is not available we would be temporarily stuck\nSlower procedure\nReduced readability\n\nBoth of these options are awful. 
Has anybody been able to solve this kind of problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":69741261,"Users Score":0,"Answer":"The procedure will not access time with GETDATE() from within the procedure. The DATETIME will be supplied to a procedure parameter provided by an outer scope.\nThis will require a bit of refactoring but will mean that the procedure is then unit testable. Wish I had thought of it more quickly :\/","Q_Score":0,"Tags":"unit-testing,stored-procedures,azure-sql-database,python-unittest","A_Id":69742192,"CreationDate":"2021-10-27T15:16:00.000","Title":"Database unit testing - How to mock the time within a procedure","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is function in azure data explorer i.e. series_decompose() , so I need to use this function in my python program locally with data from sql\nSo can I do it, and if yes then how?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":69741581,"Users Score":0,"Answer":"You can run KQL functions when using Kusto (a.k.a. ADX or Azure Data Explorer), or another service that runs on top of Kusto, e.g. Azure Monitor, Sentinel etc.","Q_Score":0,"Tags":"python,azure,azure-functions,azure-data-explorer,azureml-python-sdk","A_Id":69741661,"CreationDate":"2021-10-27T15:37:00.000","Title":"can we use azure data explorer function (for example series_decompose() ) locally or anywhere in python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to fetch data in descending order. So what would be faster? fetch the data in descending order or reverse() the list of fetched data.\nNote: I have been using SQLAlchemy in Flask framework. My application has to fetch hundreds of data from MySQL.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":65,"Q_Id":69749134,"Users Score":1,"Answer":"It depends on if that sql table is indexed on the column that you are sorting.\nIf it is, let the query do the sorting. If it is not, it depends more on the parallelization of the sorting algo that you are running between the sql engine or your python code. If it is just hundreds of rows, it really wouldn't be significant performance difference between the two approaches if the table is not indexed on that column.","Q_Score":0,"Tags":"python,list,flask,sqlalchemy,flask-sqlalchemy","A_Id":69764385,"CreationDate":"2021-10-28T06:16:00.000","Title":"In SQLAlchemy, Which is better to fetch the data in descending order or reverse() list the fetched data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using sqlite3 with Python and I have a .db file in my project, and it has all the details related to the user, the password hashes, the salt used for hashing, the username, etc. And this file is on a local git repo. I do have a remote public repo for this project. 
But I was concerned if pushing the database file to a public repo might be a good idea?\nShould I make the repo private? Or should I add this file to the .gitignore list and make a way to generate the database and the tables on the fly with empty data?\nOr is there a way to protect the database with a username and password kinda like in MySQL, not exactly like that, as MySQL runs on a server and you enter the username and password for using the server rather than a specific database?\nI know it might be a better idea to use some cloud-based DB with APIs and all, but this is just a basic level project and I don't have much experience related to cloud-based DBs, but the user might provide actual sensitive data.\nSo is it a safe option to push the .db file to the public remote repo?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":69753642,"Users Score":1,"Answer":"Git, as any other CVS (Concurrent Versioning System), is designed to store source code, and all the files, binary included (possibly by Git LFS), which cannot be generated on demand by tools, source code or scripts.\nAbout the specific case of a database, you should provided DDL scripts to reproduce the DB schema, and the SQL code, evicted of reserved data, to populate it again.\nA good approach would be to introduce place holders inside SQL files, and scripts which require sensitive data to substitute to the above place holders.","Q_Score":0,"Tags":"python-3.x,database,git,sqlite","A_Id":69753928,"CreationDate":"2021-10-28T11:54:00.000","Title":"Is it safe to push a sqilte file to public repo?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently developing a flutter web app for tracking the information in students classes. The app is basically a task management app curated for students.\nI have finished most of the flutter UI design, but my main issue now is connecting the web app with my intended database, postgresql . I have come to understand that I cannot connect a flutter web app directly to the postgres database. I plan to use python to run the functionality of the postgres database i.e python scripts to populate tables in the database etc.\nThe only solution I can think of is creating an API that can take user information from my flutter frontend and store it in my python-run postgres database. How could I achieve this, or what are alternative solutions for connecting the flutter UI to my postgres database?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":76,"Q_Id":69769429,"Users Score":1,"Answer":"Connecting a frontend application directly to a database is usually not a good idea. You will expose a lot of security concern within your application. What you are proposing right now is already a good option. 
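For a feel of what that thin API layer can look like, here is a rough sketch (FastAPI plus psycopg2 as one possible stack; the tasks table and the credentials are made up, and real code would add authentication and validation):

import psycopg2
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Task(BaseModel):
    title: str
    due_date: str

def get_conn():
    # placeholder connection details
    return psycopg2.connect(host="localhost", dbname="school", user="app", password="secret")

@app.post("/tasks")
def create_task(task: Task):
    # the Flutter app calls this endpoint over HTTP and never talks to Postgres directly
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO tasks (title, due_date) VALUES (%s, %s) RETURNING id;",
            (task.title, task.due_date),
        )
        return {"id": cur.fetchone()[0]}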
Python is a good backend choice for a new project, do explore framework like Flask, FastApi (for creating API), and SqlAlchemy or Sqlmodel (for interacting with your database).","Q_Score":0,"Tags":"python,postgresql,flutter","A_Id":72009061,"CreationDate":"2021-10-29T13:07:00.000","Title":"Connect a python-run PostgreSQL database to a flutter web app","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to save objects in DB [obj.save()] with the help of ORMs. But it throws the following error:\ndjango.db.utils.ProgrammingError: set_session cannot be used inside a transaction\nDoes any ideas about this error?\nI am using Django & PostgresDB","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":333,"Q_Id":69811805,"Users Score":0,"Answer":"this is a bug of psysopg2 for that you can install older version of psysopg2 for now i was having same issue so I installed older and balanced version of psysopg2. so this one you can try this command to install\npip install psycopg2==2.6.2\nmay be this will work","Q_Score":1,"Tags":"python,django,postgresql,django-rest-framework","A_Id":69811960,"CreationDate":"2021-11-02T14:05:00.000","Title":"django.db.utils.ProgrammingError: set_session cannot be used inside a transaction","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying execute below block of code with cx_oracle by bind variables, but getting below mentioned error everytime. Not sure what is missing.\nAnyone has idea on this\nCode :\na = input(\"Please enter your name ::\")\nconn = cx_Oracle.connect('hello\/123@oracle')\ncur = conn.cursor()\ntext1 = \"select customer from visitors where name = :myvalue;\"\ncur.execute(text1,myvalue=str(a))\nERROR observed :\ncx_Oracle.DatabaseError: ORA-00933: SQL command not properly ended","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":69815143,"Users Score":0,"Answer":"Remove the semi-colon at the end of your SQL statement.","Q_Score":0,"Tags":"python-3.x,cx-oracle,bind-variables","A_Id":69817461,"CreationDate":"2021-11-02T18:14:00.000","Title":"Trying to extract data through bind variables in cx_oracle python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While going through the Postgres Architecture, one of the things mentioned was that the Postgres DB has a connection limit of 500(which can be modified). And to fetch any data from the Postgres DB, we first need to make a connection to it. So in this case what happens if there are simultaneous 10k requests coming to the DB? How does the requests map to the connection limit, since we have the limit of 500. Do we need to increase the limit or do we need to create more instance of Postgres or is concurrency in play?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":498,"Q_Id":69820266,"Users Score":1,"Answer":"If there are 10000 concurrent statements running on a single database, any hardware will be overloaded. 
You just cannot do that.\nEven 500 is way too many concurrent requests, so that value is too high for max_connections (or for the number of concurrent active sessions to be precise).\nThe good thing is that you don't have to do that. You use a connection pool that acts as a proxy between the application and the database. If your database statements are sufficiently short, you can easily handle thousands of concurrent application users with a few dozen database connections. This protects the database from getting overloaded and avoids opening database connections frequently, which is expensive.\nIf you try to open more database connections than max_connections allows, you will get an error message. If more processes request a database connection from the pool than the limit allows, some sessions will hang and wait until a connection is available. Yet another point for using a connection pool!","Q_Score":0,"Tags":"python-3.x,postgresql,psycopg2","A_Id":69820545,"CreationDate":"2021-11-03T06:00:00.000","Title":"How does Postgres handle more requests than connections","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing an ERP with groups of developers and we need to preserve customers data when deleting existing columns or table for Dajngo models and DB.\nFor Example:\nI added a column named columns1 and I gave the customer a release product of the System and then, but a week later I had to delete that new column but the customer have data stores in the column1 column, here how can I preserve data or solve this situation.\nAnother Example:\nI have a new column name column2 with unique attr but here the customer have data, but I can not add new column with out allowed it to store the null data, but in this situation I do not want to allow the null data in column column2 and ether I can't put default attr because it has unique attr.\nHow to solve these things in Django.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":69820932,"Users Score":0,"Answer":"i think you need to add one Boolean field in the table field name delete if you want to delete the column don't delete it put delete field value to true. when you query add a filter with condition delete = false. i think this will work for 1st condition","Q_Score":1,"Tags":"python,django","A_Id":69821045,"CreationDate":"2021-11-03T07:15:00.000","Title":"How do I preserve previous data when deleting new columns or deleting exists table in Django models?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi i got this probleme :\ncx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: \"\/app\/oracle\/product\/10.2.0\/server\/lib\/libclntsh.so: cannot open shared object file: No such file or directory\". See https:\/\/cx-oracle.readthedocs.io\/en\/latest\/user_guide\/installation.html for help\ni searched and tried multiple fixes but none seemed to work i checked multiple time my oracle installation and it's x64 i've setup the correct path with ldconfig and all but it's still not working and i don't know why i can't figure out what's the problem. 
(I'm a total beginner.)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":933,"Q_Id":69850014,"Users Score":1,"Answer":"Make sure that your environment variable ORACLE_HOME points to an Oracle installation, in your case:\nexport ORACLE_HOME=\/oracle\/product\/10.2.0\/server\nand it is correct to unset LD_LIBRARY_PATH.\nOracle by default searches in $ORACLE_HOME\/lib for the libraries.","Q_Score":0,"Tags":"python,oracle","A_Id":69855654,"CreationDate":"2021-11-05T07:52:00.000","Title":"cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library UBUNTU","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So we are in the process of migrating from Azure SQL DB to Azure Synapse SQL Pools. I figured setting Airflow up to use the new database would be as simple as changing the server address and credentials, but when we try to connect to the database via Airflow it throws this error:\n40532, b'Cannot open server \"1433\" requested by the login. The login failed.\nWe use the generic mssqloperator and mssqlhook. I have verified the login info, pulled the server address directly from Synapse, and the Synapse connection string shows port 1433 is correct, so I am at a loss for what could be causing the issue. Any help would be appreciated.\nEdit: The Airflow Connection schema we use is the Microsoft Sql Server Connection, with host being {workspace}.sql.azuresynapse.net, login being the admin login, password being the admin password, and port being 1433","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":69904134,"Users Score":0,"Answer":"The error is due to the port not being enabled.\nMake sure that port 1433 is open for outbound connections on all firewalls between the client and the internet.","Q_Score":0,"Tags":"python,airflow,azure-synapse","A_Id":69911363,"CreationDate":"2021-11-09T19:43:00.000","Title":"Migrating from Azure Sql to Azure Synapse, can't connect to Synapse in Airflow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to get the number of rows affected as a result of executing session.bulk_insert_mappings() and session.bulk_update_mappings()?\nI have seen that you can use ResultProxy.rowcount to get this number with connection.execute(), but how does it work with these bulk operations?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":69973956,"Users Score":0,"Answer":"Unfortunately bulk_insert_mappings and bulk_update_mappings do not return the number of rows created\/updated.\nIf your update is the same for all the objects (for example, increasing some int field by 1), you could use this:\nupdatedCount = session.query(People).filter(People.name == \"Nadav\").update({People.age: People.age + 1})","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":71420714,"CreationDate":"2021-11-15T11:50:00.000","Title":"Get Row Count in SQLAlchemy Bulk Operations","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My DB
is Postgres and Code in Django,\nI live a project daily but I want old DB in a new project which has only just some updates in it but If I don't Update it It shows migration error and if I use --fake then on that page it will show similar error 'Programming Error Column Does Not Exist' I tried each and every way pls help me.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":70019679,"Users Score":0,"Answer":"After trying for many days I notice that there are errors in fields not matching so I just replace or update my old project models.py to my new project models.py and make migrations and migrate again and then I go to my new project and I run migrate and it runs correctly.","Q_Score":0,"Tags":"python,django,postgresql,web-hosting","A_Id":70108166,"CreationDate":"2021-11-18T12:16:00.000","Title":"I have to change db everytime when project updated What Should I do?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"hey I'm currently working on a website (Photo selling services) and now I wanna deploy it on a public host,\nI didn't change the database and I'm using Django's SQLite as my database, Is it gonna be a problem or it's fine?\nand also I'm handling the downloads with my views and template and the files (photos) will be downloaded from my database and I wanted to know do I need one host for my application and another for putting my photos in? or I can just run the whole website on one host without a problem ( same as I'm running it on my local host).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":70019753,"Users Score":0,"Answer":"I prefer not to use SQLite in production because:\n\nIt is just a single file and it may get deleted suddenly by anyone that has access to that.\nNo user management.\nLimited data types.\n\nFor serving files on a heavy-traffic website it's good to have CDN to serve files from.","Q_Score":0,"Tags":"python,django,database,sqlite,host","A_Id":70019921,"CreationDate":"2021-11-18T12:21:00.000","Title":"putting a Django website on production using sqlite data base","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to code this in python:\nSuppose the weekly hours for all employees are stored in a table. Each row records an employee\u2019s seven-day work hours with seven columns. For example, the following table stores the work hours for eight employees. 
Write a program that inputs the hours of all employees and displays employees and their total hours in decreasing order of the total hours.\nI have difficulties understanding how to input the parameters (hours per employee) and store for each one while sum it for each.\nAny help will be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":70025385,"Users Score":0,"Answer":"The program has to be broken down into the following\n\nA database with at least two tables\n\nAn application\/code layer to connect to the database for storing and retrieving values\n\n\nFor Task 1, do the following\nCreate two tables\nTable one is for employee, it will have columns employeeid(auto number,pkey), fullname(text field)\nTable two is for hours, it will have ten columns, id(autonumber,pkey),employeeid(foreignkey with employee table),day1_hour(number field),day2_hour(number field),day3_hour(number field),day4_hour(number field),day5_hour(number field),day6_hour(number field),day7_hour(number field),total_hour(number field)\nThen you need a python application written to insert records, this can be done in django, flask or any of your preferred choice.\nFinally a select query(SQL) will be written to retrieve the records, the syntax of the Select statement will be determined by the database system you are using, e.g sql lite, mysql,ms sql.","Q_Score":0,"Tags":"python","A_Id":70025657,"CreationDate":"2021-11-18T19:05:00.000","Title":"Looking for some advice with python code for employee table question","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could you please explain the difference between written snowflake SQL queries in Python using python connector client library and writing the same sqls in form of DataFrame using Snowpark.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":70047088,"Users Score":0,"Answer":"Snowflake Connector for Python provides an interface that let your python application connects to snowflake and query data\nwhile snowpark is an API that provides programming language constructs for building SQL statements which is based on Dataframe. like for example instead of writing select statement as a string and execute it. you can use select() method withouth writing the sql query.\nsnowpark API is only available in Scala language at the moment.","Q_Score":1,"Tags":"python,snowflake-cloud-data-platform","A_Id":70047967,"CreationDate":"2021-11-20T14:59:00.000","Title":"What is the difference between client library in snowflake and Snowpark?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Whenever i run any select query in snowflake the result set is having auto generated row number column (as a first column).. 
how to ignore this column from the code...\nLike : select * from emp ignore row;","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":158,"Q_Id":70086114,"Users Score":0,"Answer":"When you query Snowflake, regardless of which client you use, that column won't be returned.\nIt is a pure UI thing in the Snowflake Editor for readability.","Q_Score":1,"Tags":"python,pandas,dataframe,snowflake-cloud-data-platform,series","A_Id":70090056,"CreationDate":"2021-11-23T18:34:00.000","Title":"Snowflake- How to ignore the row number (first column) in the result set","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import CSV file from my gcs to postgres database in cloud sql, then I connected through pgadmin and make the same columns but with different data types like sale_dollars type in postgres is double precision and in gcs its float,\nwhen I am importing, I am getting this error and I am so confused I have tried to change the data type in pgadmin like real, integer but couldn't find the type of float.\ngeneric::failed_precondition: ERROR: invalid input syntax for type double precision: \"sale_dollars\" CONTEXT: COPY iowa_test_table, line 1, column sale_dollars: \"sale_dollars\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":70090761,"Users Score":0,"Answer":"You are trying to import the header row as if it were data. Tell copy to skip the header row, or just don't produce the header to start with.","Q_Score":0,"Tags":"python,postgresql,google-cloud-platform,google-bigquery,pgadmin","A_Id":70091154,"CreationDate":"2021-11-24T04:31:00.000","Title":"Importing Csv file to postgres Cloud SQL instance invalid input syntax error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a customer table with autoincrement id, the problem is after id:5 it inserted 151, 152, 153... how this is happening? is there has any way to fix this?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":110,"Q_Id":70112945,"Users Score":2,"Answer":"There are at least five ways this could happen.\n\nSomeone deliberately inserted a row with id=150. This advances the next auto-increment for the table automatically. I.e. auto-increment will not generate a value less than the greatest id in the table.\n\nThere were 145 failed INSERTs. By default, InnoDB allocates the next auto-inc value, then tries the INSERT. But it doesn't \"undo\" the auto-inc if the INSERT fails. So if you have a lot of failed attempts to INSERT rows, for example if they violate other table constraints, then you \"lose\" auto-inc values. They are allocated, but not used in a row.\n\nSome rows were INSERTed with those values between 6 and 150, then subsequently deleted.\n\nInnoDB spontaneously \"skipped\" some values. This can happen. Auto-increment must generate unique values, but it is not guaranteed to generate consecutive values.\n\nauto_increment_increment was set to 145 temporarily, then reset to 1, its default value. 
This doesn't happen by accident, it would have been deliberate.","Q_Score":1,"Tags":"python,mysql,flask,sqlalchemy,flask-sqlalchemy","A_Id":70113218,"CreationDate":"2021-11-25T14:36:00.000","Title":"SQLAlchemy autoincrement not inserting correctly","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a good tool where I can do a complex calculation in Python but have it show the results as Excel formulas? I'm thinking of a use case where I want to do complex financial projections with more business logic than is comfortable to write directly in Excel. However, the end-users are familiar with Excel and want to verify my work by checking a spreadsheet.\nTo be more concrete, I would like to write things like\ntotal_sales = europe_sales + us_sales\nand have that translate to Excel formulas like\nA3 = A2 + A1\nObviously, this would be for generating more complex spreadsheets with dozens of columns across an arbitrary number of rows","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":49,"Q_Id":70148570,"Users Score":1,"Answer":"There is no tool to my knowledge that does exactly what you're asking. That said, I think you have a couple options.\n\nIf what you are doing is not too complex, write it in excel.\nYou can use VBA to write macros, however, if your supervisors can understand your VBA code, more than likely they will understand your python script.\nIf your boss really wants to check your work, try and replicate a previous example (if one exists) and confirm the output, or create a couple edge\/test cases to confirm your script works, or just have them confirm the results of the actual data for the first time and trust it going forward.\n\n\nCreate a flow chart to help explain what you're doing\n\nExcel is very powerful but there is a lot that ends up being a pain. If you're working on a task that is not straight forward to do in excel, chances are it will be difficult for your boss to confirm\/check your excel logic anyway. Best advice is to talk to your boss and explain you can give them the result in excel but the functions\/logic is not easily implemented.\nbest of luck","Q_Score":0,"Tags":"python,excel,pandas,excel-formula","A_Id":70148845,"CreationDate":"2021-11-29T00:14:00.000","Title":"Python tool that shows calculations in Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using an AWS Lambda function (in Python) to connect to an Oracle database (RDS) using cx_Oracle library. 
But it is giving me the below error - \"DPI-1047: Cannot locate a 64-bit Oracle Client library: \"libclntsh.so: cannot open shared object file: No such file or directory\".\nSteps I've followed -\n\nCreated a python virtual environment and downloaded the cx_Oracle library on an EC2 instance.\nUploaded the downloaded library on S3 bucket and created a Lambda Layer with it\nUsed this layer in the Lambda function to connect to the Oracle RDS\nUsed below command to connect to the DB -\n\nconn = cx_Oracle.connect(user=\"user-name\", password=\"password\", dsn=\"DB-Endpoint:1521\/\"database-name\",encoding=\"UTF-8\")\nPlease help me in resolving this issue.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":113,"Q_Id":70152772,"Users Score":1,"Answer":"Set the environment variable DPI_DEBUG_LEVEL to the value 64 and then rerun your code. The debugging output should help you figure out what is being searched. Note that you need to have the 64-bit instant client installed as well!","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,cx-oracle","A_Id":70173309,"CreationDate":"2021-11-29T10:01:00.000","Title":"Getting an error (DPI-1047) while connecting to Oracle RDS with AWS Lambda","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a situation where my data lies in a different GCP project say \"data-pro\" and my compute project is set up as a different GCP project, which has access to \"data-pro\" 's tables.\nSo is there way to specify the default project-id using which the queries must run ?\ni can see that there is a default data set , parameter .. but no default projectID.\nSO my queries are as follows :\n\nselect name ,id from employeedDB.employee .\/\/ this employeedDB is in data-proc\n\nand my BigQueryInsertJobOperator Configuration is as below :\n\nBigQueryInsertJobOperator(dag=dag, task_id=name,\ngcp_conn_id=connection_id,--\/\/connection_id over compute-proc\nconfiguration={\n\"query\": {\n\"query\": \"{% include '\"+sqlFile+\"' %}\",\n\"useLegacySql\": False\n},\n},\npool='bqJobPool')","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":70153553,"Users Score":0,"Answer":"You should define different connection id with different project (and you can set it either via parameter in each task or via \"default_args\" feature.","Q_Score":1,"Tags":"python,google-cloud-platform,google-bigquery,airflow,airflow-2.x","A_Id":70157291,"CreationDate":"2021-11-29T11:01:00.000","Title":"BigQueryInsertJobOperator Configuration for default project ID","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible to use sqlmap against an ODBC connection so that I can test the database with SQLMAP if there are some vulnerabilities? Maybe is it possible to use SQLMAP in the context of pyodbc?\nI want to test if the ODBC driver has some vulnerabilities and therefore wanted to run sqlmap.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":70172648,"Users Score":2,"Answer":"Nope.. SQL Map is primarily a tool to do all kind of injection attacks across the well known databases . 
The injection vulnerabilities are a result of lack of or improper input sanitization at the application level .\nThe ODBC driver however is more like a protocol handler for a particular database , where on one end it connects over the database over the network and on the other side interacts with the database library used by the programmer in the application .\nTypically just like other software , ODBC drivers may have vulnerabilities due to the usage of other vulnerable components \/ libraries used for the development. Though other things also may exists due to poor coding , lack of validation and improper bounds check.","Q_Score":2,"Tags":"python,security,connection,odbc,sqlmap","A_Id":70172932,"CreationDate":"2021-11-30T16:22:00.000","Title":"Is it possible to use SQLMAP against an ODBC driver connection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two tables - \"Users\" and \"Projects\", i want to be able to show which users are assigned to which project. There may be multiple users assigned to the project.\nI was thinking of creating a 'project_users_matrix' table where a new column would be created for each user and a new row created for each project, then the cells can just show a 1 or 0 depending on if the person is working on that project.\nThe 'cleaner' option would be to have columns 'user_1', 'user_2', 'user_3' in the project database but then there can't be an indeterminate number of users for a project.\nIs there a better way to do this? It seems like there should be...","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":21,"Q_Id":70179196,"Users Score":1,"Answer":"If users can participate in many projects, and projects can have many users then you have a many-to-many relationship and you need three tables: users, projects and an association table that contains user ids and projects ids only. Each active user-project combination should have a row in the association table.\nIf users cannot participate in multiple projects simultaneously then you have either a one-to-many relationship between projects and users, or users and projects, which can be expressed by a foreign key column on the many side.","Q_Score":2,"Tags":"python,mysql","A_Id":70180122,"CreationDate":"2021-12-01T04:40:00.000","Title":"I want to create a MYSQL table to show which users are working on which project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two tables - \"Users\" and \"Projects\", i want to be able to show which users are assigned to which project. There may be multiple users assigned to the project.\nI was thinking of creating a 'project_users_matrix' table where a new column would be created for each user and a new row created for each project, then the cells can just show a 1 or 0 depending on if the person is working on that project.\nThe 'cleaner' option would be to have columns 'user_1', 'user_2', 'user_3' in the project database but then there can't be an indeterminate number of users for a project.\nIs there a better way to do this? 
It seems like there should be...","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":70179196,"Users Score":0,"Answer":"you need to create 2 more fields in project-table 1st for User_id and 2nd for Active\/inactive in 1st field you need to store id of user who is working with that project and in 2nd field enter value 0\/1 and provide button that if user is active on that table it shows 1. and once it done with his work.user can update it with 0.","Q_Score":2,"Tags":"python,mysql","A_Id":70179277,"CreationDate":"2021-12-01T04:40:00.000","Title":"I want to create a MYSQL table to show which users are working on which project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scenario:\nI have a AWS Glue job which deals with S3 and performs some crawling to insert data from s3 files to postgres in rds.\nBecause of the file size being sometimes very large it takes up huge time to perform the operation, per say the amount of time the job runs is more then 2 days.\nScript for job is written in python\nI am looking for a way to be able to enhance the job in some ways such as:\n\nSome sort of multi-threading options within the job to perform faster execution - is this feasible? any options\/alternative for this?\nIs there any hidden or unexplored option of AWS which I can try for this sort of activity?\nAny out of the box thoughts?\n\nAny response would be appreciated, thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":108,"Q_Id":70179579,"Users Score":0,"Answer":"IIUC you need not to crawl the complete data if you just need to dump it in rds. So crawler is useful if you are going to query over that data using Athena or any other glue component but if you need to just dump the data in rds you can try following options.\n\nYou can use glue spark job to read all the files and using jdbc connection to your rds load the data into postgres.\n\nOr you can use normal glue gob and pg8000 library to load the files into postgres. You can utilize batch load from this utility,","Q_Score":0,"Tags":"python,multithreading,amazon-web-services,aws-glue","A_Id":70179866,"CreationDate":"2021-12-01T05:35:00.000","Title":"AWS Glue advice needed for scaling or performance evaluation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that is calling the google sheets api.\nThe code works fine, no errors.\nI put the code onto a server into a folder (C:\\GoogleAPI\\main.py)\nI can run this using powershell and from the command prompt :\npython.exe C:\\googleapi\\main.py (this works fine)\nNow, problem is running under SQL server agent...\nThe error returned is :\nfrom googleapiclient.discovery import build ModuleNotFoundError: No module named 'googleapiclient'. Process Exit Code 1. 
The step failed.\ni used pip to install everything and all libraries are in the site-packages folder in :\nC:\\Program Files (x86)\\Python37-32\\site-packages\nWhen i run the SQL job, i am using a credential which is mapped to my user (also an admin on the server).\nSo, my question is, why will the sql agent not recognise the libraries when running using SQL Server agent????\nsystem Path variable contains a link to C:\\Program Files (x86)\\Python37-32\\site-packages\ni am very frustrated with this as i cannot find an answer anywhere.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":70190221,"Users Score":0,"Answer":"OK, so, the \"solution\" here to get around the modules not found error is by restarting the SQL server agent !","Q_Score":1,"Tags":"python","A_Id":70197608,"CreationDate":"2021-12-01T19:32:00.000","Title":"Calling a Python Script using SQL server agent - modules not found error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ctx=con.cursor()\nctx.execute(select col1 from table1)\nresult=ctx.fetchall()\ndata=pd.DataFrame(result)\ndata.columns['field']\nfor index,row in data:\nupdate table2 set col2='some value' where col1=str(row['field'])","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":97,"Q_Id":70190380,"Users Score":0,"Answer":"Solution to this is:\nInsert the data into some transient table and then then use that table for update.\nFor insert :\ndata = panda.DataFrame(result)\njust use data.to_csv('file complete path',index=False,Header=True)\nusing put command place the file in internal stage and from there use Copy command to copy data into transient table.\nlater on you can use this table to update your target table.","Q_Score":1,"Tags":"python,dataframe,snowflake-schema","A_Id":70471719,"CreationDate":"2021-12-01T19:46:00.000","Title":"Updating snowflake table row by row using panda dataframe (iterrows()) taking lot of time .Can some one give better approach to speed up updates?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to write several Panda Dataframes into a SQL database. The dataframes are generated in different processes using the multiprocessing library.\nEach dataframe should get its own trial number when it is written into the database. Can I solve this using SQL autoincrement or do I have to create a counter variable in the Python code.\nIf I use the function pandas.DataFrame.to_sql and set an index as autoincrement, I get a consecutive index for each row.\nHere is an example how it should look like\n\n\n\n\ntrial number\ntimestamp\nvalue\n\n\n\n\n1\ntime1\nvalue1\n\n\n1\ntime2\nvalue2\n\n\n1\ntime_n\nvalue_n\n\n\n2\ntime1\nvalue1\n\n\n2\ntime2\nvalue2\n\n\n2\ntime3\nvalue3\n\n\n2\ntime_n\nvalue_n\n\n\n\n\nI use Python 3.9 and MariaDb as Database. I hope for help. Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":70199234,"Users Score":0,"Answer":"You should have a separate trials table in your database where you cspture the details of each trial. 
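Roughly like this, as a sketch (SQLAlchemy with a MariaDB driver; the trials and measurement_values tables and their columns are made-up names):

import pandas as pd
from sqlalchemy import create_engine, text

# placeholder connection URL; each worker process should build its own engine
engine = create_engine("mysql+pymysql://user:pw@localhost/measurements")

def write_trial(df: pd.DataFrame, description: str) -> int:
    with engine.begin() as conn:
        # 1) register the trial and grab the auto-incremented id MariaDB assigns
        result = conn.execute(text("INSERT INTO trials (description) VALUES (:d)"), {"d": description})
        trial_id = result.lastrowid
        # 2) tag every row of this dataframe with that id and append it to the values table
        df.assign(trial_number=trial_id).to_sql("measurement_values", conn, if_exists="append", index=False)
    return trial_id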
The trials table will have an auto incremented id field.\nBefore writing your dataframes to your values table, each process inserts a record into the trials table and get the generated auto increment value.\nThen use this value to set the trial number column when you dump the frame to your table.","Q_Score":0,"Tags":"python,mysql,pandas,dataframe","A_Id":70199692,"CreationDate":"2021-12-02T12:11:00.000","Title":"Write Panda Dataframes to SQL. Each data frame must be identifiable by a trial number","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a mysql database where i add news articles, and before adding to it it try to compere that article with 100 last articles if it has any similarity.\nso if is 95% similar i can tag it as same as article 122 or if it is 70-95% similar i can tag it as similar to article 133,\nWhich is best way to do this:\n\n\nis there a way or a function that mysql can do it\n\ndo i need to use python to compare that article in a while loop with other 100 articles\n\n\n\nas i read in forums python is the best way, but i tried some library to compare string1(article1) with string2(article2) and even if its totally different article it tell me it is 70% same\n\ni think it is because of some same words like : and , he ,she, will,\nnews,text,or,and, the, i","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":60,"Q_Id":70234560,"Users Score":-2,"Answer":"If you are using Linux you can call from python the diff command and play with the parameters, a teacher a few years ago did this to detect copy in a programing exam, it worked even after reformatting the code","Q_Score":1,"Tags":"python,mysql,plagiarism-detection","A_Id":70234626,"CreationDate":"2021-12-05T13:02:00.000","Title":"check similarity\/plagiarism between articles in mysql via python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This should be fairly simple, I've done it before, but I am stumped. I have a PostgreSQL server set up on one machine on my internet and I am trying to access it from a different machine on the same network. I can connect to the database when on the same machine using localhost but not from other machines using that machines IP. The server is 10.0.0.23 and the other computers are of the same format (10.0.0.21 etc).\nI enter jdbc:postgresql:\/\/10.0.0.23:5432\/dbname and the user\/pass, but it says \"The connection attempt failed\". The postgresql.conf file has been edited to uncomment the line that allows it to listen for all addresses (and configs refreshed in PgAdmin). I don't think that pg_hba.conf needs to be edited because I am on the same network, but that could be wrong. I have also tried editing the pg_hba.conf though to include the 10.0.0.X IPs but that did not help.\nI'm fairly lost at this point, so any thoughts appreciate. Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":145,"Q_Id":70240211,"Users Score":0,"Answer":"For anyone else who this might help, I ended up figuring out that it was a firewall issue. 
I don't remember having to do this before but I went into the Windows defender firewall and created a rule for port 5432 and I can connect now.","Q_Score":1,"Tags":"python,postgresql,pgadmin","A_Id":70252022,"CreationDate":"2021-12-06T02:10:00.000","Title":"Can't connect to PostgreSQL database on LAN from separate machine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run a python code to create feature store. When I am running I am getting Bigquery.jobs.create permission error. I checked the permissions for my account with gcloud iam roles describe roles\/viewer and Bigquery permissions are there.\nNow, what mistake I am making and how can I solve this error.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":70250653,"Users Score":0,"Answer":"It seems that you need to create BigQuery job. At least the account you are using should have \"BigQuery Job User\" role.","Q_Score":0,"Tags":"python,google-cloud-platform,google-bigquery,feature-store,feast","A_Id":70315936,"CreationDate":"2021-12-06T19:09:00.000","Title":"BigQuery.jobs.create pemission","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I like the idea of having my historical stock data stored in a database instead of CSV. Is there a speed penalty for\u00a0fetching large data sets from MariaDB compared to CSV","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":70253179,"Users Score":0,"Answer":"Quite the opposite. Whenever you fetch data from a CSV, unless you have a stopping condition (for example, take the first entry with x = 3) you must parse every single line in the file. This is an expensive operation because not only do you have to read all of the lines (making it O(n)), but in general, you will be typecasting as well. In a database, you have already processed all of the lines, and if in this case there is an index on x or whatever attribute you are searching by, the database will be able to find the information in O(log(n)) time and will not look at the vast majority of entries.","Q_Score":0,"Tags":"python,database,dataset,stock-data","A_Id":70321146,"CreationDate":"2021-12-06T23:24:00.000","Title":"Speed - CSV vs MariaDB fetching stock data (python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file with comments marked by '#'. I want to select only the table part from this and get it into a pandas dataframe. I can just check the '#' marks and the table header and delete them but it will not be dynamic enough. If the csv file is slightly changed it won't work.\nPlease help me figure out a way to extract only the table part from this csv file.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":70302840,"Users Score":0,"Answer":".csv file can't have comment. Then you must delete comment-line manualy. 
Try start checking from end file, and stop if # in LINE and ';' not in LINE","Q_Score":0,"Tags":"python,pandas,oracle,dataframe,csv","A_Id":70303795,"CreationDate":"2021-12-10T10:11:00.000","Title":"How to extract a table from a csv file generated by Database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We can basically use databricks as intermediate but I'm stuck on the python script to replicate data from blob storage to azure my sql every 30 second we are using CSV file here.The script needs to store the csv's in current timestamps.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":70345519,"Users Score":1,"Answer":"There is no ready stream option for mysql in spark\/databricks as it is not stream source\/sink technology.\nYou can use in databricks writeStream .forEach(df) or .forEachBatch(df) option. This way it create temporary dataframe which you can save in place of your choice (so write to mysql).\nPersonally I would go for simple solution. In Azure Data Factory is enough to create two datasets (can be even without it) - one mysql, one blob and use pipeline with Copy activity to transfer data.","Q_Score":1,"Tags":"python,azure,apache-spark,google-cloud-platform,databricks","A_Id":70347715,"CreationDate":"2021-12-14T07:59:00.000","Title":"Is there any way to replicate realtime streaming from azure blob storage to to azure my sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am creating a Django project where I have to use existing database data. The existing database is Postgres and it is hosted on Aws. 
My goal is to copy them from Aws to my local Postgres DB and use in my project.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":70345903,"Users Score":0,"Answer":"You can dump the database from AWS and import locally from tools like Mysql workbench","Q_Score":0,"Tags":"python,django,postgresql,amazon-web-services","A_Id":70347223,"CreationDate":"2021-12-14T08:35:00.000","Title":"How to migrate remote postgres db into my local django project?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking to a way to \"simply\" access to a Cach\u00e9 database using python (I need to make sql query on this database).\nI've heard about a python package (Intersys) but I can't find it anymore (having this package would be the most simple way).\nI've tried using pyodbc connection with the appropriate Cach\u00e9 driver : it works on my machine, however when I try to deploi the function in production (Linux OS), the driver's file is not found.\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":70360965,"Users Score":0,"Answer":"There is only one way, on how to make it work with Python, is using pydobc, and InterSystems driver.","Q_Score":0,"Tags":"python,azure-functions,intersystems-cache,intersystems","A_Id":70363390,"CreationDate":"2021-12-15T09:09:00.000","Title":"Connecting Cach\u00e9 database in Azure function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can anyone tell me which way to insert in oracle is more performatico?\nWrite.format('jdbc') mode or using CX_Oracle?\nIn my project I came across a case where they use write.format('jdbc') to INSERT and CX_Oracle to UPDATE, so I'm thinking of changing to INSERT and UPDATE on the same CX_Oracle connection, what do you think ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":100,"Q_Id":70363301,"Users Score":1,"Answer":"I has worked on similar usecase. Here are some takeaway from my last project.\n\ncx_oracle is very slow compared to write.format('jdbc'). I was inserting 1M records and there was drastic difference b\/w those two approach. cx_oracle even with executeMany didn't help much. 
I will strongly recommend to use spark JDBC.\n\nEven in case of update, I ended up doing delete (SQL Query) - insert (using pyspark), because couldn't achieve update in spark and the alternative was very slow.\n\nSpark does parallel writes while inserting to db too.\n\nEven for read operation use spark jdbc read because spark will optimize the job and send projection and filtering at DB directly.","Q_Score":1,"Tags":"python,apache-spark,pyspark,apache-spark-sql","A_Id":70372061,"CreationDate":"2021-12-15T11:57:00.000","Title":"Performance comparison cx_oracle vs write.format('jdbc')","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say we have the tables: table0 and table1.\nIn both table0 and table1 we store a name, age, and date.\nHow can I check if an entry from table0 and an entry from table1 have the same name and age?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":70450283,"Users Score":0,"Answer":"Simple take join with where clause t1.name = t2.name and t1.age = t2.age.","Q_Score":0,"Tags":"python,mysql","A_Id":70450323,"CreationDate":"2021-12-22T14:04:00.000","Title":"Checking if in a table in mysql are two rows with the same elements","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wand to access the postgreSQL database from android using Pydroid3 application in my mobile phone. to access postgreSQL database i need to import psycopg2, but android doesn't support this.\ncan anyone please suggest the way to solve this issue.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":91,"Q_Id":70484518,"Users Score":1,"Answer":"With Android you typically don't make a hard database connection, unless its a local one.\npsycopg2 doesn't work for the same reason you're not supposed to use JDBC . You should talk to a remote database via a RESTful API, meaning a server sends you data that it itself retrieves from the database.\nSo your next step would to create a web site\/service that handles GET requests. So your app can talk to it, and it can talk to the database.","Q_Score":0,"Tags":"android,python-3.x,database,postgresql,pydroid","A_Id":70484582,"CreationDate":"2021-12-26T06:01:00.000","Title":"How to access postgreSQL database from android mobile using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a batch script which is running multiple scripts that scrape information and put it into a database. The scraping scripts get information and use SQL alchemy to write the information to a mysql database.\nI am running into an issue. I have a try, except to run the scripts. Occasionally, some of the scripts fail, but still maintain connection to the database. This will add up, and will eventually cause a too many connections error.\nIs there a way to clear all the connections to the database from the batch script? 
I tried \"close all sessions\" but it is not doing it.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":64,"Q_Id":70551568,"Users Score":1,"Answer":"Issues was that though connections were closed, the engines still remained. Calling engine.dispose() fixed the problem.","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":70557559,"CreationDate":"2022-01-01T20:10:00.000","Title":"Close all connection to DB in SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I created an app to connect to Oracle and load data to tables. I build exe and it also runs fine on my machine. However when we tried to run on my friend's machine it gives error DPI-1047: Cannot locate a 64 bit Oracle Client library.\nWe explicitly set the PATH variable and pointed to the correct Oracle client\nHe can connect to database with TOAD with the same path. For some reason the app gives this error.\nWe made sure that the path used by TOAD is setup as first entry in the PATH variable.\nWe also tried to setup a new Environmental variable and read from there.\nWe also tried to explicitly setup the path in the code. But no resolution.\nDo I need to install Python on his machine? Or Am I missing something?\nI though Oracle client is all needed on machine for the app to work.\nOnly difference is I am admin of my machine but he is NOT administrator of his machine. But we made sure that oci.dll file has 'Read and Execute' permission for his user.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":70587360,"Users Score":0,"Answer":"The user machine needed 64 bit Oracle. We installed 64 bit client on user machine and added the path to PATH variable and moved up ABOVE the path 32 bit client on user machine.\nEverything started working.","Q_Score":1,"Tags":"python-3.x,cx-oracle","A_Id":70809596,"CreationDate":"2022-01-05T03:03:00.000","Title":"App exe file not connecting to database - DPI-1047: Cannot locate a 64 bit Oracle Client library","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created an app to connect to Oracle and load data to tables. I build exe and it also runs fine on my machine. However when we tried to run on my friend's machine it gives error DPI-1047: Cannot locate a 64 bit Oracle Client library.\nWe explicitly set the PATH variable and pointed to the correct Oracle client\nHe can connect to database with TOAD with the same path. For some reason the app gives this error.\nWe made sure that the path used by TOAD is setup as first entry in the PATH variable.\nWe also tried to setup a new Environmental variable and read from there.\nWe also tried to explicitly setup the path in the code. But no resolution.\nDo I need to install Python on his machine? Or Am I missing something?\nI though Oracle client is all needed on machine for the app to work.\nOnly difference is I am admin of my machine but he is NOT administrator of his machine. 
But we made sure that oci.dll file has 'Read and Execute' permission for his user.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":70587360,"Users Score":0,"Answer":"Set the environment variable DPI_DEBUG_LEVEL to the value 64 and run the application on the machine on which the app does not work. Post those results in your question if that doesn't resolve the issue for you!","Q_Score":1,"Tags":"python-3.x,cx-oracle","A_Id":70609328,"CreationDate":"2022-01-05T03:03:00.000","Title":"App exe file not connecting to database - DPI-1047: Cannot locate a 64 bit Oracle Client library","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to write a python program to create a table and add columns to it but the name and the quantity of tables and its columns will we user defined means I don't know the name and quantity, it will all be taken as an input from user.\nany idea how to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":70614188,"Users Score":0,"Answer":"I think using the sqllite3 package it is possible. All you need to do is run the queries with a connection to your database.","Q_Score":0,"Tags":"python,sql,pandas","A_Id":70614319,"CreationDate":"2022-01-06T22:06:00.000","Title":"creating table in sql-server using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not very familiar with pytest but try to incorporate it into my project. I already have some tests and understand main ideas.\nBut I got stuck with test for Excel output. I have a function that makes a report and saves it in Excel file (I use xlsxwriter to save in Excel format). It has some merged cells, different fonts and colors, but first of all I would like to be sure that values in cells are correct.\nI would like to have a test that will automatically check content of this file to be sure that function logic isn't broken.\nI'm not sure that binary comparison of generated excel file to the correct sample is a good idea (as excel format is rather complex and minor change of xlsxwriter library may make files completely different).\nSo, I seek an advice how to implement this kind of test. Had someone similar experience? May you give advice?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":247,"Q_Id":70643411,"Users Score":0,"Answer":"IMHO a unit test should not touch external things (like file system, database, or network). If your test does this, it is an integration test. These usually run much slower and tend to be brittle because of the external resources.\nThat said, you have 2 options: unit test it, mocking the xls writing or integration test it, reading the xls file again after writing.\nWhen you mock the xlswriter, you can have your mock check that it receives what should be written. This assumes that you don't want to test the actual xlswriter, which makes sense cause it's not your code, and you usually just test your own code. This makes for a fast test.\nIn the other scenario you could open the excel file with xslsreader and compare the written file to what is expected. 
This is probably best if you can avoid the file system and write the xls data to a memory buffer from which you can read again. If you can't do that, try using a tempdir for your test, but with that you're already getting into integration test land. This makes for a slower, more complicated, but also more thorough test.\nPersonally, I'd write one integration test to see that it works in general, and then a lot of unit tests for the different things you want to write.","Q_Score":0,"Tags":"python,pytest,python-unittest","A_Id":70643489,"CreationDate":"2022-01-09T16:40:00.000","Title":"Python Unit Test for writing Excel file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with the sqlite3 module, using Python 3.10.0. I have created a database with a table of English words, where one of the columns is creatively named \"word\". My question is, how can I sample all the words that contain at most the letters within the given word? For example, if the input was \"establishment\", valid outputs could be \"meant\", \"tame\", \"mate\", \"team\", \"establish\", \"neat\", and so on. Invalid inputs consist of words with any other letters other than those found within the input. I have done some research on this, but the only thing I found which even comes close to this is using the LIKE keyword, which seems to be a limited version of regular expression matching. I mentioned using Python 3.10 because I think I read somewhere that sqlite3 supports user-defined functions, but I figured I'd ask first to see if somebody knows an easier solution.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":70643509,"Users Score":0,"Answer":"Your question is extremely vague.\n\nLet me answer a related question: \"How may I efficiently find anagrams of a given word?\"\nThere is a standard approach to this.\nSimply alphabetize all letters within a word, and store them in sorted order.\nSo given a dictionary containing these \"known\" words,\nwe would have the first three map to the same string:\n\npale <--> aelp\npeal <--> aelp\nplea <--> aelp\nplan <--> alnp\n\nNow given a query word of \"leap\", how shall we efficiently find its anagrams?\n\nTurn it into \"aelp\".\nQuery for that string, retrieving three matching dictionary words.\n\nSqlite is an excellent fit for such a task.\nIt can easily produce suitable column indexes.\n\nNow let's return to your problem.\nI suspect it's a bit more complex than anagrams.\nConsider using a related approach.\nRip through each dictionary word, storing digrams in standard order.\nSo for \"pale\", we would store:\n\npale <--> ap\npale <--> al\npale <--> el\n\nRepeat for all other dictionary words.\nThen, at query time, given an input of \"leap\",\nyou might consult the database for \"el\", \"ae\", and \"ap\".\nNotice that \"ae\" missed, there.\nIf that troubles you, when processing the whole dictionary\nfeel free to store all 2-letter combinations, even ones that aren't consecutive.\nPossibly going to trigrams, or all 3-letter combinations, would prove helpful.\nSpend some time working with the problem to find out.","Q_Score":0,"Tags":"python,sql,scrabble","A_Id":70644387,"CreationDate":"2022-01-09T16:52:00.000","Title":"How to find all words with any permutation of the given letters in SQL?","Data Science and Machine Learning":0,"Database and 
SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a xlsx from a template exported from Microsoft dynamics NAV, so I can upload my file to the system.\nI am able to recreate and fill the template using the library xlsxwriter, but unfortunately I have figured out that the template file also have an attached XML source code file(visible in the developer tab in Excel).\nI can easily modify the XML file to match what I want, but I can't seem to find a way to add the XML source code to the xlsx file.\nI have searched for \"python adding xlsx xml source\" but it doesn't seem to give me anything I can use.\nAny help would be greatly appreciated.\nBest regards\nMartin","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":70667250,"Users Score":2,"Answer":"Xlsx file is basically a zip archive. Open it as archive and you'll probably be able to find the XML file and modify it. \u2013\nMak Sim\nyesterday","Q_Score":1,"Tags":"python,xlsx,xlsxwriter,dynamics-nav","A_Id":70687607,"CreationDate":"2022-01-11T13:00:00.000","Title":"Adding XML Source to xlsx file in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Django(python) project running on a digitalocean.com droplet. Because the configurations options are limited I have an excess of free memory. I'm thinking about using that available memory, and load some database tables in memory. I already have a Redis server caching some views. Is it possible to cache a database whole table? How?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":70673173,"Users Score":0,"Answer":"There's really no point (if you want to be able to query those tables with models or SQL).\nEither your RDBMS (MySQL, Postgres, ...) will deal with caching tables in memory, or alternately the Linux file page cache will deal with keeping the underlying data files properly in memory.","Q_Score":0,"Tags":"python,django,performance,memory,redis","A_Id":70673187,"CreationDate":"2022-01-11T20:39:00.000","Title":"Django project - database table in cache","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an application that is hosted through Google App Engine. It is intended to be a file hosting application, where files are uploaded directly to GCS. However, there is some processing that needs to happen with these files, so originally my plan was to download the files, do the modifications, then reupload. Unfortunately, GAE is a read-only file system. What would be the proper way to make file modifications to objects in GCS from GAE? I am unfamiliar with most google cloud services, but I see ones such as google-cloud-dataproc, would these be able to do it?\nOperations are removing lines from files, and combining files into a single .zip","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":70,"Q_Id":70686552,"Users Score":2,"Answer":"You can store the file in the tmpfs partition that you have on App Engine mounted in \/tmp. 
It's an in-memory file system, so you will use memory to store the files. If the files are too large, increase the memory size of your App Engine instance, else you will get an out-of-memory error.\nIf the file is too big, you have to use another product.\nRemember to clean up the files after use to free memory space.","Q_Score":1,"Tags":"python,google-app-engine,google-cloud-platform,google-cloud-storage","A_Id":70693303,"CreationDate":"2022-01-12T18:23:00.000","Title":"Modifying files in Google Cloud Storage from Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a SQL Server v2017 at work. When they installed machine learning it installed Python 3.5 with Pandas 0.19. I am trying to use read_excel on a file on a network drive. I can run the script on my local machine, but I have Python 3.9 and Pandas 1.35. The script works fine locally but not when executed through the server using EXECUTE sp_execute_external_script. I realize there could be a huge number of things that could be causing problems, but I need to rule out the Pandas version first. The server is locked down and it takes a lot of red tape to change something.\nCan Pandas 0.19's read_excel access Excel files on a UNC address? I know the newer version can, but this would help me rule out the Pandas library as a source of the issue.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":70687091,"Users Score":0,"Answer":"(I work for MS and I support SQL ML Services)\nThe short answer to your question is -\nYou will have a hard time accessing a UNC path in ML Services. It is technically possible, but the complications make it a no-go for many. You didn't show your code or errors, but I can assure you that your problem isn't with pandas, and perhaps you got an error about not being able to 'connect' because we disable outbound network traffic from ML services by default... but if you got past that, then you probably got an authentication error.\nThe long answer to your question is -\nSQL 2016 and 2017 - We use local 'worker' accounts. The default names (they are based on your instance name) are MSSQLSERVER01,02,03... 20. (There are 20 by default... there is also a MSSQLSERVER00, but we'll ignore that one).\nThe Launchpad service is run by its service account (default: NT Service\\MSSQLLaunchpad), and it can be run as a domain account. But, it is not Launchpad that is actually executing your R\/Python code. Launchpad kicks off the R process, and it does this under the MSSQLSERVERXX users. It is THAT user that is technically running your code, and therefore, it is that user that is trying to connect to your UNC path and not YOUR user that you are logged into SQL as. This user is a local user - which cannot authenticate across a UNC share. This issue comes down to a design limitation.\nIn Windows, there is no way to provide a username\/password in your UNC path (whereas, in Linux, you can). Using a mapped drive will not work because those are local-to-your-user-and-login-session. Therefore, a mapped drive of one logged in user will not be accessible to other users (and therefore the MSSQLSERVERXX users).\nIn short, if you absolutely wanted to make it work, you would have to disable authentication entirely on your network share. In Windows, this is more than just adding \"EVERYONE\" permissions to the file. 
You would also have to allow GUEST (or in the *nix world, ANONYMOUS) access to file shares. This is disabled by default in all recent Windows versions and you would have to modify various gpos\/registry settings\/etc to even allow that. It would not be my recommendation.\nIf this were in an AD environment, you could also theoretically allow the COMPUTER account of your SQL host so that ALL connections from THAT \"COMPUTER\" would be allowed. Again, less than ideal.\nIn SQL 2019 - we got rid of the local user accounts, and use appcontainers instead. This removes the need for local user accounts (many customers in large organizations have restrictions on local user accounts), and offers additional security, but as always, with more security comes more complexity. In this situation, if you were to run the launchpad service as a domain user, your R\/Python processes ARE executed as the LAUNCHPAD account (but in a very locked down appcontainer context). Theoretically, you could then grant THAT service account in AD access to your remote UNC share... BUT, appcontainers provide a far more granular control of specific 'permissions' (not file level permissions). For example, at least conceptually, when you are using an app on your phone, or perhaps a Windows store UWP app, and it asks 'do you want to allow this to access your camera?\" - those layer of permissions are something that appcontainers can provide. We have to explicitly declare individual 'capabilities', and we do not currently declare the ability to access UNC shares due to several other security implications that we must first consider and address. This too is a design limitation currently.\nThe above possibilities for SQL 2016\/2017 do not apply, and will not work, for SQL 2019.\nHowever, for all of them, while it may not be ideal, my suggestion and your best option is:\n\nReconsider which direction you are doing this. Instead of using your SPEES (sp_execute_external_scripts) code to access a network share, consider sharing out a directory from the SQL host itself... this way, you at least don't have to allow GUEST access, and can retain some level of permissions. 
Then you can drop whatever files you need into the share, but then access it via the local-to-that-host path (ex: C:\\SQL_ML_SHARE\\file.xel) in your SPEES code.","Q_Score":0,"Tags":"python,excel,pandas","A_Id":71343151,"CreationDate":"2022-01-12T19:07:00.000","Title":"Pandas 0.19 Read_Excel and UNC Addresses","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When using MySQL select statements in Python, there is a ValueError pointing at 'Y' (0x59).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":70691952,"Users Score":0,"Answer":"The reason for this, and the solution, are as follows:\n(1) Error message: ValueError: unsupported format character 'Y' (0x59) at index 146\n(2) Cause: the SQL executed by Python contains something like DATE_FORMAT(CREATE_TIME, '%Y-%m-%d'), where %Y conflicts with Python's %s parameter placeholder.\n(3) Solution: change DATE_FORMAT(CREATE_TIME, '%Y-%m-%d') to DATE_FORMAT(CREATE_TIME, '%%Y-%%m-%%d'), escaping the percent signs.\n(4) Some commenters noted that if the SQL is first built into a string and then passed to execution, you need to add another layer of escaping: DATE_FORMAT(CREATE_TIME, '%%%%Y-%%%%m-%%%%d')","Q_Score":0,"Tags":"python-3.x,valueerror","A_Id":70691960,"CreationDate":"2022-01-13T05:42:00.000","Title":"ValueError: unsupported format character \u2018Y\u2018 (0x59) at index 146","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using list_accounts in boto3 I was able to get the Joined Timestamp; however, this time I want to capture the closed timestamp of all accounts in my AWS Organization that are in closed status. Can someone tell me if there is a Boto3 function available to fetch this data? TIA","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":70696073,"Users Score":1,"Answer":"This is not possible. Whether an account is closed or not has nothing to do with the organization, and therefore you can't use boto3 (organizations) to get that information the way you get the joined timestamp with list_accounts. With list_accounts you only see the timestamp the account joined (this is information related to the organization); you cannot see the timestamp of when the account was created (this information is related to the account).","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,boto3","A_Id":70696272,"CreationDate":"2022-01-13T11:47:00.000","Title":"How can I capture the closed timestamp of an AWS Account using Boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on merging two datasets in Python; however, I'm running into a sorting issue while preparing the Excel files for processing.\nExcel 1 sorts project IDs A-Z as: 1, 2.a, 2.b, 3\nHowever, Excel 2 sorts A-Z as: 1, 3, 2.a, 2.b\nHow do I make sure they both sort like Excel 1?\nI've changed the format of the columns from General to Number for both and the outcome is still the same.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":70700093,"Users Score":0,"Answer":"IMHO, sorting is unnecessary. 
You want:\n\nmerging two datasets on python\n\nSo just import\/merge both datasets first, then sort in Python. Just by looking at the output file you can see whether some of the row labels are actually different, e.g. \"2.a\" vs \"2.a \" (with a trailing space).","Q_Score":0,"Tags":"python,excel,sorting","A_Id":70737928,"CreationDate":"2022-01-13T16:42:00.000","Title":"A-Z sorting is different between two excel files","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an application which is using Cassandra as a database. I need to create some kind of reports from the Cassandra DB data, but the data is not modelled as per the report queries. So one report may have data scattered in multiple tables. As Cassandra doesn't allow joins like an RDBMS, this is not simple to do. So I am thinking of a solution to get the required tables' data into some other DB (RDBMS or Mongo) in real time and then generate the report from there. So do we have any standard way to get the data from Cassandra to other DBs (Mongo or RDBMS) in real time, i.e. whenever an insert\/update\/delete happens in Cassandra the same has to be updated in the destination DB? Any example program or code would be very helpful.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":70761569,"Users Score":0,"Answer":"You would be better off using the Spark + Spark Cassandra Connector combination to do this task. With Spark you can do joins in memory and write the data back to Cassandra or to any text file.","Q_Score":1,"Tags":"python,cassandra,mongodb-query,pipeline,rdbms","A_Id":70765688,"CreationDate":"2022-01-18T19:47:00.000","Title":"Getting data from Cassandra tables to MongoDB\/RDBMS in realtime","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a logs table in the \"data base\" database. For the second day now I'm struggling to export it in CSV format.\nCommands like\n\ncopy logs to 'D:\/CSV.csv' WITH CSV DELIMITER ',' HEADER;\n\ndon't help. The error \"relation logs does not exist\" always pops up, while an empty CSV file is created along the path specified in the command.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":70770874,"Users Score":0,"Answer":"Solving the problem:\nBefore using the command\n\nCopy logs To 'D:\/CSV.csv' With CSV DELIMITER ',' HEADER;\n\nyou need to:\n\nLog in to psql with the command psql -U username (according to the postgres standard)\n\nThe next step is to connect to the database where your table is located. 
In my case, this is \\connect \"data base\"\n\nAnd then you can copy the table in CSV format with the command above.\n\n\nI hope this answer will help other newcomers who run into the same trouble.","Q_Score":0,"Tags":"python,postgresql,csv,psql","A_Id":70780714,"CreationDate":"2022-01-19T12:49:00.000","Title":"How to export a table in CSV format?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So whenever I am trying to read from a source with a stream I get the error \"A file referenced in the transaction log cannot be found\" and it points to a file that does not exist.\nI have tried:\n\nChanging the checkpoint location\nChanging the start location\nRunning \"spark._jvm.com.databricks.sql.transaction.tahoe.DeltaLog.clearCache()\"\n\nIs there anything else I could do?\nThanks in advance guys n girls!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":166,"Q_Id":70773415,"Users Score":0,"Answer":"So! I had another stream that was running and it had the same parent directory as this stream.. this seems to have been an issue.\nThe first stream was looking in: .start(\"\/mnt\/dev_stream\/first_stream\")\nThe second stream was looking in: .start(\"\/mnt\/dev_stream\/second_stream\")\nEditing the second stream to look in .start(\"\/mnt\/new_dev_stream\/new_second_stream\") fixed this issue!","Q_Score":0,"Tags":"python,databricks,azure-databricks,databricks-connect","A_Id":70773744,"CreationDate":"2022-01-19T15:37:00.000","Title":"Databricks streaming \"A file referenced in the transaction log cannot be found\"","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python application (on a Windows machine) connecting to an on-prem SQL Server to fetch data and run some Python functions.\nI wanted this application to keep checking the data periodically.\nSo I deployed this application in AWS ECS and scheduled the cron job using Lambda.\nThe problem I am facing: in the CloudWatch logs I could see the error \"timeout: invalid time interval 'm'\".","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":70840767,"Users Score":0,"Answer":"I created and ran my application in a Windows environment on my machine. But when I deployed the code to AWS and the application was triggered with Lambda, it was running in a Linux environment, so the working directory path for Windows and Linux had to be changed. For example:\non Windows: app\/foldername\/.py\non Linux: src\/foldername\/.py\nBecause of this small working-directory issue, it failed to find the code path when Lambda triggered the application. That is the reason for the server connection timeout error.","Q_Score":0,"Tags":"python,aws-lambda,amazon-ecs","A_Id":70906479,"CreationDate":"2022-01-24T21:31:00.000","Title":"deployed application fails with timeout","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to remove duplicate items in Excel and sum their values using Python. 
I have some codes but they only could remove the duplicate item and they are unable to sum the value of them.\nIf someone knows how I could solve this problem please give me the answer.\nthanks for your favor","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":70897057,"Users Score":0,"Answer":"You can use pandas,\n\npandas.read_excel()\nDataFrame.drop_duplicates(subset=None, keep='first', inplace=False)\nDataFrame.sum(axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)","Q_Score":0,"Tags":"python","A_Id":70897103,"CreationDate":"2022-01-28T16:33:00.000","Title":"Duplicate values in excel and sum the value of duplicate value by python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a very basic question.\nThe input is api feed from source, that has created date as a column. What I am looking to accomplish is to store this file(by splitting it up) into the following format:\nlanding\/year=2020\/month=01\/date=01 and so on...\nThe year, month, date values are the dates from Created_at column.\nTHe file will be stored as transaction_id.parquet (transaction_id is also another column in the feed).\nWhat is the suggested option to get to this structure? Is it prefix for each file by splitting created_date into year, month, date?\nLooking for you response.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":175,"Q_Id":70900619,"Users Score":0,"Answer":"Your design should be something like below\n\nCreate a file in YYYYMMDD format\nlet's assume that you are receiving a file named 20220129file_name.txt\nSplit it by \"_\" to get the DATE portion\nSplit other parts such as year\/month and day\nCreate another function to validate if a particular year\/month\/day S3 folder exists? if yes then put the file in that folder or else create the folder set and put the file.\nThere is no ready-made code for the same but you can create it. It's pretty simple.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3","A_Id":70921284,"CreationDate":"2022-01-28T21:43:00.000","Title":"Store file in S3 according to Year, month,date","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have excel sheet having one sheet with 4 tables they are placed randomly. Out of the four tables, three tables have column name, except for one. Each table has 4 to 5 rows and 4 to 5 columns. How to extract all the tables without doing hard coding using Python. All tables are separated by some space.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":70908156,"Users Score":0,"Answer":"Does this previous question\n\nhttps:\/\/stackoverflow.com\/questions\/69255564\/how-to-extract-different-tables-in-excel-sheet-using-python\n\nhelp? The example code that can be adapted to just print all tables in all sheets. 
It does also use pandas.","Q_Score":0,"Tags":"python,excel,pandas,openpyxl","A_Id":70912990,"CreationDate":"2022-01-29T17:50:00.000","Title":"How to extract the table in excel using openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I update the modules I get this error message:\nTable 'crm_lead': unable to set NOT NULL on column 'partner_id'\nWhat should I do to prevent it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":104,"Q_Id":70922850,"Users Score":4,"Answer":"There probably has changed something on the field partner_id on model crm.lead regarding required parameter. But you already have some data in database without fulfilling the NOT NULL constraint resulting from that change.\nSo you either try to fix the database table crm_lead by setting all partner_ids or you remove the required=True on that field.\nIIRC there is no required or NOT NULL on crm.lead's partner_id field in Odoo vanilla\/default code. So you probably have custom modules changing that.\nThe \"Error\" itself is only a warning. In the end Odoo can't set that constraint in database, but will work anyway.","Q_Score":1,"Tags":"python,postgresql,odoo,odoo-13,notnull","A_Id":70923363,"CreationDate":"2022-01-31T08:09:00.000","Title":"Odoo13-CE: Error message: Table 'crm_lead': unable to set NOT NULL on column 'partner_id'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using FastAPI and Async SQl. I have defined the schema with email unique. But when I pass the same email to the FastAPI route. It throws error on development server\nsqlite3.IntegrityError: UNIQUE constraint failed: users.email and returns a 500 Internal response via Postman. I want to show error as a message and not 500 error what should I do ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":175,"Q_Id":70954271,"Users Score":0,"Answer":"what is happening is an Error that the client should NOT have access to,\nthere is a difference between validation errors that YOU define and between the ORM and database engine errors the library shows, \nwhat you should do is check why are you sending a database query with an empty email! if you do so, the SQL engine will tell you that you can not put a NULL value there, which is something you did set.\nthere should be a validation before sending the query to the DB,\nin FastApi, such validation should be done with Pydantic models. \nEDIT: \nthanks to @Mecid X Recebli for his comment, I think the question has been changed and my answer is no longer valid. \\\nI answered as if you are sending an empty value of email to DB query, but your question says that you are sending a duplicate email,\nyou should do email validation in the Pydantic schema, and that is by querying the DB for that email, if you have an account with the same email, raise a validation exception inside the validation method in the pydantic shema.","Q_Score":0,"Tags":"python,sql,fastapi","A_Id":70992669,"CreationDate":"2022-02-02T10:47:00.000","Title":"Fastapi async sql unique constraint throws Internal Server Error. 
Instead of returning a error message","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Python script I want to run from Microsoft Excel.\nHowever, the script currently writes data to that excel file when I run it from Python.\nIf I keep the excel file open when running from Python I get a permission denied error which I can fix by closing the excel file.\nWill running the python script from within the excel file still allow it to write to it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":111,"Q_Id":70973413,"Users Score":1,"Answer":"I have the impression that you have written a program to modify an Excel file. In order to do that, that Excel file can't be accessed by some application (like Excel).\nWhen you open that file in Excel, and you try to run that program, then that program tries to open the file you have just opened, returning an \"access denied\" error.\nSo, I believe there are two things you can do:\n\nRun that program from outside Excel.\nRun that program from Excel itself, but without opening that file in Excel.","Q_Score":0,"Tags":"python,excel","A_Id":70973828,"CreationDate":"2022-02-03T14:51:00.000","Title":"Running Python script directly from Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working in restricted environment where I can install only conda supported libraries\/packages. I'm trying to build connection to Sql server DB(Azure) via python which requires ODBC driver. Is there any alternate way to build connection to DB without driver?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":199,"Q_Id":70982988,"Users Score":0,"Answer":"Unfortunately No.\nODBC and JDBC are the standard and recommended drivers to connect your Azure SQL Database with backend development.\nWhen it comes to python, pyodbc is the standard python library which runs on ODBC Driver to connect with python.","Q_Score":1,"Tags":"python,azure-sql-database,jupyter,pyodbc","A_Id":71318380,"CreationDate":"2022-02-04T07:48:00.000","Title":"Build connection to SQL server DB without ODBC driver","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Command class that runs an api call to a coin market cap api and writes it into the database that I have rigged up to my django project called cryptographicdatascience. The table that the data is written to is called apis_cmc and is not defined in my models.py.\nMy question is if I am writing straight to a table using sqlalchemy do I need to go in and create a model for the same table in my models.py file?\nIt seems to me that the answer is no but I'm sure there is something I'm overlooking.\nThanks,\nJustin","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":71003121,"Users Score":1,"Answer":"No, there is no need to create table in models.py as it is not a part of a Django ORM.\nBut have in mind, that Django ORM won't know about your table and won't store any data in migration files. 
It can cause some problems on deploy stage or when you move to other working machine.","Q_Score":0,"Tags":"python,django,sqlalchemy","A_Id":71003174,"CreationDate":"2022-02-05T23:52:00.000","Title":"Do I need to create a model for a table I'm inserting through sqlalchemy in django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to run my python scripts on my IIS website using CGI and I am having trouble with importing. When run on its own, the python scripts finds the mysql.connector module installed in my os perfectly fine, but when I try to run it on the website, it gives a Bad Gateway (502.2) error with the stacktrace stating ModuleNotFoundError: No module named 'mysql.connector'. I'm assuming CGI cannot find the module in my OS, how can I let it find the module? Do I have to specify my modules folder somewhere in the IIS like a PATH variable?\nHere is the complete stacktrace of the bad gateway page:\nThe specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are \"Traceback (most recent call last): File \"C:\\Users\\pedro\\OneDrive\\Documents\\adet\\ind.py\", line 2, in import python_mysql File \"C:\\Users\\pedro\\OneDrive\\Documents\\adet\\python_mysql.py\", line 1, in import mysql.connector ModuleNotFoundError: No module named 'mysql.connector' \".","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":71009933,"Users Score":0,"Answer":"As a workaround, I simply copied and pasted all of my modules in my modules folder to my website's folder. I guessed that since my personal imports were working that it should also work if I added the other modules to the same folder and lo, it did. I hope there's a cleaner way to solve this, but for now I'll choose this as a solution.","Q_Score":0,"Tags":"python,iis,cgi","A_Id":71010585,"CreationDate":"2022-02-06T17:46:00.000","Title":"IIS CGI could not locate my python modules","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I believe this question already shows that I am new to docker and alembic. I am building a flask+sqlalchemy app using docker and postgres. So far I am not using alembic, but I am about to plug it in and some questions came up. I will have to create a pg_trgm extension and also populate one of the tables with data I already have. Until now I have only created brand new databases using sqlalchemy for the tests. So here is what I am thinking\/doing:\n\nTo create the extension I could simple add a volume to the postgres docker service like: .\/pg_dump.sql:\/docker-entrypoint-initdb.d\/pg_dump.sql. The extension does not depend on any specific db, so a simple \"CREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA public;\" would do it, right?\n\nIf I use the same strategy to populate the tables I need a pg_dump.sql that creates the complete db and tables. To accomplish that I first created the brand new database on sqlalchemy, then I used a script to populate the tables with data I have on a json file. 
I then generated the complete pg_dump.sql and now I can place this complete .sql file on the docker service volume and when I run my docker-compose the postgres container will have the dabatase ready to go.\n\nNow I am starting with alembic and I am thinking I could just keep the pg_dump.sql to create the extensions, and have a alembic migration script to populate the empty tables (dropping the item 2 above).\n\n\nWhich way is the better way? 2, 3 or none of them? tks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":304,"Q_Id":71079732,"Users Score":0,"Answer":"Create the extension in a \/docker-entrypoint-initdb.d script (1). Load the data using your application's migration system (3).\nMechanically, one good reason to do this is that the database init scripts only run the very first time you create a database container on a given storage. If you add a column to a table and need to run migrations, the init-script sequence requires you to completely throw away and recreate the database.\nPhilosophically, I'd give you the same answer whether you were using Docker or something else. You could imagine running a database on a dedicated server, or using a cloud-hosted database. You'd have to ask your database administrator to install the extension for you, but they'd generally expect to give you credentials to an empty database and have you load the data yourself; or in a cloud setup you could imagine checking a \"install this extension\" checkbox in their console but there wouldn't be a way to load the data without connecting to the database remotely.\nSo, a migration system will work anywhere you have access to the database, and will allow incremental changes to the schema. The init script setup is Docker-specific and requires deleting the database to make any change.","Q_Score":0,"Tags":"python,docker,sqlalchemy,alembic","A_Id":71079959,"CreationDate":"2022-02-11T11:58:00.000","Title":"Use alembic migration or docker volumes to populate docker postgres database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"My requirement is to read an excel using Pyspark, while doing same getting below error.\nOr else alternatively is there any solution using Pandas to read excel and convert into Pyspark dataframe ? Any one is fine.\nlat_data=spark.read.format('com.crealytics.spark.excel').option(\"header\",\"true\").load(\"a1.xlsx\")\nerror:\nPy4JJavaError: An error occurred while calling o756.load.\n: java.lang.ClassNotFoundException: Failed to find data source: com.crealytics.spark.excel.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":71108323,"Users Score":0,"Answer":"You need to install the crealytics library. You can do it via pip:\npip install xlrd","Q_Score":0,"Tags":"python,pandas,pyspark","A_Id":71110237,"CreationDate":"2022-02-14T07:19:00.000","Title":"How to read excel xlsx file using pyspark","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the following versions as of now\n\nDjango Rest Framework is 3.12.4\nPython version is 3.9\nDjango version is 3.2.3\nPostgreSQL 13.5\n\nIs postgres 14 compatible with the above versions? 
I need to upgrade postgres to 14.\n[Edit] Sorry had to remove the link to avoid confusion\nthanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":254,"Q_Id":71129744,"Users Score":0,"Answer":"I checked and Postgress 14.2 is compatible with the following\n\nDjango Rest Framework 3.12.4\nPython version 3.9\nDjango version 3.2.3\nPsycopg 2.8.6","Q_Score":1,"Tags":"python,django,postgresql,amazon-web-services,django-rest-framework","A_Id":71327489,"CreationDate":"2022-02-15T16:11:00.000","Title":"Is postgres 14 compatible with Django3.2.3 and Django framework 3.12.4?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was working on wine data on kaggle. Where there was a column named price has values like $32, $17, $15.99, Nan\nwine_data.isnull().sum()--After applying this code, there were a lot of missing values so I wrote another code i.e.\nwine_data['designation'].fillna(wine_data['designation'].mode()[0], inplace = True)\nwine_data['varietal'].fillna(wine_data['varietal'].mode()[0], inplace = True)\nwine_data['appellation'].fillna(wine_data['appellation'].mode()[0], inplace = True)\nwine_data['alcohol'].fillna(wine_data['alcohol'].mode()[0], inplace = True)\nwine_data['price'].fillna(wine_data['price'].mode()[0], inplace = True)\nwine_data['reviewer'].fillna(wine_data['reviewer'].mode()[0], inplace = True)\nwine_data['review'].fillna(wine_data['review'].mode()[0], inplace = True)\nThen I wanted to do a correlation of alcohol with rating and price with rating but both alcohol and price column has '%' and '$' these characters.So, I applied this code.\nwine_data = wine_data.assign(alcohol_num = lambda row: row[\"alcohol\"].replace(\"%\", \"\", regex=True).astype('float'))\nwine_data = wine_data.assign(price_numbers= wine_data['price'].str.replace('$','',regex = True)).astype('float')\nIt's throwing me an error like--\ncould not convert string to float: 'J. Lohr 2000 Hilltop Vineyard Cabernet Sauvignon (Paso Robles)'\nThen I tried this code:\nwine_data = wine_data.assign(price_numbers= wine_data['price'].str.replace('$','',regex = True)).astype('int')\nIt's throwing me an error like--\ninvalid literal for int() with base 10: 'J. Lohr 2000 Hilltop Vineyard Cabernet Sauvignon (Paso Robles)'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":71139549,"Users Score":0,"Answer":"Your data is not clean. One of the elements in your price column keeps containing the string 'J. Lohr 2000 Hilltop Vineyard Cabernet Sauvignon (Paso Robles)', which is why the column cannot be converted to float, even though you did some other cleansing steps.\nYou want be a bit more structured in your data cleansing: Do one step after the other, take a look at the intermediate df, and do not try to do many cleansing steps at once with an apply() function. 
If you have a messy dataset, maybe 10 steps are required, no way you can do all of that with a single apply() call.","Q_Score":1,"Tags":"python,python-3.x,pandas,types","A_Id":71140179,"CreationDate":"2022-02-16T09:51:00.000","Title":"How to convert a datatype of a column with both integer and decimal numbers in Python?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to know if there is a possibility to create a single table containing all the JSON files from an s3 bucket, I've searched a lot and I can't find a solution for this, if anyone can help with any tips I'd appreciate it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":71144695,"Users Score":0,"Answer":"Yes it is possible but it is not clear what your intent is. If you have a bucket with a set of json files that are in a Redshift readable format and have common data that can be mapped into columns, then this is fairly straight forward. The COPY command can read all the files in the bucket and apply a common mapping to the tables columns. Is this what you want?\nOr do you have a bunch of dissimilar json files in various structures that you want to load some information from each into a Redshift table? Then you will likely want to use a Glue Crawler to inventory the jsons and load them separately into Redshift and then combine the common information into a single Redshift table.\nPlus there are many other possibilities of what you need. The bottom line is that you are asking to load many unstructured files into a structured database. There is some mapping that needs to happen but depending on what your data looks like this can be fairly simple or quite complex.","Q_Score":0,"Tags":"python,amazon-redshift","A_Id":71149643,"CreationDate":"2022-02-16T15:31:00.000","Title":"how to create a table in redshift from multiple json files in S3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written an application using Python code utilizing the Pandas and Openpyxl modules.\nSummary of my app:\nBrowse and find an excel file(original), browse and find another excel file(new).\nPress a button and update certain columns from original file with information of new file using name of item as a reference. Press save button and save the file to my computer.\nUsing my Windows machine I have made it into an .exe file and everything works perfectly fine. I am able to do everything I created it to do. I am trying to make it compatible on both Windows and MacOS. I have created a .app file using Py2App, and the app \"runs\" just fine. I am able to browse for files and so far it looks like I am able to save files. The problem is that the files I am trying to \"use\" are completely greyed out and I am unable to choose any files. 
I'm fairly new to MacOS so any advice or help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":22,"Q_Id":71178770,"Users Score":0,"Answer":"Change the file format selector.","Q_Score":0,"Tags":"python,pandas,macos-catalina,py2app,.app","A_Id":71179168,"CreationDate":"2022-02-18T19:16:00.000","Title":"I have used Py2App to create an .app file from a Python code project I am working on but when I open the app to browse files they are all greyed out?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a mssql schema with the django ORM \/ pymssql extension. I have some classes build via the inspectdb function. A lot of the Primarykeys in the tables are UUID fields \/ mssql uniqueidentifier, which the ORM inspected as CharFields with length 36.\nI am concerned now with possible duplicates for the primary keys since the tables are growing very fast.\nThe tables have a default constraint for any new primary key on the database site. So basically I have two (different) sources of UUID generation (the database server and the application server)\nHow is it possible to insert via the ORM from django performantly?\nAm I save with generating the UUIDs via pythons uuid module or do I have to ask the database everytime for a new UUID before creating a object with django?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":71209850,"Users Score":0,"Answer":"The primary key cannot be duplicated, so it will raise a \"duplicated pk duplicated\" exception. In addition, the odds of getting a duplicated uuid is quite close to 0, you will get 3 lottery prizes before you get a duplicated uuid.","Q_Score":0,"Tags":"python,sql-server,django","A_Id":71209956,"CreationDate":"2022-02-21T16:35:00.000","Title":"UUIDs with django and mssql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can't seem to figure out why I'm getting the error below for this python method in my script. 
I can't really post the full method here but it's erroring out at the return statement of the method which is:\nreturn [{'parameters': [2022-02-21 00:00:00, 'US\/Pacific', 2022-02-23 00:00:00, 'US\/Pacific']}]\npymysql.err.ProgrammingError: (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \\'\\'parameters\\': \"(\\'2022-02-21 00:00:00\\',\\'US\/Pacific\\',\\'2022-02-23 00:00:00\\',\\'US\/Pacific\\' at line 16')","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":27,"Q_Id":71253588,"Users Score":1,"Answer":"That looks like a Python statement, is it possible you're confusing Python and SQL and trying to execute Python code as an SQL statement?","Q_Score":0,"Tags":"python,mysql,pymysql","A_Id":71253758,"CreationDate":"2022-02-24T14:34:00.000","Title":"How to resolve pymysql.err.ProgrammingError?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say that on Google Cloud Storage I have bucket: bucket1 and inside this bucket I have thousands of blobs I want to rename in this way:\nOriginal blob:\nbucket1\/subfolder1\/subfolder2\/data_filename.csv\nto: bucket1\/subfolder1\/subfolder2\/data_filename\/data_filename_backup.csv\nsubfolder1, subfolder2 and data_filename.csv - they can have different names, however the way to change names of all blobs is as above.\nWhat is the most efficient way to do this? Can I use Python for that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":71308957,"Users Score":0,"Answer":"If you have a lot of rename to perform, I recommend to perform the operation concurrently (use several thread and not perform the rename sequentially).\nIndeed, you have to know how works CLoud Storage. rename doesn't exist. You can go into the Python library and see what is done: copy then delete.\nThe copy can take time if your files are large. Delete is pretty fast. But in both case, it's API call and it take time (about 50ms if you are in the same region).\nIf you can perform 200 or 500 operations concurrently, you will significantly reduce the processing time. It's easier with Go or Node, but you can do the same in Python with await key word.","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-storage,gsutil","A_Id":71315148,"CreationDate":"2022-03-01T13:08:00.000","Title":"how to efficiently rename a lot of blobs in GCS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I cannot figure out a solution for exporting data from the Teradata database in a parquet format. 
I am using the tdload and tbuild methods, which require a TPT script to be generated.\nWhat is the solution to export data as parquet files from the Teradata database?\ntdload cmd -\ntdload --SourceTdpid 192.168.xx.xx --SourceUserName dbc --SourceUserPassword dbc --SourceTable AdventureDW.FactProductInventory --TargetTextDelimiter \"|\" --FileWriterFileSizeMax 30G --TargetFilename F:\\Data\\data.parquet My_Unload_Job\nWhat changes should I make to the command to get output in parquet format?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":216,"Q_Id":71323109,"Users Score":0,"Answer":"I was able to convert the data from csv to parquet format via the dask framework. Post extraction, using the pyarrow engine, I was able to convert the large csv datasets with dask's built-in parquet function.","Q_Score":0,"Tags":"python,database,teradata,data-migration,data-extraction","A_Id":72272920,"CreationDate":"2022-03-02T12:58:00.000","Title":"Export data in parquet file in teradata","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Actually, we are deploying an elasticsearch django application in AWS EC2. Here, I need to know something about elasticsearch: will elasticsearch automatically update with updates in postgres, or do we need to use extra modules to sync elasticsearch and postgres together, so that whatever changes happen in the postgres database will also be updated in elasticsearch?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":71350496,"Users Score":0,"Answer":"Elasticsearch-dsl keeps the indexes and the db in sync","Q_Score":0,"Tags":"python,django,elasticsearch,e-commerce,elasticsearch-dsl","A_Id":71350584,"CreationDate":"2022-03-04T11:03:00.000","Title":"Do we need something to sync elasticsearch dsl to the postgres database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There is no code required to put here.\nI want to save a really long number as I am making a kind of game where the score is saved.\nBut I tested it and put 25000000000 as the score, and in mysql it saves as 2147483647.\nI also modified the limit of the integer, and set it as an integer in mysql. Any thoughts?\nIf it is under 10 digits it works; if it passes 10 it doesn't, even if I modified the limit","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":24,"Q_Id":71365925,"Users Score":3,"Answer":"Use the BIGINT datatype instead.\nPer the MySQL docs, the INT type only supports signed values between -2147483648 and 2147483647, whereas the BIGINT type supports signed values between -2^63 and 2^63 - 1 (-9223372036854775808 to 9223372036854775807).","Q_Score":0,"Tags":"python,mysql","A_Id":71365943,"CreationDate":"2022-03-05T21:01:00.000","Title":"python\/mysql not saving good integers","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to deploy a Python Flask app on an AWS EC2 (Ubuntu 20.04) instance. 
The app fetches data from an S3 bucket (in the same region as the EC2 instance) and performs some data processing.\nI prefer using s3fs to achieve the connection to my S3 bucket. However, I am unsure if this will allow me to leverage the 'free data transfer' from S3 to EC2 in the same region - or if I must use boto directly to facilitate this transfer?\nMy app works when deployed with s3fs, but I would have expected the data transfer to be much faster - so I am wondering that perhaps AWS EC2 is not able to \"correctly\" fetch data using s3fs from S3.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":71391966,"Users Score":0,"Answer":"All communication between Amazon EC2 and Amazon S3 in the same region will not incur a Data Transfer fee. It does not matter which library you are using.\nIn fact, communication between any AWS services in the same region will not incur Data Transfer fees.","Q_Score":0,"Tags":"amazon-web-services,amazon-s3,amazon-ec2,boto,python-s3fs","A_Id":71394766,"CreationDate":"2022-03-08T08:25:00.000","Title":"Can I use s3fs to perform \"free data transfer\" between AWS EC2 and S3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The HTTP triggered function app is built using Python 3.x and uses pyodbc with ODBC Driver 17 for SQL Server to communicate with the SQL server DB which has been deployed in the same resource group and the same region. The function app throws ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]Communication link failure (0) (SQLExecDirectW)') error (used logs to validate) from time to time and returns an error as the response.\nThis issue gets immediately resolved once we re-deploy the function app and keeps working for a period of time and becomes apparent again (re-deployed yesterday evening due to the issue and worked fine until re-checked in the morning and the issue was there again). This function app was working as expected until a couple of days before when we released a newer version with some improvements.\nWe have the exact same function app (the latest version) and setup (including the DB) deployed in 2 other Azure directories (the Dev and Test instances) and they are working without a hitch. The only difference is the service tier (the production version uses a premium plan while the dev and test use consumption plans).\nTried disabling the \"always on - keeping at least one function app instance running\" feature on the premium plan to verify whether it's due to a DB session issue but that didn't work as well. Also added the IP of the function app in the DB whitelist just in case (The azure resources can access the DB feature is also on so, adding the IP of the function app is not mandatory I guess) and that didn't work too.\nAny support or expertise on the subject matter would be appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":71434240,"Users Score":0,"Answer":"I believe and guess there will be a version mismatch on the ODBC driver\/connector between server and client.\nOr\nit could be related to the usage limit of the Azure SQL database in the Production Slot. 
Could you please check if there is any like compute the size of the database usage and increase it if it is near to the Quota defined!\nIf the same error comes in a while, please email the Function logs data and invocation Id along with the error details to azcommunity@microsoft.com with your subscription ID as well as your function app name as Microsoft Azure Support will do the analysis to find the root cause on your Azure Function App - SQL DB Connection Error.","Q_Score":1,"Tags":"python-3.x,azure-functions,azure-sql-database,pyodbc","A_Id":71733960,"CreationDate":"2022-03-11T06:02:00.000","Title":"Azure function app throws \"database communication link failure\" error and stops working","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a little Django app that uses PyMongo and MongoDB.\nIf I write (or update) something in the database, I have to restart the server for it to show in the web page. I'm running with 'python manage.py runserver'\nI switched to the django dummy cache but that didn't help.\nEvery database action is within an 'with MongoClient' statement.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":71472582,"Users Score":0,"Answer":"I figured it out. I read in the data in the django_tables2 class variables. So it was never refreshed...\nBangs forehead on desk...","Q_Score":0,"Tags":"python,django,mongodb,pymongo","A_Id":71484768,"CreationDate":"2022-03-14T18:30:00.000","Title":"Cache problems in Django\/PyMongo\/MongoDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i run this command pip3 install mysql-connector in Command Prompt and its show this:\nC:\\Users\\pc>pip3 install mysql-connector Requirement already satisfied: mysql-connector in c:\\users\\pc\\appdata\\local\\prog rams\\python\\python38\\lib\\site-packages (2.2.9).\nbut i still get ModuleNotFoundError: No module named 'mysql' when i import mysql.connector in spyder and pycahrm.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":71494597,"Users Score":0,"Answer":"In Spyder go to Tools > Preferences > Python interpreter, select Use the following Python interpreter: and choose the Python .exe from your other installation.\nThere is probably a similar setting in Pycharm\u2019s preferences.","Q_Score":1,"Tags":"python,mysql,pip,pycharm,spyder","A_Id":71499958,"CreationDate":"2022-03-16T09:31:00.000","Title":"Receive module not found error after i pip mysql.connector","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a Django Project, and I have all my datas in .sql files.\nI want to insert all of them in my Database.\nCan I use the python shell to do it ? 
Or should I use another method?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":71499891,"Users Score":0,"Answer":"If you have your data in a file, or can convert it into csv or xml, then you can write a script that iterates through the data file, stores each record in variables mapped to your Django model fields, and then uses bulk_create","Q_Score":0,"Tags":"python,django,database","A_Id":71522975,"CreationDate":"2022-03-16T15:31:00.000","Title":"Insert data from SQL file in Django DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a Python script with CX_ORACLE which will take about 20K files and upload them into a BLOB column.\nThey are PDFs, CSVs, TXTs, and JPGs.\nI'm just not understanding how I can put both text and binary files into the BLOB column.","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":30,"Q_Id":71561770,"Users Score":6,"Answer":"Remember - all files are 'binary' files. A text file is just a binary file with an encoding that allows it to be represented as text. An ASCII or UTF encoded file is still a binary file under the hood.\nA BLOB column holds an arbitrary binary sequence, with no assumptions about encoding. So you can pass any binary sequence in as input.","Q_Score":0,"Tags":"python,oracle,binary,cx-oracle","A_Id":71561828,"CreationDate":"2022-03-21T17:34:00.000","Title":"Is there a difference in inserting a txt\/csv file or an image into a BLOB column in Oracle?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to working with databases and couldn't find any relevant answers for this.\nWhat are the uses of SQLAlchemy over MYSQL CONNECTOR for Python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":71569879,"Users Score":0,"Answer":"I do not have much experience with MYSQL CONNECTOR for Python. However, from what I know, SQLAlchemy primarily uses ORM (Object-Relational Mapping) in order to abstract the details of handling the database. This can help avoid errors sometimes (and also possibly introduce others). You might want to have a look at the ORM technique and see if it is for you (but don't use it as a way to avoid learning SQL). Generally, ORMs tend not to be as scalable as raw SQL either.","Q_Score":0,"Tags":"python,mysql,database","A_Id":71570288,"CreationDate":"2022-03-22T09:43:00.000","Title":"Purpose of SQLAlchemy over MYSQL CONNECTOR PYTHON","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python 3.9 and MongoDB 5.0.6.\nI have a function that has an insert_many call inside. I want to wait until the data is inserted into the timeseries collection before returning. 
How can I do that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":71578060,"Users Score":1,"Answer":"By default pymongo is synchronous, so the insert_many() function call will not return until the data is inserted, so there's no specific need to wait.\nThis assumes you're not using the motor async driver or trying to read from a secondary replica.","Q_Score":1,"Tags":"python,python-3.x,mongodb,pymongo,pymongo-3.x","A_Id":71578871,"CreationDate":"2022-03-22T19:43:00.000","Title":"How to wait until data is inserted into a mongodb timeseries collection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to connect my python scripts to an MySQL or MariaDB Server on my RaspberryPi4.\nMy python script right now just contains import mysql.connector. But when I try to start it via sudo python3 startdb.py I just get import mysql.connector ModuleNotFoundError: No module named 'mysql' as an error.\nI get an other error, when I start the script via sudo python startdb.py: import mysql.connector ImportError: No module named mysql.connector.\nI searched for a solution on many sites or forums. I mostly just found various versions of pip install mysql-connector-python (also with pip3, mysql-connector-python-rf or mysql-connector) to run but none of them worked for me. The only difference I recognized is that I previously got the error ModuleNotFoundError with both sudo python and sudo python3, but now I only get it with sudo python3.\nDoes anyone know how to solve this?\nCould the fact that my script isn't in a sub-directory of \/home\/pi\/, but instead of \/home\/, be the problem?\nEdit: I just tried executing the script via the desktop mode using my mouse and just clicking on run and it worked. But when I'm using the command line in desktop mode or with a SSH session it doesn't work.\nAnother Edit: It looks like when I'm starting the script without sudo it'll work just fine. Don't actually know why's that, but I'm good for now. But would be very interesting to know and understand why the sudo makes it \"crash\".\nThanks and happy to hear some solutions :D\nCooki","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":104,"Q_Id":71631409,"Users Score":0,"Answer":"raspbian give user mode in running, just in Desktop gives some permission to user for run app as root to access all necessary attributes , use sudo with all initial steps when you download and install project package's","Q_Score":1,"Tags":"python,raspberry-pi,raspbian,mysql-connector,raspberry-pi4","A_Id":72359440,"CreationDate":"2022-03-26T20:01:00.000","Title":"ModuleNotFoundError: No module named 'mysql' with mysql-connector-python already installed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am cleaning up a SQL script to replace escaped apostrophes \\' with '' as it is in MySQL syntax and I need it to work in MSSQL but no matter what I try it doesn't work. 
How do you replace escaped apostrophes with two apostrophes in a file with Python?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":26,"Q_Id":71748225,"Users Score":1,"Answer":"It turns out replace(\"\\'\", \"''\") works. I'm sure I tried this but it must have been something else that stopped it from working initially.","Q_Score":0,"Tags":"python,sql,escaping,apostrophe","A_Id":71774782,"CreationDate":"2022-04-05T08:05:00.000","Title":"Replace Escaped Apostrophe in SQL File with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am pulling a list from a sharepoint site using shareplum. The output is saved locally in an excel file. There is only one sheet created. I need to set the name of the worksheet to the date user will run the script.\nSo, if i run the script today in order to pull the data from the sharepoint, I need the title to be today's date. The list on the sharepoint is updated dynamically, so i need to know what the excel file represents (the date where it was created).\nI tried:\nws.title = datetime.now()\nand\nws.title = date.today()\nUnfortunately this did not do the trick","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":71777750,"Users Score":0,"Answer":"Try use datetime.now().strftime(\"%m\/%d\/%Y, %H:%M:%S\")","Q_Score":0,"Tags":"python,openpyxl","A_Id":71777819,"CreationDate":"2022-04-07T07:09:00.000","Title":"How to set ws.title to today's date with openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to append records to a sqlite db file in a table, first checking if the db file exists and then checking if the table exists. If not create the db and table file dynamically.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":71792445,"Users Score":0,"Answer":"Expanding on to answer by @Umang, you could check for the table's existence using a query as SELECT count(*) FROM sqlite_master WHERE type='table' AND name='table_name';.","Q_Score":1,"Tags":"python,sqlite","A_Id":71792573,"CreationDate":"2022-04-08T06:19:00.000","Title":"How to check if sqlite db file exists in directory and also check if a table exists using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to append records to a sqlite db file in a table, first checking if the db file exists and then checking if the table exists. 
If not create the db and table file dynamically.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":76,"Q_Id":71792445,"Users Score":1,"Answer":"I hope you are using sqlite3 library, in that if you use connect method it will do exactly what you want.find the db or else create it.","Q_Score":1,"Tags":"python,sqlite","A_Id":71792491,"CreationDate":"2022-04-08T06:19:00.000","Title":"How to check if sqlite db file exists in directory and also check if a table exists using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I determine the exact PostgreSQL driver used by PyQt6? QPSQL is a Qt thingy for which there is no documentation as to configuration options, so I'm guessing it's just a wrapper for a real driver.\nThere is no such thing as a \"PostgreSQL Driver\", as shown in the QSqlDatabase doc.\nPostgres version 12 docs lists the following \"external projects\" as possible drivers: DBD::Pg, JDBC, libpqxx, node-postgres, Npgsql, pgtcl, pgtclng, pq, psqlODBC, psycopg. There are also two native drivers: libpq, ECPG.\nThough not listed in the version 12 docs, there are several variations of ODBC, divided into single-tier and multi-tier types.\nA Postgres doc states that psqlODBC is the \"official PostgreSQL ODBC driver\", but that doesn't mean that PyQt6 is using it.\nPossibly Qt won't commit to a specific driver because they may want to change driver implementation without notice. Nevertheless, I'd like to know what I have so I can tweak its options. Even better, I'd like to use a different driver if I don't like the one Qt provides. Qt has a section, \"Compile Qt with a specific driver\"; that should not be necessary for a PyQt6 programmer, and it is not clear whether such a compiled thing would find its way into PyQt6 with the static method, registerSqlDriver(). The QSqlDriver doc has a bunch of enums used with the hasFeature() method; this is useful, but it's not the same as manipulating driver parameters. The JDBC driver has a whole raft of options which are enumerated in the Postgres docs; I'd like to be able to retrieve a similar list for whatever driver Qt implements.\nAny help, please.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":71813171,"Users Score":0,"Answer":"Qt uses libpq driver for qsqlpsql plugin.","Q_Score":0,"Tags":"python,driver,pyqt6","A_Id":71818793,"CreationDate":"2022-04-10T01:41:00.000","Title":"What is the exact PostgreSQL driver used by PyQt6?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a catalog of OBIEE reports which include many BI Publisher Reports. I want the SQL queries (and eventually the list of physical tables) used in the data models of all the BIP Reports in my catalog. I don't want to do it by manually going into each data model as there are hundreds of BIP reports. 
Is there a way to do that?\nRelated to that, we are looking into analyzing all the XML files for the reports through a python script. Is there a way I can extract the SQL queries from an XML file with or without using a Python script?\nAny insight would be appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":132,"Q_Id":71839671,"Users Score":0,"Answer":"The BI Publisher base tables all start with XDO. You can query the ALL_OBJECTS table to list all the XDO tables.\nCheck the XDO_DS_DEFINITIONS_B table for the data definitions.","Q_Score":0,"Tags":"python,xml,obiee,bi-publisher","A_Id":71849522,"CreationDate":"2022-04-12T08:48:00.000","Title":"Extracting SQL Queries of all Oracle BI Publisher Reports","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How to convert an existing postgresql db into a Microsoft Access db with python?\nI want to convert my postgresql db into a Microsoft Access db.\nThere are many possible solutions, like transferring table by table and, inside the tables, row by row.\nBut which of these solutions might be the best in terms of performance?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":71845806,"Users Score":2,"Answer":"Install the ODBC driver and link the tables from PostgreSQL\nMark the linked tables and choose Convert to local table\n(Optional) Go to Database Tools, Access Database, and select to split the database to have the tables in an external Access database","Q_Score":0,"Tags":"python,postgresql,ms-access","A_Id":71846834,"CreationDate":"2022-04-12T15:58:00.000","Title":"convert postgresql db into Microsoft Access db with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing my bachelor thesis on a project with a massive database that tracks around 8000 animals, three times a second. After a few months, we now have approx 127 million entries, and each row includes a column with an array of 1000-3000 entries that holds the coordinates of every animal that was tracked in that square at that moment. All of that sits in a SQL database that now easily exceeds 2 TB in size.\nTo export the data and analyse the moving patterns of the animals, they did it online over PHPMyAdmin as a csv export that would take hours to finish and break down almost every time.\nI wrote them a python (they wanted me to use python) script with mysql-connector-python that will fetch the data for them automatically. The problem is, since the database is so massive, one query can take minutes or technically even hours to complete. 
(downloading a day of tracking data would be 3*60*60*24 entries)\nThe moment anything goes wrong (connection fails, computer is overloaded etc) the whole query is closed and it has to start all over again cause its not cached anywhere.\n\nI then rewrote the whole thing as a class that will fetch the data by using smaller multithreaded queries.\n\nI start about 5-7 Threads that each take a connection out of a connection pool, make the query, write it in a csv file successively and put the connection back in the pool once done with the query.\nMy solution works perfectly, the queries are about 5-6 times faster, depending on the amount of threads I use and the size of the chunks that I download. The data gets written into the file and when the connection breaks or anything happens, the csvfile still holds all the data that has been downloaded up to that point.\nBut on looking at solutions how to improve my method, I can find absolutely nothing about a similar approach and no-one seems to do it that way for large datasets.\n\n\n\nWhat am I missing? Why does it seem like everyone is using a single-query approach to fetch their massive datasets, instead of splitting it into threads and avoiding these annoying issues with connection breaks and whatnot?\nIs my solution even usable and good in a commercial environment or are there things that I just dont see right now, that would make my approach useless or even way worse?\nOr maybe it is a matter of the programming language and if I had used C# to do the same thing it wouldve been faster anyways?\n\nEDIT:\nTo clear some things up, I am not responsible for the database. While I can tinker with it since I also have admin rights, someone else that (hopefully) actually knows what he is doing, has set it up and writes the data. My Job is only to fetch it as simple and effective as possible. And since exporting from PHPMyAdmin is too slow and so is a single query on python for 100k rows (i do it using pd.read_sql) I switched to multithreading. So my question is only related to SELECTing the data effectively, not to change the DB.\nI hope this is not becoming too long of a question...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":71857095,"Users Score":0,"Answer":"There are many issues in a database of that size. We need to do the processing fast enough so that it never gets behind. (Once it lags, it will keel over, as you see.)\n\nIngestion. It sounds like a single client is receiving 8000 lat\/lng values every 3 seconds, then INSERTing a single, quite wide row. Is that correct?\nWhen you \"process\" the data, are you looking at each of the 8000 animals? Or looking at a selected animal? Fetching one out of a lat\/lng from a wide row is messy and slow.\nIf the primary way things are SELECTed is one animal at a time, then your matrix needs to be transposed. That will make selecting all the data for one animal much faster, and we can mostly avoid the impact that Inserting and Selecting have on each other.\nAre you inserting while you are reading?\nWhat is the value of innodb_buffer_pool_size? You must plan carefully with the 2TB versus the much smaller RAM size. Depending on the queries, you may be terribly I\/O-bound and maybe the data structure can be changed to avoid that.\n\"...csv file and put it back...\" -- Huh? Are you deleting data, then re-inserting it? That sees 'wrong'. And very inefficient.\nDo minimize the size of every column in the table. How big is the range for the animals? Your backyard? 
The Pacific Ocean? How much precision is needed in the location? Meters for whales; millimeters for ants. Maybe the coordinates can be scaled to a pair of SMALLINTs (2 bytes, 16-bit precision) or MEDIUMINTs (3 bytes each)?\n\nI haven't dwelled on threading; I would like to wait until the rest of the issues are ironed out. Threads interfere with each other to some extent.\nI find this topic interesting. Let's continue the discussion.","Q_Score":1,"Tags":"python,mysql,multithreading","A_Id":71867063,"CreationDate":"2022-04-13T11:55:00.000","Title":"Is it useful to multithread sql queries to fetch data from a large DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to create a new table where selected columns combined in one column then append new rows based on the cell value across the selected columns for example,\n\n\n\n\nID\nJan\nFeb\n\n\n\n\n11\nDoing\nCompleted\n\n\n12\nCompleted\n-\n\n\n13\n-\nCompleted\n\n\n14\nDoing\nDoing\n\n\n\n\nI want to convert the above table into this table below\n\n\n\n\nID\nStatus\n\n\n\n\n11\nDoing\n\n\n11\nCompleted\n\n\n12\nCompleted\n\n\n13\nCompleted\n\n\n14\nDoing\n\n\n14\nDoing\n\n\n\n\nI would be thankful if anyone can help me to solve this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":71879016,"Users Score":0,"Answer":"I figured out one solution in Power Query\n1st - Merge the selected columns by clicking Merge Columns, choose comma as the separator. Name the column as Status.\n2nd - Select the merged column (Status) then click Split column choose by delimiter. Choose comma as the delimiter. Split at choose Each occurence of the delimiter click Advanced options and choose rows","Q_Score":0,"Tags":"python,excel","A_Id":71881504,"CreationDate":"2022-04-15T00:58:00.000","Title":"Append new rows based on cell values across multiple columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can the money type that PostgreSQL offers be robustly parsed, to extract both the value and the currency symbol? (In Python, but something a bit non-language specific is also welcome)\nThe problems I think are that various components can change. e.g currency symbol can vary, as its position, as well as the symbol for what I would call decimal point, and maybe even the negative symbol...\nContext: I'm writing a PostgreSQL adapter for Python, and wondering whether to parse money output, or leave it as a string.\nHere is a list of all(?) 148 possible ways 12345.67 and -12345.67 can be format on my system based on the lc_monetary setting. (There are more lc_monetary possibilities but I've removed values that would duplicate output in this list)\n\n\n\n\nlc_monetary\n12345.67\n-12345.67\n\n\n\n\naa_DJ.iso88591\n$12 345.67\n-$12 345.67\n\n\naa_ER.utf8\n$ 12,346\n-$ 12,346\n\n\naa_ET.utf8\n$12,345.67\n-$12,345.67\n\n\naf_ZA.iso88591\nR12,345.67\n-R12,345.67\n\n\nan_ES.iso885915\n12.345,67 \u20ac\n-12.345,67 \u20ac\n\n\nar_AE.iso88596\n\u062f.\u0625. 12,345.670\n\u062f.\u0625. 12,345.670-\n\n\nar_BH.iso88596\n\u062f.\u0628. 12,345.670\n\u062f.\u0628. 12,345.670-\n\n\nar_DZ.iso88596\n\u062f.\u062c. 12,345.670\n\u062f.\u062c. 
12,345.670-\n\n\nar_EG.iso88596\n\u062c.\u0645. 12,345.670\n\u062c.\u0645. 12,345.670-\n\n\nar_IN.utf8\n\u20b9 12,345.67\n-\u20b9 12,345.67\n\n\nar_IQ.iso88596\n\u062f.\u0639. 12,345.670\n\u062f.\u0639. 12,345.670-\n\n\nar_JO.iso88596\n\u062f.\u0623. 12,345.670\n\u062f.\u0623. 12,345.670-\n\n\nar_KW.iso88596\n\u062f.\u0643. 12,345.670\n\u062f.\u0643. 12,345.670-\n\n\nar_LB.iso88596\n\u0644.\u0644. 12,345.670\n\u0644.\u0644. 12,345.670-\n\n\nar_LY.iso88596\n\u062f.\u0644. 12,345.670\n\u062f.\u0644. 12,345.670-\n\n\nar_MA.iso88596\n\u062f.\u0645. 12,345.670\n\u062f.\u0645. 12,345.670-\n\n\nar_OM.iso88596\n\u0631.\u0639. 12,345.670\n\u0631.\u0639. 12,345.670-\n\n\nar_QA.iso88596\n\u0631.\u0642. 12,345.670\n\u0631.\u0642. 12,345.670-\n\n\nar_SA.iso88596\n12,345.67 \u0631\u064a\u0627\u0644\n-12,345.67 \u0631\u064a\u0627\u0644\n\n\nar_SD.iso88596\n\u062c.\u0633. 12,345.670\n\u062c.\u0633. 12,345.670-\n\n\nar_SY.iso88596\n\u0644.\u0633. 12,345.670\n\u0644.\u0633. 12,345.670-\n\n\nar_TN.iso88596\n\u062f.\u062a. 12,345.670\n\u062f.\u062a. 12,345.670-\n\n\nar_YE.iso88596\n\u0631.\u064a. 12,345.670\n\u0631.\u064a. 12,345.670-\n\n\nayc_PE.utf8\nS\/ 12,345.67\n-S\/ 12,345.67\n\n\naz_AZ.utf8\n12 345.67 man.\n-12 345.67 man.\n\n\nbe_BY.cp1251\n12 345.67 \u0440\u0443\u0431\n-12 345.67 \u0440\u0443\u0431\n\n\nbe_BY.utf8@latin\n12 345.67 Rub\n-12 345.67 Rub\n\n\nbem_ZM.utf8\nK12,345.67\n-K12,345.67\n\n\nber_MA.utf8\n\u2d37.\u2d4e. 12,345.670\n\u2d37.\u2d4e. 12,345.670-\n\n\nbg_BG.cp1251\n12 345,67 \u043b\u0432\n-12 345,67 \u043b\u0432\n\n\nbn_BD.utf8\n\u09f3 12,345.67\n-\u09f3 12,345.67\n\n\nbn_IN.utf8\n\u20b9 1,23,45.67\n-\u20b9 1,23,45.67\n\n\nbo_CN.utf8\n\uffe512,345.67\n\uffe5-12,345.67\n\n\nbr_FR.iso88591\n12 345,67 EUR\n-12 345,67 EUR\n\n\nbr_FR.iso885915@euro\n12 345,67 \u20ac\n-12 345,67 \u20ac\n\n\nbs_BA.iso88592\nKM 12 345,67\n-KM 12 345,67\n\n\nca_AD.iso885915\n\u20ac 12.345,67\n-\u20ac 12.345,67\n\n\nca_ES.iso88591\nEUR 12.345,67\n-EUR 12.345,67\n\n\ncrh_UA.utf8\n12 345.67 gr\n-12 345.67 gr\n\n\ncs_CZ.iso88592\n12 345,67 K\u010d\n-12 345,67 K\u010d\n\n\ncsb_PL.utf8\n12.345,67 z\u0142\n-12.345,67 z\u0142\n\n\ncv_RU.utf8\n12 345.67 t\n-12 345.67 t\n\n\ncy_GB.iso885914\n\u00a312,345.67\n-\u00a312,345.67\n\n\nda_DK.iso88591\nkr 12.345,67\nkr -12.345,67\n\n\nde_AT.iso88591\nEUR 12 345,67\n-EUR 12 345,67\n\n\nde_AT.iso885915@euro\n\u20ac 12 345,67\n-\u20ac 12 345,67\n\n\nde_BE.iso88591\nEUR 12.345,67\nEUR- 12.345,67\n\n\nde_BE.iso885915@euro\n\u20ac 12.345,67\n\u20ac- 12.345,67\n\n\nde_CH.iso88591\nFr. 12'345.67\nFr.- 12'345.67\n\n\nde_DE.iso88591\n12.345,67 EUR\n-12.345,67 EUR\n\n\ndv_MV.utf8\n\u0783. 
12,345.67\n-\u0783.12,345.67\n\n\ndz_BT.utf8\n\u0f51\u0f44\u0f74\u0f63\u0f0b\u0f40\u0fb2\u0f58\u0f0b 12,345.670\n\u0f51\u0f44\u0f74\u0f63\u0f0b\u0f40\u0fb2\u0f58\u0f0b- 12,345.670\n\n\nel_CY.iso88597\n12.345,67\u20ac\n-\u20ac12.345,67\n\n\nen_BW.iso88591\nPu12,345.67\n-Pu12,345.67\n\n\nen_DK.iso88591\n\u00a412.345,67\n-\u00a412.345,67\n\n\nen_HK.iso88591\nHK$12,345.67\n(HK$12,345.67)\n\n\nen_IE.iso88591\nEUR12,345.67\n-EUR12,345.67\n\n\nen_IE.iso885915@euro\n\u20ac12,345.67\n-\u20ac12,345.67\n\n\nen_NG.utf8\n\u20a612,345.67\n-\u20a612,345.67\n\n\nen_PH.iso88591\nPhp12,345.67\n(Php12,345.67)\n\n\nen_SG.iso88591\n$12,345.67\n($12,345.67)\n\n\nen_ZW.iso88591\nZ$12,345.67\n-Z$12,345.67\n\n\nes_AR.iso88591\n$ 12.345,67\n-$ 12.345,67\n\n\nes_BO.iso88591\n$b 12.345,67\n-$b 12.345,67\n\n\nes_CR.iso88591\nC= 12 345,67\n-C= 12 345,67\n\n\nes_CR.utf8\n\u20a1 12 345,67\n-\u20a1 12 345,67\n\n\nes_CU.utf8\n12 345,67 $\n-12 345,67 $\n\n\nes_DO.iso88591\n$ 12,345.67\n-$ 12,345.67\n\n\nes_GT.iso88591\nQ 12,345.67\n-Q 12,345.67\n\n\nes_HN.iso88591\nL. 12,345.67\n-L. 12,345.67\n\n\nes_NI.iso88591\nC$ 12,345.67\n-C$ 12,345.67\n\n\nes_PA.iso88591\nB\/ 12,345.67\n-B\/ 12,345.67\n\n\nes_PY.iso88591\nGs. 12.345,67\n-Gs. 12.345,67\n\n\nes_SV.iso88591\nC= 12,345.67\n-C= 12,345.67\n\n\nes_SV.utf8\n\u20a1 12,345.67\n-\u20a1 12,345.67\n\n\nes_VE.iso88591\nBs. 12.345,67\n-Bs. 12.345,67\n\n\net_EE.iso88591\nEUR 12 345,67\n-EUR 12 345,67\n\n\net_EE.iso885915\n\u20ac 12 345,67\n-\u20ac 12 345,67\n\n\neu_ES.iso885915@euro\n\u20ac 12.346\n-\u20ac 12.346\n\n\nfa_IR.utf8\n12\u066c346 \u0631\u06cc\u0627\u0644\n-12\u066c346 \u0631\u06cc\u0627\u0644\n\n\nff_SN.utf8\n12,345.67 CFA\n-12,345.67 CFA\n\n\nfi_FI.iso88591\n12 345,67 EUR\n-12 345,67 EUR\n\n\nfi_FI.iso885915@euro\n12 345,67 \u20ac\n-12 345,67 \u20ac\n\n\nfil_PH.utf8\nPhP12,345.67\n-PhP 12,345.67\n\n\nfr_CA.iso88591\n12 345,67 $\n(12 345,67 $)\n\n\nfy_NL.utf8\n\u20ac 12 345,67\n\u20ac 12 345,67-\n\n\ngu_IN.utf8\n+\u20b9 12,345.67\n-\u20b9 12,345.67\n\n\nhe_IL.iso88598\n\u05e9\u05d7 12,345.67\n\u05e9\u05d7 12,345.67-\n\n\nhr_HR.iso88592\nKn 12 345,67\n-Kn 12 345,67\n\n\nht_HT.utf8\n12 345,67 g\n-12 345,67 g\n\n\nhu_HU.iso88592\n12.345,67 Ft\n-12.345,67 Ft\n\n\nhy_AM.utf8\n\u053412,345.67\n-\u053412,345.67\n\n\nid_ID.iso88591\nRp12.345,67\n-Rp12.345,67\n\n\nis_IS.iso88591\n12.346 kr\n-12.346 kr\n\n\nja_JP.eucjp\n\uffe512,346\n\uffe5-12,346\n\n\nka_GE.utf8\n\u10da12.345,67\n-\u10da12.345,67\n\n\nkk_KZ.utf8\n12 345.67 \u0442\u0433\n-12 345.67 \u0442\u0433\n\n\nkm_KH.utf8\n12,345.67\u17db\n-12,345.67\u17db\n\n\nko_KR.euckr\n\uffe612,346\n\uffe6-12,346\n\n\nku_TR.iso88599\n12.345,67 TL\n-12.345,67 TL\n\n\nky_KG.utf8\n12 345.67 \u0441\u043e\u043c\n-12 345.67 \u0441\u043e\u043c\n\n\nlg_UG.iso885910\n12,345.67\/-\n-12,345.67\/-\n\n\nlo_LA.utf8\n\u20ad 12,345.67\n\u20ad -12,345.67\n\n\nlt_LT.iso885913\n12.345,67 Lt\n-12.345,67 Lt\n\n\nlv_LV.iso885913\nLs 12 345,67\n-Ls 12 345,67\n\n\nmg_MG.iso885915\n12 345,67 AR\n-12 345,67 AR\n\n\nmhr_RU.utf8\n12 345.67 \u0422\u0415\u04a4\n-12 345.67 \u0422\u0415\u04a4\n\n\nmk_MK.iso88595\n12 345,67 \u0434\u0435\u043d\n-12 345,67 \u0434\u0435\u043d\n\n\nmn_MN.utf8\n12 345.67 \u20ae\n-12 345.67 \u20ae\n\n\nms_MY.iso88591\nRM12,345.67\n(RM12,345.67)\n\n\nmt_MT.iso88593\n12,345.67EUR\n(12,345.67EUR)\n\n\nmt_MT.utf8\n12,345.67\u20ac\n(12,345.67\u20ac)\n\n\nmy_MM.utf8\n12,345.67Ks\n-12,345.67Ks\n\n\nnan_TW.utf8@latin\nNT$12,345.67\n-NT$12,345.67\n\n\nnb_NO.iso88591\nkr12 345,67\nkr-12 345,67\n\n\nne_NP.utf8\n\u0930\u0942 12,345.67\n-\u0930\u0942 
12,345.67\n\n\nnl_AW.utf8\nAfl. 12 345,67\nAfl. 12 345,67-\n\n\nnl_BE.iso88591\nEUR 12 345,67\nEUR 12 345,67-\n\n\nnn_NO.iso88591\nkr 12 345,67\n-kr12 345,67\n\n\nom_KE.iso88591\nKsh12,345.67\n-Ksh12,345.67\n\n\nos_RU.utf8\n12 345.67 \u0441\u043e\u043c\n-12 345.67 \u0441\u043e\u043c\n\n\npa_PK.utf8\nRs 12,345.67\n-Rs12,345.67\n\n\npap_AN.utf8\nf 12 345,67\nf 12 345,67-\n\n\nps_AF.utf8\n12\u066c346 \u0627\u0641\u063a\u0627\u0646\u06cd\n-12\u066c346 \u0627\u0641\u063a\u0627\u0646\u06cd\n\n\npt_BR.iso88591\nR$ 12.345,67\n-R$ 12.345,67\n\n\nro_RO.iso88592\nLei 12.345,67\n-Lei 12.345,67\n\n\nru_RU.iso88595\n12 345.67 \u0440\u0443\u0431\n-12 345.67 \u0440\u0443\u0431\n\n\nru_UA.koi8u\n12 345.67 \u0433\u0440\n-12 345.67 \u0433\u0440\n\n\nrw_RW.utf8\n12.345,67 Frw\n-12.345,67 Frw\n\n\nsd_IN.utf8@devanagari\n\u0930\u0941 12,345.67\n-\u0930\u0941 12,345.67\n\n\nse_NO.utf8\nru12.345,67\nru-12.345,67\n\n\nsi_LK.utf8\n\u20a8 12,345.67\n-\u20a8 12,345.67\n\n\nsq_AL.iso88591\nLek12.345,670\n-Lek12.345,670\n\n\nsq_MK.utf8\n12 345,67 den\n-12 345,67 den\n\n\nsr_RS.utf8\n12.345,67 \u0434\u0438\u043d\n-12.345,67 \u0434\u0438\u043d\n\n\nsr_RS.utf8@latin\ndin 12.346\n-din 12.346\n\n\nsv_SE.iso88591\n12 345,67 kr\n-12 345,67 kr\n\n\nsw_TZ.utf8\nTSh12,345.67\n-TSh12,345.67\n\n\nte_IN.utf8\n\u20b912,345.67\n-\u20b912,345.67\n\n\nth_TH.utf8\n\u0e3f 12,345.67\n\u0e3f -12,345.67\n\n\ntk_TM.utf8\n12,345.67 MANAT\n-12,345.67 MANAT\n\n\ntt_RU.utf8@iqtelif\n12\u2002345.67 sum\n-12\u2002345.67 sum\n\n\nuk_UA.koi8u\n12 345,67\u0433\u0440\u043d.\n-12 345,67 \u0433\u0440\u043d.\n\n\nuz_UZ.iso88591\nso'm12,345.67\n-so'm12,345.67\n\n\nuz_UZ.utf8@cyrillic\n\u0441\u045e\u043c12,345.67\n-\u0441\u045e\u043c12,345.67\n\n\nvi_VN.utf8\n12.346\u20ab\n-\u20ab12.346\n\n\nwo_SN.utf8\n12 345,67 CFA\n-12 345,67 CFA\n\n\nyi_US.cp1255\n$ 12,345.67\n$ 12,345.67-","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":71908140,"Users Score":0,"Answer":"Based on people's comments, my answer is:\n\nDon't\n\nEither don't use the type, or at most use it as output-only, to not be parsed.","Q_Score":0,"Tags":"python,postgresql,parsing,currency","A_Id":71908866,"CreationDate":"2022-04-18T06:08:00.000","Title":"How to robustly parse PostgreSQL money (in Python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are best practice approaches of properly getting messages from Kafka and generating INSERT\/UPDATE\/DELETE statements for relational dbs using Python?\nSay, I have events that Create Entity\/Update Entity\/Delete Entity and I want those messages to be transformed into relevant SQL script.\nIs there any suggestion rather than writing serialization manually?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":71937526,"Users Score":1,"Answer":"There is no way around deserializing the record from Kafka and serializing into the appropriate database query. 
I would not recommend writing literal DDL statements as Kafka records and running those directly against a database client.\nAs commented, you can instead produce data in a supported format (JSONSchema, Avro, or Protobuf being the most common \/ well-documented) from Kafka Connect (optionally using a Schema Registry), then use a Sink Connector for your database.","Q_Score":0,"Tags":"python,sql,apache-kafka","A_Id":71944221,"CreationDate":"2022-04-20T09:47:00.000","Title":"Serialize Kafka message into DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently working with cx_oracle\nHere with SELECT statements I am able to use the fetchall() function to get rows.\nBut how to get the outputs for queries that fall under Data Definition Language (DDL) category.\nFor example, after executing a GRANT statement with cursor.execute(), the expected output assuming the query is valid would be,\n\"GRANT executed successfully\"\nBut how do I get this with cx_oracle, Python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":71941855,"Users Score":1,"Answer":"The answer is that you have print it yourself, which is what SQL*Plus does.\nDDL statements are statements not queries because they do not return data. They return a success or error condition to the tool that executed them, which can then print any message. In your case the tool is cx_Oracle. There isn't a way to get the type (GRANT, CREATE etc) of the statement automatically in cx_Oracle. Your application can either print a generic message like 'statement executed successfully', or you can extract the first keyword(s) from the SQL statement so you can print a message like SQL*Plus does.","Q_Score":0,"Tags":"python,ddl,cx-oracle","A_Id":71946901,"CreationDate":"2022-04-20T14:54:00.000","Title":"how to get the output of DDL commands Python cx_oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project which will require the user to enter the UI and create the table name on their own. Inputting table name and columns (column name, type, and other info).\nAlthough it's easy to parametrize standard queries (i.e. insert\/replace\/update), I couldn't find ANY resource on how to parametrize DDL statements such as CREATE. Nor libraries that can handle that easily.\nI was planning to apply (1) controls on the UI and (2) controls on the API I am going to call to run this DDL. But do you have any better idea\/resource on how to get a CREATE statement from i.e. a JSON input? I am working on redshift.. Cheers!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":71979203,"Users Score":0,"Answer":"I\u2019ve used jinja2 templates and json config for this type of process. It integrates with python and can be used standalone. Just template your create table statements and apply the json config.","Q_Score":0,"Tags":"python,amazon-redshift,ddl,create-table","A_Id":71979484,"CreationDate":"2022-04-23T11:27:00.000","Title":"Create DDL statement from JSON file. 
How to avoid\/minimize SQL injection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have python code that depends on specific libraries like selenium and interaction with google chrome to extract data from the web.\nmy code works fine but i need a lot of records to do analysis, so i can't leave my computer on, to run the script for a month.\nThat's why I thought of running the script in a cloud service like aws but I don't have a clear idea of \u200b\u200bhow to do it, because I need the script to not stop\nand I would rather not have to pay for it (or at least not that much money)\nThat said, my code opens a website, looks for a specific text data and saves it in a csv document.\nI thank you in advance for the help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":71998750,"Users Score":1,"Answer":"You will have to check the terms of each cloud service as many do have downtime\/restarts on their free tiers.\nThe kind of task you're describing shouldn't be very resource hungry, so you may be better off setting up your own server using a Raspberry Pi or similar.","Q_Score":0,"Tags":"python,selenium,web-scraping,automation,cloud","A_Id":71998861,"CreationDate":"2022-04-25T11:23:00.000","Title":"Run pythom code in cloud without stopping","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a single django instance about to hit its limits in terms of throughput. Id like to make a second instance and start scaling horizontally.\nI understand when dealing with database read replicas there is some minimal django configuration necessary, but in the instance of only using a single database: is there anything I need to do, or anything I should be careful of when adding a second instance?\nFor the record, I use render.com (it\u2019s similar to heroku) and their scaling solution just gives us a slider and will automatically move an instance up or down. Is there any sort of configuration I need to do with django + gunicorn + uvicorn? It will automatically sit behind their load balancer as well.\nFor reference my stack is:\n\nDjango + DRF\nPostgres\nRedis for cache and broker\nDjango-q for async\nCloudflare","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":72134348,"Users Score":1,"Answer":"You can enable autoscaling on Render and it will automatically scale your instances up (and down) based on your application's average CPU and\/or memory utilization across all instances. 
You do not need to change your Django app.","Q_Score":1,"Tags":"python,django","A_Id":72134830,"CreationDate":"2022-05-05T22:44:00.000","Title":"Django: Horizontal scaling with a single database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"TLDR\nI am making a REST Session management solution for industrial automation purposes and need to automatically log into devices to perform configurations.\nNOTE:\nThese devices are 99% of the time going to be isolated to private networks\/VPNs (i.e., Will not have a public IP)\nDilemma\nI am being tasked with creating a service that can store hardware device credentials so automated configurations (& metrics scraping) can be done. The hardware in question only allows REST Session logins via a POST method where the user and (unencrypted) password are sent in the message body. This returns a Session cookie that my service then stores (in memory).\nThe service in question consists of:\n\nLinux (Ubuntu 20.04) server\nFastAPI python backend\nSQLITE3 embedded file DB\n\nStoring Credentials?\nMy background is not in Security so this is all very new to me but it seems that I should prefer storing a hash (e.g., bcrypt) of my password in my DB for future verification however there will not be any future verification as this is all automated.\nThis brings me to what seems like is the only solution - hashing the password and using that as the salt to encrypt the password, then storing the hashed password in the DB for decryption purposes later. I know this provides almost 0 security given the DB is compromised but I am at a loss for alternate solutions. Given the DB is embedded, maybe there is some added assurance that the server itself would have to be compromised before the DB itself is compromised? I don't know if there is a technical \"right\" approach to this, maybe not, however if anyone has any advice I am all ears.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":72154730,"Users Score":0,"Answer":"You should consider using a hardware security module (HSM). There are cloud alternatives (like AWS Secrets manager, an encrypted secrets repository based on keys stored in an actual HSM, AWS KMS). Or if your app is not hosted in a public cloud, you can consider buying an actual HSM too, but that's expensive. So it all comes down to the risk you want to accept vs the cost.\nYou can also consider building architecture to properly protect your secrets. If you build a secure secrets store service and apply appropriate protection (which would be too broad to describe for an answer here), you can at least provide auditing of secret usage, you can implement access control, you can easily revoke secrets, you can monitor usage patterns in that component and so on. Basically your secrets service would act like a very well protected \"HSM\", albeit it might not involve specialized hardware at all. This would not guarantee that secrets (secret encryption keys, typically) cannot ever be retrieved from the service like a real HSM would, but it would have many of the benefits as described above.\nHowever, do note that applying appropriate protection is the key there - and that's not straightforward at all. 
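As a concrete illustration of the trade-off discussed above, here is a minimal sketch of the symmetric-encryption fallback the question describes, using the cryptography package's Fernet and keeping the key outside the SQLite file; this is not the HSM/secrets-service approach the answer recommends, just the bare minimum (device names and paths are placeholders):

```python
import sqlite3
from cryptography.fernet import Fernet

# The key must live somewhere other than the database file,
# e.g. an environment variable or a file with tight permissions.
key = Fernet.generate_key()          # do this once and persist it securely
fernet = Fernet(key)

conn = sqlite3.connect("devices.db")
conn.execute("CREATE TABLE IF NOT EXISTS creds (device TEXT PRIMARY KEY, secret BLOB)")

token = fernet.encrypt(b"device-password")           # encrypt before storing
conn.execute("INSERT OR REPLACE INTO creds VALUES (?, ?)", ("plc-01", token))
conn.commit()

row = conn.execute("SELECT secret FROM creds WHERE device = ?", ("plc-01",)).fetchone()
password = fernet.decrypt(row[0]).decode()           # recover it for the REST login
```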
One approach that you can take is model your potential attackers, list ways (attack paths) for compromising different aspects of different components, and then design protections against those, as long as it makes sense financially.","Q_Score":0,"Tags":"python,database,sqlite,security,microservices","A_Id":72163403,"CreationDate":"2022-05-07T17:07:00.000","Title":"Storing decryptable passwords for automatied usage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Traceback (most recent call last):\nFile \"C:\\Users\\josej\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\mysql\\connector\\abstracts.py\", line 553, in config\nDEFAULT_CONFIGURATION[key]\nKeyError: 'datebase'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\nFile \"C:\\Users\\josej\\proyectos\\holamundo\\curso\\db.py\", line 3, in \nmidb = mysql.connector.connect ( host=\"localhost\", user=\"josejan21\", password=\"123JOSE123jan@gmail\", datebase=\"prueba\")\nFile \"C:\\Users\\josej\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\mysql\\connector_init_.py\", line 272, in connect\nreturn CMySQLConnection(*args, **kwargs)\nFile \"C:\\Users\\josej\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\mysql\\connector\\connection_cext.py\", line 94, in init\nself.connect(**kwargs)\nFile \"C:\\Users\\josej\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\mysql\\connector\\abstracts.py\", line 1049, in connect\nself.config(**kwargs)\nFile \"C:\\Users\\josej\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\mysql\\connector\\abstracts.py\", line 555, in config\nraise AttributeError(\"Unsupported argument '{0}'\".format(key))\nAttributeError: Unsupported argument 'datebase'","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":72156995,"Users Score":1,"Answer":"There's a typo in your code, in the mysql connect method you are passing in \"datebase\" instead of \"database\" as an argument.","Q_Score":0,"Tags":"mysql,python-3.x","A_Id":72157009,"CreationDate":"2022-05-07T23:13:00.000","Title":"How can I solve this MySQL and Python problem?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to convert an old MS Access database with the file-extension \".mdb\" to the newer \".accdb\"-format. I got the idea of using Pyodbc because the newer versions of MS Access refuse to open the old file. So far I succeeded in connecting to the .mdb and reading the table-names from the old file with Pyodbc.\nIs there a way to connect to the .mdb, grab its contents and save it to a .accdb or maybe copying the data table by table into an empty .accdb?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":72169146,"Users Score":0,"Answer":"Create a new accdb database and import all objects from the old mdb format. 
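A rough sketch of the table-by-table copy mentioned in the .mdb/.accdb question, assuming the Microsoft Access ODBC driver is installed and using hypothetical file paths; importing through the Access UI, as the answer suggests, is usually simpler:

```python
import pyodbc

driver = "{Microsoft Access Driver (*.mdb, *.accdb)}"
src = pyodbc.connect(f"DRIVER={driver};DBQ=C:\\data\\old.mdb")
dst = pyodbc.connect(f"DRIVER={driver};DBQ=C:\\data\\new.accdb")  # empty .accdb created beforehand

table_names = [t.table_name for t in src.cursor().tables(tableType="TABLE")]
for name in table_names:
    cur = src.cursor()
    cur.execute(f"SELECT * FROM [{name}]")
    cols = [c[0] for c in cur.description]
    rows = cur.fetchall()
    if not rows:
        continue
    # Assumes a table with the same name/columns was already created in the .accdb
    # (e.g. via the Access UI or a CREATE TABLE run against the dst connection).
    placeholders = ", ".join("?" for _ in cols)
    col_list = ", ".join(f"[{c}]" for c in cols)
    dst.cursor().executemany(
        f"INSERT INTO [{name}] ({col_list}) VALUES ({placeholders})", rows
    )
    dst.commit()
```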
This should usually work.","Q_Score":0,"Tags":"python,ms-access,pyodbc","A_Id":72183232,"CreationDate":"2022-05-09T08:39:00.000","Title":"Migrating .mdb database to .accdb (Pyodbc)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question about working with huge amount of data. I am working with Google Big Query (i don't think it is a problem of this DB) and need to SELECT data from one table, change it (using python) and then INSERT to another table. Could you tell me, how can i speed up these operations. I use the for loop for each row of my SELECT command. And working with only 15k rows is very long-time process. Maybe multithreading or some libraries could help me to do EXACTLY the same function to all of my >15k rows in DB. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":72171831,"Users Score":0,"Answer":"Missing some details about the process (which DB server is it ?)\nAnyway, The best approach would be:\n\nFetch by buffering: dbChunk = \"DB Cursor\".fetchmany(buffer_size)\nChange the data in Python Data structures (LIST) ==> dbChunk2\nLoad into second table, using \"DB Cursor\".executemany(InsertString, dsChunk2)\n\ndsChunk2 is the updated LIST item where data was fetched into ([ (...), (...), ... ])\n\n\n\nyou can tune the buffer_size to get the best results. (start with 1000,I think)\nNote: InserString should be included Columns, Values and bind variables - match to the Select statement.","Q_Score":0,"Tags":"python,sql,database,loops,google-bigquery","A_Id":72172135,"CreationDate":"2022-05-09T12:14:00.000","Title":"How to speed up my python code working with DB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm reading from an excel file\n GA = pd.read_excel(\"file.xlsx\", sheet_name=0, engine= \"openpyxl\")\nThe data type is:\n\nEmail object\nDate datetime64[ns]\nName object\n\nI want to get only the row with the first date of an email\nFor example:\n\nA@gmail.com 1\/1\/2022 a\nA@gmail.com 2\/1\/2022 b\nB@gmail.com 3\/1\/2022 c\n\nI'm trying to get only\n\nA@gmail.com 1\/1\/2022 a\nB@gmail.com 3\/1\/2022 c\n\nI tried GA.groupby('email')['date'].min()\nBut I'm getting the TypeError: '<' not supported between instances of 'datetime.datetime' and 'int'\ni tried to change the date type to an object, tried to add reset_index(), tried to use agg('min) instead of min(), tried GA.sort_values('date').groupby('email').tail(1)\nbut keep getting this error, please help","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":72241872,"Users Score":0,"Answer":"The problem was, that the email had integer, not the date\nthank you for your time","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":72245805,"CreationDate":"2022-05-14T15:55:00.000","Title":"Trying to get the minimum date and getting TypeError: '<' not supported between instances of 'datetime.datetime' and 'int'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It depends on data type or number of 
characters..?If i convert a csv file into xls file,file size of .xls will be 3X times of csv file.So it depends on what format we are saving as well.?Any idea on how much bytes needed to hold a character in xls(csv file ---1 character---1 bytes)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":72242748,"Users Score":0,"Answer":"It depends on number of rows,columns and sheets we are trying to add.Even an empty rows occupies a some bytes and char occupies 1 bytes(space,spl characters as well).I have checked manually by iterating xml sheet by adding rows,columns and sheets.\nWorkbook-314bytes ,Rows - 35 bytes ,columns- 43 bytes ,worksheet-73 bytes & char- 1bytes.","Q_Score":0,"Tags":"python,csv,size,xls","A_Id":72301426,"CreationDate":"2022-05-14T18:01:00.000","Title":"How xls file size is determined?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone suggest how to download the data from Tableau server worksheet into excel using python script; so this can be done just by running the script and automate the work instead of manual process.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":72256286,"Users Score":0,"Answer":"Have you checked tableau_api_lib?\nOn the documentation I believe there's something called Crosstab (for Excel). Hope this helps!","Q_Score":0,"Tags":"python-3.x,tableau-api","A_Id":72288235,"CreationDate":"2022-05-16T08:35:00.000","Title":"Python : Tableau to Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Due to loading time and query cost, I need to export a bigquery table to multiple Google Cloud Storages folders within a bucket.\nI currently use ExtractJobConfig from the bigquery python client with the wildcard operator to create multiple files. But I need to create a folder for every nomenclature value (it is within a bigquery table column), and then create the multiple files.\nThe table is pretty huge and won't fit (could but that's not the idea) the ram, it is 1+ Tb. I cannot dummy loop over it with python.\nI read quite a lot of documentation, parsed the parameters, but I can't find a clean solution. Did a miss something or there is no google solution?\nMy B plan is to us apache beam and dataflow, but I have not skills yet, and I would like to avoid this solution as much as possible for simplicity and maintenance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":72351623,"Users Score":3,"Answer":"You have 2 solutions:\n\nCreate 1 export query per aggregation. If you have 100 nomenclature value, query 100 times the table and export the data in the target directory. The issue is the cost: you will pay the 100 processing of the table.\nYou can use Apache Beam to extract the data and to sort them. Then, with a dynamic destination, you will be able to create all the GCS path that you want. The issue is that it requires skill with Apache Beam to achieve it.\n\n\nYou have an extra solution, similar to the 2nd one, but you can use Spark, and especially Spark serverless to achieve it. 
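For the BigQuery-to-GCS question above, a sketch of the first option (one export per nomenclature value) using the standard BigQuery client and an EXPORT DATA statement; project, dataset, column and bucket names are made up:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical dataset/table/column and bucket names.
values = [row.nomenclature for row in client.query(
    "SELECT DISTINCT nomenclature FROM `my_project.my_dataset.big_table`"
).result()]

for value in values:
    export_sql = f"""
    EXPORT DATA OPTIONS(
        uri='gs://my-bucket/{value}/part-*.csv',
        format='CSV',
        overwrite=true,
        header=true) AS
    SELECT * FROM `my_project.my_dataset.big_table`
    WHERE nomenclature = '{value}'
    """
    client.query(export_sql).result()   # each call re-scans the table: that is the cost caveat
```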
If you have more skill in spark than in apache Beam, it could be more efficient.","Q_Score":0,"Tags":"python,google-cloud-platform,google-bigquery","A_Id":72353479,"CreationDate":"2022-05-23T16:05:00.000","Title":"Export Bigquery table to gcs bucket into multiple folders\/files corresponding to clusters","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We currently have a python script launched locally that periodically generates dozens of Excel files using Xlwings.\nHow can it be deployed on a cloud server as an ETL that would be linked to a job scheduler, so that no human action is needed anymore?\nMy concern is that Xlwings requires an Excel license (and a GUI?), which is not usually available in the production server.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":72380478,"Users Score":3,"Answer":"The only way that you can currently do what you have in mind is to install Excel, Python, and xlwings on a Windows Server: xlwings was built for interactive workflows.\nYou might want to look into OpenPyXL and XlsxWriter to see if you can create the reports by writing the Excel file directly, as opposed to automating the Excel application, as xlwings does.","Q_Score":5,"Tags":"python,xlwings","A_Id":72388158,"CreationDate":"2022-05-25T15:31:00.000","Title":"Can a Python script using xlwings be deployed on a server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"All,\nI have a Python code getting monthly data from a data provider through its API, formatting them, and sending them to a SQL DB (I'm using SQL Studio, all is local).\nThe monthly data are roughly available 12-14 days after the end of the month. So far I was changing the month in my Python code and running the code once new data were available (let say May 12th, I queried data for month = 5).\nIs there a way to make all of this automatic? I m no software developer and it seems that I have 0 skills on that! Can I schedule some task in Python? or SQL Studio? 
or a third party software?\nSome other providers can email me when new data are available, is there a way to get this email to start a code (I made a code to parse their website and download the desired file)?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":72382732,"Users Score":0,"Answer":"My oppinion is that the best way is to use OS tools (like crontab in Linux) to schedule and start some python myscript.py","Q_Score":0,"Tags":"python,sql,events,task,scheduler","A_Id":72383049,"CreationDate":"2022-05-25T18:42:00.000","Title":"Run on specific date a Python code that get data from internet and send to SQL DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why can I connect to my sql db2 while using jupyter notebooks in ibm cloud but when I try to run the same connection string in ms vs code I get an error?\n'''%sql ibm_db_sa:\/\/un:pw@host:port\/db?security=SSL'''\n(ibm_db_dbi.Error) ibm_db_dbi::Error: [IBM][CLI Driver] SQL5005C The operation failed because the database manager failed to access either the database manager configuration file or the database configuration file.\\r SQLCODE=-5005 (Background on this error at: http:\/\/sqlalche.me\/e\/dbapi) Connection info needed in SQLAlchemy format, example: postgresql:\/\/username:password@hostname\/dbname or an existing connection: dict_keys([])","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72426644,"Users Score":0,"Answer":"Could you specify what is ms?\nAlso please try to reach the dba admin to verify the user account has the correct privileges to access to the database.\nIf you are trying to connect in remote mode from a Linux environment you will need a SSL certificate to ensure the correct connection.\nAnd check if the instance is UP and if not try to run db2start.","Q_Score":0,"Tags":"python,db2,ibm-cloud","A_Id":72480271,"CreationDate":"2022-05-29T19:19:00.000","Title":"IBM DB2 Connections","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently trying to develop an employee scheduling tool in order to reduce the daily workload. I am using pyomo for setting up the model but unfortunately stuck on one of the constraint setting.\nHere is the simplified background:\n\n4 shifts are available for assignation - RDO (Regular Day Off), M (Morning), D (Day) and N (Night). All of them are 8-hrs shift\nEvery employee will get 1 RDO per week and constant RDO is preferred (say staff A better to have Monday as day off constantly but this can be violate)\nSame working shift (M \/ D \/ N) is preferred for every staff week by week (the constraint that I stuck on)\na. Example 1 (RDO at Monday): The shift of Tuesday to Sunday should be \/ have better to be the same\nb. 
Example 2 (RDO at Thursday): The shift of Mon to Wed should be same as the last working day of prior week, while the shift of Fri to Sun this week also need to be same but not limit to be which shift\n\nSince the RDO day (Mon - Sun) is different among employees, the constraint of point 3 also require to be changed people by people conditionally (say if RDO == \"Mon\" then do A ; else if RDO == \"Tue\" then do B), I have no idea how can it be reflected on the constraint as IF \/ ELSE statement cant really work on solver.\nAppreciate if you can give me some hints or direction. Thanks very much!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":22,"Q_Id":72432296,"Users Score":1,"Answer":"The constraints you are trying to create could be moderately complicated, and are very dependent on how you set up the problem, how many time periods you look at in the model, etc. etc. and are probably beyond the scope of 1 answer. Are you taking an LP course in school? If so, you might want to bounce your framework off of your instructor for ideas.\nThat aside, you might want to tackle the ROD by assigning each person a cost table based on their preferences and then putting in a small penalty in the objective based on their \"costs\" to influence the solver to give them their \"pick\" -- assumes the \"picks\" are relatively well distributed and not everybody wants Friday off, etc.\nYou could probably do the same with the shifts, essentially making a parameter that is indexed by [employee, shift] with \"costs\" and using that in the obj in a creative way. This would be the easiest solution... others get into counting variables, big-M, etc.","Q_Score":0,"Tags":"python,scheduling,pyomo","A_Id":72436983,"CreationDate":"2022-05-30T09:47:00.000","Title":"Employee Scheduling Constraints Issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 20+ excel files in Japanese language. Most excel files are Microsoft Excel 2007+ and few them are in Microsoft Excel OOXML file type. I would like to convert these files to csv and load in Snowflake, but prior to converting to csv, I was wondering if there is any library or pre-built function that I can use in python to determine which delimiter, escape character might be better for particular file ? Please also note few excel file contains multiple sheets.\nThanks in advance for your time and efforts!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":72434555,"Users Score":0,"Answer":"I dont really know what you mean by \"right delimiter\", if you want to detect which one is used, there is a library called detect_delimiter, if YOU want to choose a new delimiter the best approach is probably to choose one that is less likely to be used inside the data (% for example) to avoid splitting the data the wrong way. 
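Related to the delimiter question above: the standard library can also guess a delimiter from a sample of the converted CSV, which is a reasonable sanity check before loading into Snowflake (csv.Sniffer shown here rather than the detect_delimiter package the answer mentions; the file name and encoding are assumptions):

```python
import csv

def guess_dialect(path, encoding="utf-8"):
    """Sniff delimiter and quoting from the first few KB of a CSV file."""
    with open(path, newline="", encoding=encoding) as f:   # Japanese files may need e.g. cp932
        sample = f.read(8192)
    dialect = csv.Sniffer().sniff(sample, delimiters=",;\t|")
    return dialect.delimiter, dialect.quotechar

delim, quote = guess_dialect("converted_from_excel.csv")
print(f"delimiter={delim!r} quotechar={quote!r}")
```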
You can always upload the data as a pandas dataframe and then reconvert it to a csv after exploring which way is the optimal in your case.","Q_Score":0,"Tags":"python,python-3.x,excel,snowflake-cloud-data-platform,delimiter","A_Id":72435008,"CreationDate":"2022-05-30T12:45:00.000","Title":"Any way to find which delimiter might work for excel file using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I code I always comment out the db.create_all() to prevent creating a database. Is it ok to add db.create_all() in my source code even though I have already created my tables?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":72505475,"Users Score":0,"Answer":"db.create_all() will not create database if DB already created. So you don't have to comment on your code.","Q_Score":1,"Tags":"python,flask,sqlalchemy,flask-sqlalchemy","A_Id":72654720,"CreationDate":"2022-06-05T07:18:00.000","Title":"Should I remove db.create_all() when rerunning my code?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Well, I have \"googled it\" without finding an answer. Routine updates of a Python-based site, based on its requirements.txt, now fail with metadata-generation-failed when attempting to update \"mysqlclient.\" The question is why.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5432,"Q_Id":72538597,"Users Score":0,"Answer":"Not sure if this is still helpful. In my case I realized that\nsudo apt-get install default-libmysqlclient-dev\nwas installing the mysql client but not the mysql server. I installed the mysql server with:\nsudo apt install mysql-server\nand that fixed the issue.","Q_Score":2,"Tags":"python,mysql","A_Id":75953144,"CreationDate":"2022-06-07T23:42:00.000","Title":"Python MySQLClient installation fails with \"metadata-generation-failed\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Well, I have \"googled it\" without finding an answer. Routine updates of a Python-based site, based on its requirements.txt, now fail with metadata-generation-failed when attempting to update \"mysqlclient.\" The question is why.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5432,"Q_Id":72538597,"Users Score":0,"Answer":"follow this steps in terminal!!!\n\nbrew install mysql\n\nbrew install openssl\n\nexport PATH=${PATH}:\/usr\/local\/mysql\/bin\/\n\nsudo xcode-select --reset\n\npip install mysqlclient","Q_Score":2,"Tags":"python,mysql","A_Id":76424174,"CreationDate":"2022-06-07T23:42:00.000","Title":"Python MySQLClient installation fails with \"metadata-generation-failed\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have an excel sheet that contains in this order:\nSample_name | column data | column data2 | column data ... 
n\nI also have a .txt file that contains\nSample_name\nWhat I want to do is filter the excel file for only the sample names contained in the .txt file. My current idea is to go through each column (excel sheet) and see if it matches any name in the .txt file, if it does, then grab the whole column. However, this seems like a nonefficient way to do it. I also need to do this using python. I was hoping someone could give me an idea on how to approach this better. Thank you very much.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":72579505,"Users Score":1,"Answer":"Excel PowerQuery should do the trick:\n\nLoad .txt file as a table (list)\nLoad sheet with the data columns as another table\nMerge (e.g. Left join) first table with second table\nOptional: adjust\/select the columns to be included or excluded in the resulting table\n\nIn Python with Pandas\u2019 data frames the same can be accomplished (joining 2 data frames)\nP.S. Pandas supports loading CSV files and txt files (as a variant of CSV) into a data frame","Q_Score":0,"Tags":"python,excel","A_Id":72580893,"CreationDate":"2022-06-10T20:25:00.000","Title":"how to filter a .csv\/.txt file using a list from another .txt","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is possibly a very simple ask.\nI have imported an excel dataset into PowerBI which is named as \"dataset\".\nWhat I want to do now is execute some python script within the query editor on this dataset but I'm not sure how I reference it?\nFor example, If I want to simply add a column my code would look like\ndataset['New Column'] = \"New Row information\"\nBut this doesn't seem to work.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":31,"Q_Id":72602807,"Users Score":1,"Answer":"First, import and load the required dataset in powerbi desktop.we can also create visualization graphs and plots using python script.\nin visualizations tab, select python visual(PY) and select required fields from the dataset. a python script editor will be displayed.\nfor example, adding a new column in the dataset:\ndataset['New']=&Text.From(param1)&\" \",[dataset=#\"Changed Type\"])","Q_Score":0,"Tags":"python,powerbi","A_Id":73701152,"CreationDate":"2022-06-13T12:24:00.000","Title":"Execute Python Script on a pre-loaded PowerBI dataset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to execute an SQL query inside a python code and getting\n\nORA-01805: possible error in date\/time operation error.\n\nHow do I fix this issue? I have downloaded the latest oracle instant client file..how am I supposed to change the date?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":378,"Q_Id":72604577,"Users Score":0,"Answer":"I also had the error \"ORA-01805: possible error in date\/time operation issue\" with a client after upgrading the database from 11g to 19C. Upgraded the client as well but the error remained unsolved.\nThe workaround was to copy $ORACLE_HOME\/oracore\/zoneinfo contents into the same directory on the Client side. 
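Picking up the .txt/.csv filtering question a few records back, the pandas join the answer describes could look roughly like this (file and column names are assumptions):

```python
import pandas as pd

# Sample names to keep, one per line in the text file.
wanted = pd.read_csv("samples.txt", header=None, names=["Sample_name"])

# The Excel sheet with Sample_name plus the data columns (requires openpyxl).
data = pd.read_excel("data.xlsx")

# Inner join keeps only rows whose Sample_name appears in the .txt file.
filtered = data.merge(wanted, on="Sample_name", how="inner")
filtered.to_csv("filtered.csv", index=False)
```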
(not sure if this is supported though, but I will verify)","Q_Score":0,"Tags":"python,sql,oracle","A_Id":75333180,"CreationDate":"2022-06-13T14:31:00.000","Title":"SQL query returns ORA-01805: possible error in date\/time operation error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Django application where I give users a Excel file for them to give me dates, i ask them to give me the date in DD\/MM\/YYYY format (the one used in Latin America) The problem is that if the language of the Excel file is in English, it uses the MM\/DD\/YYYY format. So for example if they write 01\/05\/2022, when i open the file in my application i receive 05\/01\/2022.\nSo I want to know if there is a way to get the original language of the excel file, for me to put some conditions inside my application, or if i can get the original raw text of the file.\nI can't change the format that the application uses (because I receive excel files that are mainly in the spanish language) or ask my clients to write the dates in a different format, or ask them to change the language of the file.\nI am open for other type of solutions too.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":72679557,"Users Score":0,"Answer":"An Excel file doesn't have a \"language\". The system that Excel runs on has settings for region and language.\nExcel will store a date as a number internally, so if my system uses US English with MDY format, then May 5 will be stored as 44682 and if my system uses a language with a DMY, then May 5 will still be stored as 44682.\nSo, if you get the underlying numeric value for the date, you would not need to be concerned what format was used to enter it.","Q_Score":0,"Tags":"python,django,excel,date","A_Id":72680158,"CreationDate":"2022-06-19T18:55:00.000","Title":"Getting the original language or original text of date in Excel file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have, as title says, a JS\/Python\/PostgresQL app that I would like to deploy using AWS. I feel as though I could figure out deployment of the 3 pieces as separate, discrete entities, but what I haven't been able to figure out\/understand, is how the 3 pieces will communicate once they are live.\nThe site will be a lightly trafficked one where only I can add resources to the db. Additionally, what AWS services would you recommend for hosting each part?\nThanks kindly. And please let me know if I can provide any more helpful info.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72705509,"Users Score":0,"Answer":"One service that you can look at would be aws lightsail that can spin up your application and connect to your database and is designed for hosting such applications.\nAnother way would be to have your python app send request to an AWS lambda using api gateway and the lambda executes your SQL command and returns back the data. 
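To make the Lambda-behind-API-Gateway idea above concrete, a bare-bones handler might look like the sketch below; psycopg2 bundled as a layer is assumed, as are the environment variable names and the table being queried:

```python
import json
import os
import psycopg2

def handler(event, context):
    # Connection details come from Lambda environment variables.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, title FROM resources ORDER BY id")
        rows = [{"id": r[0], "title": r[1]} for r in cur.fetchall()]
    # API Gateway proxy integration expects this response shape.
    return {"statusCode": 200, "body": json.dumps(rows)}
```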
These are 2 ways one more service you can explore is AWS amplify that can do the same as well","Q_Score":0,"Tags":"javascript,python,postgresql,amazon-web-services,deployment","A_Id":72706360,"CreationDate":"2022-06-21T18:22:00.000","Title":"Deploying a JS\/Python\/PostgresQL app on AWS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm running docker-compose that has a php front end for uploading files, python watchdog for monitoring uploads via php and pandas for processing the resulting excel files (and later passed to a neo4j server).\nMy issue is that when pd.read_excel is reached in python, it just hangs with idle CPU. The read_excel is reading a local file. There are no resulting error messages. When i run the same combo on my host, it works fine. Using ubuntu:focal for base image for the php\/python\nAnyone run into a similar issue before or what could be the cause? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":72721159,"Users Score":0,"Answer":"Fixed,\nI wasn't properly logging python exceptions and was missing openpyxl module.\nA simple pip install openpyxl fixed it.","Q_Score":0,"Tags":"python-3.x,pandas,docker,docker-compose","A_Id":72736445,"CreationDate":"2022-06-22T19:40:00.000","Title":"Panda's Read_Excel function stalling in Docker Container","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"(This same scenario happens on 4 different servers - Windows Server 2019)\nI have two Python3 scripts, each running under separate Windows Task Scheduler tasks.\nDetails:\n\nTask 1 runs Script 1 - Task 2 Runs Script 2 (genericizing for this question)\nBoth scripts use identical credentials for both Windows task Scheduler tasks\nBoth scripts use identical credentials for SQL Server\nBoth scripts update tables in the same SQL Server database (credentials have same rights for both tables)\nBoth scripts run without error (RC 0)\nTask 1 - Script 1 should update Table A\nTask 2 - Script 2 should update Table B\nRunning under Windows Task Scheduler\n\nTask1\/Script1\/Table A gets updated\nTask2\/Script2\/Table B does NOT (as stated previously, no errors)\n\n\nOther steps in the scripts - before and after the SQL update statements - run fine and do what they are supposed to do.\nIf I run Task 2\/Script 2 with the Task Scheduler \"RUN\" button - the table is updated without issue.\n\nEverything from a credentials aspect is absolutely identical. If I export the Windows task scheduler tasks and view them side-by-side - they are identical (other than the script name and the scheduled run time) [No errors as stated before :-) Just want that to be clear - RC 0 on the script that is not updating its table]\nNothing in Windows event logs to indicate an issue\n=> The SQL statements are virtually identical and as stated above - update just fine when clicking \"RUN\". 
<= (This is what is so strange)\nI tried running as Windows 2008 (in task scheduler) - no difference (trying anything at this point)\nIt is only when running as scheduled that one of the tables does not get updated (the same task\/script\/table each time - on four different servers).\nI have recreated the tasks on all four servers, etc. - Same issue.\nI am stumped at this point.\nCan anyone point me in a direction that I have not explored yet that might shed some light on this issue?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":72771888,"Users Score":0,"Answer":"The four different servers are in different regions and all had the same scheduled clock time for runs - no issue since they are all in different time zones.\nApparently, (unbeknownst to me), an effort has been underway to move all devices to UTC time - regardless of location - so now all of my servers have the same clock time. Thus, all four servers were running the scripts at the exact same time and creating a lock on the database.\nI did not receive any errors that I could see and the remainder of my script executed as expected.\nI did not suspect locking because with the different time zones, the scripts should have been running on local time and they should not have overlapped.\nI have since staggered the run times and I am seeing data in the tables as expected.","Q_Score":0,"Tags":"python,sql-server,windows-task-scheduler,windows-server-2019","A_Id":72801262,"CreationDate":"2022-06-27T12:11:00.000","Title":"Two scheduled Python Scripts that update tables in SQL Server - both run without error, one does not update its table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 3 database tables:\n\nusers (stores info about users e.g. email, name)\nmetadata (stores data)\nactivity (stores changes made to users\/metadata tables)\n\nI want to achieve the following:\n\nto store any change into the activity table (e.g. new user is created, a user updates the metadata table)\nto send notifications to users whenever a change into the users\/metadata tables happens.\n\nWhat are the libraries\/method that I could use in order to achieve the desired functionalities? Thank you!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":72785265,"Users Score":0,"Answer":"in addition to django signals which wes already recommended, you can also check out django channels + django activity stream","Q_Score":0,"Tags":"python,django,database","A_Id":72786710,"CreationDate":"2022-06-28T11:01:00.000","Title":"Django: send notifications at user based on database change","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a script that queries and updates an access table. I've used it successfully on my computer, but after installing anaconda and spyder on a different computer (same versions as the original installation on the original computer) it doesn't work on the new computer.\nTo clarify: I installed the package using\npip install sqlalchemy-access\non the anaconda prompt, and when running\npip list|findstr access\nI get\nsqlalchemy-access 1.1.3. 
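For the Django notification question a couple of records up, the django-signals suggestion might start out like this sketch; the Activity and Metadata models and the notify_users helper are hypothetical names, not part of the answer:

```python
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.contrib.auth import get_user_model

from .models import Activity, Metadata   # hypothetical app models
from .notifications import notify_users  # hypothetical helper (email, channels, ...)

User = get_user_model()

@receiver(post_save, sender=User)
@receiver(post_save, sender=Metadata)
def record_change(sender, instance, created, **kwargs):
    # Write every create/update into the activity table...
    Activity.objects.create(
        table=sender.__name__,
        object_id=instance.pk,
        action="created" if created else "updated",
    )
    # ...and fan out a notification to interested users.
    notify_users(f"{sender.__name__} {instance.pk} was "
                 f"{'created' if created else 'updated'}")
```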
However, when I run the script in spyder I get the NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:access.pyodbc error.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":72802250,"Users Score":0,"Answer":"Do you have several different python versions on your computer?\nMaybe installing the package with\npip3 install sqlalchemy-access\nCould fix this.","Q_Score":0,"Tags":"python,sqlalchemy,spyder,sqlalchemy-access","A_Id":72803088,"CreationDate":"2022-06-29T13:28:00.000","Title":"NoSuchModuleError: sqlalchemy-access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that queries and updates an access table. I've used it successfully on my computer, but after installing anaconda and spyder on a different computer (same versions as the original installation on the original computer) it doesn't work on the new computer.\nTo clarify: I installed the package using\npip install sqlalchemy-access\non the anaconda prompt, and when running\npip list|findstr access\nI get\nsqlalchemy-access 1.1.3. However, when I run the script in spyder I get the NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:access.pyodbc error.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":112,"Q_Id":72802250,"Users Score":0,"Answer":"I don't know why, but after trying multiple times, I checked again (pip list|findstr access) and found that the installation of sqlalchemy-access, which I managed to find before in the anaconda prompt, has disappeared. I installed it again (for the third or fourth time) and now it works.","Q_Score":0,"Tags":"python,sqlalchemy,spyder,sqlalchemy-access","A_Id":72847405,"CreationDate":"2022-06-29T13:28:00.000","Title":"NoSuchModuleError: sqlalchemy-access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Looking on tips how to get the data of the latest row of a sheet. I've seen solution to get all the data and then taking the length of that.\nBut this is of course a waste of all that fetching. Wondering if there is a smart way to do it, since you can already append data to the last row+1 with worksheet.append_rows([some_data])","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":524,"Q_Id":72861299,"Users Score":0,"Answer":"I used the solution @buran metnion. If you init the worksheet with\nadd_worksheet(title=\"title\", rows=1, cols=10)\nand only append new data via\nworksheet.append_rows([some_array])\nThen @buran's suggestion is brilliant to simply use\nworksheet.row_count","Q_Score":0,"Tags":"python,google-sheets,gspread","A_Id":72872482,"CreationDate":"2022-07-04T19:33:00.000","Title":"Python gspread - get the last row without fetching all the data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have table of thousands of market assets and want to store prices for every asset. New price will be added every minute. 
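Tying together the gspread answer just above — append-only writes plus row_count to find the last row — a small sketch (spreadsheet and worksheet names are placeholders):

```python
import gspread

gc = gspread.service_account()                 # uses the default service-account JSON
sh = gc.open("my-log-spreadsheet")
ws = sh.add_worksheet(title="log", rows=1, cols=10)   # start with a single row

ws.append_rows([["2022-07-05", 12.3, "ok"]])   # always appends below the current last row

ws = sh.worksheet("log")                       # re-fetch metadata so row_count is current
last = ws.row_count
print("last row index:", last)
print("last row values:", ws.row_values(last))
```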
I was thinking how to desine this and figured out that the best way will be to create multiple tables - one for every asset.\nWhat is the best way to make relation bettween asset in parent table and child table. I'm new in mysql so first thing I came up with was to make name of child table from PRIMARY KEY. But I don't think it's good practise, because I imagine that searching for that table will be very slow.\nI'm using Python.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":72923076,"Users Score":0,"Answer":"The asset table should have a unique key (e.g. some type of integer). Then a single price table should have that key as a reference to the asset. Getting prices for an asset will involves a join of the asset and price table on the asset key.","Q_Score":0,"Tags":"python,mysql","A_Id":72923118,"CreationDate":"2022-07-09T16:35:00.000","Title":"How to create child table related to single row in mysql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to use mysqlcheck cmd for optimizing tables in the database. I've created a Lambda function in python for the whole process, now to execute the whole process first I need to optimize all tables of the database.\nI'm using PyMSQL module in python for connecting DB, but I guess optimising tables ability is not provided by PyMSQL, Then I tried to use the subprocess module to run the OS command mysqlcheck, but got the following error:\n\n[ERROR] FileNotFoundError: [Errno 2] No such file or directory: 'mysqlcheck'\n\nCan you tell me is any alternative of mysqlcheck is present in python Or how i can run mysqlcheck CMD in AWS Lambda?\nThank You.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":72937101,"Users Score":0,"Answer":"The alternative is to move your tables away from ENGINE=MyISAM (which sometimes needs OPTIMIZE) to ENGINE=InnoDB (which takes care if itself).","Q_Score":1,"Tags":"python,mysql,aws-lambda,mysql-python,mysqlcheck","A_Id":72955365,"CreationDate":"2022-07-11T10:26:00.000","Title":"What is the alternative of mysqlcheck in python3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've done research and can't find anything that has solved my issue. I need a python script to read csv files using a folder path. This script needs to check for empty cells within a column and then display a popup statement notifying users of the empty cells. Anything helps!!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":282,"Q_Id":72997237,"Users Score":0,"Answer":"Use the pandas library\npip install pandas\nYou can import the excel file as a DataFrame and check each cell with loops.","Q_Score":1,"Tags":"python,csv","A_Id":72997258,"CreationDate":"2022-07-15T16:43:00.000","Title":"Python script to check csv columns for empty cells that will be used with multiple excels","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I write a Python program to get data from SQL Server for automation. 
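A small sketch of the pandas approach suggested for the empty-cell check above, including the popup the question asks for via tkinter (the folder path is a placeholder):

```python
import glob
import pandas as pd
import tkinter as tk
from tkinter import messagebox

folder = r"C:\data\csv_drop"                   # placeholder folder path

problems = []
for path in glob.glob(f"{folder}/*.csv"):
    df = pd.read_csv(path)
    empty_cols = df.columns[df.isna().any()].tolist()
    if empty_cols:
        problems.append(f"{path}: empty cells in {', '.join(empty_cols)}")

if problems:
    root = tk.Tk()
    root.withdraw()                            # no main window, just the popup
    messagebox.showwarning("Empty cells found", "\n".join(problems))
    root.destroy()
```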
it runs for every n period of time. The problem is whenever it runs, it fetches all the data from table. But all I want to fetch latest records which is inserted into database after previous run of python script.\nFor example: there are 10 records in the database, the python scripts runs and fetch's all 10 records. Then 5 more records are added to the database and in the 2nd run of Python script it should fetch only those 5 records?\nOne more condition is without modifying adding columns to that table.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":74,"Q_Id":73014096,"Users Score":1,"Answer":"The simplest way to achieve what you are describing, if your table has an Identity field, is to have a separate table that holds the latest Id you retrieved data from during the data extraction process.\nYou would then simply alter the procedure you use to extract data so that it only picks up rows that are after the latest Id held in this separate table, and update it with the maximum Id of the data you have just extracted.\nThis isn't the cleanest approach by any stretch, but it does achieve what you are asking whilst keeping with the condition of not altering your existing table.","Q_Score":0,"Tags":"python,sql,sql-server,automation","A_Id":73015693,"CreationDate":"2022-07-17T17:52:00.000","Title":"Avoiding duplicate data while exporting?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is an excel file named test.xlsx, which has 3 sheets: ['Sheet1', 'Sheet2', 'Sheet3'], how do I use Python to reorder the sheets as: ['Sheet3', 'Sheet1', 'Sheet2']","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":108,"Q_Id":73023305,"Users Score":0,"Answer":"Workbooks have the move_sheet() method.","Q_Score":0,"Tags":"python-3.x,excel,openpyxl,xlwt","A_Id":73023731,"CreationDate":"2022-07-18T13:41:00.000","Title":"How do I move (reorder) sheets in Excel file with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"from vertica_python import connect\nconn_info = {'host': 'xxx.xx.xxx.xx',\n'port': 1521,\n'user': 'username#',\n'password': 'password#',\n'database': 'Training_DB_Name'}\nconnection = connect(**conn_info)\nUsing the code above, I am trying to connect to an oracle db and do some sql queries via python.(the DB is in another server) not sure if technically I need an SSL? Please explain because I dont even know what SSL is The issue I am encountering is the following:\n--> 328 self.startup_connection()\n330 # Initially, for a new session, autocommit is off\n331 if self.options['autocommit']:\n...\n580 self._logger.error(err_msg)\n--> 581 raise errors.ConnectionError(err_msg)\n583 return raw_socket\nConnectionError: Failed to establish a connection to the primary server or any backup address.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":73056939,"Users Score":0,"Answer":"The answer is indeed: VerticaPy lives and dies with Vertica. Even the key-value pairs of the JDBC\/ODBC connection strings differ between Oracle and Vertica. 
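The watermark idea from the incremental-fetch answer above, sketched with pyodbc; the table, column and tracking-table names are hypothetical:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
cur = conn.cursor()

# One-row helper table holding the last Identity value already processed.
last_id = cur.execute("SELECT last_id FROM etl_watermark").fetchval() or 0

rows = cur.execute(
    "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id", last_id
).fetchall()

if rows:
    # ... process only the new rows here ...
    new_max = rows[-1].id
    cur.execute("UPDATE etl_watermark SET last_id = ?", new_max)
    conn.commit()
```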
You won't even be able to connect to a database other than Vertica","Q_Score":0,"Tags":"python,database,oracle,vertica,vertica-python","A_Id":73069797,"CreationDate":"2022-07-20T19:04:00.000","Title":"Python - Vertica Library database connection issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a POC where we would like to get the snowflake query results into an email using Python.\nFor example : When executing an Insert statement in Snowflake, I would like to capture the result showing how many records were inserted. Please note that we are using Python Connector for Snowflake to execute our queries from Python script. Also we are using dataframes to store and process data internally.\nAny help is appreciated!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":374,"Q_Id":73062243,"Users Score":1,"Answer":"Following the INSERT statement, you can retrieve the number of rows inserted from cursor.rowcount.","Q_Score":0,"Tags":"python,airflow,snowflake-cloud-data-platform,directed-acyclic-graphs,snowflake-connector","A_Id":73066490,"CreationDate":"2022-07-21T07:29:00.000","Title":"Get Snowflake query result in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using ArangoDB for Graph-Versioning and would be looking for a faster method to evaluate whether or not a Node is the same in two different collections.\nApart from hashing each node before I write it - does ArangoDB have any mechanism that lets me read the Hash of the node?\nI usually access the Database with Python-Arango.\nIf hashing it by myself is the only viable option what would be a reasonable Hash-Function for these types of documents in a Graph-DB? _id should not be included as the same node in two different collections would still differ. _rev would not really matter, and I am not sure if _key is in fact required as the node is identified by it any way.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":23,"Q_Id":73078080,"Users Score":1,"Answer":"You need to make your own hash algo to do this.\nThe issue is that the unique values of a document that build the hash are user specific, so you need to build that hash value externally and save it with every document.\nTo confirm uniqueness, you can do that via a Foxx Microservice or in your AQL query, where you throw an error if multiple nodes are ever found with duplicate hashes.\nIf you want to enforce uniqueness on inserts, then you'll need to build that logic externally.\nYou then have the option of trusting your uniqueness or setting up a Foxx Microservice that would scour the collections in scope to ensure no other document had the same hash value.\nThe performance of querying many other collections would be poor, so an alternative to that is to set up a Foxx Queue that accepted document updates, and you then have a Foxx service performing the INSERT\/UPDATE commands from the queue. 
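The cursor.rowcount suggestion from the Snowflake answer just above, in context; the connection parameters and the send_email helper are placeholders:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("INSERT INTO target_table SELECT * FROM staging_table")

inserted = cur.rowcount                      # number of rows the INSERT affected
body = f"Load finished: {inserted} rows inserted into target_table."
# send_email(subject="Snowflake load", body=body)   # hypothetical mail helper
print(body)
```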
That way you don't slow down your client application, and data will be eventually updated in Arango as fast as possible.","Q_Score":0,"Tags":"graph-databases,arangodb,graph-data-science,python-arango","A_Id":73087411,"CreationDate":"2022-07-22T09:25:00.000","Title":"Node Hash in ArangoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a gspread project that reads and updates multiple sheets daily\nsuddenly one of the sheets I can update but can't read and getting\nAPIError: {'code': 500, 'message': 'Internal error encountered.', 'status': 'INTERNAL'}\nI get this with only (get , get_batch) functions\nlike :\nworksheet.findall('str_example')\nworksheet.acell('A1')\nhere's the error msg :\nAPIError Traceback (most recent call last)\n in ()\n----> 1 cells= worksheet.findall(str_example)\n3 frames\n\/usr\/local\/lib\/python3.7\/dist-packages\/gspread\/worksheet.py in findall(self, query, in_row, in_column, case_sensitive)\n1717 :rtype: list\n1718 \"\"\"\n-> 1719 return list(self._finder(filter, query, case_sensitive, in_row, in_column))\n1720\n1721 def freeze(self, rows=None, cols=None):\n\/usr\/local\/lib\/python3.7\/dist-packages\/gspread\/worksheet.py in _finder(self, func, query, case_sensitive, in_row, in_column)\n1634\n1635 def _finder(self, func, query, case_sensitive, in_row=None, in_column=None):\n-> 1636 data = self.spreadsheet.values_get(absolute_range_name(self.title))\n1637\n1638 try:\n\/usr\/local\/lib\/python3.7\/dist-packages\/gspread\/spreadsheet.py in values_get(self, range, params)\n179 \"\"\"\n180 url = SPREADSHEET_VALUES_URL % (self.id, quote(range))\n--> 181 r = self.client.request(\"get\", url, params=params)\n182 return r.json()\n183\n\/usr\/local\/lib\/python3.7\/dist-packages\/gspread\/client.py in request(self, method, endpoint, params, data, json, files, headers)\n84 return response\n85 else:\n---> 86 raise APIError(response)\n87\n88 def list_spreadsheet_files(self, title=None, folder_id=None):\nAPIError: {'code': 500, 'message': 'Internal error encountered.', 'status': 'INTERNAL'}","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":154,"Q_Id":73080524,"Users Score":0,"Answer":"It was something with Google servers , as after 2 days everything worked fine with me changing the code","Q_Score":0,"Tags":"python,google-sheets,google-sheets-api,gspread","A_Id":73093062,"CreationDate":"2022-07-22T12:41:00.000","Title":"gspread api getting 500 internal error with get requests only , I can update cells but can't read","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In SQL Server, are there possible instances where + may be used other than for string concatenation? I want to do a simple search and replace as part of my migration, but am worried that this may produce false positives where the original usage was not for concatenation.\nI understand + can at least appear as a math operator as well, and right now I'm running this find and replace on any instance where ' +, + ' etc are detected (this hopefully can make sure that only concat plus signs are replaced). Doing it this way would leave out lots of cases and I don't see an easy way to make this better. 
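For illustration, this is roughly the kind of detection I am doing at the moment (a rough Python sketch, not my real migration script; the table and column names are made up):\nsql = \"SELECT FirstName + ' ' + LastName, Qty + 1 FROM dbo.People\"\n# flag + signs that sit next to a string literal, i.e. ' + or + '\nfor marker in (\"' +\", \"+ '\"):\n    start = sql.find(marker)\n    while start != -1:\n        print('possible string concatenation near offset', start)\n        start = sql.find(marker, start + 1)\n# the numeric + in Qty + 1 is never flagged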
Any advice or help would be appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":73116242,"Users Score":0,"Answer":"At this point I'm looking at just using text processing to find any plus sign and ask for confirmation for every potential replacement to look through everything manually when + ' or ' + is not found. Appreciate all the input in the comments.","Q_Score":0,"Tags":"python,sql-server,migration,concatenation","A_Id":73129359,"CreationDate":"2022-07-25T23:18:00.000","Title":"SQL Server plus sign usage other than concat","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a project that needs to update a CSV file with user info periodically. The CSV is stored in an S3 bucket so I'm assuming I would use boto3 to do this. However, I'm not exactly sure how to go about this- would I need to download the CSV from S3 and then append to it, or is there a way to do it directly? Any code samples would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":356,"Q_Id":73117401,"Users Score":1,"Answer":"Ideally this would be something where DynamoDB would work pretty well (as long as you can create a hash key). Your solution would require the following.\n\nDownload the CSV\nAppend new values to the CSV Files\nUpload the CSV.\n\nA big issue here is the possibility (not sure how this is planned) that the CSV file is updated multiple times before being uploaded, which would lead to data loss.\nUsing something like DynamoDB, you could have a table, and just use the put_item api call to add new values as you see fit. Then, whenever you wish, you could write a python script to scan for all the values and then write a CSV file however you wish!","Q_Score":0,"Tags":"python,amazon-web-services,csv,amazon-s3,boto3","A_Id":73129306,"CreationDate":"2022-07-26T03:21:00.000","Title":"Writing to a CSV file in an S3 bucket using boto 3","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to host a neo4j graph database somewhere to i can create an api for it. Where do i host something like this? Their proprietary hosing called AuraDB is a little expensive, even for the base option.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":73141556,"Users Score":0,"Answer":"AuraDB has a free tier so you can use it for development. Once your API is working then you can host it in AWS or Google Cloud or AuraDB. You will get what you paid for.","Q_Score":0,"Tags":"python,neo4j,cypher","A_Id":73142163,"CreationDate":"2022-07-27T16:30:00.000","Title":"How to host Neo4j Graph db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to autogenerate documentation with pdoc3. It throws errors whenever a script refers to a non-python file. 
For example, if I import\ndd01 = pd.read_excel('DataDictionary01.xlsx', index_col=0)\nI get\nImportError: Error importing 'DATA.work_products.r_technology.stackdd': FileNotFoundError: [Errno 2] No such file or directory: 'DataDictionary01.xlsx'\nIs there a way of preventing this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":37,"Q_Id":73191941,"Users Score":1,"Answer":"Found the problem.\nThe error happens if the script has the reference to the non-python file outside of the\nif __name__ == '__main__':\nblock, or if there is no such block at all.\nSo, the solution is to put any such reference into this block.","Q_Score":0,"Tags":"python,python-3.x,documentation","A_Id":73192239,"CreationDate":"2022-08-01T09:54:00.000","Title":"pdoc3 tries to import non-python files","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a DF which has the following Schema :\n\no_orderkey --- int32\no_custkey --- int32\no_orderstatus --- object\no_totalprice --- object\no_orderdate --- object\no_orderpriority --- object\no_clerk --- object\no_shippriority --- int32\no_comment --- object\n\nHere the total price is actually a float(Decimals) and the order date is date time.\nBut on using df.convert_dtypes or df.infer_objects, its not automatically convering them into float\/int and date time.\nIs there any way to automatically read and convert the column data type into the correct one? For example in case we do not know the schema of such a data frame beforehand, how would we read and convert the data type to the correct one, without using a regex method to go through every object in the DF.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":73216568,"Users Score":0,"Answer":"Pandas tries to use the right datatype when read the data. However, if, for example, the totalprice column has string, it doesn't make sense for you to convert it to float. You also cannot force pandas to convert it to float, it will just report errors, which is the correct behaviour!\nYou have to use regex to clean up the string, then you can safely convert the column to float.","Q_Score":0,"Tags":"python,dataframe,schema","A_Id":73220745,"CreationDate":"2022-08-03T05:17:00.000","Title":"Autoconversion of data types in python in a dataframe","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Edit 1 - We have created a python script which will read a data from excel\/csv using pandas and then, will be cleaning it. After cleansing of the data, it will connect to snowflake server and append the data in a table which is already available in snowflake. Now the question is -\nIn this process of transferring data from python to snowflake. 
But would I need to ensure that columns names in pandas dataframe should be same (case-sensitive) as column names in snowflake?\nOr, any case would work to push the data?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":73252779,"Users Score":0,"Answer":"There are many steps involved in importing data into a snowflake:\n\nOpening the Data Load Wizard:\na.Click on Database -> Tables\nb.Click on\n\nTable Row to select it and Load Data\nTable Name to select it and Load Table\n\n\nSelecting a Warehouse:\na. Select a Warehouse from the dropdown list to include any warehouse on which you have the USAGE privilege. Snowflake will use this warehouse to load data into the table.\nb. Click Next\nSelecting a Source Files:\nThe users can load the local machine or cloud storage data like AWS S3, Google Cloud Storage, and Azure.\na.Local Machine:\ni. Load files from the computer\nii. Select one or more files and click on Open\niii. Click on Next\nCloud Storage:\n1.Existing Stage: (i) Select the name of the existing stage and then select the Next button\nNew Stage:\nClick the plus (+) symbol beside the Stage dropdown list.\nSelect the location where your files are located: Snowflake or any one of the supported cloud storage services, and click the Next button.\nComplete the fields that describe your cloud storage location.\nClick the Finish button.\nSelect your new named stage from the Stage dropdown list.\nClick the Next button.\n\nSelect File Format: Select a named set of options that describes the format of the data files.\nExisting Name Format:\nSelect the name of the existing file from the dropdown list.\nClick on the Next Button.\nNew File Format:\nBeside the dropdown list, select the (+) button.\nUpdate the fields according to the files\u2019 format.\nClick on Finish.\nSelect the new named file format from the dropdown list.\nClick on Next\nSelect the Load Options\nSpecify how Snowflake should behave if there are errors in the data files.\nClick on the Load button. This will prompt Snowflake to load the data in the selected table in the required warehouse.\nClick on Ok.\n\n\nI guess you, it helps you a lot.","Q_Score":0,"Tags":"python,sql,snowflake-connector","A_Id":73253128,"CreationDate":"2022-08-05T16:26:00.000","Title":"Data Transfer - Python to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ok I have table call restapi information and I have a json format containing the restapi format I want to pull the data from db using sql and fetch to another table using python but I only need what is in restapi format not all the records and I also want to create a unique Id column to the same table using python can I do that? 
Should I use Django\nSELECT * FROM Customers WHERE Last_Name='Smith';\nSELECT First_Name, Nickname FROM Friends WHERE Nickname LIKE '%brain%';\nSELECT CustomerName, City FROM Customers; Try it Yourself \u00bb\nI want to extract data from these queries to match the restapi json format which mens only what needed and fetch this values and thier names to table which calls restapi table for each value and it\u2019s name using python\nAfter this I want to create columns contained unique Id for each value and keep adding in case want to add anything to table later","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":73262504,"Users Score":0,"Answer":"Yes, you can. No, you don't need to use Django for that!","Q_Score":0,"Tags":"python,sql,rest,unix,django-rest-framework","A_Id":73262522,"CreationDate":"2022-08-06T18:56:00.000","Title":"Follow some format to get data from db and fetch to another table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If the ObjectId in a MongoDB is unique, then there should be a way to delete the corresponding Document without the need to mention the Collection it belongs to.\nIs there a way to delete a Document from a MongoDB in one line, when I don't know the specific Collection it belongs to?\nI want to skip looping over the list of Collection names and execute the command db_name.collection_name.delete_one({'_id': ObjId}) that many times.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":73320610,"Users Score":1,"Answer":"While in practically Mongo's ObjectId's can be considered unique there is no rule that enforces that, for example I can copy a document from collectionA to collectionB, now both of these documents have an identical ObjectId _id field, additionally some people prefer to generate their own _id values, which again will not necessarily be unique, especially across all collection.\nThis is just a theoretical explanation as to why the functionality you seek does not exist, the more practical reason is that no one (hardly no one) actually uses a database like this, if this was an actual need it would have been naturally implemented at some point.\nI personally recommend you revisit the reason that led you to this need, perhaps it makes sense in your case, but perhaps you are violating some core principals and could make your code more reliable by refactoring it.\n\nTLDR:\nThis is not part of Mongo's (and any database as far as I know) functionality, you will have to iterate over all collections.","Q_Score":0,"Tags":"python,database,mongodb,mongodb-query,pymongo","A_Id":73320767,"CreationDate":"2022-08-11T12:23:00.000","Title":"Delete Document in MongoDB by ObjectId only (pymongo)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm in the process of cleaning up some of our SQL queries in Tableau Online\/Snowflake and I have been running each CTE in a query individually to find the run time. Does anybody know of a process to find the runtime of individual CTEs automatically? Running CTEs by themselves is a tedious and (sometimes) slow task, but it has helped us find ways to optimize our costs. 
It is also complicated when one CTE references other CTEs. I'm also open to python solutions\/guidance - I'm a little new to python but have been looking for a project!\nThis would help us a lot in determining what size warehouse to use for daily caches and target inefficient queries much faster. Since this is business data, I can't share specific queries. Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":97,"Q_Id":73336631,"Users Score":1,"Answer":"If you have a lot of CTEs you want to manage individually, and especially if some queries re-use CTEs from other queries: Consider using views instead.\nWhen you create a view, you are basically defining a CTE with a name, that can be referenced by multiple queries - and that you can individually benchmark and test.\nAs you grow your views dependency trees, dbt turns out to be a great toolbox to manage them, their tests, and their dependencies.","Q_Score":0,"Tags":"python,sql,snowflake-cloud-data-platform,tableau-api","A_Id":73339438,"CreationDate":"2022-08-12T15:39:00.000","Title":"Find all CTE runtimes in an SQL Query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to PySpark and I see there are two ways to select columns in PySpark, either with \".select()\" or \".withColumn()\".\nFrom what I've heard \".withColumn()\" is worse for performance but otherwise than that I'm confused as to why there are two ways to do the same thing.\nSo when am I supposed to use \".select()\" instead of \".withColumn()\"?\nI've googled this question but I haven't found a clear explanation.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":405,"Q_Id":73347065,"Users Score":1,"Answer":"@Robert Kossendey You can use select to chain multiple withColumn() statements without suffering the performance implications of using withColumn. Likewise, there are cases where you may want\/need to parameterize the columns created. You could set variables for windows, conditions, values, etcetera to create your select statement.","Q_Score":0,"Tags":"python,pyspark","A_Id":74839901,"CreationDate":"2022-08-13T19:15:00.000","Title":"PySpark Data Frames when to use .select() Vs. .withColumn()?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a webserver hosted on cloud run that loads a tensorflow model from cloud file store on start. To know which model to load, it looks up the latest reference in a psql db.\nOccasionally a retrain script runs using google cloud functions. This stores a new model in cloud file store and a new reference in the psql db.\nCurrently, in order to use this new model I would need to redeploy the cloud run instance so it grabs the new model on start. How can I automate using the newest model instead? Of course something elegant, robust, and scalable is ideal, but if something hacky\/clunky but functional is much easier that would be preferred. This is a throw-away prototype but it needs to be available and usable.\nI have considered a few options but I'm not sure how possible either of them are:\n\nCreate some sort of postgres trigger\/notification that the cloud run server listens to. 
Guess this would require another thread. This ups complexity and I'm unsure how multiple threads works with Cloud Run.\nSimilar, but use a http pub\/sub. Make an endpoint on the server to re-lookup and get the latest model. Publish on retrainer finish.\ncould deploy a new instance and remove the old one after the retrainer runs. Simple in some regards, but seems riskier and it might be hard to accomplish programmatically.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":73353518,"Users Score":2,"Answer":"Your current pattern should implement cache management (because you cache a model). How can you invalidate the cache?\n\nRestart the instance? Cloud Run doesn't allow you to control the instances. The easiest way is to redeploy a new revision to force the current instance to stop and new ones to start.\nSetting a TTL? It's an option: load a model for XX hours, and then reload it from the source. Problem: you could have glitches (instances with new models and instances with the old one, up to the cache TTL expires for all the instances)\nOffering cache invalidation mechanism? As said before, it's hard because Cloud Run doesn't allow you to communicate with all the instances directly. So, push mechanism is very hard and tricky to implement (not impossible, but I don't recommend you to waste time with that). Pull mechanism is an option: check a \"latest updated date\" somewhere (a record in Firestore, a file in Cloud Storage, an entry in CLoud SQL,...) and compare it with your model updated date. If similar, great. If not, reload the latest model\n\nYou have several solutions, all depend on your wish.\n\nBut you have another solution, my preference. In fact, every time that you have a new model, recreate a new container with the new model already loaded in it (with Cloud Build) and deploy that new container on Cloud Run.\nThat solution solves your cache management issue, and you will have a better cold start latency for all your new instances. (In addition of easier roll back, A\/B testing or canary release capability, version management and control, portability, local\/other env testing,...)","Q_Score":0,"Tags":"python-3.x,tensorflow,google-cloud-platform,fastapi,distributed-system","A_Id":73366096,"CreationDate":"2022-08-14T17:01:00.000","Title":"How to reload tensorflow model in Google Cloud Run server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"on my hosting provider, I try to make a cron job running a .py-file.\nThe cron-job starts but I always get this error message \"ImportError: No module named mysql.connector\".\nWhen I run exactly the same script via CLI, it runs smoothly. It connects to my db, it updates, inserts, ... So, there is no issue with having this module (yes I installed).\nSo, how can I get this cron job to work?\nThis is how I start that job: python \/home2\/******\/public_html\/Test\/cron_job_test.py\n(when I run this via CLI it works)\nMany thanks,\nPeter","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":73364352,"Users Score":0,"Answer":"After hours of trying, I still wasn't able to get it working.\nSo time for an ugly bypass solution. 
Instead of a python script, I made a php script to do the work>\nAnd run this cron job 'php \/home2\/******\/public_html\/Test\/cron.php' and it works.\nBut I still want to know how to get the mysql.connector to work via crontab ;-)","Q_Score":0,"Tags":"python,cron","A_Id":73367689,"CreationDate":"2022-08-15T17:18:00.000","Title":"ImportError: No module named mysql.connector when using cron-jb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large chunk of python code that outputs a final dataframe. I would like to:\n\nHave this script run in VSCode every morning\nSave this final table in my sql server database.\n\nOne thing to note is that the python code begins by accessing sql server to import two datasets that the rest of the code then alters into a final dataset. When accessing the sql server database in the beginning, it asks for my credentials. So my third question is:\n\nIs there a way, when automating the python script that I can have it automatically input my credentials or will I have to manually import this everytime?\n\nI am mostly looking for resources on this and if anyone has any helpful tips\/links I would be appreciative! I have tried looking for links but can't seem to find much.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":73379288,"Users Score":0,"Answer":"That seems unnecessary, just use a cron job or a scheduled task and run the script directly.\n\nSure, look into sqlite3 or SQLAlchemy or browse the pip repository.\n\nYou could just include them in your code or in a separate file.","Q_Score":0,"Tags":"python,sql-server","A_Id":73379353,"CreationDate":"2022-08-16T19:21:00.000","Title":"Automating Python Script and saving outputted table into sql server database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data that looks like this for IP addresses: for security reasons I am writing made up numbers here.\n\n\n\n\nSubnet 1\nSubnet 2\nSite\n\n\n\n\n5.22.128.0\n17\nTexas\n\n\n5.22.0.0\n17\nBoston\n\n\netc\netc\netc\n\n\n\n\nQuestion: Can I write a VBA or python code to do the below:\nto take each Subnet 1 and: if the third octet is 128 then add 127 rows below it and fill them as such:\n\n\n\n\nSubnet 1\nSubnet 2\nSite\n\n\n\n\n5.22.128.0\n17\nTexas\n\n\n5.22.129.0\n17\nTexas\n\n\n5.22.130.0\n17\nTexas\n\n\n\n\n.... all the way to:\n\n\n\n\nSubnet 1\nSubnet 2\nSite\n\n\n\n\n5.22.255.0\n17\nTexas\n\n\n\n\nAnd if the third octet is 0 then do the same thing but from 0 to 127. while keeping the other data intact (Site and Subnet 2) the same.\nI didn't really know where to begin so I don't have code but my thinking was:\neither:\nA. Change the decimals to commas to represent figures in millions then add a summation calc until it reaches certain numbers.\nB.Create two lists one from 0-127 and one from 128-255 and then append them to the values on the columns but I still don't know how to get multiple rows for it.\nI am fairly new but if there is anything wrong with the way the question is presented please let me know. 
- don't care if it is done through VBA or python as I can write both - Just need a direction as to how to start.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":73409469,"Users Score":0,"Answer":"Question: Can I write a VBA or python code to do the below\n\nAnswer Well, I don\u00b4t know if you can write it, but it's writeable :)\nIf you want to have the address in one column and work only with that column you will have to do some string manipulation in your code, having as reference the dots in the strings.\nOr you can have a column with each one of the octect and then concatenate them with the dots in another column. This way you won't have to do string manipulation or even code at all, maybe you can solve it only formulas.","Q_Score":0,"Tags":"python,excel,duplicates,ip,autofill","A_Id":73409649,"CreationDate":"2022-08-18T21:09:00.000","Title":"Is there a way in python or Excel to duplicate rows and add a value to a certain column - explanation below","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to build a REST API using a NoSQL backend with a Python based open source framework to build the API. This API would run in a cloud environment and the goal is for it to be cloud agnostic and have the ability to be deployed anywhere. It must have some abstraction for the backend database technology. I found that Django REST Framework is exactly what I'm looking for, but the Django ORM only supports RDBMS. In an attempt to enable NoSQL support with Django, it seems a few open source packages have been developed but those projects have been abandoned.\nI know it's technically possible to use Amazon DynamoDB or Azure Cosmos DB with Django REST Framework, but as it's not officially supported, it sounds like it would require custom code and deviating from standard configurations to get it to work.\n\nDoes anyone have an API running for Production use using a NoSQL backend with Django REST framework?\nWith Django REST framework, is it possible to abstract the backend database connections to support different NoSQL database types?\nIs a framework like Flask better suited for creating a REST API using this type of backend?\nAre there other REST frameworks available which may provide the functionality required if Django REST framework cannot meet these requirements?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":304,"Q_Id":73421051,"Users Score":1,"Answer":"I would go with flask + mongodb","Q_Score":1,"Tags":"python,rest,django-rest-framework,architecture,nosql","A_Id":73421061,"CreationDate":"2022-08-19T18:45:00.000","Title":"Building a REST API using a Python based open source framework and NoSQL backend","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm seeking experts' help to suggest\/recommend a better workaround to my use case below, (ODI is very new to me btw):\n\nDesired output: .txt files\nInput: an excel file with multiple sheets\nInput source: FTP server\nEnvironment to use: ODI\n\nI have an Excel file and I will need to extract the sheets into separate CSVs. 
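Roughly, the splitting itself only takes a few lines of pandas (a sketch, the file name is just an example):\nimport pandas as pd\n# sheet_name=None reads every sheet into a dict of DataFrames keyed by sheet name\nsheets = pd.read_excel('input_workbook.xlsx', sheet_name=None)\nfor name, df in sheets.items():\n    df.to_csv(f'{name}.csv', index=False)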
I managed to do this using a short python scripts.\nMy idea is to: ODI connect to the FTP > ODI run the python script > ODI read the CSVs and insert into MySQL db > MySQL export tables into .txt files.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":70,"Q_Id":73469208,"Users Score":1,"Answer":"From What you have mentioned , they seemed to be the minimum steps you have to go through to achieve your outcome.\nNone of them can be skipped.\n\nUsing OdiOSCommand you can run your script, which will convert your\nsheet to csv.\n\nThen you have to `reverse engineer, map to target.\n\nQuery script will used to write to a .txt file, run this under procedure . All of them under single\npackage.","Q_Score":0,"Tags":"python,excel,oracle-data-integrator","A_Id":73506337,"CreationDate":"2022-08-24T07:40:00.000","Title":"How to extract Excel sheets into multiple CSVs then transform into .txt file on Oracle Data Integrator (ODI)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a directory with roughly 100 excel files each ~50MB in size. Each file has multiple worksheets with inconsistent names.\nI would like to concatenate them together.\nWhat I have tried so far in python:\n\nconcatenate and append via pandas (computer runs out swap memory [10G], also tried multiprocessing)\nput the files in a local sqlite3 database (too many columns)\n\nThanks in advance\nLukas","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":73499153,"Users Score":0,"Answer":"Kourosh's answer is completely appropriate, I would only add for the first file in a directory argument header=True in the to_csv() function, and for each subsequent file we can stay with header=False.","Q_Score":0,"Tags":"python,excel","A_Id":73499420,"CreationDate":"2022-08-26T09:51:00.000","Title":"Efficient way to concatenate excel files","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently, I am writing a python script that handles excel data using OpenPYXL. I am trying to create an interface in excel that updates data in the spreadsheet cells in a cyclic way. I understand that I cannot write to an open excel file, as it gives me a permission error. Is there a way to bypass this without having to close the file and run the script every time new data is to appear?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":73529415,"Users Score":0,"Answer":"I'm afraid no, you must close files before writing.","Q_Score":0,"Tags":"python,excel","A_Id":73529439,"CreationDate":"2022-08-29T13:16:00.000","Title":"Writing data to Excel spread sheet while open","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI'm creating a Notion DB that will contain data about different analyzers my team uses (analyzer name, location, last time the analyzer sent data, etc.). 
Since I'm using live data I need to have a way to quickly update the data of all analyzers in the notion db.\nI'm currently using a python script to get the analyzers data and upload it to the Notion DB. Currently I read each row, get it's ID that I use to update the row's data - but this is too slow: it takes more than 30 seconds to update 100 rows.\nThe Question\nI'd like to know if there's a way to quickly update the data of many rows (maybe in one big bulk operation). The goal is perhaps 100 row updates per second (instead of 30 seconds).","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":254,"Q_Id":73609002,"Users Score":1,"Answer":"There are multiple things one could do here - sadly none of it will improve the updates drastically. Currently there is no way to update multiple rows, or to be more precise pages. I am not sure what \"read each row\" refers to, but you can retrieve multiple pages of a database at once - up to 100. If you are retrieving them one by one, this could be updated.\nSecondly, I'd like to know how often the analyzers change and if, will they be altered by the Python script or updated in Notion? If this does not happen too often, you might be able to cache the page_ids and retrieve the ids not every time you update. Sadly the last_edited_time of the database does not reflect any addition or removal of it's children, so simply checking this is not an option.\nThe third and last way to improve performance is multi-threading. You can send multiple requests at the same time as the amount of requests is usually the bottleneck.\nI know none of these will really help you, but sadly no efficient method to update multiple pages exists.","Q_Score":0,"Tags":"python,notion-api","A_Id":73612239,"CreationDate":"2022-09-05T11:56:00.000","Title":"Notion API quickly delete and repopulate entire DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a pretty amateur data science student and I am working on a project where I compared two servers in a team based game but my two datasets are formatted differently from one another. One column for instance would be first blood, where one set of data stores this information as \"blue_team_first_blood\" and is stored as True or False where as the other stores it as just \"first blood\" and stores integers, (1 for blue team, 2 for red team, 0 for no one if applicable)\nI feel like I can code around these difference but whats the best practice? should I take the extra step to make sure both data sets are formatted correctly or does it matter at all?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":73612523,"Users Score":0,"Answer":"Data cleaning is usually the first step in any data science project. It makes sense to transform the data into a consistent format before any further processing steps.","Q_Score":0,"Tags":"python,sql,pandas,data-science,data-analysis","A_Id":73620360,"CreationDate":"2022-09-05T16:51:00.000","Title":"About Data Cleaning","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When a PL\/Python procedure is executed, the python code is executed by a Python interpreter. 
My question is, is the Python interpreter running as a separate process, or is it a shared library that gets linked to the calling databases process?\nI'm concerned about what happens when we call something like plpy.execute(...). If the python interpreter is running as a separate process I imagine there would be a lot of overhead involved in passing the result of the sql query back to the python interpreter, which would require reading from a file or pipe.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":73643020,"Users Score":1,"Answer":"The language handler function (plpython3_call_handler()) loads the plpython3.so library into the PostgreSQL process, which is linked to libpython3.so. So the interpreter is loaded into the backend, it is not executed as a separate process (multiprocessing\/multithreading is not allowed in PostgreSQL client backends, with the exception of parallel workers).","Q_Score":1,"Tags":"postgresql,plpython","A_Id":73643241,"CreationDate":"2022-09-08T01:56:00.000","Title":"How is PL\/Python code executed by Postgresql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built an internal app that used django-safedelete. I was working fine for months, until i recently upgraded my distro, and tried to add a field to my model.\nI also upgraded my python modules, everything is up-to-date, and no errors during the upgrade.\nNow I cannot migrate anymore:\n\nif I \"makemigrations\" I get an error message \"django.db.utils.OperationalError: (1054, \"Unknown column 'gestion_ltqmappsetting.deleted_by_cascade' in 'field list'\")\"\n\nif I add a boolean \"deleted_by_cascade\" field in my ltqmappsetting table, then the \"makemigration\" works, but the \"migrate\" fails with \"MySQLdb.OperationalError: (1060, \"Duplicate column name 'deleted_by_cascade'\")\"\n\n\nI tried removing the field after makemigrations, but the migrate fails with the first error message.\nI also tried removing the \"migration\" operations in the 0087...migration.py file, but it does not have any impact.\nIs there anyway to update the migration file between the makemigrations and the migrate commands ?\nThanks a lot for any help on this.\njm","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":73666971,"Users Score":0,"Answer":"It's a wonder how I can actually look for answers for hours, then post on this site, and a few minutes later, I actually find the answer ...\nAnyway.\nIt looked like I was mistaken on the meaning of the second error message. 
It failed because a previous migration already create the deleted_by_cascade field in another table, not in the appsetting one.\nSo the steps to solve my issue where:\n\nmigrate migrate 0086 to remove the latest migrations\nmake sure the DB only had the deleted_by_cascade column in the appsetting table\nmakemigrations - this creates the 0087 migration to add deleted_by_cascade column in all tables\nedit the 0087 migration to not create the new column in the appsetting table\nrun makemigrations once again - - this creates the 0088 migration to add deleted_by_cascade column in the appsetting table\nmigrate 0087\nmigrate --fake 0088\n\nEt voila ...\nThere was probably something wrong in the safe_delete update.\nHope this can help others.\njm","Q_Score":0,"Tags":"python,mysql,django,django-models,django-migrations","A_Id":73667123,"CreationDate":"2022-09-09T20:00:00.000","Title":"django-safedelete 1.3.0 - Cannot migrate","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've a fresh install of Postgresql 14. I can get in with no problems on the local machine in the terminal, and remotely with pgAdmin 4.\nHowever, I have a Python script that passes a connection string to psycopg2 and sqlalchemy. Using exactly the same credentials and config, authentication fails.\nThis Python script works perfectly fine when pointed at a managed Postgres instance.\nThe Python outputs: sqlalchemy.exc.OperationalError: (psycopg2.OperationalError)\nFrom the Postgres logs: FATAL: 28P01: password authentication failed for user \"user\"\nThe Postgres user's password is stored in scram-sha-256. Any pointers welcome!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":73674173,"Users Score":0,"Answer":"Thank you very much to jjanes above, it was just a problem with special characters! I had an unhandled %.","Q_Score":0,"Tags":"python,postgresql,sqlalchemy,psycopg2","A_Id":73674388,"CreationDate":"2022-09-10T18:12:00.000","Title":"Postgresql Authentication Failing from Python Connection String","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In one huge Excel file, I have two different \"date formats\", for example 17.08.2022 and 2022-08-17.\nThese dates are all written in one column, I want to write all of them in the same format (preferably in this format, 17.08.2022) and rewrite them in the same column.\nHow should I do it?\nPlease have it in mind that I am not good at writing code :(.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":73687170,"Users Score":0,"Answer":"I guess the 17.08.2022 format are not actual dates but text. 
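If you would rather fix it in Python, a rough pandas sketch ('date' is a placeholder column name, and it assumes the column really is text containing only those two formats, as guessed above):\nimport pandas as pd\nfrom datetime import datetime\n\ndef parse_any(value):\n    # try the two formats seen in the column\n    for fmt in ('%d.%m.%Y', '%Y-%m-%d'):\n        try:\n            return datetime.strptime(str(value).strip(), fmt)\n        except ValueError:\n            pass\n    return pd.NaT\n\ndf = pd.read_excel('dates.xlsx')\ndf['date'] = pd.to_datetime(df['date'].map(parse_any)).dt.strftime('%d.%m.%Y')\ndf.to_excel('dates_fixed.xlsx', index=False)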
Anyway you can try to highlight the column -> Data -> Text to Columns -> Next -> Next -> Date: DMY > Finish.","Q_Score":0,"Tags":"python,excel,dataframe,csv,date","A_Id":73687404,"CreationDate":"2022-09-12T09:31:00.000","Title":"Converting multiple date formats into one formatin Excel with pthon","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to insert\/append into access a dataframe using pyodbc. However; when I run the code, I get an error: ProgrammingError: ('The SQL contains 21 parameter markers, but 1 parameter were supplied', 'HY000')\nmy sample code is: for row in tDjango: cursor.execute( 'INSERT INTO TDjango (Eid, Eventtype, Location, Lat, Lon, Created, TMCClosed,FirstArrival(min), PatrolArrival(min), TowArrival(min), LanesCleared(min), RoadwayCleared(min),Camera, DayofWeekOpened, DayofWeekClosed, sameDay, confirmClosed, confirmFirstAr, confirmPtrl, confirmTow, confirmLnClear) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)',tDjango) conn.commit() \nI\u2019m not entirely sure what I am missing in the SQL statement to make the error go away.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":73695824,"Users Score":0,"Answer":"Instead of using cursor.execute(), I used cursor.executemany().","Q_Score":0,"Tags":"python,ms-access,pyodbc","A_Id":73773742,"CreationDate":"2022-09-12T22:24:00.000","Title":"Parameter error for pyodbc insert\/apppend","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table in a postgresql database where there are 4 columns:\n\nid (serial)\ntotal (int)\ndone (int)\nstatus (bool).\n\nNow, I have some asyncronous processes that do some work and update this table. Basically, they add 1 to the done field, but I also want to update the status to True if total=done after upgrading done.\nHowever, these processes can have the same id, and therefore I should block all the transaction (Update done, check done=total and update status if required) to avoid any problem. How can I do this?\nI'm working with python3 and psycopg2.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":73716175,"Users Score":1,"Answer":"If you are using v12 or greater the define status as a generated column. status boolean generated always as (total = done). But make sure none of your DML sets or inserts a value for status","Q_Score":0,"Tags":"python,sql,postgresql,psycopg2","A_Id":73777256,"CreationDate":"2022-09-14T11:36:00.000","Title":"Block a row in sql after an update, and trigger a new update if necessary","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using ArangoDB 3.9.2 for search task. The number of items in dataset is 100.000. When I pass the entire dataset as an input list to the engine - the execution time is around ~10 sec, which is pretty quick. But if I pass the dataset in small batches one by one - 100 items per batch, the execution time is rapidly growing. In this case, to process the full dataset takes about ~2 min. 
Could you explain please, why is it happening? The dataset is the same.\nI'm using python driver \"ArangoClient\" from python-arango lib ver 0.2.1\nPS: I had the similar problem with Neo4j, but the problem was solved using transactions committing with HTTP API. Does the ArangoDB have something similar?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":73725440,"Users Score":0,"Answer":"Every time you make a call to a remote system (Neo4J or ArangoDB or any database) there is overhead in making the connection, sending the data, and then after executing your command, tearing down the connection.\nWhat you're doing is trying to find the 'sweet spot' for your implementation as to the most efficient batch size for the type of data you are sending, the complexity of your query, the performance of your hardware, etc.\nWhat I recommend doing is writing a test script that sends the data in varying batch sizes to help you determine the optimal settings for your use case.\nI have taken this approach with many systems that I've designed and the optimal batch sizes are unique to each implementation. It totally depends on what you are doing.\nSee what results you get for the overall load time if you use batch sizes of 100, 1000, 2000, 5000, and 10000.\nThis way you'll work out the best answer for you.","Q_Score":0,"Tags":"python,arangodb","A_Id":73739997,"CreationDate":"2022-09-15T03:52:00.000","Title":"Query execution time with small batches vs entire input set","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a difference between pandas sum() function and SQL SUM(...) function. I'm using tables with around 100k rows. My current test runs were not good. The runtime was always different with both being not predictable (problem might be my bad wifi...)\nIt will run on a server later, but maybe someone knows it already and I don't have to pay for my server now.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":73746573,"Users Score":0,"Answer":"It might be hard to get a clear answer without actual tests because it depends so much on what machines are used, what you are willing to pay for each part, ...\nHowever, aggregating the data in SQL gives you less network traffic, which can be valuable a lot of the time.","Q_Score":0,"Tags":"python,sql,pandas,sum","A_Id":73746660,"CreationDate":"2022-09-16T14:42:00.000","Title":"Pandas sum vs. SQL sum","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be the best approach to handle the following case with Django?\nDjango needs access to a database (in MariaDB) in which datetime values are stored in UTC timezone, except for one table that has all values for all of its datetime columns stored in local timezone (obviously different that UTC). This particular table is being populated by a different system, not Django, and for some reasons we cannot have the option to convert the timestamps in that table to UTC or change that system to start storing the values in UTC. The queries involving that table are read-only, but may join data from other tables. 
The table itself does not have a foreign key but there are other tables with a foreign key to that table. The table is very big (millions of rows) and one of its datetime columns is part of more than one indexes that help for making optimized queries.\nI am asking your opinion for an approach to the above case that would be as seamless as it can be, preferably without doing conversions here and there in various parts of the codebase while accessing and filtering on the datetime fields of this \"problematic\" table \/ model. I think an approach at the model layer, which will let Django ORM work as if the values for that table were stored in UTC timezone, would be preferable. Perhaps a solution based on a custom model field that does the conversions from and back to the database \"transparently\". Am I thinking right? Or perhaps there is a better approach?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":73817657,"Users Score":0,"Answer":"It is what it is. If you have different timezones then you need to convert different timezones to the one you prefer. Plus, there is no such thing as for reasons we cannot have the option to convert the timestamps in that table to UTC - well, too bad for you, should have thought about that, now you need to deal with it (if that is the case, which it is not - this is \"programming\", after all. Of course everything can be changed)","Q_Score":0,"Tags":"python,django,datetime,timezone","A_Id":73817922,"CreationDate":"2022-09-22T16:01:00.000","Title":"How to best handle the access with Django to a database that has some DateTime fields stored in local timezone while others are stored in UTC?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m extracting a PDS excel file using python jupyter but i could not able to extract the checked checkbox. I just want to extract the value \u201cSingle\u201d if Single is checked or the value \u201cMarried\u201d if Married is checked.\nFor Example : Marital Status: Single \u2610 Married \u2612\nOutput: Marital Status: Married","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":73957495,"Users Score":0,"Answer":"You can see the \"TRUE\" or \"FALSE\" values like that:\nright click on your check box > Format Control > Control > Cell Link\nThe selected cell will change as per the status of the assigned checkbox","Q_Score":0,"Tags":"python,excel,checkbox,jupyter-notebook","A_Id":73958159,"CreationDate":"2022-10-05T08:07:00.000","Title":"Extract value of checked checkboxes in an excel file using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"hello I would like to know if it is possible to store data in a database with GraphQL using python without going through mongodb or sql?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":73991276,"Users Score":0,"Answer":"Yes. GraphQL doesn't care about the underlying database. 
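For instance, with a Python GraphQL library such as graphene a resolver can simply return whatever your own code loads (a minimal sketch; the file name and field are made up):\nimport json\nimport graphene\n\nclass Query(graphene.ObjectType):\n    books = graphene.List(graphene.String)\n\n    def resolve_books(root, info):\n        # no database involved: the data comes straight from a JSON file\n        with open('books.json') as f:\n            return [b['title'] for b in json.load(f)]\n\nschema = graphene.Schema(query=Query)\nprint(schema.execute('{ books }').data)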
You can connect to a csv file, a JSON file, a REST interface whatever.","Q_Score":0,"Tags":"python-3.x,graphql","A_Id":73993231,"CreationDate":"2022-10-07T18:24:00.000","Title":"is it possible to use graphql without store data in data base?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Data base tables get deleted on heroku, even i am using cloudinary storage for static-files. I am using default database sqlite3.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17,"Q_Id":73994812,"Users Score":0,"Answer":"Now i found out that heroku cleans the database tables of sqlite3. The solution is to use a different database, like postgreSQL.","Q_Score":0,"Tags":"python,django,database,heroku","A_Id":74002363,"CreationDate":"2022-10-08T06:12:00.000","Title":"Data base tables get deleted on heroku in django app","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python code wherein I take certain data from an excel and work with that data. Now I want that at the end of my code in the already existing Excel table a new column named XY is created. What would be your approach to this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":74082468,"Users Score":0,"Answer":"The easiest way get the right code is to record a macro in Excel. Go to your table in Excel, command 'Record macro' and manually perform required actions. Then command 'Stop recording' and go to VBA to discover the code. Then use the equivalent code in your Python app.","Q_Score":0,"Tags":"python,excel,selenium","A_Id":74082513,"CreationDate":"2022-10-15T19:40:00.000","Title":"Insert new Column in Excel (Python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I deleted my migrations folder accidently so to make things work again I dropped all tables in my database as well. But now even tho python manage.py makemigrations is working, python manage.py migrate still says 'No migrations to apply' why?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":74083479,"Users Score":0,"Answer":"Try these commands :\n\npy manage.py migrate zero\npy manage.py makemigrations\npy manage.py migrate","Q_Score":2,"Tags":"python,django","A_Id":74083591,"CreationDate":"2022-10-15T22:36:00.000","Title":"Deleted the migrations folder accidently so I dropped all my tables in database but still not working","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm a newbie in python and I got this issue. I have a column in my database which has type is datetime and precision is 23 (Ex: 2022-08-22 11:18:00.000)\nWhen I retrive data with sqlalchemy, it seem convert to python datetime (2022-08-22 11:18:00). How can I avoid this and get original data? 
I have no idea now\nThank you for reading my question","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":74085671,"Users Score":0,"Answer":"Maybe there is a presentation issue - zeroed milliseconds are not show. Try with a different value.","Q_Score":0,"Tags":"python,sql-server,sqlalchemy","A_Id":74098968,"CreationDate":"2022-10-16T08:42:00.000","Title":"How to avoid loss precision for datetime type in sqlalchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python job that connects to Oracle database to extract data from multiple tables, generate both a flat file and export to another database. This job is intended to be scheduled in a server for nightly runs.\nCurrently, I have a configuration file that contains username, password, host, service_name, dbname, etc\u2026\nWould it be possible to create an encrypted string that can be used in place of password, which is specifically meant for this job and set of tables.\nMy investigations led me to maskpass(), cryptography(), etc... However, I still end up specifying that password somewhere.\nI also discovered OAuth 2.0 but not sure if I can stand up that service, link that to Oracle database to generate an access token that can be used in place of a password.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":74169490,"Users Score":0,"Answer":"To prevent storing password on the server, do this:\n\nWrite a script that takes the plain text password and generates the encrypted password. You store this script and the associated key and salt in a secure place (like a password protected git repo), not on the server.\nYou specify the encrypted password generated above in the decrypt function and then connect to the database.\n\nYou can use any of the Python encryption libraries like cryptography or rsa to encrypt and decrypt the password.","Q_Score":0,"Tags":"sql,python-3.x,oracle,oauth-2.0","A_Id":74169569,"CreationDate":"2022-10-23T07:53:00.000","Title":"Creating an encrypted password","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Google Colab Notebook that is using psycopg2 to connect with a free Heroku PostgreSQL instance. I'd like to share the notebook with some colleagues for educational purposes to view and run the code.\nThere is nothing sensitive related to the account \/ database but would still like to hide the credentials used to make the initial connection without restricting their access.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":78,"Q_Id":74205261,"Users Score":1,"Answer":"My work around was creating a Python module that contained a function who performed the initial connection with credentials. 
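In outline the module looked something like this (a sketch; the host and credentials are placeholders):\n# heroku_conn.py  (later compiled to heroku_conn.pyc)\nimport psycopg2\n\ndef get_connection():\n    # the credentials live only in this module, not in the notebook\n    return psycopg2.connect(\n        host='ec2-xx-xxx-xx-xx.compute-1.amazonaws.com',\n        dbname='d1abcdefg',\n        user='someuser',\n        password='supersecret',\n        sslmode='require',\n    )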
I converted the module into a binary .pyc, uploaded it to Google Drive, downloaded the binary into the Notebook's contents via shell command then used it as an import.\nIt obviously isn't secure but provides the obfuscation layer I was looking for.","Q_Score":1,"Tags":"python,google-colaboratory","A_Id":74212951,"CreationDate":"2022-10-26T09:12:00.000","Title":"How can I hide non-sensitive credentials on a Google Colab Notebook?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a database with production data in multiple tables. I want to analyze the history of the units produced and create a timeline. I am doing this in Python (jupyter lab notebook) and using a cloud based MySQL 8.0 database. Neither of the IDs (both strings and integers) is the primary ID in the database and the IDs cannot be assumed to be sequential. My current strategy is to\n\nFirst get the IDs from the first event.\nDo a new query with a WHERE IN [previous IDs] cluase.\nExtract ID's from 2.\nRepeat 2-4 until the final stage.\n\nThe IDs are not primary keys in any table. This strategy isn't working as in one stage I have over 800 000 IDs that goes into WHERE IN clause and I can't execute it. Bonus question: Should it work, or is there a limitation in how the query can be formed (such as number of characters or length etc.)?\nWhat I wonder is how to execute this? Is there a way to perform this in a better SQL query or should I split this into multiple queries? Can I use some Python tricks to kind of stream the data in multiple parts?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":74235148,"Users Score":4,"Answer":"I have over 800 000 IDs that goes into WHERE IN clause\n\nThat's way, way too many for IN .\nThe best way to handle this kind of volume is to use a temporary table with CREATE TEMPORARY TABLE and join the tables instead of using IN. A temporary table can have an index so that can help speed things up for the join.\nThis may seem like a very heavy operation but actually it's not; mysql is very good at this kind of thing.","Q_Score":0,"Tags":"python,mysql,sql,where-clause","A_Id":74235230,"CreationDate":"2022-10-28T12:04:00.000","Title":"How to efficiently query with a long \"WHERE IN [list]\" clause?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my project, I used pandas and pymysql to read the database. 
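As a concrete illustration of the temporary-table answer above (join instead of a huge WHERE IN), here is a minimal sketch; the connection details, table and column names are placeholders, and pymysql is only one possible driver.

import pymysql

conn = pymysql.connect(host="host", user="user", password="secret", database="mydb")
with conn.cursor() as cur:
    cur.execute("CREATE TEMPORARY TABLE tmp_ids (item_id VARCHAR(32) PRIMARY KEY)")
    ids = [("id_1",), ("id_2",)]                   # in reality, the ~800k IDs
    # insert in batches; executemany rewrites this into multi-row INSERTs
    cur.executemany("INSERT INTO tmp_ids (item_id) VALUES (%s)", ids)
    # join against the indexed temporary table instead of WHERE item_id IN (...)
    cur.execute("""
        SELECT e.*
        FROM events e
        JOIN tmp_ids t ON t.item_id = e.item_id
    """)
    rows = cur.fetchall()
conn.close()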
The default setting is that pymysql will automatically disconnect after 8 hours if you do not perform any operation after creating a link.\nI used close () to close the link, but the database shows that the link exists and has not been operated for more than 80000 seconds\npython 3.10.5\nI tried to close it with close(), but it didn't seem to work","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":74258368,"Users Score":0,"Answer":"I gauss you can try to change \"wait_timeout\" in mysql system setting which default is 28800s","Q_Score":0,"Tags":"python,pandas,pymysql","A_Id":74258417,"CreationDate":"2022-10-31T03:04:00.000","Title":"How to change the default link time of pymysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my project, I used pandas and pymysql to read the database. The default setting is that pymysql will automatically disconnect after 8 hours if you do not perform any operation after creating a link.\nI used close () to close the link, but the database shows that the link exists and has not been operated for more than 80000 seconds\npython 3.10.5\nI tried to close it with close(), but it didn't seem to work","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":74258368,"Users Score":0,"Answer":"To set the timeout for pymysql read and write operations, use the following parameters that can be passed to pymysql.connections.Connection(..) when you establish the connection:\nread_timeout \u2013 The timeout for reading from the connection in seconds (default: None - no timeout)\nSimilarly, you have:\nwrite_timeout \u2013 The timeout for writing to the connection in seconds (default: None - no timeout)","Q_Score":0,"Tags":"python,pandas,pymysql","A_Id":74260244,"CreationDate":"2022-10-31T03:04:00.000","Title":"How to change the default link time of pymysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working a specific algorithm related to matching engine using python and I was wondering where to store the data (orderbook)?\nIs there a fast way (read and write) data to a storage rather than the database? taking in the consideration the matching engine has to be fast in reading and writing the data in the stored place.\nI tried to save the data in a usual database (postgres) but it seems to be slow in writing, reading and updating","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":72,"Q_Id":74322067,"Users Score":1,"Answer":"I was involved with a financial matching engine once. The only way we could manage the volume of data was to forego the dbms. Live data was kept in memory, and an append-only log of order book changes was kept in flat files in the local file system. Actual trades were stored in a proper (beefy) db, but they were orders of magnitude fewer than orderbook changes.\nOn restart, the in-memory order book would be reconstructed from the log data. This was S.L.O.W. In normal operation, there was no need to read from storage. 
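Pulling the two pymysql answers above together, a minimal sketch of opening the connection with explicit timeouts and closing it deterministically might look like this; the host, credentials and query are placeholders.

import pandas as pd
import pymysql

conn = pymysql.connect(
    host="db.example.com",
    user="user",
    password="secret",
    database="mydb",
    connect_timeout=10,    # seconds allowed to establish the connection
    read_timeout=60,       # seconds allowed per read before raising
    write_timeout=60,      # seconds allowed per write before raising
)
try:
    df = pd.read_sql("SELECT NOW()", conn)
finally:
    conn.close()           # frees the client side; the server session ends when the
                           # server notices the disconnect, or at the latest after wait_timeout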
We did sharding to spread orderbooks around to keep the memory requirements manageable, and had multiple parallel instances of each shard to protect against the slow restarts.","Q_Score":0,"Tags":"python,database,postgresql,matching,cryptocurrency","A_Id":74322690,"CreationDate":"2022-11-04T19:26:00.000","Title":"Where am I supposed to store the matching engine data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to conncect MSSQL DB to python, and the DB's password has included a character '#'.\nIs there any way to use '#' in string, not changing the password?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":74355219,"Users Score":0,"Answer":"You can use the character '#' in a string in python programming language by using the escape character ''.\nFor example:\nprint(\"#include \")\nYou can use the character '#' in a string in Python by putting it inside quotes:\nmy_string = \"This is a string with a # in it","Q_Score":0,"Tags":"python,sql-server","A_Id":74355250,"CreationDate":"2022-11-08T02:56:00.000","Title":"How to use a character '# in string?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have got an excel file from work which I amended using pandas. It has 735719 rows \u00d7 31 columns, I made the changes necessary and allocated them to a new dataframe. Now I need to have this dataframe in an Excel format. I have checked to see that in jupyter notebooks the ont_dub works and it shows a dataframe. So I use the following code ont_dub.to_excel(\"ont_dub 2019.xlsx\") which I always use.\nHowever normally this would only take a few seconds, but now it has been 40 minutes and it is still calculating. Sidenote I am working in a onedrive folder from work, but that hasn't caused issues before. Hopefully someone can see the problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":74366492,"Users Score":0,"Answer":"Usually, if you want to save such high amount of datas in a local folder. You don't utilize excel. If I am not mistaken excel has a know limit of displayable cells and it wasnt built to display and query such massive amounts of data (you can use pandas for that). You can either utilize feather files (a known quick save alternative). 
Or csv files, which are built for this sole purpose.","Q_Score":0,"Tags":"python,excel,pandas,dataframe,jupyter-notebook","A_Id":74366619,"CreationDate":"2022-11-08T20:00:00.000","Title":"Writing dataframe to Excel takes extremely long","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running a generic (because I don't know enough to do anything beyond the basics) Flask-SQLAlchemy 3.0.2 setup on Python 3.10.\nNot sure what happened, but at some point it started throwing this error every time I tried to query the db:\nAttributeError: module 'psycopg2' has no attribute 'paramstyle'\nI'm doing package management through poetry and SQLAlchemy 1.4.44 really wanted to use psycopg2 2.7, which I guess pre-dates paramstyle.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":342,"Q_Id":74513831,"Users Score":0,"Answer":"I uninstalled psycopg2 (and removed its requirement from the poetry lock file), and installed psycopg2-binary 2.9.5 manually. Now it works.","Q_Score":1,"Tags":"python,flask,sqlalchemy,flask-sqlalchemy,psycopg2","A_Id":74513832,"CreationDate":"2022-11-21T03:11:00.000","Title":"Flask-SQLAlchemy raising: AttributeError: module 'psycopg2' has no attribute 'paramstyle'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to add different types of files (CSV, XML, xlsx, etc.) to the database (Postgresql). I know how I can read it via pandas, but I have some issues with adding this to the database.\nWhat libraries do I need to use? And does it need to convert them into one format?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":74515976,"Users Score":1,"Answer":"Read files with pandas:\ncsv_df = pd.read_csv('file.csv')\nxml_df = pd.read_xml('file.xml')\nxlsx_df = pd.read_excel('file.xlsx')\n\nAdd tables in db with columns like in your file\n\nAdd files to db\nxlsx_df.to_sql('table_name', engine, if_exists='replace', index=False)","Q_Score":1,"Tags":"python,xml,postgresql,csv,xlsx","A_Id":74569708,"CreationDate":"2022-11-21T08:21:00.000","Title":"How to add different type of files in postgresql on Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running datapoint in Pycharm on a Mac. When I am connecting to the Database via the Pycharm Terminal, I can execute the code. Also, using the Pycharm Python Console, I am able to connect to the database. However, here, when I want to execute code or have a look at tables, it raises the error \"Datajoint Lost Connection Error (Server connection lost due to an interface error)\". Does anyone know, why this might be\/ How to solve this problem?\nI have tried to connect to other datajoint databanks, but it raises the same error.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":74520847,"Users Score":0,"Answer":"While I'm less familiar with PyCharm, it is the case with Jupyter\/IPython terminals that the connection will lapse after a timeout. 
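The to_sql steps listed in the Postgres answer above assume an `engine` object that is never defined; a minimal sketch with SQLAlchemy might look like this (the connection URL, file names and table names are placeholders).

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/mydb")

csv_df = pd.read_csv("file.csv")
xlsx_df = pd.read_excel("file.xlsx")      # pd.read_xml("file.xml") works the same way

# pandas creates (or replaces) the table and infers the column types
csv_df.to_sql("csv_table", engine, if_exists="replace", index=False)
xlsx_df.to_sql("xlsx_table", engine, if_exists="replace", index=False)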
By restarting the kernel, you can re-establish this connection.\nIf this is a consistent issue with certain time-intensive autopopulate tables, I would recommend breaking down make functions into separate pieces across multiple tables.","Q_Score":0,"Tags":"python,server,interface,connection,datajoint","A_Id":75290823,"CreationDate":"2022-11-21T14:57:00.000","Title":"Datajoint Lost Connection Error (Server connection lost due to an interface error)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am connecting to some SQL table through sqlserver within Python environment. I have an SQL table like this one but instead of 6 rows it has thousands of rows with several combinations of deal names and Loan IDs. Loan IDs follow a specific pattern with four digits followed by underscore and then the actual Loan ID.\n\n\n\n\ndeal_name\nLOAN_ID\n\n\n\n\nAAAAAAAAA\n0001_LX3333\n\n\nAAAAAAAAA\n0001_LX4444\n\n\nBBBBBBBBB\n0221_LX3333\n\n\nBBBBBBBBB\n0001_LX4444\n\n\nCCCCCCCCC\n4401_LX3333\n\n\nCCCCCCCCC\n0001_LX4444\n\n\n\n\nI would like to select rows from this table based on a Python list of loan IDs (~1,000 entries) without the prefix (i.e., LX3333, LX4444, etc) which is not fixed and is being updated every month. If loan IDs were fixed I could use some LIKE statement, but that is not possible as loan IDs are updated and they are thousands. Is there way to provide a list of loan IDs and then look into the SQL table using some kind of LIKE statement?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74532823,"Users Score":0,"Answer":"Thanks for your responses. I managed to resolve this issue by doing the following:\nI first extracted the loan id from the given table by doing:\nRIGHT(unique_id, 8) AS id\nI then looked-up to the given list of IDs by doing:\nWHERE id IN {tuple(loan_idlist)}","Q_Score":0,"Tags":"python,sql","A_Id":74536209,"CreationDate":"2022-11-22T12:43:00.000","Title":"Select rows from SQL table based on multiple LIKE","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a DynamoDB with hashes as UserIDs and set as partition key.\nI want to know whether an Item exists in the table or not.\nI gonna pass an array of User-Hashes. Each Hash in this array should be checked whether it exists or not.\nI already found a solution with GetItem. But that would mean, that i have to loop over all the User-Hashes in the array, right?\nDoes anybody has a solution how to do this without looping? Looping takes too much of the performance.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":25,"Q_Id":74629090,"Users Score":2,"Answer":"There is no shortcut here. 
You could do parallel (multi-threaded client) calls to reduce the overall latency.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,for-loop,amazon-dynamodb","A_Id":74661425,"CreationDate":"2022-11-30T14:04:00.000","Title":"Checking if an array of items exist in a DynamoDB Python without looping","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have the following requirements: i need to process per day around 20.000 elements (lets call them baskets) which generate each between 100 and 1.000 records (lets call them products in basket). A single record has about 10 columns, each row has about 500B - 1KB size (in total).\nThat means, that i produce around 5 to max. 20 Mio. records per day.\nFrom analytical perspective i need to do some sum up, filtering, especially show trends over multiple days etc.\nThe solution is Python based and i am able to use anything Hadoop, Microsoft SQL Server, Google Big Query etc. I am reading through lots of articles about Avro, Parquet, Hive, HBASE, etc.\nI tested in the first something small with SQL Server and two tables (one for the main elements and the other one the produced items over all days). But with this, the database get very fast quite large + it is not that fast when trying to acess, filter, etc.\nSo i thought about using Avro and creating per day a single Avro file with the corresponding items. And when i want to analyse them, read them with Python or multiple of them, when i need to analyse multiple of them.\nWhen i think about this, this could be way to large (30 days files with each 10 mio. records) ...\nThere must be something else. Then i came aroung HIVE and HBASE. But now i am totally confused.\nAnyone out there who can sort things in the right manner? What is the easiest or most general way to handle this kind of data?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32,"Q_Id":74655522,"Users Score":0,"Answer":"If you want to analyze data based on columns and aggregates, ORC or Parquet are better. If you don't plan on managing Hadoop infrastructure, then Hive or HBase wouldn't be acceptable. I agree a SQL Server might struggle with large queries... Out of the options listed, that narrows it down to BigQuery.\nIf you want to explore alternative solutions in the same space, Apache Pinot or Druid support analytical use cases.\nOtherwise, throw files (as parquet or ORC) into GCS and use pyspark","Q_Score":0,"Tags":"python,hive,hbase,parquet,avro","A_Id":74677926,"CreationDate":"2022-12-02T12:05:00.000","Title":"Avro, Hive or HBASE - What to use for 10 mio. records daily?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm restricted to a PostgreSQL as 'model storage' for the models itself or respective components (coefficients, ..). Obviously, PostgreSQL is far from being a fully-fledged model storage, so I can't rule out that I have to implement the whole model training process in Java [...].\nI couldn't find a solution that involves a PostgreSQL database as intermediate storage for the models. Writing files directly to the disk\/other storages isn't really an option for me. 
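A minimal sketch of the parallel get_item idea from the DynamoDB answer above; the table name, key attribute and worker count are assumptions. (boto3 also exposes batch_get_item, which reads up to 100 keys per request and can be combined with the same threading.)

import boto3
from concurrent.futures import ThreadPoolExecutor

table = boto3.resource("dynamodb").Table("users")

def exists(user_hash):
    # presence of "Item" in the response means the key is in the table
    return user_hash, "Item" in table.get_item(Key={"UserID": user_hash})

user_hashes = ["hash-1", "hash-2", "hash-3"]       # the array of user hashes passed in
with ThreadPoolExecutor(max_workers=16) as pool:
    found = dict(pool.map(exists, user_hashes))    # maps each hash to True/False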
I considered calling Python code from within the Java application but I don't know whether this would be an efficient solution for subsequent inference tasks and beyond [...]. Are there ways to serialize PMML or other formats that can be loaded via Java implementations of the algorithms? Or ways to use the model definitions\/parameters directly for reproducing the model [...]?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":74656521,"Users Score":0,"Answer":"Using PostgreSQL as dummy model storage:\n\nTrain a model in Python.\nEstablish PostgreSQL connection, dump your model in Pickle data format to the \"models\" table. Obviously, the data type of the main column should be BLOB.\nAnytime you want to use the model for some application, unpickle it from the \"models\" table.\n\nThe \"models\" table may have extra columns for storing the model in alternative data formats such as PMML. Assuming you've used correct Python-to-PMML conversion tools, you can assume that the Pickle representation and the PMML representation of the same model will be functionally identical (ie. making the same prediction when given the same input). Using PMML in Java\/JVM applications is easy.","Q_Score":0,"Tags":"python,java,machine-learning,xgboost,lightgbm","A_Id":74659868,"CreationDate":"2022-12-02T13:28:00.000","Title":"How to save XGBoost\/LightGBM model to PostgreSQL database in Python for subsequent inference in Java?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i need to shuffle a small list (100 elements) in my Redis DB with Python.\nOr is it easier to do the shuffling locally and then save it to the DB?\nIs it possible?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":74692903,"Users Score":1,"Answer":"There is no Redis list command to shuffle the order, so you'll need to shuffle it using Python (likely you can use the random.shuffle method) and then store that shuffled list in Redis.","Q_Score":0,"Tags":"python,redis","A_Id":74745930,"CreationDate":"2022-12-05T18:39:00.000","Title":"Shuffle a small list in Redis DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data pipeline which parses, cleans and creates a data file with a few thousand rows. I need to move this data into mySQL into different tables. New data comes in every hour and my pipeline generates a new data file. 
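For the Redis answer above, a minimal sketch of shuffling the small list client-side and writing it back; the key name and connection settings are placeholders.

import random
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

items = r.lrange("my_list", 0, -1)     # read the whole (small) list
random.shuffle(items)                  # shuffle locally in Python

pipe = r.pipeline()                    # replace the list in a single round trip
pipe.delete("my_list")
pipe.rpush("my_list", *items)
pipe.execute()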
Currently I am inserting\/updating mySQL tables row by row iterating the data file.\nI wanted to ask, is there a more efficient way to insert this data in mySQL?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":74732926,"Users Score":0,"Answer":"I'd suggest one of the following approach\n\nWhile parsing, do not insert data in the table, create a bulk query that will inert batches of data and execute it every X rows (depending on your pipeline size)\nINSERT INTO table (id, x)\nVALUES\n(id1, x1),\n(id2, x2)...\n\nDump your data into CSV and import resulting CSV file using LOAD DATA INFILE query","Q_Score":0,"Tags":"python,mysql,sql","A_Id":74733041,"CreationDate":"2022-12-08T15:50:00.000","Title":"Inserting data into MySQL from batch process","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an xlsx file where there are hyphens for some values. Upon clicking on the hyphens in the Excel sheet, they turn into 0 I noticed, too.\nI am bringing this xlsx file into Python and even if I put na_values='-'... etc. at the very beginning of reading the file (pd.read_excel), Python will not recognize the hyphens by anything else other than 0. For my use case, this is wrong.\nHow can I make these hyphens into NANs?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":74746155,"Users Score":1,"Answer":"This seems to be an Excel caused problem. Excel often formats zeros as \"-\" in specific formats, such as currency. If they are showing up as 0s in Excel when you click them, then they are actually 0s, not the display format, e.g., \"-\".\nThis raises the question, are the dashes distinct from what you believe are actual true 0s? If so, this must be distinguished somehow. If not and all 0s are actually NaN, replace the 0s with NaN.\nHowever, from what you've shared thus far, it sounds like you may have fallen victim to someone else's poorly formatted Excel workbook.","Q_Score":1,"Tags":"python,excel,pandas,nan,xlsx","A_Id":74746241,"CreationDate":"2022-12-09T17:01:00.000","Title":"Python recognizing hyphens (-) as 0, want NANs though","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I get this error \"ImportError: The 'pyparsing' package is required\" after trying to run .py file with from google.cloud import bigquery line. Import was working before and is still working in the Jupyter Notebook or in Ipython.\nI looked at existing options here and tried:\n\npip install pyparsing\ndowngrade setuptools\nuninstall pyparsing and setuptools and installing them back\nuninistall and purge pip and install it back\n\nDoes anyone have suggestions? Thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":74799058,"Users Score":1,"Answer":"I found the problem. It is silly, but happens to me from time to time. Do not name files in your project like - html.py =) . It was in one of the folders of my project. Really annoying, but nevertheless, hope it will help someone. 
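To illustrate the batched-INSERT option from the MySQL pipeline answer above, here is a minimal executemany sketch; the table, columns and connection details are placeholders, and LOAD DATA INFILE remains the faster route for very large files.

import pymysql

rows = [(1, "a"), (2, "b"), (3, "c")]   # parsed records from the hourly data file
conn = pymysql.connect(host="host", user="user", password="secret", database="mydb")
with conn.cursor() as cur:
    # one multi-row statement per batch instead of one round trip per row
    cur.executemany("INSERT INTO items (id, x) VALUES (%s, %s)", rows)
conn.commit()
conn.close()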
Maybe you have same problem with different file name, but look up for files with common use names!)","Q_Score":0,"Tags":"python,google-bigquery,setuptools,pyparsing","A_Id":74803188,"CreationDate":"2022-12-14T13:32:00.000","Title":"Bigquery import asks for pyparsing in shell run","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have configured a DMS migration instance that replicates data from Mysql into a AWS Kinesis stream, but I noticed that when I process the kinesis records I pick up duplicate records.This does not happen for every record.\nHow do I prevent these duplicate records from being pushed to the kinesis data stream or the S3 bucket?\nI'm using a lambda function to process the records, so I thought of adding logic to de-duplicate the data, but I'm not sure how to without persisting the data somewhere. I need to process the data in real-time so persisting the data would not be idle.\nRegards\nPragesan","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":74800371,"Users Score":0,"Answer":"I added a global counter variable that stores the pk of each record,so each invocation checks the previous pk value,and if it is different I insert the value.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,amazon-kinesis,dms","A_Id":74802789,"CreationDate":"2022-12-14T15:13:00.000","Title":"AWS DMS inserts duplicate records into kinesis and S3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I just want to read a excel file in Sharepoint, but with no authentication.\nRead a sharepoint excel file using the file link, without a authentication.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":74816606,"Users Score":0,"Answer":"If you don't need to authenticate (or program the authentication in), to download, you can try requests.get(url=\"link\")\nor you could use selenium, to browse the website, and download the file.\nAnd then you can open it with pandas.","Q_Score":0,"Tags":"python,excel,pandas,authentication,sharepoint","A_Id":74817350,"CreationDate":"2022-12-15T19:28:00.000","Title":"Python - how to read Sharepoint excel sheet without authentication","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an ETL pipeline that extracts data from several sources and stores it within a database table (which has a fixed schema).\nI also have a separate FASTAPI service that allows me to query the database through a REST endpoint, which is called to display data on the frontend (React TS).\nThe issue now is that my ETL Pipeline, FASTAPI service, and frontend all have a separate version of the schema, and in the case where the data schema needs to be changed, this change has to be done to the schema specifications on all 3 services.\nI have thought about creating a python package that contains this schema, but this can only be shared between the services that uses Python, and my frontend still has to keep its own version of the schema.\nIs there some sort of \"schema 
service\" that I should be having? What can I do to reduce this coupling?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":82,"Q_Id":74911490,"Users Score":2,"Answer":"For fastAPI + frontend => use swagger, where your schema auto generated for frontend based on your pydantic models (for more consistency you can add version to each API endpoint and change version on schema changes at this endpoint)\nFor fastAPI + database + elt => in our company we use 'mono repository'. One git repo with ORM database schema in libs folder + folder for fastAPI microservice (with Dockerfile for FastAPI app) + folder for etl service (with enother Dockerfile) and in one commit you can change consistently elt script + fastAPI app + add your database migrations.","Q_Score":3,"Tags":"python,database,architecture","A_Id":74993026,"CreationDate":"2022-12-25T04:20:00.000","Title":"Sharing Schema across different services","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a use case where I need to read tables from MS access file (.mdb or .accdb) which is placed on AWS s3 bucket and converting it into csv or excel file in AWS lambda function and again upload the converted file to s3 bucket.\nI got the ways through pyodbc library but it's not working on AWS cloud especially when the file is placed on s3 bucket.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":74926667,"Users Score":0,"Answer":"That's because the S3 bucket isn't an SMB file share.\nDownload the database file to a SMB file server - a server running Windows Server or Linux with Samba - and access the file at that location.","Q_Score":0,"Tags":"python,ms-access,amazon-s3,aws-lambda,aws-glue","A_Id":74926703,"CreationDate":"2022-12-27T07:26:00.000","Title":"Reading .mdb or .accdb file from s3 bucket in AWS lambda function and converting it into excel or csv using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on a pathfinding project that use topographic data of huge areas.\nIn order to reduce the huge memory load, my plan is to pre-process the map data by creating nodes that are saved in a PostgresDB on start-up, and then accessed as needed by the algorithm.\nI've created 3 docker containers for that, the postgres DB, Adminer and my python app.\nIt works as expected with small amount of data, so the communications between the containers or the application isn't a problem.\nThe way it works is that you give a 2D array, it takes the first row, convert each element in node and save it in the DB using an psycopg2.extras.execute_value before going to the second row, then third...\nOnce all nodes are registered, it updates each of them by searching for their neighbors and adding their id in the right column. That way it takes longer to pre-process the data, but I have easier access when running the algorithm.\nHowever, I think the DB have trouble processing the data past a certain point. 
The map I gave comes from a .tif file of 9600x14400, and even when ignoring useless\/invalid data, that amount to more than 10 millions of nodes.\nBasically, it worked quite slow but okay, until around 90% of the node creation process, where the data stopped being processed. Both python and postgres container were still running and responsive, but there was no more node being created, and the neighbor-linking part of the pre-processing didn't start either.\nAlso there were no error message in either sides.\nI've read that the rows limit in a postgres table is absurdly high, but the table also become really slow once a lot of elements are in it, so could it be that it didn't crash or freeze, but just takes an insane amount of time to complete the remaining node creations request?\nWould reducing the batch size even more help in that regard?\nOr would maybe splitting the table into multiple smaller ones be better?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":74927786,"Users Score":1,"Answer":"My queries and psycopg function I've used were not optimized for the mass inserts and update I was doing.\nThe changes I've made were:\n\nReduce batch size from 14k to 1k\nMaking a larger SELECT queries instead of smaller ones\nCreating indexes on importants columns\nChanging a normal UPDATE query to the format of an UPDATE FROM with also an executing_value instead of cursor.execute\n\nIt made the execution time go from around an estimated 5.5 days to around 8 hours.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":74949834,"CreationDate":"2022-12-27T09:46:00.000","Title":"How can I optimize postgres insert\/update request of huge amount of data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create two python programs namely A and B. A will access 'test.xlsx'(excel file), create a sheet called 'sheet1' and write to 'sheet1'. Python program B will access 'test.xlsx'(excel file), create a sheet called 'sheet2' and write to 'sheet2' simultaneously.\nIs it possible to do the above process?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":74986033,"Users Score":0,"Answer":"Generally operation of opening a file performed on an object is to associate it to a real file. An open file is represented within a program by a stream and any input or output operation performed on this stream object will be applied to the physical file associated to it.\nThe act of closing the file (actually, the stream) ends the association; the transaction with the file system is terminated, and input\/output may no longer be performed on the stream. Python doesn't flush the buffer\u2014that is, write data to the file\u2014until it's sure you're done writing, and one way to do this is to close the file. 
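The node-creation answer above mentions execute_values and an UPDATE ... FROM rewrite without showing them; here is a minimal hedged sketch with psycopg2, where the table and column names are invented for illustration.

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=pathfinding user=postgres")
with conn.cursor() as cur:
    # batched insert: one statement per page_size rows
    nodes = [(1, 10.0, 20.0), (2, 11.0, 21.0)]            # placeholder batch
    execute_values(cur, "INSERT INTO nodes (id, x, y) VALUES %s", nodes, page_size=1000)

    # batched neighbour update via UPDATE ... FROM (VALUES ...)
    links = [(1, 2), (2, 1)]                              # (node_id, neighbour_id)
    execute_values(cur,
        """UPDATE nodes AS n
           SET neighbour_id = v.neighbour_id
           FROM (VALUES %s) AS v(node_id, neighbour_id)
           WHERE n.id = v.node_id""",
        links, page_size=1000)
conn.commit()
conn.close()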
If you write to a file without closing, the data won't make it to the target file.\nWhen we are finished with our input and output operations on a file we shall close it so that the operating system is notified and its resources become available again.\nThere are to ways you can pick, either you open\/close file synchronically or you will make a copy of your file and destroy it afterwards.","Q_Score":0,"Tags":"python,excel,pandas,openpyxl","A_Id":74986201,"CreationDate":"2023-01-02T18:50:00.000","Title":"Can two Python programs write to different sheets in the same .xlsx file simultaneously?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't install mysql connector with below error, please help advise needed action to proceed installation of module..See below command\/errors:\nC:\\Users\\a0229010>python -m pip install mysql-connector-python==3.7.3\nCollecting mysql-connector-python==3.7.3\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nWARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')': \/simple\/mysql-connector-python\/\nERROR: Could not find a version that satisfies the requirement mysql-connector-python==3.7.3 (from versions: none)\nERROR: No matching distribution found for mysql-connector-python==3.7.3","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":75002679,"Users Score":0,"Answer":"I suggest you change your Python Package Index\nOr you can use follow code to have a try:\npip --default-timeout=100 install -i --trusted-host \ne.g when i install pandas\npip --default-timeout=100 install pandas -i https:\/\/pypi.tuna.tsinghua.edu.cn\/simple --trusted-host pypi.tuna.tsinghua.edu.cn","Q_Score":0,"Tags":"python-3.x","A_Id":75003593,"CreationDate":"2023-01-04T08:20:00.000","Title":"Unable to install mysql-connector-python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Please help. 
I have two tables: 1 report and 1 data file.\nThe data table is presented as follows:\n\n\n\n\nPATIENTS_ID\nPOL\nAge\nICD10\n\n\n\n\n10848754\n0\n22\nH52\n\n\n10848754\n0\n22\nR00\n\n\n10848754\n0\n22\nZ01\n\n\n10848754\n0\n22\nZ02\n\n\n10850478\n1\n26\nH52\n\n\n\n\nAnd etc.\nThe report file asks to collect the following data:\n\n\n\n\nICD10\nMale (20-29)\nMale (30-39)\nFemale (20-29)\nFemale (30-39)\n\n\n\n\nC00 - C97\n\n\n\n\n\n\nE10 - E14\n\n\n\n\n\n\nI00 - I99\n\n\n\n\n\n\n\n\nSo... I need to collect all \"ICD10\" data which include the gap between C00 to C99, and aggregate together with gender and age span. I know that in SQL there is a \"BETWEEN \" that will quite easily build a range and select values like this without additional conditions: \"C00, C01, C02\".\nIs there something similar in python\/pandas?\nLogical expressions like \">= C00 <= C99\" will include other letters, already tried. I would be grateful for help. Creating a separate parser\/filter seems too massive for such a job.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":75006304,"Users Score":0,"Answer":"If there is only one letter as \"identifier\", like C02, E34, etc. you can split your column ICD10 into two columns, first one is the first character of ICD10, and second are the numbers.\ndf.loc[:, \"Letter_identifier\"] = df[\"ICD10\"].str[0]\ndf.loc[:, \"Number_identifier\"] = df[\"ICD10\"].str[1:].astype(int) \nThen you can create a masks like:\n(df[\"Letter_identifier\"] == \"C\") & (df[\"Number_identifier\"] > 0) & (df[\"Number_identifier\"] <= 99)\nYou can split your dataframe as shown, aggregate on those sub-dataframes and concat your result.","Q_Score":0,"Tags":"python,sql,excel,pandas,report","A_Id":75014870,"CreationDate":"2023-01-04T13:38:00.000","Title":"Selection of a condition by a range that includes strings (letter + numbers)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently, we have a table containing a varchar2 column with 4000 characters, however, it became a limitation as the size of the 'text' being inserted can grow bigger than 4000 characters, therefore we decided to use CLOB as the data type for this specific column, what happens now is that both the insertions and selections are way too slow compared to the previous varchar2(4000) data type.\nWe are using Python combined with SqlAlchemy to do both the insertions and the retrieval of the data. 
In simple words, the implementation itself did not change at all, only the column data type in the database.\nDoes anyone have any idea on how to tweak the performance?","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":75,"Q_Id":75008445,"Users Score":-1,"Answer":"You could also ask your DBA if possible to upgrade the DB to max_string_size=EXTENDED, then the max VARCHAR2 size would be 32K.","Q_Score":0,"Tags":"python,oracle","A_Id":75010399,"CreationDate":"2023-01-04T16:31:00.000","Title":"Why CLOB slower than VARCHAR2 in Oracle?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python script that downloads some excel spreadsheets from a website, and then uploads these spreadsheets to a folder on OneDrive, at the moment I have to run this script on my machine every day, I would like to know if there is a way to run this script on a server or something, so I don't have to keep my computer on all the time.\nI thought about uploading the script to Heroku and using the platform's scheduling service, but I don't know how to integrate with OneDrive","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":75013429,"Users Score":0,"Answer":"Yes, it is possible to schedule a python script to run without using your local machine. There are a few options for doing this:\nUse a cloud-based computing service, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP). These services allow you to set up virtual machines and run your python scripts on them.\nUse a scheduling service, such as Cron or Windows Task Scheduler. These services allow you to set up a schedule for your python script to run at specific intervals.\nUse a remote server or virtual private server (VPS). These allow you to access a machine remotely and run your python scripts on it.","Q_Score":0,"Tags":"python,heroku,onedrive","A_Id":75013492,"CreationDate":"2023-01-05T02:56:00.000","Title":"How can I schedule a Python script to upload files to One drive?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a query in which I want to Sum amount using annotate and Sum decimal field in a foreign key relationship.\nThe field is summed correctly but it returns the Sum field in integer instead of decimal. 
In the database the field is in decimal format.\nThe query is like:\n***models.objects.filter(SourceDeletedFlat=False).annotate(TotalAmount=Sum(\"RequestOrderList__PurchaseOrderAmount\")).all()\nI do not want to use aggregate because I don't need overall column sum.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":75020025,"Users Score":0,"Answer":"Can you try this\n\n**models.objects.filter(SourceDeletedFlat=False).annotate(TotalAmount=Sum(\"RequestOrderList__PurchaseOrderAmount\", output_field=DecimalField())).all()","Q_Score":0,"Tags":"django,django-models,django-rest-framework,python-3.7","A_Id":75020795,"CreationDate":"2023-01-05T14:32:00.000","Title":"Sum and Annotate does not returns a decimal field Sum as a decimal","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We have a Django 4.0.4 site running. Since upgrading from Python 3.10->3.11 and Psycopg2 from 2.8.6->2.9.3\/5 and gunicorn 20.0.4->20.1.0 we've been getting random InterfaceError: cursor already closed errors on random parts of our codebase. Rarely the same line twice. Just kind of happens once every 5-10k runs. So it feels pretty rare, but does keep happening a few times every day. I've been assuming it's related to the ugprade, but it may be something else. I don't have a full grap on why the cursor would be disconnecting and where I should be looking to figure out the true issue.\nPsycopg version: 2.9.5 & 2.9.3\nPython version: 3.11\nPostgreSQL version: 12.11\nGunicorn\nThe site had been running for 1-2 years without this error. Now it happens a few times every day after a recent upgrade.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":75064349,"Users Score":0,"Answer":"We are having the same 'heisenbug' in our system and are attempting to solve it (unsuccessfully so far) ...","Q_Score":0,"Tags":"django,gunicorn,psycopg2,python-3.11,postgres-12","A_Id":75160751,"CreationDate":"2023-01-10T00:42:00.000","Title":"InterfaceError: cursor already closed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I don't have a concrete project yet, but in anticipation I would like to know if it is possible to fill a pdf with data stored in mysql?\nIt would be a question of a form with several lines and column history not to simplify the thing... If yes, what technology\/language to use?\nI found several tutorials which however start from a blank pdf. I have the constraint of having to place the data in certain specific places.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":75099759,"Users Score":0,"Answer":"Try using PyFPDF or ReportLab to create and manipulate PDF documents in Python.","Q_Score":0,"Tags":"python,pdf,pdf-writer","A_Id":75099817,"CreationDate":"2023-01-12T16:55:00.000","Title":"How to fill a pdf with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to SQLITE. I am trying to open a database through Sqlite3.exe shell. 
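Expanding the output_field fix shown just above into a fuller sketch; the model name is hypothetical, the field path follows the question, and the DecimalField arguments are illustrative.

from django.db.models import DecimalField, Sum

qs = (
    PurchaseRequest.objects                      # hypothetical model name
    .filter(SourceDeletedFlat=False)
    .annotate(
        TotalAmount=Sum(
            "RequestOrderList__PurchaseOrderAmount",
            output_field=DecimalField(max_digits=12, decimal_places=2),
        )
    )
)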
My database file path has hyphen in it..\non entering\n.open C:\\Users\\Admin\\OneDrive - batch\\db.sqlite3\ni am getting below error\nunknown option: -\ncan anyone help..\nI tried double quote around path but in that case I am getting\nError: unable to open database\nThanks in advance..","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18,"Q_Id":75123102,"Users Score":0,"Answer":"changing\n\nbackward slashes to forward\n\nadding double quotes\nworked...\n\n\nbelow is the solution\n.open \"C:\/Users\/Admin\/OneDrive - batch\/db.sqlite3\"","Q_Score":0,"Tags":"sqlite,sqlite3-python","A_Id":75200831,"CreationDate":"2023-01-15T05:17:00.000","Title":"sqlite exe : database file path has hyphen : \"unknown option: -\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have been trying to improve the db performance and had done some basic research regarding having a db partition and db sharding and also having 2 dbs one for write and other for read .\nHowever i found out that the db sharding is the best way out of all as the mapping provided by sharding is dynamic that is one of the requirement to put it bluntly i have provided the 2 cases below\nCase 1:- we need to get all the transaction of a user (which is huge)\nCase 2:- we need all the data for a particular time interval for all the user (which is again huge)\nBecause of the above scenerios I'm looking to implement db sharding\nNote:- I have already segregated some db into multiple databases already and they sit on different machines so i want it to be applied to all those multiple databases\nWhat I'm Looking for :\n\nAny link that could be helpful\nAny snippet code that could be helpful\n\nDjango==3.2.13\nMySql == 5.7","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":75142891,"Users Score":0,"Answer":"Let me define some terms so that were are \"on the same page\":\nReplication or Clustering -- Multiple servers having identical datasets. They are kept in sync by automatically transferring all writes from one server to the others. One main use is for scaling reads; it allows many more clients to connect simultaneously.\nPARTITION -- This splits one table into several, based on date or something else. This is done in a single instance of MySQL. There are many myths about performance. The main valid use is for purging old data in a huge dataset.\nSharding -- This involves splitting up a dataset across multiple servers. A typical case is splitting by user_id (or some other column in the data). The use case is to scale writes. (On pure MySQL, the developer has to develop a lot of code to implement Sharding. There are add-ons, especially in MariaDB, that help.)\nYour use case\nYour \"2 dbs one for write and other for read\" sounds like Replication with 2 servers. It may not give you as much benefit as you hope for.\nYou are talking about SELECTs that return millions of rows. None of the above inherently benefits such, even if you have several simultaneous connections doing such.\nPlease provide some numbers -- RAM size, setting of innodb_buffer_pool_size, and dataset size (in GB) of the big SELECTs. With those numbers, I can discuss \"caching\" and I\/O and performance. 
Performing multiple queries on the same dataset may benefit from caching on a single server.\nReplication and Sharding cannot share the caching; Partitioning has essentially no impact. That is, I will try to dissuade you from embarking on a technique that won't help and could hurt.\nPlease further describe your task; maybe one of the techniques will help.\nP.S., Replication, Partitioning, and Sharding are mostly orthogonal. That is any combination of them can be put together. (But rarely is.)","Q_Score":1,"Tags":"mysql,python-3.x,django,sharding","A_Id":75163095,"CreationDate":"2023-01-17T07:00:00.000","Title":"Data Base Sharding In django using MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building a DWH based on data I am collecting from an ERP API.\ncurrently, I am fetching the data from the API based on an incremental mechanism I built using python: The python script fetches all invoices whose last modified date is in the last 24 hours and inserts the data into a \"staging table\" (no changes are required during this step).\nThe next step is to insert all data from the staging area into the \"final tables\". The final tables include primary keys according to the ERP (for example invoice number).\nThere are no primary keys defined at the staging tables.\nFor now, I am putting aside the data manipulation and transformation.\nIn some cases, it's possible that a specific invoice is already in the \"final tables\", but then the user updates the invoice at the ERP system which causes the python script to fetch the data again from the API into the staging tables. In the case when I try to insert the invoice into the \"final table\" I will get a conflict due to the primary key restriction at the \"final tables\".\nAny idea of how to solve this?\nI am thinking to add a field that details the date and timestamp at which the record land at the staging table (\"insert date\") and then upsert the records if\ninsert date at the staging table > insert date at the final tables\nIs this best practice?\nAny other suggestions? maybe use a specific tool\/data solution?\nI prefer using python scripts since it is part of a wider project.\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":75161670,"Users Score":0,"Answer":"Instead of a straight INSERT use an UPSERT pattern. Either the MERGE statement if your database has it, or UPDATE the existing rows, followed by INSERTing the new ones.","Q_Score":0,"Tags":"python,etl,data-warehouse","A_Id":75196474,"CreationDate":"2023-01-18T15:36:00.000","Title":"DWH primary key conflict between staging tables and DWH tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Working on a data transfer program, to move data from an oracle database to another\napplication that I cant see or change. I have to create several text files described below and drop them off on sftp site.\nI am converting from a 20+ year old SQR report. 
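A minimal sketch of the UPDATE-then-INSERT upsert described in the staging/DWH answer above, written as SQL strings to be run from the Python job inside one transaction; the table and column names are hypothetical and the syntax assumes a PostgreSQL-style dialect (MERGE is the equivalent where available).

update_sql = """
UPDATE final_invoices AS f
SET    amount = s.amount,
       last_modified = s.last_modified
FROM   staging_invoices AS s
WHERE  f.invoice_number = s.invoice_number
  AND  s.insert_date > f.insert_date
"""

insert_sql = """
INSERT INTO final_invoices (invoice_number, amount, last_modified, insert_date)
SELECT s.invoice_number, s.amount, s.last_modified, s.insert_date
FROM   staging_invoices AS s
LEFT JOIN final_invoices AS f ON f.invoice_number = s.invoice_number
WHERE  f.invoice_number IS NULL
"""
# run update_sql, then insert_sql, each via cursor.execute() within a single transaction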
(yes SQR) :(\nI have to create text files that have a format as such an_alpa_code:2343,34533,4442,333335,.....can be thousands or numbers separated by comma.\nThe file may have only 1 line, but the file might be 48k in size.\nThere is no choice on the file format, it is required this way.\nTried using Oracle UTL_FILE, but that cannot deal with a line over 32k in length, so looking for an alterative. Python is a language my company has approved for use, so I am hoping it could do this","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":75216312,"Users Score":0,"Answer":"This gave me one long line\nfile_obj = open(\"writing.txt\", \"w\")\nfor i in range(0,10000):\nfile_obj.write(\"mystuff\"+str(i)+\",\")\n# file_obj.write('\\n')\nfile_obj.close()","Q_Score":0,"Tags":"python,oracle,text","A_Id":75224268,"CreationDate":"2023-01-24T00:11:00.000","Title":"In Python, how can i write multiple times to a file and keep everything on 1 long line line? (40k plus characters)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to alter my model but before doing so I want to delete all the records from my database, is there any dajngo ORM query for doing that cuz I don't want to do it manually.\nThanks.\nI tried to alter my model but when I migrated the changes an error occured.\nit was a long error but the last line was this.\nFile \"C:\\\\Users\\\\ALI SHANAWER.virtualenvs\\\\PiikFM-App-Backend-O_dKS6jY\\\\Lib\\\\site-packages\\\\MySQLdb\\\\connections.py\", line 254, in query \\_mysql.connection.query(self, query) django.db.utils.OperationalError: (3140, 'Invalid JSON text: \"Invalid value.\" at position 0 in value for column '#sql-45_2d01.qbo_class'.')\nany one knows what this is?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":75218078,"Users Score":0,"Answer":"You can simple delete db.sqlite ie, database file \nthen run python manage.py makemigration and then python manage.py migrate. 
\n\nI hope this is what you were looking for","Q_Score":0,"Tags":"python,django,django-models,orm","A_Id":75218297,"CreationDate":"2023-01-24T06:26:00.000","Title":"Is there any way I can delete all the rows from database before altering the model in django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to alter my model but before doing so I want to delete all the records from my database, is there any dajngo ORM query for doing that cuz I don't want to do it manually.\nThanks.\nI tried to alter my model but when I migrated the changes an error occured.\nit was a long error but the last line was this.\nFile \"C:\\\\Users\\\\ALI SHANAWER.virtualenvs\\\\PiikFM-App-Backend-O_dKS6jY\\\\Lib\\\\site-packages\\\\MySQLdb\\\\connections.py\", line 254, in query \\_mysql.connection.query(self, query) django.db.utils.OperationalError: (3140, 'Invalid JSON text: \"Invalid value.\" at position 0 in value for column '#sql-45_2d01.qbo_class'.')\nany one knows what this is?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":75218078,"Users Score":0,"Answer":"If you want to truncate only a single table then use {ModelName}.objects.all().delete() otherwise use can use \"python manage.py flush\" for truncate database.","Q_Score":0,"Tags":"python,django,django-models,orm","A_Id":75220062,"CreationDate":"2023-01-24T06:26:00.000","Title":"Is there any way I can delete all the rows from database before altering the model in django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"HI I am writing a server script in Frappe Cloud where I am trying to update a particular doctype(which is NOT THE DOCTYPE I HAVE CHOSEN IN DOCTYPE EVENT) using frappe.db.set_value(), then in order to save it i use frappe.db.commit().\nBut when the script tries to run I get the following error\nAttributeError: module has no attribute 'commit'\nAny ideas to whats wrong\nchange in the saved document data","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":75218794,"Users Score":0,"Answer":"Use of frappe.db.commit mid transaction can lead to unintended side effects like partial updates.\nYou don't need to explicitly commit in your Server Script, Frappe handles those bits for you.","Q_Score":0,"Tags":"python,erpnext,server-side-scripting,frappe","A_Id":75262689,"CreationDate":"2023-01-24T08:01:00.000","Title":"does frappe.db.commit() not work in server script in Frappe Cloud?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Just for reference I am coming from AWS so any comparisons would be welcome.\nI need to create a function which detects when a blob is placed into a storage container and then downloads the blob to perform some actions on the data in it.\nI have created a storage account with a container in, and a function app with a python function in it. I have then set up a event grid topic and subscription so that blob creation events trigger the event. I can verify that this is working. 
This gives me the URL of the blob which looks something like https:\/\/.blob.core.windows.net\/\/. However then when I try to download this blob using BlobClient I get various errors about not having the correct authentication or key. Is there a way in which I can just allow the function to access the container in the same way that in AWS I would give a lambda an execution role with S3 permissions, or do I need to create some key to pass through somehow?\nEdit: I need this to run ASAP when the blob is put in the container so as far as I can tell I need to use EventGrid triggers not the normal blob triggers","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":75223506,"Users Score":0,"Answer":"The answer lied somewhere between @rickvdbosch's answer and Abdul's comment. I first had to assign an identity to the function giving it permission to access the storage account. Then I was able to use the azure.identity.DefaultAzureCredential class to automatically handle the credentials for the BlobClient","Q_Score":0,"Tags":"python,amazon-web-services,azure,azure-functions,azure-blob-storage","A_Id":75231879,"CreationDate":"2023-01-24T15:15:00.000","Title":"Access blob in storage container from function triggered by Event Grid","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a question regarding Python\/cx-Oracle.\nThe Oracle SQLcl and SQL*Developer tools, both support proxy server connections (not to be confused with proxy users).\nFor example, on SQLcl their is a command line option, \"--proxy\", which is nothing to do with proxy users.\nI can't say that I know exactly how they work, but the options are there, and I assume that there is an option in an API in there to support it.\nIs this something which cx-Oracle supports?\nThanks,\nClive\nI tried looking at the cx-Oracle docs, but couldn't spot anything which might help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":75238215,"Users Score":0,"Answer":"I had another through the docs and it appears that you are expected to make changes to oracle config files (sqlnet.ora and testament.ora). That said, it also appears that newer EZconnect string syntax supports the proxy server requirement.","Q_Score":0,"Tags":"python,api,server,proxy,cx-oracle","A_Id":75264569,"CreationDate":"2023-01-25T18:19:00.000","Title":"Proxy Server Connections via Python cx-Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple excel files with different columns and some of them have same columns with additional data added as additional columns. I created a masterfile which contain all the column headers from each excel file and now I want to export data from individual excel files into the masterfile. Ideally, each row representing all the information about one single item.\nI tried merging and concatenating the files, it adds all the data as new rows so, now I have some columns with repeated data but they also contain additional data in different columns.\nWhat I want now is to recognize the columns that are already present and fill in the new data instead of repeating the all columns using python. 
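For the Azure Event Grid / blob question just above, a minimal sketch of the accepted approach: give the Function App a managed identity with a storage data role, then let DefaultAzureCredential pick it up. The blob URL and role name below are placeholders.

```python
# Sketch only: assumes the function's managed identity has been granted a role such as
# "Storage Blob Data Reader" on the storage account. The URL is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob_url = "https://myaccount.blob.core.windows.net/mycontainer/myblob.csv"  # hypothetical

credential = DefaultAzureCredential()          # resolves the managed identity inside Azure Functions
blob_client = BlobClient.from_blob_url(blob_url, credential=credential)

data = blob_client.download_blob().readall()   # raw bytes of the blob
```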
I cannot share the data or the code so, looking for some help or idea to get this done. Any help would be appreciated, Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":75247876,"Users Score":0,"Answer":"You are probably merging the wrong way.\nNot sure about your masterfile, sounds not very intuitive.\nMake sure your rows have a specific ID that identifies it.\nThen always perform the merge with that id and the 'inner' merge type.","Q_Score":0,"Tags":"python,excel,pandas,merge","A_Id":75284917,"CreationDate":"2023-01-26T15:00:00.000","Title":"Merging multiple excel files into a master file using python with out any repeated values","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Getting below error while executing SQL statement using pd.read_sql()\n\nsqlalchemy.exc.OperationalError: (pyodbc.OperationalError) ('08001',\n'[08001] [unixODBC][Microsoft][ODBC Driver 17 for SQL Server]SSL\nProvider: [error:0A0C0103:SSL routines::internal error] (-1)\n(SQLDriverConnect)')\n\nPython version 3.10.8. Other packages:\n\npyodbc==4.0.35\npandas==1.5.2\npymysql==1.0.2\nsqlalchemy==1.4.46\n\nI want to execute the above command successfully.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":75283975,"Users Score":0,"Answer":"The error message 'ODBC 17' distinctively specifies a failure in establishing a connection to a SQL Server database through ODBC driver. This can be caused by;\n\nIncorrect connection details such as server name, database name, username, and password.\nODBC driver not installed or not properly configured on the machine.\n\nIn order to solve this, try and verify the connection details and ensure they are correct or Install or reinstall the ODBC driver and make sure it's properly configured.\nIf the above doesn't work, kindly check the firewall settings and ensure that the connection is not being blocked.\nHope this helps.","Q_Score":0,"Tags":"python-3.x,sql-server,odbc,pyodbc","A_Id":75287526,"CreationDate":"2023-01-30T11:48:00.000","Title":"Python: Error while connecting to SQL Server (ODBC 17)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an update to the library?\nBefore it worked perfectly, and today I updated and it no longer loads\nI searched but I can't find any other option","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":311,"Q_Id":75299506,"Users Score":0,"Answer":"I solve this issue by installing an older version 3.0.10\npip install openpyxl==3.0.10","Q_Score":1,"Tags":"python,excel,openpyxl","A_Id":75321733,"CreationDate":"2023-01-31T15:37:00.000","Title":"cannot import name 'save_virtual_workbook' from 'openpyxl.writer.excel'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to save Polars DataFrame into a database, MS SQL for example?\nConnectorX library doesn\u2019t seem to have that option.","AnswerCount":2,"Available 
Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1076,"Q_Id":75320233,"Users Score":3,"Answer":"Polars exposes the write_database method on the DataFrame class.","Q_Score":3,"Tags":"python-polars,rust-polars","A_Id":76234129,"CreationDate":"2023-02-02T08:10:00.000","Title":"Polars DataFrame save to sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to save Polars DataFrame into a database, MS SQL for example?\nConnectorX library doesn\u2019t seem to have that option.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1076,"Q_Id":75320233,"Users Score":2,"Answer":"Polars doesen't support direct writing to a database. You can proceed in two ways:\n\nExport the DataFrame in an intermediate format (such as .csv using .write_csv()), then import it into the database.\nProcess it in memory: you can convert the DataFrame in a simpler data structure using .to_dicts(). The result will be a list of dictionaries, each of them containing a row in key\/value format. At this point is easy to insert them into a database using SqlAlchemy or any specific library for your database of choice.","Q_Score":3,"Tags":"python-polars,rust-polars","A_Id":75396733,"CreationDate":"2023-02-02T08:10:00.000","Title":"Polars DataFrame save to sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have big polars dataframe that I want to write into external database (sqlite for example)\nHow can I do it?\nIn pandas, you have to_sql() function, but I couldn't find any equivalent in polars","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":939,"Q_Id":75559239,"Users Score":1,"Answer":"You can use the DataFrame.write_database method.","Q_Score":2,"Tags":"python,sqlite,rust,python-polars","A_Id":76377130,"CreationDate":"2023-02-24T16:47:00.000","Title":"How do I write polars dataframe to external database?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm investigating the possibility of storing MOJOs in cloud storage blobs and\/or a database. I have proof-of-concept code working that saves the MOJO to a file then loads the file and stores to the target (and vice-versa for loading), but I'd like to know if there's any way to skip the file step? I've looked into python's BytesIO, but since the h2o mojo APIs all require a file-path I don't think I can use it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":75653463,"Users Score":2,"Answer":"It's possible using the H2O's REST API. Have a look at model.download_mojo() for the reference which gets the model from the backend and then persists it using the _process_response() method. 
You can have a look at h2o.upload_mojo() for the uploading part.","Q_Score":1,"Tags":"python,machine-learning,h2o","A_Id":75659358,"CreationDate":"2023-03-06T16:37:00.000","Title":"Storing H2o models\/MOJO outside the file system","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using langchain tool with streamlit and after running py file getting error that cannot import CursorResult from sqlalchemy","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":900,"Q_Id":76172817,"Users Score":1,"Answer":"Try this:\npip install -U sqlalchemy","Q_Score":1,"Tags":"python,sqlalchemy,langchain","A_Id":76176790,"CreationDate":"2023-05-04T11:46:00.000","Title":"can not import CursorResult from sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using langchain tool with streamlit and after running py file getting error that cannot import CursorResult from sqlalchemy","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":900,"Q_Id":76172817,"Users Score":1,"Answer":"Do pip install langchain==0.0.157\nIt will solve the issue.","Q_Score":1,"Tags":"python,sqlalchemy,langchain","A_Id":76176879,"CreationDate":"2023-05-04T11:46:00.000","Title":"can not import CursorResult from sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am relatively new to application development, but I'm working on a personal project and I would like it to automatically deploy a mariadb\/mysql db on first install\/through an option in the application. Now, I understand how to create a db, after the mariadb server has been set up, and I've got that part implemented and working. But what I would like to do is not have to install mariadb, configure the server, etc, and have the application handle that automatically. I feel like it must be possible, but I haven't been able to find an answer on how to implement it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32,"Q_Id":76297581,"Users Score":1,"Answer":"You can't have your application automatically install MariaDB & configure it on the machine you're running it on. And even if it could, then you wouldn't want to do that.\nIf you were to automatically install and configure the DB, then whenever you run your program for the first time on a new machine, it could take a really long time to install the DB. If you do want to automatically install & configure, then you're better off writing (or finding online) a bash script to do it. Then, you can just run the script separately and don't have to worry about any unexpected side effects.\nAlso, most of the time in production your DB isn't even on the same machine as your web app, especially if you're running it on Docker or some other form of containerization. The point of this is to take the strain off of the web app and let another isolated machine handle all the DB stuff. 
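As a quick diagnostic for the CursorResult import error above: the name is only importable on newer SQLAlchemy releases, which is why both answers amount to aligning the SQLAlchemy and langchain versions. A small check under that assumption (the thread does not state exactly which release introduced the export):

```python
# Prints the installed SQLAlchemy version and tries the import that fails in the question.
# An ImportError here means the installed SQLAlchemy is too old for the installed langchain.
import sqlalchemy

print(sqlalchemy.__version__)

try:
    from sqlalchemy import CursorResult  # fails on older SQLAlchemy releases
except ImportError:
    print("Upgrade SQLAlchemy (pip install -U sqlalchemy) or pin langchain to an older release.")
```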
You could also use a service to handle your DB for you, so you don't have to install or configure anything, just provide your web app the URL, username, and password of your DB. This is likely your best bet if you don't want to do any DB configuration. I won't spend any time here listing services that can do it for you, but you can find hundred by just doing a Google search for \"MariaDB hosting\".","Q_Score":2,"Tags":"python,mysql,database,mariadb","A_Id":76297648,"CreationDate":"2023-05-20T22:32:00.000","Title":"Automatically create database in python application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"import dbf\nopen file\ntable = dbf.Table(str(\"\/art_gr.dbf\"))\nopen table\ntable.open(mode=dbf.READ_WRITE)\nadd data to table\ntable.append({'DENUMIRE': '2', 'COD': '3'})\nopen cdx file\nindex = dbf.Idx(filename='C:\/SAGA C.3.0\/salv_bd\/0001\/art_gr.cdx', table=table)\n#how can i update the index with new data??\nclose table\ntable.close()\nI want to update an existing .cdx file with data inserted in .dbf file","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":76480738,"Users Score":1,"Answer":"My python dbf library doesn't support index files at this time. If no other means of updating the .cdx index file is available, you'll need to rebuild the indices in the accounting program.","Q_Score":1,"Tags":"python,visual-foxpro,dbf","A_Id":76484690,"CreationDate":"2023-06-15T08:49:00.000","Title":"How to create .cdx file from .dbf file in pyhton?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a Connect 4 AI, and in order to increase its speed, I have saved previously evaluated positions in an SQLite3 database. (MySQL can be considered.) Take an example item that I would save:\n[' br rbb brbrb rb ',0.786] <-- the string at the start is the board state, and the real is the evaluation for that position.\nFor my AI, it turns out to be slightly slower to fetch data rather than re-evaluate if I commit the saved data at the end (only if the evaluation function stays the same; if it changes and becomes more time-consuming to run, the speed of the data fetching may outweigh the evaluation). otherwise, the speed is perfectly fine. I want to be able to fetch the evaluation in the shortest time possible.\nThe only problem that I might have is with fetching the evaluation and only the evaluation. I am not very good with fetching and writing data to text files, and I'm not sure if isolating the evaluation will end up slowing down the process. What's the best way forward?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":76523391,"Users Score":0,"Answer":"Fast is relative, and here you are talking about Input\/Outputs (IOs)\nIO speed depends on several factors, if you use a naive 'file' (CSV) then you are bound to the speed of your local file system. If you use a DB, you are bound to the connection to the DB.\nThe fastest you'll get is if you keep the data in memory rather than save it somewhere. 
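Tying off the MariaDB question above: if a hosted database is used instead of a local install, the application only needs connection details. A minimal sketch with PyMySQL; the host, credentials, and database name are placeholders.

```python
# Sketch only -- host, user, password and database are placeholders for a managed MariaDB instance.
import pymysql  # pip install pymysql

conn = pymysql.connect(
    host="your-managed-mariadb.example.com",
    user="app_user",
    password="app_password",
    database="app_db",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())   # e.g. ('10.11.x-MariaDB',)
finally:
    conn.close()
```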
You don't say how many rows you need to access, but a decent laptop computer has 16GB of memory these days, even with raw (uncompressed) data, that's 100's millions of rows of your type of data.\nIf you want speed, just keep the data in memory.\nNow, your key to finding the 'evaluation' is a string. If you create a dictionary of the key and the value, as opposed to the array representation you have now, the lookup will be much faster as you don't need to scan the array to find the value; just lookup the key in the dict.\nIf you really have more rows than can fit into memory, then you might want to consider an in-memory key value store (sharded) rather than a DB. Something like a Redis cluster would be ideal for this use case. A key value store is like a dictionary you access over a connection like a DB. It can be sharded across many servers if size is an issue. If you know the key, retrieval is faster than a DB engine. (Although most DB engines will have cached data in memory too, it will need to go fetch data on disk at first)\nRedis can be backed up to disk too; it just does so upon change of a number of keys (or time) that is configurable, so it is not spending its time trying to save to disk.","Q_Score":1,"Tags":"python,mysql,python-3.x","A_Id":76526627,"CreationDate":"2023-06-21T12:39:00.000","Title":"Would it be faster to store data for a transposition table in a text file or an SQL database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0}]
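A minimal sketch of the in-memory transposition table described in this last answer: a plain dict keyed by the board string. The board strings and scores below are placeholders in the spirit of the question's example.

```python
# In-memory transposition table: O(1) average lookup by board string.
# Board strings and evaluation scores below are placeholders.
transposition: dict[str, float] = {}

def store(board: str, score: float) -> None:
    transposition[board] = score

def lookup(board: str) -> float | None:
    # None means the position has not been evaluated yet.
    return transposition.get(board)

store(" br rbb brbrb rb ", 0.786)
print(lookup(" br rbb brbrb rb "))   # 0.786
print(lookup("unseen position"))     # None
```

If the table outgrows memory, the same key/value shape maps directly onto a Redis GET/SET, as the answer notes.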