Dataset schema (column: dtype, min to max):
Question: stringlengths, 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: stringlengths, 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: stringlengths, 15 to 148
AnswerCount: int64, 1 to 32
Tags: stringlengths, 6 to 90
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
I'm creating an Excel file from pandas and I'm using worksheet.hide_gridlines(2); the problem is that all gridlines are hidden in my current worksheet. I need to hide them only for a range of cells, for example A1:I80. How can I do that?
3
1
0.099668
1
false
46,747,332
0
2,139
1
0
0
46,745,120
As far as I know, it isn't possible in Excel to hide gridlines for just a range. Gridlines are either on or off for the entire worksheet. As a workaround you could turn the gridlines off and then add a border to each cell where you want them displayed (a rough sketch follows this record). As a first step you should figure out how you would do what you want to do in Excel and then apply that to an XlsxWriter program.
1
0
0
Set worksheet.hide_gridlines(2) to certain range of cells
2
excel,python-2.7,xlsxwriter
0
2017-10-14T13:29:00.000
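A minimal XlsxWriter sketch of the workaround described in the answer above, assuming a workbook created directly with xlsxwriter; the file name, range and border colour are illustrative:

```python
import xlsxwriter

workbook = xlsxwriter.Workbook("report.xlsx")
worksheet = workbook.add_worksheet()

# Hide the worksheet's own gridlines everywhere (screen and print).
worksheet.hide_gridlines(2)

# Emulate gridlines only where you want them by writing a thin border
# into each cell of the target range (here A1:I80). Cells that will hold
# real data should be written with a format that carries the same border.
grid_fmt = workbook.add_format({"border": 1, "border_color": "#D4D4D4"})
for row in range(80):        # rows 1..80
    for col in range(9):     # columns A..I
        worksheet.write_blank(row, col, None, grid_fmt)

workbook.close()
```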
I have a website which has been built using HTML and PHP. I have a Microsoft SQL Server database. I have connected to this database and created several charts using Python. I want to be able to publish these graphs on my website and make the graphs live (so that they are refreshed every 5 minutes or so with latest data). How do I do this?
0
0
0
0
false
46,752,813
1
161
1
0
0
46,752,760
You could add a process to crontab to run the Python program every 5 minutes (assuming Linux). You could, alternatively, have the PHP call Python and await the refreshed file before responding with the page.
1
0
0
Live graphs using python on website
1
python,html,graph
0
2017-10-15T07:33:00.000
I'm trying to find a way to log all queries run on Cassandra from Python code, specifically logging them as they're executed using a BatchStatement. Are there any hooks or callbacks I can use to log this?
10
1
0.066568
0
false
46,839,220
0
2,113
1
1
0
46,773,522
Have you considered creating a decorator for your execute or equivalent (e.g. execute_concurrent) that logs the CQL query used for your statement or prepared statement? You can write this in a manner that the CQL query is only logged if the query was executed successfully. A rough sketch follows below.
1
0
0
Logging all queries with cassandra-python-driver
3
python,cassandra,cassandra-python-driver
0
2017-10-16T15:12:00.000
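A minimal sketch of the decorator idea above, assuming a cassandra-driver Session whose execute you wrap yourself; the logger name is illustrative and BatchStatement internals are not inspected, only the statement's repr is logged:

```python
import functools
import logging

logger = logging.getLogger("cql")

def log_cql(execute_fn):
    """Wrap session.execute so the statement is logged only on success."""
    @functools.wraps(execute_fn)
    def wrapper(statement, *args, **kwargs):
        result = execute_fn(statement, *args, **kwargs)  # raises on failure
        logger.info("Executed CQL: %r", statement)       # only reached on success
        return result
    return wrapper

# Usage sketch:
# from cassandra.cluster import Cluster
# session = Cluster(["127.0.0.1"]).connect("my_keyspace")
# session.execute = log_cql(session.execute)
# session.execute(my_batch_statement)
```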
The program would follow these steps: (1) the user clicks on an executable program made with Python; (2) a file explorer pops up for the user to choose an Excel file to alter; (3) the user chooses the Excel file for the executable program to alter; (4) it spits out a txt file or an Excel spreadsheet with the newly altered data to the same folder location as the original spreadsheet.
0
0
1.2
0
true
46,803,941
0
123
1
0
0
46,803,803
Yes, this is perfectly doable. I suggest you look at PyQt5 or Tkinter for the user interface, pyexcel for the Excel interface and PyInstaller for packaging up an executable as you asked. There are many great tutorials on all of these modules. A sketch of the file-picker step follows below.
1
0
0
Python - how to get executable program to get the windows file browser to pop up for user to choose an excel file or any other document?
1
python,excel,file,exe,explorer
0
2017-10-18T06:09:00.000
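A minimal sketch of the file-picker step using Tkinter's standard dialog; pandas is used here purely for illustration in place of pyexcel, and the output file name is an assumption:

```python
import os
from tkinter import Tk, filedialog

import pandas as pd

# Hide the empty root window and pop up the native file chooser.
root = Tk()
root.withdraw()
path = filedialog.askopenfilename(
    title="Choose an Excel file",
    filetypes=[("Excel files", "*.xlsx *.xls")],
)

if path:
    df = pd.read_excel(path)
    # ... alter df here ...
    out_path = os.path.join(os.path.dirname(path), "altered_output.xlsx")
    df.to_excel(out_path, index=False)  # lands next to the original file
```

Packaging would then be a separate step, e.g. pyinstaller --onefile script.py.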
After running some tests (casting a PyMongo result set to a list vs iterating over the cursor and saving to a list) I've noticed that the step from cursor to data in memory is negligible. For a db cursor of about 160k records, it averages about 2.3s. Is there any way to make this conversion from document to object faster? Or will I have to choose between casting to a list and iterating over the cursor?
3
0
0
0
false
53,745,939
0
871
1
0
0
46,817,939
After some A/B testing, it seems like there isn't really a way to speed this up, unless you change your Python interpreter. Alternatively, bulk pulling from the DB could speed this up (see the sketch below).
1
0
1
PyMongo Cursor to List Fastest Way Possible
1
python,database,python-3.x,mongodb,pymongo
0
2017-10-18T19:37:00.000
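A small illustration of the two approaches plus a larger driver batch size, which is one way to do the "bulk pulling" mentioned above; the connection string, database and collection names are hypothetical:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["mycoll"]

# 1) One-shot cast: the cursor is exhausted into a Python list.
docs = list(coll.find({}))

# 2) Explicit iteration, asking the driver to pull bigger batches
#    from the server so fewer network round trips are needed.
docs_iter = []
for doc in coll.find({}).batch_size(10000):
    docs_iter.append(doc)
```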
I have different threads running which all write to the same database (though not the same table). Currently I have it set up so that I create a connection and pass that to each thread, which then creates its own cursor for writing. I haven't implemented the writing-to-db part yet, but am wondering whether every thread needs its own connection. Thanks!
3
0
0
0
false
46,941,465
0
2,665
1
0
0
46,869,761
Each thread should use a distinct connection to avoid problems with inconsistent states and to make debugging easier. On web servers, this is typically achieved by using a pooled connection. Each thread (http request processor) picks up a connection from the pool when it needs it and then returns it back to the pool when done. In your case, you can just create a new connection for each thread and pass it to the thread, which can close it when done; a sketch follows below.
1
0
0
using python psycopg2: multiple cursors (1 per thread) on same connection
1
multithreading,psycopg2,python-3.6
0
2017-10-22T01:46:00.000
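A minimal psycopg2 sketch of one connection per thread, as suggested above; the DSN, table and data are placeholders:

```python
import threading

import psycopg2

DSN = "dbname=mydb user=me password=secret host=localhost"

def worker(rows):
    # Each thread opens (and later closes) its own connection.
    conn = psycopg2.connect(DSN)
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            cur.executemany("INSERT INTO measurements (value) VALUES (%s)", rows)
    finally:
        conn.close()

threads = [threading.Thread(target=worker, args=([(i,), (i + 1,)],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```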
I have a PostgreSQL database in which I am collecting reports from 4 different producers. Back when I wrote this I defined 4 different schemas (one per producer) and since the reports are similar in structure each schema has exactly the same tables inside. I'd like to combine the schemas into one and add an extra column with the producer id to the tables. At the moment I have 4 python processes running - one per producer. A process collects a report and inserts it in the DB. My very simple code has been running without crashing for the past few months. The current design makes it impossible for 2 processes to want to insert data into the DB at the same time. If I made the DB changes (single schema with single table) several processes might want to insert data simultaneously. For the moment, I will exclude combining the processes into a single one, please assume I don't do this. I am unsure if I need to worry about any special code to handle the case of more than one process inserting data into the DB? I am using python3 + SQLAlchemy + Flask. I would imagine the ACID properties of a DB should automatically handle the case of 2 or more processes wanting to insert data simultaneously (data in report is small and insertion will take less than 1s). Can I combine the schemas without worrying about processes insert collisions?
2
0
0
0
false
46,879,648
0
80
2
0
0
46,879,611
For simple INSERTs, yes, you can safely have four producers adding rows. I'm assuming you don't have long running queries, as consistent reads can require allocating an interesting amount of log space if inserts keep happening during an hour-long JOIN. "What if I am inserting large amounts of data and one insert causes another to time out?" You suggest that a timeout could arise from multiple competing INSERTs, but I don't understand what might produce that, and I don't believe that is a problem you have thus far observed. Readers and writers can contend for locks, but independent INSERTing processes are quite safe. If four processes were doing BEGIN, UPDATE 1, ... UPDATE N, COMMIT, then respecting a global order would matter, but your use case has the advantage of being very simple.
1
0
0
Can I safely combine my schemas
2
python,postgresql,sqlalchemy,flask-sqlalchemy
0
2017-10-22T22:06:00.000
I have a PostgreSQL database in which I am collecting reports from 4 different producers. Back when I wrote this I defined 4 different schemas (one per producer) and since the reports are similar in structure each schema has exactly the same tables inside. I'd like to combine the schemas into one and add an extra column with the producer id to the tables. At the moment I have 4 python processes running - one per producer. A process collects a report and inserts it in the DB. My very simple code has been running without crashing for the past few months. The current design makes it impossible for 2 processes to want to insert data into the DB at the same time. If I made the DB changes (single schema with single table) several processes might want to insert data simultaneously. For the moment, I will exclude combining the processes into a single one, please assume I don't do this. I am unsure if I need to worry about any special code to handle the case of more than one process inserting data into the DB? I am using python3 + SQLAlchemy + Flask. I would imagine the ACID properties of a DB should automatically handle the case of 2 or more processes wanting to insert data simultaneously (data in report is small and insertion will take less than 1s). Can I combine the schemas without worrying about processes insert collisions?
2
1
1.2
0
true
46,879,647
0
80
2
0
0
46,879,611
This won't be a problem if you are using a proper db such as Postgres or MySQL. They are designed to handle this. If you are using sqlite then it could break.
1
0
0
Can I safely combine my schemas
2
python,postgresql,sqlalchemy,flask-sqlalchemy
0
2017-10-22T22:06:00.000
I want to add docx.table.Table and docx.text.paragraph.Paragraph objects to documents. Currently, table = document.add_table(rows=2, cols=2) would create a new table inside the document, and table would hold the docx.table.Table object with all its properties. What I want to do instead is add a table OBJECT to the document that I previously read from another document, for example. I'm guessing that iterating through every property of the newly added table and the table object I read earlier and setting the values would be enough, but is there an alternative method? Thank you!
0
2
0.379949
0
false
46,897,992
0
1,713
1
0
0
46,897,003
There are a few different possibilities your description would admit, but none of them have direct API support in python-docx. The simplest case is copying a table from one part of a python-docx Document object to another location in the same document. This can probably be accomplished by doing a deep copy of the XML for the table. The details of how to do this are beyond the scope of this question, but there are some examples out there if you search on "python-docx" OR "python-pptx" deepcopy (a rough sketch also follows below). More complex is copying a table between one Document object and another. A table may contain external references that are available in the source document but not the target document. Consequently, a deepcopy approach will not always work in this case without locating and resolving any dependencies. Finally, there is copying/embedding a table OLE object, such as might be found in a PowerPoint presentation or formed from a range in an Excel document. Embedding OLE objects is not supported and is not likely to be added anytime soon, mostly because of the obscurity of the OLE embedding format (not well documented).
1
0
0
python-docx add table object to document
1
python,python-docx
0
2017-10-23T19:22:00.000
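A rough sketch of the deepcopy idea for the simple same-document case; it relies on python-docx internals (the _tbl and _p lxml elements), which are undocumented and may change, and the file names are illustrative:

```python
from copy import deepcopy

from docx import Document

doc = Document("source.docx")
source_table = doc.tables[0]

# Clone the underlying CT_Tbl XML element and insert it right after a new
# anchor paragraph added at the end of the same document.
new_tbl = deepcopy(source_table._tbl)
anchor = doc.add_paragraph()
anchor._p.addnext(new_tbl)

doc.save("with_copied_table.docx")
```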
I am building a warehouse consisting of data that's found from a public-facing API. In order to store & analyze the data, I'd like to save the JSON files I'm receiving into a structured SQL database. Meaning, all the JSON contents shouldn't be contained in 1 column; the contents should be parsed out and stored in various other tables in a relational database. From a process standpoint, I need to do the following: (1) call the API; (2) receive JSON; (3) parse the JSON file; (4) insert/update table(s) in a SQL database. (This process will be repeated hundreds and hundreds of times.) Is there a best practice to accomplish this, from either a process or resource standpoint? I'd like to do this in Python if possible. Thanks.
1
0
0
0
false
46,899,529
0
1,594
1
0
0
46,898,834
You should be able to use json.dumps(json_value) to convert your JSON object into a JSON string that can be put into an SQL database; a sketch follows below.
1
0
1
Save JSON file into structured database with Python
1
python,sql,json,database,data-warehouse
0
2017-10-23T21:28:00.000
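A tiny sketch of the json.dumps route mentioned above, using sqlite3 purely for illustration; the table, column names and payload are made up, and note this stores the whole document as a string rather than parsing it into relational columns:

```python
import json
import sqlite3

payload = {"id": 42, "name": "widget", "tags": ["a", "b"]}

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS raw_api (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO raw_api (body) VALUES (?)", (json.dumps(payload),))
conn.commit()

# Reading it back: json.loads turns the stored string into a dict again.
row = conn.execute("SELECT body FROM raw_api LIMIT 1").fetchone()
doc = json.loads(row[0])
conn.close()
```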
I am running a uWSGI application on my Linux Mint machine. It works with a database and shows it on my localhost. I run it on IP 127.0.0.1 and port 8080. After that I want to test its performance with ab (Apache Benchmark). When I run the app with the command uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi and test it, it works correctly but slowly. So I want to run the app with more than one thread to speed it up, using the --threads option, e.g. uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi --threads 8. But when I run ab to test it, after 2 or 3 requests my application stops with some errors and I don't know how to fix it. Every time I run it the type of error is different. Some of the errors look like these: (Traceback (most recent call last): 2014, 'Command Out of Sync') or (Traceback (most recent call last): File "./wsgi.py", line 13, in application return show_description(id) File "./wsgi.py", line 53, in show_description cursor.execute("select * from info where id = %s;" %id) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute result = self._query(query) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query conn.query(q) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) 'Packet sequence number wrong - got 1 expected 2',) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result or ('Packet sequence number wrong - got 1 expected 2',) Traceback (most recent call last): or ('Packet sequence number wrong - got 1 expected 2',) Traceback (most recent call last): File "./wsgi.py", line 13, in application return show_description(id) File "./wsgi.py", line 52, in show_description cursor.execute('UPDATE info SET views = views+1 WHERE id = %s;', id) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute result = self._query(query) Please help me run my uWSGI application with more than one thread safely. Any help will be welcome.
0
0
0
0
false
47,568,008
1
238
1
1
0
46,927,517
It has been solved. The point is that you should create a separate connection for each completely separate query, to avoid missing data during each query's execution.
1
0
0
uwsgi application stops with errors when running it with multiple threads
1
python,multithreading,server,uwsgi
1
2017-10-25T08:26:00.000
I am trying to get the current user of the db I have, but I couldn't find a way to do that and there are no questions on Stack Overflow similar to this. In PostgreSQL there is a method current_user. For example I could just say SELECT current_user and I would get a table with the current user's name. Is there something similar in SQLAlchemy?
1
1
0.066568
0
false
65,727,720
0
2,399
1
0
0
47,038,961
If you use the flask-login module of Flask you could just import current_user with from flask_login import current_user. Then you could get its details from the database and db model (for instance SQLite/SQLAlchemy) if you save them in a database: u_id = current_user.id, u_email = current_user.email, u_name = current_user.name, etc. A small sketch follows below.
1
0
0
SqlAlchemy current db user
3
python,python-2.7,sqlalchemy
0
2017-10-31T15:24:00.000
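A minimal flask-login usage sketch matching the answer above; the User attributes shown (id, email, name) are assumptions about your own model, and a user_loader callback still has to be defined elsewhere:

```python
from flask import Flask, jsonify
from flask_login import LoginManager, current_user, login_required

app = Flask(__name__)
app.secret_key = "change-me"
login_manager = LoginManager(app)

# @login_manager.user_loader must be registered elsewhere to load your User model.

@app.route("/whoami")
@login_required
def whoami():
    # current_user is the logged-in User object from your own db model.
    return jsonify(id=current_user.id, email=current_user.email, name=current_user.name)
```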
I use Pandas with Jupyter notebook a lot. After I ingest a table using pandas.read_sql, I would preview it by doing the following: data = pandas.read_sql("""blah"""), then data. One problem that I have been running into is that all my preview tables will disappear if I reopen my .ipynb. Is there a way to prevent that from happening? Thanks!
0
0
0
1
false
47,042,891
0
71
1
0
0
47,042,689
Are you explicitly saving your notebook before you re-open it? A Jupyter notebook is really just a large json object, eventually rendered as a fancy html object. If you save the notebook, illustrations and diagrams should be saved as well. If that doesn't do the trick, try putting the one-liner "data" in a different cell than read_sql().
1
0
0
How to prevent charts or tables to disappear when I re-open Jupyter Notebook?
1
python,ipython,jupyter-notebook,ipython-notebook
0
2017-10-31T18:53:00.000
I need to read the whole GeoIP2 database and insert that data into an SQLite database. I tried to read the .mmdb file in the normal way but it prints random characters.
0
1
0.197375
0
false
47,048,122
0
845
1
0
0
47,047,727
You should be able to download the CSV file and import it into SQLite.
1
0
1
Can we read the geoip2 database file with .mmdb format like normal file in Python?
1
python,maxmind,geoip2
0
2017-11-01T03:21:00.000
When I run from flask.ext.mysql import MySQL I get the warning Importing flask.ext.mysql is deprecated, use flask_mysql instead. So I installed flask_mysql using pip install flask_mysql; it installed successfully, but then when I run from flask_mysql import MySQL I get the error No module named flask_mysql. In the first warning I also get Detected extension named flaskext.mysql, please rename it to flask_mysql. The old form is deprecated. .format(x=modname), ExtDeprecationWarning. Could you please tell me how exactly I should rename it to flask_mysql? Thanks in advance.
4
3
1.2
0
true
47,117,043
0
3,318
1
0
0
47,116,912
flask.ext. is a deprecated pattern which was used prevalently in older extensions and tutorials. The warning is telling you to replace it with the direct import, which it guesses to be flask_mysql. However, Flask-MySQL is using an even more outdated pattern, flaskext.. There is nothing you can do about that besides convincing the maintainer to release a new version that fixes it. from flaskext.mysql import MySQL should work and avoid the warning (see the sketch below), although preferably the package would be updated to use flask_mysql instead.
1
0
0
Python flask.ext.mysql is deprecated?
2
python,mysql,flask
0
2017-11-05T00:06:00.000
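A minimal usage sketch with the flaskext.mysql import suggested above; the MYSQL_DATABASE_* config keys and the connect() helper follow Flask-MySQL's usual examples, but treat the exact names as something to double-check against your installed version:

```python
from flask import Flask
from flaskext.mysql import MySQL

app = Flask(__name__)
app.config["MYSQL_DATABASE_HOST"] = "localhost"
app.config["MYSQL_DATABASE_USER"] = "root"
app.config["MYSQL_DATABASE_PASSWORD"] = "secret"
app.config["MYSQL_DATABASE_DB"] = "mydb"

mysql = MySQL()
mysql.init_app(app)

conn = mysql.connect()   # plain PyMySQL connection under the hood
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()
```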
I'm trying to use wb = load_workbook(filename), but whether I work in the Python console or call it from a script, it hangs for a while, then my laptop completely freezes. I can't switch to a console to reboot, can't restart X etc. (UPD: CPU consumption is 100% at this moment, memory consumption is only 5%). Has anybody met such an issue? Python 2.7, openpyxl 2.4.9
1
0
0
0
false
47,125,299
0
1,046
2
0
0
47,123,188
The warning is exactly that, a warning about some aspect of the file being removed. But it has nothing to do with the rest of the question. I suspect you are running out of memory. How much memory is openpyxl using when the laptop freezes?
1
0
0
openpyxl load_workbook() freezes
2
python,openpyxl
0
2017-11-05T15:19:00.000
I'm trying to use wb = load_workbook(filename), but whether I work in the Python console or call it from a script, it hangs for a while, then my laptop completely freezes. I can't switch to a console to reboot, can't restart X etc. (UPD: CPU consumption is 100% at this moment, memory consumption is only 5%). Has anybody met such an issue? Python 2.7, openpyxl 2.4.9
1
0
0
0
false
64,410,841
0
1,046
2
0
0
47,123,188
I had this issue, kind of. I had been editing my Excel workbook and ended up accidentally pasting a space into an almost infinite number of rows... ya know, like a lot. I selected all the empty cells and hit delete, saved the workbook, and the problem was gone.
1
0
0
openpyxl load_workbook() freezes
2
python,openpyxl
0
2017-11-05T15:19:00.000
I want to insert a date and a time into Mongo using pymongo. However, I can insert a datetime but not just a date or a time. Here is the example code: now = datetime.datetime.now(); log_date = now.date(); log_time = now.time(); self.logs['test'].insert({'log_date_time': now, 'log_date': log_date, 'log_time': log_time}). It shows the error: bson.errors.InvalidDocument: Cannot encode object: datetime.time(9, 12, 39, 535769). In fact, I don't know how to insert just a date or a time in the mongo shell either. I know inserting a datetime is new Date(), but I just want the date or time field.
0
0
0
0
false
47,148,634
0
515
1
0
0
47,148,516
You are experiencing the defined behavior. MongoDB has a single datetime type (datetime). There are no separate, discrete types of just date or just time. Workarounds: plenty, but food for thought: storing just a date is straightforward: assume Z time, use a time component of 00:00:00, and ignore the time offset upon retrieval. Storing just a time is trickier but doable: establish a base date like the epoch and only vary the time component, and ignore the date component upon retrieval. A sketch of both follows below.
1
0
1
questions about using pymongo to insert date and time into mongo
1
python,mongodb,datetime,pymongo
0
2017-11-07T01:23:00.000
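A small sketch of both workarounds described above; the connection string and database/collection names are illustrative:

```python
import datetime

from pymongo import MongoClient

logs = MongoClient("mongodb://localhost:27017")["logs"]

now = datetime.datetime.now()

# "Date only": midnight of that day, stored as a full datetime.
log_date = datetime.datetime.combine(now.date(), datetime.time.min)

# "Time only": the time of day attached to a fixed base date (the epoch).
log_time = datetime.datetime.combine(datetime.date(1970, 1, 1), now.time())

logs["test"].insert_one({
    "log_date_time": now,
    "log_date": log_date,   # ignore the 00:00:00 time on retrieval
    "log_time": log_time,   # ignore the 1970-01-01 date on retrieval
})
```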
I wonder how the Postgres server determines when to close a DB connection if I forget to close it on the Python source code side. Does the Postgres server send a ping to the source code? From my understanding, this is not possible.
0
2
0.197375
0
false
47,166,411
0
41
1
0
0
47,166,301
When your script quits your connection will close and the server will clean it up accordingly. Likewise, it's often the case in garbage collected languages like Python that when you stop using the connection and it falls out of scope it will be closed and cleaned up. It is possible to write code that never releases these resources properly, that just perpetually creates new handles, something that can be problematic if you don't have something server-side that handles killing these after some period of idle time. Postgres doesn't do this by default, though it can be configured to, but MySQL does. In short Postgres will keep a database connection open until you kill it either explicitly, such as via a close call, or implicitly, such as the handle falling out of scope and being deleted by the garbage collector.
1
0
0
How does the Postgres server know to keep a database connection open
2
python,database,postgresql
0
2017-11-07T19:49:00.000
I am deploying a Jupyter notebook (using a Python 2.7 kernel) on the client side which accesses data on a remote host and does processing in a remote Spark standalone cluster (using the pyspark library). I am deploying the Spark cluster in client mode. The client machine does not have any Spark worker nodes and does not have enough memory (RAM). I wanted to know: if I perform a Spark action operation on a dataframe like df.count() on the client machine, will the dataframe be stored in the client's RAM or will it be stored in the Spark workers' memory?
0
0
0
1
false
47,173,911
0
187
1
0
0
47,173,286
If I understand correctly, then what you will get on the client side is an int. At least it should be, if set up correctly. So the answer is no, the DF is not going to hit your local RAM. You are interacting with the cluster via SparkSession (SparkContext for earlier versions). Even though you are developing (i.e. writing code) on the client machine, the actual computation of Spark operations (i.e. running pyspark code) will not be performed on your local machine.
1
0
0
Where is RDD or Spark SQL dataframe stored or persisted in client deploy mode on a Spark 2.1 Standalone cluster?
1
python,pyspark,apache-spark-sql,spark-dataframe
0
2017-11-08T06:50:00.000
I have made several tables in a Postgres database in order to acquire data with time values and do automatic calculation so as to have directly compiled values. Everything is done using triggers that will update the right table in case of modification of values. For example, if I update or insert a value measured @ 2017-11-06 08:00, the trigger will detect this and do the update for daily calculations; another one will do the update for monthly calculations, and so on. Right now, everything is working well. Data acquisition is done in Python/Qt to update the measured values using pure SQL instructions (INSERT/UPDATE/DELETE) and the automatic calculations are working. Everything works well too when I use an interface like pgAdmin III to change values. My problem comes with development in Django to display and modify the data. Up to now, I did not have any problem as I just displayed data without trying to modify them. But now I don't understand what's going on... If I insert a new value using model.save(), everything is working: the hourly measure is written, and the daily, monthly and yearly calculations are done. But if I update an existing value, the triggers seem to not see the modification: the hourly measure is updated (so model.save() does the job), but the daily calculation trigger seems not to be launched, as the corresponding table is not updated. As said previously, manually updating the same value with pgAdmin III works: the hourly value is updated, the daily calculation is done. I do not understand why the update process of Django seems to disable my triggers... I have tried to use the old save algorithm (select_on_save = True), but without success. The Django account of the database owns all the tables, triggers and functions, and has execute permission on all triggers and functions. And again, inserting an item with Django works using the same triggers and functions. My solution for the moment is to use direct SQL instructions with Python/Qt to do the job, but I feel a bit frustrated not being able to use only the Django API... Does anybody have some ideas to debug or solve this issue?
0
0
0
0
false
47,226,539
1
1,413
1
0
0
47,209,114
The problem resulted from an error in time zone management.
1
0
0
Issues with django and postgresql triggers
1
python,django,postgresql,database-trigger
0
2017-11-09T18:29:00.000
I am looking for another method to convert .accdb to .db without using the CSV-export-and-separator method to create the new database. Does Access have a built-in option to export files into .db?
0
2
1.2
0
true
47,248,148
0
1,624
1
0
0
47,247,790
Access has built-in ODBC support. You can use ODBC to export tables to SQLite. You do need to create the database first.
1
0
0
How to convert .accdb to db
1
python,sqlite,ms-access
0
2017-11-12T10:35:00.000
I've got a Google App Engine application that loads time series data in real time into a Google Datastore NoSQL-style table. I was hoping to get some feedback around the right type of architecture to pull this data into a web-application-style chart (and ideally something I could also plug into a content management system like WordPress). Most of my server-side code is Python. What's a reasonable client-server setup to pull the data from the Datastore database and display it in my webpage? Ideally I'd have something that scales and doesn't cause an unnecessary number of reads on my database (potentially using Google App Engine's built-in caching etc.). I'm guessing this is a common use case, but I'd like to get an idea of what might be some best practices around this. I've seen some examples using client-side JavaScript/AJAX with server-side PHP to read the DB; is this really the best way?
0
0
0
0
false
47,257,054
1
70
1
1
0
47,254,930
Welcome to "it depends". You have some choices. Imagine the classic four-quadrant chart. Along one axis is data size, along the other is staleness/freshness. If your time-series data changes rapidly but is small enough to safely be retrieved within a request, you can query for it on demand, convert it to JSON, and squirt it to the browser to be rendered by the JavaScript charting package of your choice. If the data is large, your app will need to do some sort of server-side pre-processing so that when the data is needed, it can be retrieved in sufficiently fewer requests that that the request won't time out. This might involve something data dependent like pre-bucketing the time series. If the data changes slowly, you have the option of generating your chart on the server side, perhaps using matplotlib. When new data is ingested, or perhaps at intervals, spawn off a task to generate and cache the chart (or JSON to hand to the front-end) as a blob in the datastore. If the data is sufficiently large that a task will timeout, you might need to use a backend process. If the data is sufficiently large and you don't pre-process, you're in the quadrant of unhappiness. In my experience, GAE memcache is best for caching data between requests where the time between requests is very short. Don't rely on generating artifacts, stuff them in memcache and hoping that they'll be there a few minutes later. I've rarely seen that work.
1
0
0
Load chart data into a webapp from google datastore
1
python,wordpress,google-app-engine,charts,google-cloud-datastore
0
2017-11-12T22:55:00.000
I've used Excel in the past to fetch daily price data on more than 1000 equity securities over a period of a month and it was a really slow experience (1 hour wait in some circumstances) since I was making a large amount of calls using the Bloomberg Excel Plugin. I've always wondered if there was a substantial performance improvement to do the same task if I was using Python or Java to pull data from Bloomberg's API instead. Would like to know from those who have had experience with both Excel and a programming language before I dive head first into trying to implement a Python or Java solution.
0
1
1.2
0
true
47,331,288
0
418
1
0
0
47,319,322
I have only used the Python API, and via wrappers. As such I imagine there are ways to get data faster than what I currently do. But for what I do, I'd say I can get a few years of daily data for roughly 50 securities in a matter of seconds. So I imagine it could improve your workflow to move to a more robust API. Regarding intraday data on the other hand I don't find much improvement. But I am not using concurrent calls (which I'm sure would help my speed on that front).
1
0
0
Accessing Bloomberg's API through Excel vs. Python / Java / other programming languages
1
java,python,excel,api,bloomberg
0
2017-11-15T23:54:00.000
I am using Python 3, Django, and PostgreSQL as the database, and I want to use the ThingsBoard dashboard in my web application. Can anyone please guide me on how I can use this?
0
0
1.2
0
true
47,798,648
1
304
1
0
0
47,454,686
ThingsBoard has APIs which you can use. You may also customise it based on your requirements.
1
0
0
I am using Python Django and PostgreSQL as the database, and I want to know how to use the ThingsBoard dashboard
2
django,python-3.x,postgresql,mqtt,thingsboard
0
2017-11-23T11:41:00.000
OS: Ubuntu 17.10. Python: 2.7. Sublime Text 3: I am trying to import mysql.connector and get ImportError: No module named connector, although when I try import mysql.connector in the Python shell, it works. Earlier it was working fine; I just upgraded Ubuntu and somehow the MySQL connector is not working. I have tried reinstalling the MySQL connector using both pip and git. Still no luck. Please help!
2
0
0
0
false
52,473,839
0
722
1
0
0
47,588,910
I am now using Python 3.6 and mysql.connector works best for me. OS: Ubuntu 18.04.
1
0
0
MySQL Connector not Working: NO module named Connector
2
python,mysql,ubuntu,sublimetext3,mysql-connector
0
2017-12-01T07:55:00.000
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-000000000046x0x1x6' has no attribute 'MinorVersion' Has anyone faced a similar situation and, if yes, what can a possible remedy for this? (I've had a look at the source code for win32com on GitHub, but haven't been able to make much sense from it.)
14
0
0
0
false
61,532,508
0
17,855
3
0
0
47,608,506
Deletion of the folder as mentioned previously did not work for me. I solved this problem by installing a new version of pywin32 using conda. conda install -c anaconda pywin32
1
0
0
Issue in using win32com to access Excel file
6
python,excel,win32com
0
2017-12-02T13:41:00.000
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-000000000046x0x1x6' has no attribute 'MinorVersion' Has anyone faced a similar situation and, if yes, what can a possible remedy for this? (I've had a look at the source code for win32com on GitHub, but haven't been able to make much sense from it.)
14
5
0.16514
0
false
61,842,925
0
17,855
3
0
0
47,608,506
A solution is to locate the gen_py folder (C:\Users\\AppData\Local\Temp\gen_py) and delete its content. It works for me when using the COM with another program.
1
0
0
Issue in using win32com to access Excel file
6
python,excel,win32com
0
2017-12-02T13:41:00.000
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-000000000046x0x1x6' has no attribute 'MinorVersion' Has anyone faced a similar situation and, if yes, what can a possible remedy for this? (I've had a look at the source code for win32com on GitHub, but haven't been able to make much sense from it.)
14
6
1
0
false
55,256,887
0
17,855
3
0
0
47,608,506
Renaming the GenPy folder should work. It's present at: C:\Users\ _insert_username_ \AppData\Local\Temp\gen_py Renaming it will create a new Gen_py folder and will let you dispatch Excel properly.
1
0
0
Issue in using win32com to access Excel file
6
python,excel,win32com
0
2017-12-02T13:41:00.000
I have an existing sqlite db file on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), there should be no problem loading it into memory. Is there a way in Python to load the existing file into memory in order to speed up the calculations?
0
-2
1.2
0
true
47,702,482
0
231
1
0
0
47,702,450
You could read all the tables into DataFrames with Pandas (a sketch follows below), though I'm surprised it's slow; sqlite has always been really fast for me.
1
0
0
Load existing db file to memory Python sqlite?
1
python,sqlite
0
2017-12-07T19:31:00.000
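A small sketch of the pandas route suggested above, reading every table of the file into an in-memory DataFrame; the file name is illustrative:

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect("data.db")

# Discover the table names, then pull each table into its own DataFrame.
tables = pd.read_sql_query(
    "SELECT name FROM sqlite_master WHERE type = 'table'", conn
)["name"].tolist()

frames = {name: pd.read_sql_query("SELECT * FROM {}".format(name), conn) for name in tables}
conn.close()

# All further calculations now run against the in-memory frames.
print({name: len(df) for name, df in frames.items()})
```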
If I have two backends, one NodeJS and one Python both of them are accessing the same database. Is it possible to use an ORM for both or is that really bad practice? It seems like that would lead to a maintenance nightmare.
0
0
0
0
false
47,708,755
1
184
2
0
0
47,707,608
It is possible, but it may cause conflicts with table names, constraint names, sequence names and other names which depend on the ORM's naming strategy.
1
0
0
Node ORM and Python ORM for same DB?
2
python,node.js,postgresql,orm
0
2017-12-08T04:13:00.000
If I have two backends, one NodeJS and one Python both of them are accessing the same database. Is it possible to use an ORM for both or is that really bad practice? It seems like that would lead to a maintenance nightmare.
0
0
0
0
false
47,708,025
1
184
2
0
0
47,707,608
So long as both ORMs put few constraints on the database structure, it should be fine.
1
0
0
Node ORM and Python ORM for same DB?
2
python,node.js,postgresql,orm
0
2017-12-08T04:13:00.000
I have a MySQL DB/table with a column "name" containing one value. Multiple Python scripts are accessing the same DB/table and the same column. There are also two more columns called "locked" and "locked_by"; each script reads the table, selects 10 entries from "name" where "locked" is false and updates the locked value to True so other scripts can't take them and do the same work again. At least that is the solution I have for multiple scripts accessing one column and not tripping all over each other. BUT! I'm worried that between the time when one script is updating the "locked" status, another script takes that value and tries to update it, and so on, ending in a mess. Is there some solution to this or am I just worried about a non-existent issue?
1
0
0
0
false
47,729,223
0
79
1
0
0
47,728,923
Looks like SQL's SELECT ... FOR UPDATE would lock the selected rows so other processes can't read/update them until I commit the changes, if I understand correctly. A sketch follows below.
1
0
1
How to prevent conflicts while multiple Python scripts access a MySQL DB at once?
1
python,mysql
0
2017-12-09T13:11:00.000
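A rough sketch of the SELECT ... FOR UPDATE pattern described above, using pymysql; the connection parameters, table name (work_items) and worker label are assumptions, while the name/locked/locked_by columns follow the question:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="worker", password="secret", db="jobs")

try:
    with conn.cursor() as cur:
        conn.begin()
        # Lock 10 unclaimed rows; concurrent scripts block on them until commit.
        cur.execute(
            "SELECT name FROM work_items WHERE locked = FALSE LIMIT 10 FOR UPDATE"
        )
        names = [row[0] for row in cur.fetchall()]
        if names:
            placeholders = ", ".join(["%s"] * len(names))
            cur.execute(
                "UPDATE work_items SET locked = TRUE, locked_by = %s "
                "WHERE name IN ({})".format(placeholders),
                ["script-1"] + names,
            )
    conn.commit()  # releases the row locks
finally:
    conn.close()
```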
I want to store a list within another list in a database (SQL) without previous data being lost. This is one example of values I have in my database: (1, 'Haned', 15, 11, 'Han15', 'password', "['easymaths', 6]"). What I want to do is store another piece of information/data within the list [] without it getting rid of "['easymaths', 6]", so it would look something like "['easymaths', 6, 'mediummaths', 6]" and so on. Thank you
0
0
0
0
false
47,731,116
0
20
1
0
0
47,730,994
Though not intended, you could join the list by a specific separator. In turn, when you query the selected field you have to convert it back into a list again. A sketch follows below.
1
0
0
Storing a list within another list in a database without previous information in that list getting lost
1
python,sql,database,list,append
0
2017-12-09T17:06:00.000
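A tiny sketch of the separator approach described above; the separator character and helper names are illustrative, and in practice a separate link table (or a JSON column) would be the cleaner relational design:

```python
SEP = ";"

def list_to_field(items):
    # ['easymaths', 6] -> "easymaths;6"
    return SEP.join(str(item) for item in items)

def field_to_list(field):
    # "easymaths;6;mediummaths;6" -> ['easymaths', '6', 'mediummaths', '6']
    return field.split(SEP) if field else []

stored = list_to_field(["easymaths", 6])
stored = list_to_field(field_to_list(stored) + ["mediummaths", 6])  # append later
print(stored)                 # easymaths;6;mediummaths;6
print(field_to_list(stored))  # ['easymaths', '6', 'mediummaths', '6']
```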
In the book Grokking Algorithms, the author said that in the worst case, a hash table takes O(n), linear time, for everything, which is really slow. For the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So search will take linear time, because you have to scan all the items one by one. What I don't understand is why, for insert and delete, the hash table takes linear time. In the worst case, all the items are stored in the same slot, which points to a linked list, and for a linked list, delete and insert take constant time. Why does the hash table take linear time? For insert, can't the hash table just append the item at the end of the linked list? That would take constant time.
3
0
0
0
false
47,738,572
0
2,136
3
0
0
47,738,554
Because in order to insert or delete, you need to search first, and search takes O(n) in the worst case. Therefore, insert and delete also take at least O(n) in the worst case.
1
0
0
Why in worst case insert and delete take linear time for hash table?
3
python,algorithm,data-structures,hash
0
2017-12-10T11:52:00.000
In the book Grokking Algorithms, the author said that in the worst case, a hash table takes O(n), linear time, for everything, which is really slow. For the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So search will take linear time, because you have to scan all the items one by one. What I don't understand is why, for insert and delete, the hash table takes linear time. In the worst case, all the items are stored in the same slot, which points to a linked list, and for a linked list, delete and insert take constant time. Why does the hash table take linear time? For insert, can't the hash table just append the item at the end of the linked list? That would take constant time.
3
0
1.2
0
true
47,738,575
0
2,136
3
0
0
47,738,554
And for linked list, delete and insert take constant time. They don't. They take linear time, because you have to find the item to delete (or the place to insert) first.
1
0
0
Why in worst case insert and delete take linear time for hash table?
3
python,algorithm,data-structures,hash
0
2017-12-10T11:52:00.000
In the book Grokking Algorithms, the author said that in the worst case, a hash table takes O(n), linear time, for everything, which is really slow. For the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So search will take linear time, because you have to scan all the items one by one. What I don't understand is why, for insert and delete, the hash table takes linear time. In the worst case, all the items are stored in the same slot, which points to a linked list, and for a linked list, delete and insert take constant time. Why does the hash table take linear time? For insert, can't the hash table just append the item at the end of the linked list? That would take constant time.
3
3
0.197375
0
false
47,738,580
0
2,136
3
0
0
47,738,554
Delete will not be constant: you will have to visit the whole worst-case linked list to find the object you want to delete, so this would also be O(n) complexity. You will have the same problem for insert: you don't want any duplicates, therefore, to be sure not to create any, you will have to check the whole linked list.
1
0
0
Why in worst case insert and delete take linear time for hash table?
3
python,algorithm,data-structures,hash
0
2017-12-10T11:52:00.000
I want to have a checkboxcolumn in my returned table via Django-filter, then select certain rows via checkbox, and then do something with these rows. This is Django-filter: django-filter.readthedocs.io/en/1.1.0 This is an example of checkboxcolumn being used in Django-tables2: stackoverflow.com/questions/10850316/… My question is: can I use the checkboxcolumn for a table returned via Django-filter? Thanks
1
0
0
0
false
47,835,665
1
1,965
1
0
0
47,783,328
What django-filter does, from the perspective of django-tables2, is supply a different (filtered) queryset. django-tables2 does not care about who composed the queryset; it will just iterate over it and render rows using the models from the queryset. So whether you add a checkbox column to the table or not, or use django-filter or not, django-tables2 will just render any queryset it gets. If you want to use the checked records for some custom filter, you'll have to do some manual coding; it's not supported out of the box. Short answer: yes, you can use django-tables2 with a CheckBoxColumn together with django-filter. A sketch follows below.
1
0
0
Django-filter AND Django-tables2 CheckBoxColumn compatibility
2
python,mysql,django,django-filter,django-tables2
0
2017-12-12T23:40:00.000
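A minimal sketch combining the two, assuming a hypothetical Item model with a name field; handling the submitted checkbox values is the custom part left to the view:

```python
import django_filters
import django_tables2 as tables
from django.shortcuts import render
from django_tables2 import RequestConfig

from .models import Item  # hypothetical model


class ItemFilter(django_filters.FilterSet):
    class Meta:
        model = Item
        fields = {"name": ["icontains"]}


class ItemTable(tables.Table):
    selection = tables.CheckBoxColumn(accessor="pk", orderable=False)

    class Meta:
        model = Item
        fields = ("name",)


def item_list(request):
    f = ItemFilter(request.GET, queryset=Item.objects.all())
    table = ItemTable(f.qs)                 # the filtered queryset feeds the table
    RequestConfig(request).configure(table)
    return render(request, "item_list.html", {"filter": f, "table": table})
```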
I am looking for a way to require users of a SQL query system to include certain columns in the SELECT query, for example requiring the SELECT to include a transaction_id column, else returning an error. This is to ensure compatibility with other functions. I'm using EXPLAIN (FORMAT JSON) to parse the query plan as a dictionary, but it doesn't provide information about the column names.
0
0
1.2
0
true
47,789,777
0
32
1
0
0
47,789,320
Have you tried EXPLAIN (VERBOSE)? That shows the column names. But I think it will be complicated: you'd have to track table aliases to figure out which column belongs to which table. A rough sketch follows below.
1
0
0
Requiring certain columns in SELECT SQL query for it to go through?
1
python,postgresql,sqlalchemy
0
2017-12-13T09:13:00.000
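A rough sketch of inspecting the verbose JSON plan with psycopg2; it only checks that some top-level output expression ends in transaction_id, and as the answer notes, alias and qualification handling would need more work. The connection string, table and query are placeholders:

```python
import psycopg2

def selects_transaction_id(conn, query):
    """Return True if the query's top-level plan output includes transaction_id."""
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (VERBOSE, FORMAT JSON) " + query)
        plan = cur.fetchone()[0][0]["Plan"]   # psycopg2 parses the json column for us
    outputs = plan.get("Output", [])          # e.g. ['payments.transaction_id', ...]
    return any(expr.split(".")[-1] == "transaction_id" for expr in outputs)

conn = psycopg2.connect("dbname=mydb")
print(selects_transaction_id(conn, "SELECT transaction_id, amount FROM payments"))
```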
So I'm trying to store a LOT of numbers, and I want to optimize storage space. A lot of the numbers generated have pretty high precision floating points, so: 0.000000213213 or 323224.23125523; long, high-memory floats. I want to figure out the best way, either in Python or with MySQL (MariaDB), to store the numbers with the smallest data size, so 2.132e-7 or 3.232e5; basically to store them with as little footprint as possible, with a decimal range that I can specify, but removing the information after n decimals. I assume storing as a DOUBLE is the way to go, but can I truncate the precision and save on space too? I'm thinking some number formatting / truncating in Python followed by just normal storage as a DOUBLE would work, but would that actually save any space as opposed to just immediately storing the double with N decimals attached? Thanks!
2
1
1.2
0
true
47,843,118
0
317
1
0
0
47,842,966
All Python floats have the same precision and take the same amount of storage. If you want to reduce overall storage, NumPy arrays should do the trick.
1
0
0
Most efficient way to store scientific notation in Python and MySQL
2
python,mysql,types,double,storage
0
2017-12-16T05:53:00.000
Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case? I am not getting any error messages. The column names appear fine, but the table is entirely empty. When I try to send over a single column (i.e. data.ix[2]), it actually works. However, if I try to send over more than one column (data.ix[1:3]), I again get a completely blank table in sql. I have been using this code for other dataframes and have never encountered this problem. It still runs for other dataframes in my set.
1
-1
-0.099668
1
false
55,399,149
0
2,183
2
0
0
47,878,076
I was also facing the same issue because a dot had been added to a header. Remove the dot and it will work.
1
0
0
Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case?
2
python,sql,python-3.x,postgresql,pandas
0
2017-12-18T23:46:00.000
Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case? I am not getting any error messages. The column names appear fine, but the table is entirely empty. When I try to send over a single column (i.e. data.ix[2]), it actually works. However, if I try to send over more than one column (data.ix[1:3]), I again get a completely blank table in sql. I have been using this code for other dataframes and have never encountered this problem. It still runs for other dataframes in my set.
1
0
0
1
false
47,896,038
0
2,183
2
0
0
47,878,076
I fixed this problem: it was because some of the column headers had '%' in them. I accidentally discovered this reason for the empty tables when I tried to use io and copy_from with a temporary csv instead of to_sql. I got a transaction error based on a % placeholder error. Again, this is specific to passing to PSQL; it went through to SQL Server without a hitch.
1
0
0
Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case?
2
python,sql,python-3.x,postgresql,pandas
0
2017-12-18T23:46:00.000
I have two projects under the same account: projectA with BigQuery and projectB with Cloud Storage. projectA has BigQuery with a dataset and table: testDataset.testTable. projectB has Cloud Storage and a bucket: testBucket. I use Python, the Google Cloud REST API, and account key credentials for every project, with different permissions: the projectA key has permissions only for BigQuery; the projectB key has permissions only for Cloud Storage. What I need: import data from projectA testDataset.testTable to projectB testBucket. Problems: of course, I'm running into a Permission denied error while trying to do it, because apparently the projectA key does not have permissions for projectB storage, etc. Another strange issue: as I have testBucket in projectB, I can't create a bucket with the same name in projectA and get This bucket name is already in use. Bucket names must be globally unique. Try another name. So it looks like all accounts are connected, which I guess means it should be possible to import data from one account to another via the API. What can I do in this case?
1
0
0
0
false
47,884,399
0
716
1
0
0
47,884,227
You put this wrong. You need to provide access for the same user account on both projects so it is usable across projects. So there needs to be a user authorized to do the BQ thing and also the GCS thing on the other project. Also, "Bucket names must be globally unique" means the name can't be created anywhere else either; it's global (you reserved that name for the entire planet, not just for your project).
1
0
0
Import data from BigQuery to Cloud Storage in different project
1
google-bigquery,google-cloud-platform,google-cloud-storage,google-python-api
0
2017-12-19T09:53:00.000
I have a problem figuring out how I can create a table using psycopg2, with IF NOT EXISTS statement, and getting the NOT EXISTS result The issue is that I'm creating a table, and running some CREATE INDEX / UNIQUE CONSTRAINT after it was created. If the table already exists - there is no need to create the indexes or constraints
2
0
0
0
false
48,137,413
0
1,166
1
0
0
47,912,529
Eventually I ended up adding AUTOCOMMIT = true. This is the only way I can make sure all workers see when a table is created. A sketch follows below.
1
0
0
psycopg2 create table if not exists and return exists result
2
python,postgresql,psycopg2
0
2017-12-20T18:47:00.000
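A small psycopg2 sketch combining autocommit with CREATE TABLE IF NOT EXISTS and a pre-check of whether the table already existed (to_regclass is used for the existence check); the DSN, table name and DDL are illustrative:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True   # every statement commits immediately, visible to all workers

with conn.cursor() as cur:
    cur.execute("SELECT to_regclass('public.events') IS NOT NULL")
    already_existed = cur.fetchone()[0]

    cur.execute("""
        CREATE TABLE IF NOT EXISTS public.events (
            id   bigserial PRIMARY KEY,
            body text NOT NULL
        )
    """)

    if not already_existed:
        # Only build the extra indexes/constraints on first creation.
        cur.execute("CREATE INDEX IF NOT EXISTS events_body_idx ON public.events (body)")

conn.close()
```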
I am using pymysql to connect to a database. I am new to database operations. Is there a status code that I can use to check if the database is open/alive, something like db.status.
7
-4
-1
0
false
47,973,362
0
13,383
1
0
0
47,973,320
It looks like you can create the database object and check whether it has been created; if it isn't created, you can raise an exception. Try connecting to an obviously wrong db and see what error it throws, and you can use that in a try/except block. I'm new to this as well, so anyone with a better answer please feel free to chime in. A sketch follows below.
1
0
0
How to check the status of a mysql connection in python?
3
python,pymysql
0
2017-12-26T02:11:00.000
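A small pymysql sketch along those lines, using Connection.open and Connection.ping(); the connection parameters are placeholders:

```python
import pymysql

def is_alive(conn):
    """Return True if the connection is still usable, False otherwise."""
    try:
        conn.ping(reconnect=False)   # raises if the server connection is gone
        return True
    except pymysql.err.Error:
        return False

try:
    conn = pymysql.connect(host="localhost", user="me", password="secret", db="mydb")
except pymysql.err.OperationalError as exc:
    raise SystemExit("Could not connect: {}".format(exc))

print(conn.open)       # True while the client-side handle is open
print(is_alive(conn))  # actively checks the server
conn.close()
```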
I am working with Alembic and it automatically creates a table called alembic_version in your database. How do I specify the name of this table instead of using the default name?
4
8
1.2
0
true
47,979,603
1
874
1
0
0
47,979,390
After you run your init, open the env.py file and update context.configure: add version_table='alembic_version_your_name' as a kwarg. A sketch follows below.
1
0
0
Alembic, how do you change the name of the revision database?
1
python,sqlalchemy,alembic
0
2017-12-26T13:36:00.000
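A short sketch of what the online-mode part of env.py might look like after the change; everything except the version_table kwarg follows Alembic's stock template, and the table name alembic_version_myapp is just an example:

```python
from alembic import context
from sqlalchemy import engine_from_config, pool

config = context.config
target_metadata = None  # or your MetaData object


def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            version_table="alembic_version_myapp",  # custom revision-table name
        )
        with context.begin_transaction():
            context.run_migrations()


run_migrations_online()
```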
I am trying to build a AWS Lambda function using APi Gateway which utlizes pyodbc python package. I have followed the steps as mentioned in the documentation. I keep getting the following error Unable to import module 'app': libodbc.so.2: cannot open shared object file: No such file or directory when I test run the Lambda function. Any help appreciated. I am getting the same error when I deployed my package using Chalice. It seems it could be that I need to install unixodbc-dev. Any idea how to do that through AWS Lambda?
3
1
0.033321
0
false
50,925,535
1
6,014
1
0
0
48,016,091
First, install the unixODBC and unixODBC-devel packages using yum install unixODBC unixODBC-devel. This step will install everything required for the pyodbc module. The library you're missing is located in the /usr/lib64 folder on your Amazon Linux instance. Copy the library to your Python project's root folder (libodbc.so.2 is just a symbolic link; make sure you copy the symbolic links and the library itself, as listed): libodbc.so, libodbc.so.2 and libodbc.so.2.0.0.
1
0
0
Unable to use pyodbc with aws lambda and API Gateway
6
python,amazon-web-services,aws-lambda,chalice
1
2017-12-29T00:40:00.000
I have a file that has several tabs that have pivot tables that are based on one data tab. I am able to write the data to the data tab without issue, but I can't figure out how to get all of the tabs with pivot tables to refresh. If this can be accomplished with openpyxl that would be ideal.
6
0
0
0
false
49,428,497
0
7,810
2
0
0
48,016,206
Currently what I do is: in my template I create a dynamic data range that gets the data from the raw data sheet, and then I set that named range as the pivot table's data source. Then in the pivot table options there is a "refresh on open" parameter and I enable that. When the Excel file opens it refreshes, and you can see it refresh. I'm still looking for a way to do it in openpyxl, but this is where I'm at.
1
0
0
Using openpyxl to refresh pivot tables in Excel
4
python,excel,openpyxl
0
2017-12-29T01:00:00.000
I have a file that has several tabs that have pivot tables that are based on one data tab. I am able to write the data to the data tab without issue, but I can't figure out how to get all of the tabs with pivot tables to refresh. If this can be accomplished with openpyxl that would be ideal.
6
1
0.049958
0
false
52,547,674
0
7,810
2
0
0
48,016,206
If the data source range is always the same, you can set each pivot table to "refresh when open". To do that, just go to the pivot table tab, click on the pivot table, and under "Analyze" -> Options -> Options -> Data, select "Refresh data when opening the file". If the data source range is dynamic, you can set a named range, and in the pivot table tab, change the data source to the named range. And again set "refresh when open". So the above is achieved without using any Python package; alternatively you can use openpyxl to refresh. However, make sure that you're using the 2.5 release or above, because otherwise the pivot table format will be lost.
1
0
0
Using openpyxl to refresh pivot tables in Excel
4
python,excel,openpyxl
0
2017-12-29T01:00:00.000