Dataset schema (column: dtype, observed range or string length):
Question: string, lengths 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, lengths 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: string, lengths 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, lengths 6 to 90
Other: int64, 0 to 1
CreationDate: string, length 23
My Django app, deployed in mod_wsgi under Apache using Django's standard WSGIHandler, authenticates users via form login on the Django side. So to Apache, the user is anonymous. This makes the Apache access log less useful. Is there a way to pass the username back through the WSGI wrapper to Apache after handling the request, so that it appears in the Apache access log? (Versions: Django 1.1.1, mod_wsgi 2.5, Apache 2.2.9)
9
1
0.039979
0
false
2,244,295
1
2,209
2
0
0
2,244,244
This probably isn't what you're expecting, but you could use the username in your URL scheme. That way the user will be in the path section of your apache logs. You'd need to modify your authentication so that auth-required responses are obvious in the apache logs, otherwise when viewing the logs you may attribute unauthenticated requests to authenticated users. E.g. return a temporary redirect to the login page if the request isn't authenticated.
1
0
0
WSGI/Django: pass username back to Apache for access log
5
python,django,apache,authentication,mod-wsgi
0
2010-02-11T12:03:00.000
My Django app, deployed in mod_wsgi under Apache using Django's standard WSGIHandler, authenticates users via form login on the Django side. So to Apache, the user is anonymous. This makes the Apache access log less useful. Is there a way to pass the username back through the WSGI wrapper to Apache after handling the request, so that it appears in the Apache access log? (Versions: Django 1.1.1, mod_wsgi 2.5, Apache 2.2.9)
9
1
0.039979
0
false
10,406,967
1
2,209
2
0
0
2,244,244
Correct me if I'm wrong, but what's stopping you from creating some custom middleware that sets a cookie equal to the display name of the currently logged-in user? The middleware will run on every view, so even though the user could technically spoof the cookie to display whatever he wants, it will just be reset on the next request anyway, and it isn't a security risk because the username in the cookie is only there for logging purposes, not related to the actual authenticated user. This seems like a simple enough solution, and the Apache log can read cookies, which gives you the easiest access. I know some people wouldn't like the idea of a user spoofing his own username, but I think this is the most trivial solution that gets the job done. Especially, in my case, when it's an iPhone app and the user doesn't have any direct access to a JavaScript console or to the cookies themselves.
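A minimal sketch of the middleware idea above, written against the Django 1.1-era middleware API the question mentions; the class name and the "username" cookie are illustrative choices, and Apache could pick the value up with a cookie directive such as %{username}C in its LogFormat.

```python
# Hypothetical middleware: mirror the logged-in username into a cookie so the
# Apache access log can record it. All names here are made up for illustration.
class UsernameCookieMiddleware(object):
    def process_response(self, request, response):
        # request.user only exists once AuthenticationMiddleware has run
        user = getattr(request, "user", None)
        if user is not None and user.is_authenticated():
            response.set_cookie("username", user.username)
        else:
            response.delete_cookie("username")
        return response
```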
1
0
0
WSGI/Django: pass username back to Apache for access log
5
python,django,apache,authentication,mod-wsgi
0
2010-02-11T12:03:00.000
I am using PostgreSQL 8.4. I really like the new unnest() and array_agg() features; it is about time they realize the dynamic processing potential of their arrays! Anyway, I am working on web server back ends that use long arrays a lot. There will be two successive processes, each running on a different physical machine. Each process is a light Python application which "manages" SQL queries to the database on its own machine as well as requests from the front ends. The first process generates an array which is buffered into an SQL table; each generated array is accessible via a primary key. When it's done, the first Python app sends the key to the second Python app. The second Python app, running on a different machine, uses the key to fetch the referenced array from the first machine. It then sends it to its own DB to generate a final result. The reason I send a key is that I am hoping this will make the two processes go faster. But really what I would like is a way for the second database to send a query to the first database, in the hope of minimizing serialization delay and such. Any help/advice would be appreciated. Thanks
3
0
0
0
false
2,277,362
0
785
1
0
0
2,263,132
I am thinking either listen/notify or something with a cache such as memcache. You would send the key to memcache and have the second Python app retrieve it from there. You could even combine it with listen/notify, e.g., send the key and notify your second app that the key is in memcache waiting to be retrieved.
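A rough sketch of the memcache hand-off suggested above, using the python-memcached client; the server address, key naming scheme and expiry time are assumptions.

```python
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

# First app: buffer the generated array under its primary key.
def publish_array(pk, array_rows):
    mc.set("array:%d" % pk, array_rows, time=300)  # keep it around for 5 minutes

# Second app: fetch the array by key instead of querying the first database.
def fetch_array(pk):
    return mc.get("array:%d" % pk)
```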
1
0
0
Inter-database communications in PostgreSQL
3
python,arrays,postgresql,database-connection
0
2010-02-14T22:40:00.000
I am developing some Python modules that use a mysql database to insert some data and produce various types of report. I'm doing test driven development and so far I run: some CREATE / UPDATE / DELETE tests against a temporary database that is thrown away at the end of each test case, and some report generation tests doing exclusively read only operations, mainly SELECT, against a copy of the production database, written on the (valid, in this case) assumption that some things in my database aren't going to change. Some of the SELECT operations are running slow, so that my tests are taking more than 30 seconds, which spoils the flow of test driven development. I can see two choices: only put a small fraction of my data into the copy of the production database that I use for testing the report generation so that the tests go fast enough for test driven development (less than about 3 seconds suits me best), or I can regard the tests as failures. I'd then need to do separate performance testing. fill the production database copy with as much data as the main test database, and add timing code that fails a test if it is taking too long. I'm not sure which approach to take. Any advice?
1
1
1.2
0
true
2,273,471
0
107
2
0
0
2,273,414
I'd do both. Run against the small set first to make sure all the code works, then run against the large dataset for things which need to be tested for time; this would be selects, searches and reports especially. If you are doing inserts, deletes or updates on multiple row sets, I'd test those against the large set as well. It is unlikely that simple single-row action queries will take too long, but if they involve a lot of joins, I'd test them too. If the queries won't run on prod within the timeout limits, that's a fail, and it's far, far better to know as soon as possible so you can fix it before you bring prod to its knees.
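A rough sketch of the "fail if it takes too long" part of this answer; the three-second budget and the placeholder run_report() function are assumptions, not anything from the original post.

```python
import time
import unittest

def run_report():
    """Stand-in for the real report query; the sleep simulates the SELECT time."""
    time.sleep(0.1)

class ReportTimingTest(unittest.TestCase):
    TIME_BUDGET = 3.0  # seconds; tune to whatever keeps the TDD loop comfortable

    def test_report_runs_within_budget(self):
        start = time.time()
        run_report()
        elapsed = time.time() - start
        self.assertTrue(elapsed < self.TIME_BUDGET,
                        "report took %.2fs, budget is %.1fs" % (elapsed, self.TIME_BUDGET))

if __name__ == "__main__":
    unittest.main()
```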
1
0
0
Should pre-commit tests use a big data set and fail if queries take too long, or use a small test database?
2
python,sql,mysql,tdd,automated-tests
0
2010-02-16T14:06:00.000
I am developing some Python modules that use a mysql database to insert some data and produce various types of report. I'm doing test driven development and so far I run: some CREATE / UPDATE / DELETE tests against a temporary database that is thrown away at the end of each test case, and some report generation tests doing exclusively read only operations, mainly SELECT, against a copy of the production database, written on the (valid, in this case) assumption that some things in my database aren't going to change. Some of the SELECT operations are running slow, so that my tests are taking more than 30 seconds, which spoils the flow of test driven development. I can see two choices: only put a small fraction of my data into the copy of the production database that I use for testing the report generation so that the tests go fast enough for test driven development (less than about 3 seconds suits me best), or I can regard the tests as failures. I'd then need to do separate performance testing. fill the production database copy with as much data as the main test database, and add timing code that fails a test if it is taking too long. I'm not sure which approach to take. Any advice?
1
1
0.099668
0
false
2,273,476
0
107
2
0
0
2,273,414
The problem with testing against real data is that it contains lots of duplicate values, and not enough edge cases. It is also difficult to know what the expected values ought to be (especially if your live database is very big). Oh, and depending on what the live application does, it can be illegal to use the data for the purposes of testing or development. Generally the best thing is to write the test data to go with the tests. This is labourious and boring, which is why so many TDD practitioners abhor databases. But if you have a live data set (which you can use for testing) then take a very cut-down sub-set of data for your tests. If you can write valid assertions against a dataset of thirty records, running your tests against a data set of thirty thousand is just a waste of time. But definitely, once you have got the queries returning the correct results put the queries through some performance tests.
1
0
0
Should pre-commit tests use a big data set and fail if queries take too long, or use a small test database?
2
python,sql,mysql,tdd,automated-tests
0
2010-02-16T14:06:00.000
I'm going to write my first non-Access project, and I need advice on choosing the platform. I will be installing it on multiple friends' and family's computers, so (since I'm sure many, many platforms would suffice just fine for my app), my highest priority has two parts: 1) ease of install for the non-technical user and, 2) minimizing compatibility problems. I want to be able to fix bugs and make changes and roll them out without having to troubleshoot OS and program conflicts on their computers (or at least keeping those things to the absolute minimum-this is why these concerns are my highest priority in choosing a platform.) I have narrowed it down to Python or Java. I like Java's use of the JVM, which seems like would serve to protect against incompatibilities on individual computers nicely. And I've heard a lot of good things about Python, but I don't know how much more prone to incompatibilities it is vs Java. In case it is important, I know the app will definitely use some flavor of a free server-enabled SQL db (server-enabled because I want to be able to run the app from multiple computers), but I don't know which to use yet. I thought I could decide that next. My experience level: I've taken a C++ (console app only) class and done some VBA in Access, but mostly I'm going to have to jump in and learn as I go. So of course I don't know much about all of this. I'm not in the computer field, this is just a hobby. So, which would be better for this app, Java or Python? (In case it comes up, I don't want to make it browser-based at all. I've dealt with individual computers' browser settings breaking programs, and that goes against part 2 of my top priority - maximum compatibility.) Thank you. Update: It will need a gui, and I'd like to be able to do a little bit of customization on it (or use a non-standard, or maybe a non-built-in one) to make it pop a little. Update 2: Truthfully, I really am only concerned with Windows computers. I am considering Java only for its reliability as a platform.
0
1
0.039979
0
false
2,282,470
1
594
2
0
0
2,282,360
The largest issue I can think of is the need to install an interpreter. With Java, a lot of people will already have that interpreter installed, although you won't necessarily know which version. It may be wise to include the installer for Java with the program. With Python, you're going to have to install the interpreter on each computer, too. One commenter mentioned .NET. .NET 2.0 has a higher likelihood of being installed than either Java or Python on Windows machines. The catch is that you can't (easily) install it on OS X or Linux.
1
0
0
Help for novice choosing between Java and Python for app with sql db
5
java,python
0
2010-02-17T16:20:00.000
I'm going to write my first non-Access project, and I need advice on choosing the platform. I will be installing it on multiple friends' and family's computers, so (since I'm sure many, many platforms would suffice just fine for my app), my highest priority has two parts: 1) ease of install for the non-technical user and, 2) minimizing compatibility problems. I want to be able to fix bugs and make changes and roll them out without having to troubleshoot OS and program conflicts on their computers (or at least keeping those things to the absolute minimum-this is why these concerns are my highest priority in choosing a platform.) I have narrowed it down to Python or Java. I like Java's use of the JVM, which seems like would serve to protect against incompatibilities on individual computers nicely. And I've heard a lot of good things about Python, but I don't know how much more prone to incompatibilities it is vs Java. In case it is important, I know the app will definitely use some flavor of a free server-enabled SQL db (server-enabled because I want to be able to run the app from multiple computers), but I don't know which to use yet. I thought I could decide that next. My experience level: I've taken a C++ (console app only) class and done some VBA in Access, but mostly I'm going to have to jump in and learn as I go. So of course I don't know much about all of this. I'm not in the computer field, this is just a hobby. So, which would be better for this app, Java or Python? (In case it comes up, I don't want to make it browser-based at all. I've dealt with individual computers' browser settings breaking programs, and that goes against part 2 of my top priority - maximum compatibility.) Thank you. Update: It will need a gui, and I'd like to be able to do a little bit of customization on it (or use a non-standard, or maybe a non-built-in one) to make it pop a little. Update 2: Truthfully, I really am only concerned with Windows computers. I am considering Java only for its reliability as a platform.
0
1
1.2
0
true
2,283,347
1
594
2
0
0
2,282,360
If you're going to install only (or mostly) on Windows, I'd go with .NET. If you have experience with C++, then C# would be natural to you; if you're comfortable with VBA, you can try VB.NET; and if you prefer Python, there is IronPython, or you can give IronRuby a try. Best of all, you can mix them, as they apply to different parts of your project. In the database area you'll have excellent integration with SQL Server Express, and in the GUI area, Swing can't beat the ease of use of WinForms nor the sophistication of WPF/Silverlight. As an added bonus, you can have your application automatically updated with ClickOnce.
1
0
0
Help for novice choosing between Java and Python for app with sql db
5
java,python
0
2010-02-17T16:20:00.000
How do I completely reset my Django (1.2 alpha) DB (dropping all tables, rather than just clearing them)? manage.py flush does too little (won't work if there are schema changes) and manage.py reset requires me to specify all apps (and appears to take a format that is different from just " ".join(INSTALLED_APPS)). I can obviously achieve this in a DB specific way, but I figured there must be a sane, DB backend agnostic way to do this. [Edit: I'm looking for something that I can call from a script, e.g. a Makefile and that continues to work if I change the backend DB or add to settings.INSTALLED_APPS]
20
0
0
0
false
2,289,931
1
24,412
3
0
0
2,289,187
Hm, maybe you can lie to manage.py, pretending to make fixtures but only to look for apps: apps=$(python manage.py makefixture 2>&1 | egrep -v '(^Error|^django)'|awk -F . '{print $2}'|uniq); for i in $apps; do python manage.py sqlreset $i; done| grep DROP That prints out a list of DROP TABLE statements for all of your project's app tables, excluding the Django tables themselves. If you want to include those, remove the |^django pattern from the egrep. But how to feed this to the correct database backend? By sed/awk-ing through settings.conf? Or better by utilizing a little Python script that reads settings.conf itself.
1
0
0
Complete django DB reset
7
python,django
0
2010-02-18T14:16:00.000
How do I completely reset my Django (1.2 alpha) DB (dropping all tables, rather than just clearing them)? manage.py flush does too little (won't work if there are schema changes) and manage.py reset requires me to specify all apps (and appears to take a format that is different from just " ".join(INSTALLED_APPS)). I can obviously achieve this in a DB specific way, but I figured there must be a sane, DB backend agnostic way to do this. [Edit: I'm looking for something that I can call from a script, e.g. a Makefile and that continues to work if I change the backend DB or add to settings.INSTALLED_APPS]
20
0
0
0
false
2,289,445
1
24,412
3
0
0
2,289,187
Just assign a new database and drop this db from the db console. Seems to me to be the simplest.
1
0
0
Complete django DB reset
7
python,django
0
2010-02-18T14:16:00.000
How do I completely reset my Django (1.2 alpha) DB (dropping all tables, rather than just clearing them)? manage.py flush does too little (won't work if there are schema changes) and manage.py reset requires me to specify all apps (and appears to take a format that is different from just " ".join(INSTALLED_APPS)). I can obviously achieve this in a DB specific way, but I figured there must be a sane, DB backend agnostic way to do this. [Edit: I'm looking for something that I can call from a script, e.g. a Makefile and that continues to work if I change the backend DB or add to settings.INSTALLED_APPS]
20
-2
-0.057081
0
false
2,289,727
1
24,412
3
0
0
2,289,187
Take a look at the reset command in Django's code, and write your own which drops/creates the DB first.
1
0
0
Complete django DB reset
7
python,django
0
2010-02-18T14:16:00.000
I'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown: Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x20108150>> ignored I have many "try" and "except" blocks in my code--if the exception occurred within one of those, then I would see my own debugging messages. The above exception is obviously being caught somewhere, since my program does not abort when it is thrown. I'm very puzzled; can someone help me out?
8
0
0
0
false
55,394,190
1
6,125
3
0
0
2,291,714
Exceptions in object destructors (__del__) are ignored, which is what this message indicates. If you execute some MySQL command without fetching the results from the cursor (e.g. 'create procedure' or 'insert'), the exception goes unnoticed until the cursor is destroyed. If you want to raise and catch the exception, explicitly call cursor.close() somewhere before the cursor goes out of scope.
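A small sketch of the explicit-close pattern described above, so the "commands out of sync" error surfaces at a point you control rather than inside __del__; the connection details and the procedure name are placeholders.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
cursor = conn.cursor()
try:
    cursor.execute("CALL some_procedure()")  # hypothetical statement
    rows = cursor.fetchall()
finally:
    # Any pending "commands out of sync" error is raised here, where you can
    # catch it, instead of being swallowed when the cursor is garbage collected.
    cursor.close()
    conn.close()
```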
1
0
0
Who is throwing (and catching) this MySQL Exception?
5
python,mysql,django,exception
0
2010-02-18T19:50:00.000
I'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown: Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x20108150>> ignored I have many "try" and "except" blocks in my code--if the exception occurred within one of those, then I would see my own debugging messages. The above exception is obviously being caught somewhere, since my program does not abort when it is thrown. I'm very puzzled; can someone help me out?
8
2
0.07983
0
false
2,300,154
1
6,125
3
0
0
2,291,714
After printing out a bunch of stuff and debugging, I figured out the problem I think. One of the libraries that I used didn't close the connection or the cursor. But this problem only shows up if I iterate through a large amount of data. The problem is also very intermittent and I still don't know who's throwing the "command out of sync" exception. But now that we closed both the connection and cursor, I don't see the errors anymore.
1
0
0
Who is throwing (and catching) this MySQL Exception?
5
python,mysql,django,exception
0
2010-02-18T19:50:00.000
I'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown: Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x20108150>> ignored I have many "try" and "except" blocks in my code--if the exception occurred within one of those, then I would see my own debugging messages. The above exception is obviously being caught somewhere, since my program does not abort when it is thrown. I'm very puzzled; can someone help me out?
8
2
0.07983
0
false
2,292,145
1
6,125
3
0
0
2,291,714
I believe this error can occur if you are using the same connection/cursor from multiple threads. I don't think the creators of Django have made such a mistake, but if you are doing something yourself it can easily happen.
1
0
0
Who is throwing (and catching) this MySQL Exception?
5
python,mysql,django,exception
0
2010-02-18T19:50:00.000
I'm planning on building a Django log-viewing app with powerful filters. I'd like to enable the user to finely filter the results with some custom (possibly DB-specific) SELECT queries. However, I dislike giving the user write access to the database. Is there a way to make sure a query doesn't change anything in the database? Like a 'dry run' flag? Or is there a way to filter SELECT queries so that they can't be harmful in any way? I thought about running the queries as a separate MySQL user but I'd rather avoid the hassle. I also thought about using Google App Engine's GQL 'language', but if there is a cleaner solution, I'd certainly like to hear it :) Thanks.
2
1
0.099668
0
false
2,305,359
1
543
2
0
0
2,305,353
Create and use non-modifiable views.
1
0
0
How can I limit an SQL query to be nondestructive?
2
python,sql,django,security,sql-injection
0
2010-02-21T08:48:00.000
I'm planning on building a Django log-viewing app with powerful filters. I'd like to enable the user to finely filter the results with some custom (possibly DB-specific) SELECT queries. However, I dislike giving the user write access to the database. Is there a way to make sure a query doesn't change anything in the database? Like a 'dry run' flag? Or is there a way to filter SELECT queries so that they can't be harmful in any way? I thought about running the queries as a separate MySQL user but I'd rather avoid the hassle. I also thought about using Google App Engine's GQL 'language', but if there is a cleaner solution, I'd certainly like to hear it :) Thanks.
2
14
1.2
0
true
2,305,379
1
543
2
0
0
2,305,353
Connect with a user that has only been granted SELECT permissions. Situations like this are why permissions exist in the first place.
1
0
0
How can I limit an SQL query to be nondestructive?
2
python,sql,django,security,sql-injection
0
2010-02-21T08:48:00.000
Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?
20
24
1.2
0
true
2,306,070
1
4,303
4
0
0
2,306,048
I'd highly recommend using the same database backend in production as in development, and all stages in between. Django will abstract the database stuff, but having different environments will leave you open to horrible internationalisation, configuration issues, and nasty tiny inconsistencies that won't even show up until you push it live. Personally, I'd stick to mysql, but I never got on with postgres :)
1
0
0
Django: sqlite for dev, mysql for prod?
6
python,mysql,django,sqlite,dev-to-production
0
2010-02-21T13:45:00.000
Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?
20
7
1
0
false
9,401,789
1
4,303
4
0
0
2,306,048
Use the same database in all environments. As much as the ORM tries to abstract the differences between databases, there will always be certain features that behave differently based on the database. Database portability is a complete myth. Plus, it seems pretty insane to test and develop against code paths that you will never use in production, doesn't it?
1
0
0
Django: sqlite for dev, mysql for prod?
6
python,mysql,django,sqlite,dev-to-production
0
2010-02-21T13:45:00.000
Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?
20
3
0.099668
0
false
2,306,069
1
4,303
4
0
0
2,306,048
In short, no; unless you want to unnecessarily double development time.
1
0
0
Django: sqlite for dev, mysql for prod?
6
python,mysql,django,sqlite,dev-to-production
0
2010-02-21T13:45:00.000
Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?
20
3
0.099668
0
false
12,684,980
1
4,303
4
0
0
2,306,048
I just made this major mistake, starting off with SQLite, and when I tried to deploy to a production server with MySQL, things didn't work as smoothly as I expected. I tried dumpdata/loaddata with various switches but somehow kept getting errors thrown one after another. Do yourself a big favor and use the same DB for both production and development.
1
0
0
Django: sqlite for dev, mysql for prod?
6
python,mysql,django,sqlite,dev-to-production
0
2010-02-21T13:45:00.000
Is there another way to connect to a MySQL database with what comes included in the version of Python (2.5.1) that is bundled with Mac OS 10.5.x? I unfortunately cannot add the MySQLdb module to the client machines I am working with... I need to work with the stock version of Python that shipped with Leopard.
3
1
0.049958
0
false
9,170,459
0
3,698
1
0
0
2,313,307
If the problem is, as so many people have mentioned, that you can't install the MySQLdb module, a simpler way is: 1. install the MySQL db, 2. install the pyodbc module, 3. load and configure the ODBC MySQL driver, 4. perform SQL manipulations with pyodbc, which is very mature and fully functional. Hope this helps.
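A hedged sketch of step 4 above with pyodbc; the driver name in the connection string depends on which MySQL ODBC driver you installed, and the credentials, table and query are placeholders.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={MySQL ODBC 3.51 Driver};"   # assumed driver name; match what you installed
    "SERVER=localhost;DATABASE=appdb;UID=app;PWD=secret;"
)
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM users WHERE id = ?", 1)
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()
```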
1
0
0
Python: Access a MySQL db without MySQLdb module
4
python,mysql,macos
0
2010-02-22T18:54:00.000
What is the best way to access SQL Server from Python? Is it the DB-API? Also, could someone provide some example code using the DB-API showing how to connect to SQL Server from Python and execute a query?
26
1
0.049958
0
false
2,314,282
0
28,195
1
0
0
2,314,178
ODBC + freetds + a python wrapper library for ODBC.
1
0
0
Python & sql server
4
python,sql,sql-server
0
2010-02-22T21:04:00.000
I'm seeking a way to let the Python logging module log to a database and fall back to the file system when the DB is down. So basically two things: how to let the logger log to a database, and how to make it fall back to file logging when the DB is down.
48
2
0.07983
0
false
46,617,613
0
48,780
1
0
0
2,314,307
Old question, but dropping this for others. If you want to use Python logging, you can add two handlers. One writes to file: a rotating file handler. This is robust, and works regardless of whether the DB is up or not. The other one can write to another service/module, like a pymongo integration. Look up logging.config for how to set up your handlers from code or JSON.
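A rough sketch of the two-handler setup this answer describes: a rotating file handler that always works, plus a database handler whose emit() swallows outages so logging never breaks the app. The SQLite stand-in, table name and file names are invented for illustration.

```python
import logging
import logging.handlers
import sqlite3

class SQLiteHandler(logging.Handler):
    """Illustrative DB handler; swap the sqlite3 calls for your real database."""
    def __init__(self, path="app_log.db"):
        logging.Handler.__init__(self)
        self.path = path

    def emit(self, record):
        try:
            conn = sqlite3.connect(self.path)
            conn.execute("CREATE TABLE IF NOT EXISTS log "
                         "(created REAL, level TEXT, message TEXT)")
            conn.execute("INSERT INTO log VALUES (?, ?, ?)",
                         (record.created, record.levelname, record.getMessage()))
            conn.commit()
            conn.close()
        except Exception:
            # DB is down: report the problem but keep going; the file handler
            # attached below still receives the record.
            self.handleError(record)

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1048576, backupCount=3))
logger.addHandler(SQLiteHandler())
logger.info("logged to the file, and to the DB when it is reachable")
```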
1
0
0
python logging to database
5
python,database,logging
0
2010-02-22T21:22:00.000
I am developing a Python web app using sqlalchemy to communicate with mysql database. So far I have mostly been using sqlalchemy's ORM layer to speak with the database. The greatest benefit to me of ORM has been the speed of development, not having to write all these sql queries and then map them to models. Recently, however, I've been required to change my design to communicate with the database through stored procedures. Does any one know if there is any way to use sqlalchemy ORM layer to work with my models through the stored procedures? Is there another Python library which would allow me to do this? The way I see it I should be able to write my own select, insert, update and delete statements, attach them to the model and let the library do the rest. I've gone through sqlalchemy's documentation multiple times but can't seem to find a way to do this. Any help with this would be great!
4
3
1.2
0
true
2,338,360
0
1,754
1
0
0
2,330,278
SQLAlchemy doesn't have any good way to convert inserts, updates and deletes to stored procedure calls. It probably wouldn't be that hard to add the capability to have instead_{update,insert,delete} extensions on mappers, but no one has bothered yet. I consider the requirement to have simple DML statements go through stored procedures rather silly. It really doesn't offer anything that you couldn't do with triggers. If you can't avoid the silliness, there are some ways that you can use SQLAlchemy to go along with it. You'll lose some of the ORM functionality though. You can build ORM objects from stored procedure results using query(Obj).from_statement(text("...")), just have the column labels in the statement match the column names that you told SQLAlchemy to map. One option to cope with DML statements is to turn autoflush off and instead of flushing go through the sessions .new, .dirty and .deleted attributes to see what has changed, issue corresponding statements as stored procedure calls and expunge the objects before committing. Or you can just forgo SQLAlchemy state tracking and issue the stored procedure calls directly.
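A minimal sketch of the query(...).from_statement(text(...)) technique this answer mentions; the User model, the get_user procedure and the connection URL are invented for illustration, and the exact call style varies between SQLAlchemy versions.

```python
from sqlalchemy import create_engine, Column, Integer, String, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("mysql://app:secret@localhost/appdb")
session = sessionmaker(bind=engine)()

# The column labels returned by the procedure must match the mapped columns.
users = (session.query(User)
         .from_statement(text("CALL get_user(:uid)"))
         .params(uid=1)
         .all())
```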
1
0
0
Keeping ORM with stored procedures
1
python,mysql,database,stored-procedures,sqlalchemy
0
2010-02-24T22:51:00.000
Here is the scenario. In your function you're executing statements using a cursor, but one of them fails and an exception is thrown. Your program exits out of the function before closing the cursor it was working with. Will the cursor float around taking up space? Do I have to close the cursor? Additionally, the Python documentation has an example of cursor use and says: "We can also close the cursor if we are done with it." The keyword being "can," not "must." What do they mean precisely by this?
48
13
1
0
false
2,330,380
0
27,863
3
0
0
2,330,344
You're not obliged to call close() on the cursor; it can be garbage collected like any other object. But even if waiting for garbage collection sounds OK, I think it would be good style still to ensure that a resource such as a database cursor gets closed whether or not there is an exception.
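One way to get the "close it whether or not there is an exception" behaviour recommended above is contextlib.closing, sketched here with sqlite3; note that a sqlite3 connection used directly as a context manager only manages transactions, not closing.

```python
import sqlite3
from contextlib import closing

def count_users(db_path):
    # both the cursor and the connection are closed on exit,
    # even if one of the execute() calls raises
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
            cur.execute("SELECT COUNT(*) FROM users")
            return cur.fetchone()[0]

print(count_users(":memory:"))
```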
1
0
0
In Python with sqlite is it necessary to close a cursor?
8
python,sqlite
0
2010-02-24T23:02:00.000
Here is the scenario. In your function you're executing statements using a cursor, but one of them fails and an exception is thrown. Your program exits out of the function before closing the cursor it was working with. Will the cursor float around taking up space? Do I have to close the cursor? Additionally, the Python documentation has an example of cursor use and says: "We can also close the cursor if we are done with it." The keyword being "can," not "must." What do they mean precisely by this?
48
7
1
0
false
2,416,354
0
27,863
3
0
0
2,330,344
I haven't seen any effect for the sqlite3.Cursor.close() operation yet. After closing, you can still call fetch(all|one|many) which will return the remaining results from the previous execute statement. Even running Cursor.execute() still works ...
1
0
0
In Python with sqlite is it necessary to close a cursor?
8
python,sqlite
0
2010-02-24T23:02:00.000
Here is the scenario. In your function you're executing statements using a cursor, but one of them fails and an exception is thrown. Your program exits out of the function before closing the cursor it was working with. Will the cursor float around taking up space? Do I have to close the cursor? Additionally, the Python documentation has an example of cursor use and says: "We can also close the cursor if we are done with it." The keyword being "can," not "must." What do they mean precisely by this?
48
0
0
0
false
71,683,829
0
27,863
3
0
0
2,330,344
Yes, we should close our cursor. I once encountered an error when I used my cursor to configure my connection object: 'PRAGMA synchronous=off' and 'PRAGMA journal_mode=off' for faster insertion. Once I closed the cursor, the error went away. I forgot what type of error I encountered.
1
0
0
In Python with sqlite is it necessary to close a cursor?
8
python,sqlite
0
2010-02-24T23:02:00.000
I'm writing a python script to select, insert, update, and delete data in SimpleDB. I've been using the simpledb module written by sixapart so far, and it's working pretty well. I've found one potential bug/feature that is problematic for me when running select queries with "limit", and I'm thinking of trying it with the boto module to see if it works better. Has anyone used these two modules? Care to offer an opinion on which is better? Thanks!
4
3
1.2
0
true
2,336,902
0
833
1
0
0
2,336,822
I've found boto to be effective and straightforward, and I've never had any trouble with queries with limits, although I've never used the sixapart module.
1
0
0
What's the best module to access SimpleDB in python?
1
python,amazon-simpledb
0
2010-02-25T19:10:00.000
SQLAlchemy seems really heavyweight if all I use is MySQL. What are convincing reasons for/against the use of SQLAlchemy in an application that only uses MySQL?
3
4
0.26052
0
false
2,359,697
0
275
3
0
0
2,358,822
I don't think performance should be much of a factor in your choice. The layer that an ORM adds will be insignificant compared to the speed of the database. Databases always end up being a bottleneck. Using an ORM may allow you to develop faster with less bugs. You can still access the DB directly if you have a query that doesn't work well with the ORM layer.
1
0
0
If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?
3
python,mysql,sqlalchemy,pylons
0
2010-03-01T20:24:00.000
SQLAlchemy seems really heavyweight if all I use is MySQL. What are convincing reasons for/against the use of SQLAlchemy in an application that only uses MySQL?
3
0
0
0
false
2,359,777
0
275
3
0
0
2,358,822
SQLAlchemy provides more than just an ORM: you can select/insert/update/delete from table objects, join them, etc. One benefit of using those over building SQL strings yourself is guarding against SQL injection attacks. You also get some decent connection management that you don't have to write yourself. The ORM part may not be appropriate for your application, but rolling your own SQL handling and connection handling would be really stupid in my opinion.
1
0
0
If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?
3
python,mysql,sqlalchemy,pylons
0
2010-03-01T20:24:00.000
SQLAlchemy seems really heavyweight if all I use is MySQL. What are convincing reasons for/against the use of SQLAlchemy in an application that only uses MySQL?
3
7
1.2
0
true
2,358,852
0
275
3
0
0
2,358,822
ORM means that your OO application actually makes sense when interpreted as the interaction of objects. No ORM means that you must wallow in the impedance mismatch between SQL and Objects. Working without an ORM means lots of redundant code to map between SQL query result sets, individual SQL statements and objects. SQLAchemy partitions your application cleanly into objects that interact and a persistence mechanism that (today) happens to be a relational database. With SQLAlchemy you stand a fighting chance of separating the core model and processing from the odd limitations and quirks of a SQL RDBMS.
1
0
0
If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?
3
python,mysql,sqlalchemy,pylons
0
2010-03-01T20:24:00.000
I’m trying to bulk insert data to SQL server express database. When doing bcp from Windows XP command prompt, I get the following error: C:\temp>bcp in -T -f -S Starting copy... SQLState = S1000, NativeError = 0 Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file 0 rows copied. Network packet size (bytes): 4096 Clock Time (ms.) Total : 4391 So, there is a problem with EOF. How to append a correct EOF character to this file using Perl or Python?
0
1
0.066568
0
false
2,371,680
0
3,283
2
1
0
2,371,645
This is not a problem with missing EOF, but with EOF that is there and is not expected by bcp. I am not a bcp tool expert, but it looks like there is some problem with format of your data files.
1
0
0
How to append EOF to file using Perl or Python?
3
python,sql-server,perl,bcp
0
2010-03-03T13:35:00.000
I’m trying to bulk insert data to SQL server express database. When doing bcp from Windows XP command prompt, I get the following error: C:\temp>bcp in -T -f -S Starting copy... SQLState = S1000, NativeError = 0 Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file 0 rows copied. Network packet size (bytes): 4096 Clock Time (ms.) Total : 4391 So, there is a problem with EOF. How to append a correct EOF character to this file using Perl or Python?
0
3
1.2
0
true
2,371,725
0
3,283
2
1
0
2,371,645
EOF is End Of File. What probably occurred is that the file is not complete; the software expects data, but there is none to be had anymore. These kinds of things happen when the export is interrupted (the dump software is quit while dumping), the copy of the dump file is aborted, the disk fills up during the dump, and so on. By the way, though EOF usually just means the end of a file, there does exist an EOF character. This is used because terminal (command line) input doesn't really end the way a file does, but it is sometimes necessary to pass an EOF to such a utility. I don't think it's used in real files, at least not to indicate an end of file. The file system knows perfectly well when the file has ended; it doesn't need an indicator to find that out. EDIT (shamelessly copied from a comment provided by John Machin): It can happen (unintentionally) in real files. All it needs is (1) a data-entry user who types Ctrl-Z by mistake, sees nothing on the screen, types the intended Shift-Z, and keeps going, and (2) validation software (written by e.g. the company president's nephew) which happily accepts Ctrl-anykey in text fields; then your database has a little bomb in it, just waiting for someone to produce a query to a flat file.
1
0
0
How to append EOF to file using Perl or Python?
3
python,sql-server,perl,bcp
0
2010-03-03T13:35:00.000
I have a big DBF file (~700MB). I'd like to select only a few lines from it using a python script. I've seen that dbfpy is a nice module that allows to open this type of database, but for now I haven't found any querying capability. Iterating through all the elements from python is simply too slow. Can I do what I want from python in a reasonable time?
9
2
0.132549
0
false
2,375,874
0
5,441
1
0
0
2,373,086
Chances are, your performance is more I/O bound than CPU bound. As such, the best way to speed it up is to optimize your search. You probably want to build some kind of index keyed by whatever your search predicate is.
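A hedged sketch of the indexing idea above, using the dbfpy module the question mentions (the field name and the exact dbfpy call details are assumptions): scan the file once, keep an in-memory index keyed on the search field, and answer later queries from the index instead of rescanning 700MB.

```python
from dbfpy import dbf   # dbfpy API assumed from its documentation

table = dbf.Dbf("big_table.dbf")
index = {}
for position, record in enumerate(table):
    # "CUSTNO" is a hypothetical field name; use whatever field you filter on
    index.setdefault(record["CUSTNO"], []).append(position)

# Later queries jump straight to the matching rows.
for position in index.get("A1234", []):
    print(table[position])
```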
1
0
0
Python: Fast querying in a big dbf (xbase) file
3
python,performance,python-3.x,dbf,xbase
0
2010-03-03T16:38:00.000
I am able to get the feed from the spreadsheet and worksheet ID. I want to capture the data from each cell. i.e, I am able to get the feed from the worksheet. Now I need to get data(string type?) from each of the cells to make a comparison and for input. How exactly can I do that?
6
1
0.039979
0
false
22,048,019
0
14,275
1
0
0
2,377,301
gspread is probably the fastest way to begin this process, however there are some speed limitations on updating data using gspread from your localhost. If you're moving large sets of data with gspread - for instance moving 20 columns of data over a column, you may want to automate the process using a CRON job.
1
0
0
How to write a python script to manipulate google spreadsheet data
5
python,google-sheets,gspread
0
2010-03-04T06:32:00.000
1. I have a list of data and an SQLite DB filled with past data along with some stats on each item. I have to do the following operations with them: check if each item in the list is present in the DB; if not, collect some stats on the new item and add them to the DB. Check if each item in the DB is in the list; if not, delete it from the DB. I cannot just create a new DB, because I have other processing to do on the new items and the missing items. In short, I have to update the DB with the new data in the list. What is the best way to do it? 2. I have to use SQLite with Python threads, so I put a lock around every DB read and write operation. Now it has slowed down the DB access. What is the overhead of the thread lock operation? And is there any other way to use the DB with multiple threads? Can someone help me with this? I am using Python 3.1.
0
0
0
0
false
2,378,530
0
227
1
0
0
2,378,364
You do not need to check anything: just use INSERT OR IGNORE in the first case (make sure you have corresponding unique fields so the INSERT does not create duplicates) and DELETE FROM tbl WHERE data NOT IN ('first item', 'second item', 'third item') in the second case. As stated in the official SQLite FAQ, "Threads are evil. Avoid them." As far as I remember, there were always problems with threads + sqlite. It's not that SQLite doesn't work with threads at all, just don't rely much on this feature. You can also have a single thread work with the database and pass all queries to it first, but the effectiveness of such an approach depends heavily on the style of database usage in your program.
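A small sketch of the two statements suggested above with Python's sqlite3 module; the table and column names are invented, and the NOT IN list is built from placeholders so it stays parameterised.

```python
import sqlite3

items = ["first item", "second item", "third item"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (data TEXT UNIQUE)")

# 1. add anything from the list that is not already in the DB
conn.executemany("INSERT OR IGNORE INTO tbl (data) VALUES (?)",
                 [(item,) for item in items])

# 2. remove anything in the DB that is no longer in the list
placeholders = ",".join("?" * len(items))
conn.execute("DELETE FROM tbl WHERE data NOT IN (%s)" % placeholders, items)
conn.commit()
```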
1
0
1
Need help on python sqlite?
1
python,sqlite,multithreading
0
2010-03-04T10:14:00.000
Writing an app in Python, and been playing with various ORM setups and straight SQL. All of which are ugly as sin. I have been looking at ZODB as an object store, and it looks a promising alternative... would you recommend it? What are your experiences, problems, and criticism, particularly regarding developer's perspectives, scalability, integrity, long-term maintenance and alternatives? Anyone start a project with it and ditch it? Why? Whilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(
43
15
1
0
false
2,390,062
0
10,874
3
0
0
2,388,870
Compared to "any key-value store", the key features for ZODB would be automatic integration of attribute changes with real ACID transactions, and clean, "arbitrary" references to other persistent objects. The ZODB is bigger than just the FileStorage used by default in Zope: The RelStorage backend lets you put your data in an RDBMS which can be backed up, replicated, etc. using standard tools. ZEO allows easy scaling of appservers and off-line jobs. The two-phase commit support allows coordinating transactions among multiple databases, including RDBMSes (assuming that they provide a TPC-aware layer). Easy hierarchy based on object attributes or containment: you don't need to write recursive self-joins to emulate it. Filesystem-based BLOB support makes serving large files trivial to implement. Overall, I'm very happy using ZODB for nearly any problem where the shape of the data is not obviously "square".
1
0
0
ZODB In Real Life
5
python,zodb
0
2010-03-05T18:01:00.000
Writing an app in Python, and been playing with various ORM setups and straight SQL. All of which are ugly as sin. I have been looking at ZODB as an object store, and it looks a promising alternative... would you recommend it? What are your experiences, problems, and criticism, particularly regarding developer's perspectives, scalability, integrity, long-term maintenance and alternatives? Anyone start a project with it and ditch it? Why? Whilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(
43
5
0.197375
0
false
2,391,063
0
10,874
3
0
0
2,388,870
I would recommend it. I really don't have any criticisms. If it's an object store you're looking for, this is the one to use. I've stored 2.5 million objects in it before and didn't feel a pinch.
1
0
0
ZODB In Real Life
5
python,zodb
0
2010-03-05T18:01:00.000
Writing an app in Python, and been playing with various ORM setups and straight SQL. All of which are ugly as sin. I have been looking at ZODB as an object store, and it looks a promising alternative... would you recommend it? What are your experiences, problems, and criticism, particularly regarding developer's perspectives, scalability, integrity, long-term maintenance and alternatives? Anyone start a project with it and ditch it? Why? Whilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(
43
2
0.07983
0
false
2,389,155
0
10,874
3
0
0
2,388,870
ZODB has been used for plenty of large databases. Most ZODB usage is/was probably Zope users, who migrate away from it if they migrate away from Zope. Performance is not as good as a relational database + ORM, especially if you have lots of writes. Long-term maintenance is not so bad: you want to pack the database from time to time, but that can be done live. You have to use ZEO if you are going to use more than one process with your ZODB, and ZEO is quite a lot slower than using ZODB directly. I have no idea how ZODB performs on flash disks.
1
0
0
ZODB In Real Life
5
python,zodb
0
2010-03-05T18:01:00.000
I process a lot of text/data that I exchange between Python, R, and sometimes Matlab. My go-to is the flat text file, but also use SQLite occasionally to store the data and access from each program (not Matlab yet though). I don't use GROUPBY, AVG, etc. in SQL as much as I do these operations in R, so I don't necessarily require the database operations. For such applications that requires exchanging data among programs to make use of available libraries in each language, is there a good rule of thumb on which data exchange format/method to use (even XML or NetCDF or HDF5)? I know between Python -> R there is rpy or rpy2 but I was wondering about this question in a more general sense - I use many computers which all don't have rpy2 and also use a few other pieces of scientific analysis software that require access to the data at various times (the stages of processing and analysis are also separated).
8
15
1.2
1
true
2,392,026
0
3,563
1
0
0
2,392,017
If all the languages support SQLite - use it. The power of SQL might not be useful to you right now, but it probably will be at some point, and it saves you having to rewrite things later when you decide you want to be able to query your data in more complicated ways. SQLite will also probably be substantially faster if you only want to access certain bits of data in your datastore - since doing that with a flat-text file is challenging without reading the whole file in (though it's not impossible).
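A small sketch of using an SQLite file as the exchange format discussed above: Python writes a table once, and the other tools (R via RSQLite, for example) can then pull back only the rows they need. The file, table and column names are arbitrary.

```python
import sqlite3

conn = sqlite3.connect("exchange.db")
conn.execute("CREATE TABLE IF NOT EXISTS measurements (run INTEGER, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [(1, 0.5), (1, 0.7), (2, 0.9)])
conn.commit()

# Later, from any language with an SQLite driver, fetch just the slice you need
# instead of parsing a whole flat file.
rows = conn.execute("SELECT value FROM measurements WHERE run = ?", (2,)).fetchall()
conn.close()
```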
1
0
0
SQLite or flat text file?
2
python,sql,database,r,file-format
0
2010-03-06T09:30:00.000
They will also search part of their name. Not only words with spaces. If they type "Matt", I expect to retrieve "Matthew" too.
3
0
0
0
false
2,395,473
0
291
1
0
0
2,394,870
If you are trying to search for the names from a programming language, you can use the regular expression package in Java, something like java.util.regex.*;
1
0
0
Suppose I have 400 rows of people's names in a database. What's the best way to do a search for their names?
4
python,mysql,database,search,indexing
0
2010-03-07T01:54:00.000
Say I have a simple table that contains username, firstname, lastname. How do I express this in berkeley Db? I'm currently using bsddb as the interface. Cheers.
1
4
1.2
0
true
2,399,691
0
1,199
1
0
0
2,399,643
You have to pick one "column" as the key (must be unique; I imagine that would be "username" in your case) -- the only way searches will ever possibly happen. The other columns can be made to be the single string value of that key by any way you like, from pickling to simple joining with a character that's guaranteed to never occur in any of the columns, such as `\0' for many kind of "readable text strings". If you need to be able to search by different keys you'll need other, supplementary and separate bsddb databases set up as "indices" into your main table -- it's lots of work, and there's lots of literature on the subject. (Alternatively, you move to a higher-abstraction technology, such as sqlite, which handles the indexing neatly on your behalf;-).
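A sketch of the "one key, joined value" layout described above, using the Python 2 bsddb interface the question refers to; the '\0' separator assumes it never occurs in the data, and the file name is arbitrary.

```python
import bsddb   # Python 2 module; removed in Python 3

db = bsddb.hashopen("users.db", "c")

def put_user(username, firstname, lastname):
    # the key must be unique; everything else is packed into the value
    db[username] = "\0".join([firstname, lastname])

def get_user(username):
    firstname, lastname = db[username].split("\0")
    return firstname, lastname

put_user("jsmith", "John", "Smith")
print(get_user("jsmith"))
db.close()
```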
1
0
0
Expressing multiple columns in berkeley db in python?
2
python,berkeley-db,bsddb,okvs
0
2010-03-08T06:25:00.000
Suppose I have 500 rows of data, each with a paragraph of text (like this paragraph). That's it. I want to do a search that matches parts of words (%LIKE%, not FULL_TEXT). What would be faster? 1) SELECT * FROM ... WHERE ... LIKE "%query%"; which would put the load on the database server, or 2) select everything, then go through each row and do .find() >= 0, which would put the load on the web server. This is a website, and people will be searching frequently.
0
1
1.2
0
true
2,401,635
0
105
1
0
0
2,401,508
This is very hard for us to determine without knowing the amount of text to search, the load and configuration on the database server, the load and configuration on the webserver, etc. With that said, I would conceptually definitely go for the first scenario. It should be lightning-fast when searching only 500 rows.
1
0
0
What would be the most efficient way to do this search (mysql or text)?
2
python,mysql,database,regex,search
0
2010-03-08T13:18:00.000
I have an idea for a product that I want to be web-based. But because I live in a part of the world where the internet is not always available, there needs to be a client desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as Frontpage. So from my research, I think I have the following options; PHP, Ruby on Rails, Python or .NET for the programming side. MySQL for the DB. And Apache, or possibly IIS, for the webserver. I will probably start with a local ISP provider for the cloud servce. But then maybe move to something more "robust" and universal in the future, ie. Amazon, or Azure, or something along that line. My question then is this. What would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but the ones I have researched and thought of. Thanks everyone, Craig
1
0
0
0
false
2,430,572
1
174
3
0
0
2,428,077
If you want to run a version of the server on desktops, your best options would be Python, Rails, or Java servlets, all of which can be easily packaged into small self-contained servers with no dependencies. My recommendation for the desktop would be HTML 5 local storage. The standard hasn't been finalized, but there is experimental support in Google Chrome. If you can force your users to use a specific browser version, you should be OK until it is finalized. I would recommend looking at Django and Rails before any other framework. They have different design philosophies, so one of them might be better suited for your application. Another framework to consider is Grails, which is essentially a clone of Rails in the Groovy language.
1
0
0
Old desktop programmer wants to create S+S project
3
php,python,ruby-on-rails,programming-languages,saas
0
2010-03-11T19:38:00.000
I have an idea for a product that I want to be web-based. But because I live in a part of the world where the internet is not always available, there needs to be a client desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as Frontpage. So from my research, I think I have the following options; PHP, Ruby on Rails, Python or .NET for the programming side. MySQL for the DB. And Apache, or possibly IIS, for the webserver. I will probably start with a local ISP provider for the cloud servce. But then maybe move to something more "robust" and universal in the future, ie. Amazon, or Azure, or something along that line. My question then is this. What would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but the ones I have researched and thought of. Thanks everyone, Craig
1
0
0
0
false
2,429,484
1
174
3
0
0
2,428,077
The languages you list are all server-side components. The big question is whether you can sensibly build a thick client. Effectively, you could develop a multi-tier application where the webserver sits on the client and uses a web service as a data feed if/when it's available, but the solution is not very portable. You could build a purely Ajax-driven website in JavaScript and then deploy it to the client as signed JavaScript on the local filesystem (it needs to be signed to get around the restriction that JavaScript can normally only connect back to the server it was served from). Another approach would be to use Google Gears, but that would be a single-browser solution. C.
1
0
0
Old desktop programmer wants to create S+S project
3
php,python,ruby-on-rails,programming-languages,saas
0
2010-03-11T19:38:00.000
I have an idea for a product that I want to be web-based. But because I live in a part of the world where the internet is not always available, there needs to be a client desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as Frontpage. So from my research, I think I have the following options; PHP, Ruby on Rails, Python or .NET for the programming side. MySQL for the DB. And Apache, or possibly IIS, for the webserver. I will probably start with a local ISP provider for the cloud servce. But then maybe move to something more "robust" and universal in the future, ie. Amazon, or Azure, or something along that line. My question then is this. What would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but the ones I have researched and thought of. Thanks everyone, Craig
1
0
0
0
false
2,428,452
1
174
3
0
0
2,428,077
If you want a 'desktop component' that is available for you to do development on whenever your internet is out, you could really choose any of those technologies. You can always have a local server (like apache) running on your machine, as well as a local sql database, though if your database contains a large amount of data you may need to scale it down. Ruby on Rails may be the easiest for you to get started with, though, since it comes packaged with WEBrick (a ruby library that provides HTTP services), and SQLite, a lightweight SQL database management system. Ruby on Rails is configured by default to use these.
1
0
0
Old desktop programmer wants to create S+S project
3
php,python,ruby-on-rails,programming-languages,saas
0
2010-03-11T19:38:00.000
Let's say I have an user registration form. In this form, I have the option for the user to upload a photo. I have an User table and Photo table. My User table has a "PathToPhoto" column. My question is how do I fill in the "PathToPhoto" column if the photo is uploaded and inserted into Photo table before the user is created? Another way to phrase my question is how to get the newly uploaded photo to be associated to the user that may or may not be created next. I'm using python and postgresql.
0
0
1.2
0
true
2,435,639
0
397
1
0
0
2,435,281
To make sure we're on the same page, is the following correct? You're inserting the photo information into the Photo table immediately after the user uploads the photo but before he/she submits the form; When the user submits the form, you're inserting a row into the User table; One of the items in that row is information about the previously created photo entry. If so, you should be able to store the "path to photo" information in a Python variable until the user submits the form, and then use the value from that variable in your User-table insert.
1
0
0
Database: storing data from user registration form
1
python,database,postgresql
0
2010-03-12T19:32:00.000
I am planning a big project (1,000,000 users, approximately 500 requests per second at peak time). For performance I'm going to use a non-relational DBMS (each request could cost a lot of instructions in a relational DBMS like MySQL), so I can't use the DAL. My question is: how does web2py handle big traffic, does it work concurrently? I'm considering web2py or Grok (Zope). How does the ZODB (Z Object Database) work with a lot of data? Is there some comparison with object-relational PostgreSQL? Could you advise me please?
3
1
0.066568
0
false
9,985,357
1
1,388
2
0
0
2,459,549
Zope and the ZODB have been used with big applications, but I'd still consider linking Zope with MySQL or something like that for serious large-scale applications. Even though Zope has had a lot of development cycles, it is usually used with another database engine for good reason. As far as I know, the argument applies doubly for web2py.
1
0
0
web2py or grok (zope) on a big portal,
3
python,zope,web2py,zodb,grok
0
2010-03-17T02:26:00.000
I am planning a big project (1,000,000 users, approximately 500 requests per second at peak times). For performance I'm going to use a non-relational DBMS (each request could cost a lot of instructions in a relational DBMS like MySQL) - so I can't use the DAL. My question is: how does web2py work under heavy traffic, does it work concurrently? I'm considering web2py or Grok (Zope). How does ZODB (Z Object Database) perform with a lot of data? Is there some comparison with object-relational PostgreSQL? Could you advise me please.
3
7
1
0
false
2,459,620
1
1,388
2
0
0
2,459,549
First, don't assume that a data abstraction layer will have unacceptable performance, until you actually see it in practice. It is pretty easy to switch to RAW sql if and when you run into a problem. Second, most users who worry about there server technology handling a million users never finish their applications. Pick whatever technology you think will enable you to build the best application in the shortest time. Any technology can be scaled, at the very least, through clustering.
1
0
0
web2py or grok (zope) on a big portal,
3
python,zope,web2py,zodb,grok
0
2010-03-17T02:26:00.000
I am using Python MySQLdb, and I want to insert a Unix timestamp into a DATETIME field in MySQL. How do I do that with cursor.execute?
5
1
0.066568
0
false
2,460,546
0
9,981
1
0
0
2,460,491
Solved. I just did this: datetime.datetime.now() ...insert that into the column.
1
0
0
In Python, if I have a unix timestamp, how do I insert that into a MySQL datetime field?
3
python,mysql,database,datetime,date
0
2010-03-17T07:28:00.000
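A small sketch of what that answer amounts to with MySQLdb, extended to the Unix timestamp the question title mentions; the table and connection details are made up:

    import datetime
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="mydb")
    cur = conn.cursor()

    unix_ts = 1271424000                                  # the Unix timestamp you already have
    dt = datetime.datetime.fromtimestamp(unix_ts)         # or utcfromtimestamp() for UTC

    # Let the driver do the quoting; MySQLdb converts datetime objects to DATETIME literals.
    cur.execute("INSERT INTO events (created_at) VALUES (%s)", (dt,))
    conn.commit()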
I want to store the images related to a each person's profile in the DB and retrieve them when requested and save it as .jpg file - and display it to the users. How could I render the image data stored in the DB as an image and store it locally??
3
1
0.066568
0
false
2,477,074
0
19,327
1
0
0
2,477,045
Why don't you simply store the images on the file system, and only store their references on the database. That's a lot more elegant, and won't consume loads of your database. Also, you won't have to use any kind of binary functions to read them from the DB, saving memory and loading time. Is there a very specific reason why you wanna store it on the DB? Cheers
1
0
0
Storing and Retrieving Images from Database using Python
3
python,image-manipulation
0
2010-03-19T12:08:00.000
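A rough sketch of the answer's suggestion - write the image to disk and keep only its path in the database. The storage directory, table and column names are invented; any DB-API driver would look the same, sqlite3 just keeps the example self-contained:

    import os
    import sqlite3

    MEDIA_ROOT = "/var/app/media/profile_photos"          # hypothetical storage directory

    def save_profile_photo(conn, person_id, image_bytes, filename):
        # 1. write the image to the file system
        path = os.path.join(MEDIA_ROOT, "%d_%s" % (person_id, filename))
        with open(path, "wb") as f:
            f.write(image_bytes)
        # 2. store only the path in the database
        conn.execute("UPDATE person SET photo_path = ? WHERE id = ?", (path, person_id))
        conn.commit()
        return path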
A legacy web application written using PHP and utilizing MySql database needs to be rewritten completely. However, the existing database structure must not be changed at all. I'm looking for suggestions on which framework would be most suitable for this task? Language candidates are Python, PHP, Ruby and Java. According to many sources it might be challenging to utilize rails effectively with existing database. Also I have not found a way to automatically generate models out of the database. With Django it's very easy to generate models automatically. However I'd appreciate first hand experience on its suitability to work with legacy DBs. The database in question contains all kinds of primary keys, including lots of composite keys. Also I appreciate suggestions of other frameworks worth considering.
1
0
0
0
false
2,512,975
1
1,525
2
0
0
2,507,463
There are no clear cut winners when picking a web framework. Each platform you mentioned has its benefits and drawbacks (cost of hardware, professional support, community support, etc.). Depending on your time table, project requirements, and available hardware resources you are probably going to need some different answers.Personally, I would start your investigation with a platform where you and your team are most experienced. Like many of the other posters I can only speak to what I'm actively using now, and in my case it is Java. If Java seems to match your projects requirements, you probably want to go with one of the newer frameworks with an active community. Currently Spring Web MVC, Struts2, and Stripes seem to be fairly popular. These frameworks are mostly, if not totally, independent of the persistence layer, but all integrate well with technologies like hibernate and jpa; although you have to do most, if not all, of the wiring yourself. If you want to take the Java road there are also pre-built application stacks that take care of most of wiring issues for you. For an example you might want to look at Matt Raible's AppFuse. He has built an extensible starter application with many permutations of popular java technologies. If you are interested in the JVM as a platform, you may also want to look at complete stack solutions like Grails, or tools that help you build your stack quickly like Spring Roo. Almost all of the full stack solutions I've seen allow for integration with a legacy database schema. As long as your database is well designed, you should be able to map your tables. The mention of composite keys kind of scares me, but depending on your persistence technology this may or may not be an issue. Hibernate in Java/.NET supports mapping to composite keys, as does GORM in grails (built on hibernate). In almost all cases these mappings are discouraged, but people who build persistence frameworks know you can't always scorch earth and completely recreate your model.
1
0
0
Web framework for an application utilizing existing database?
8
java,php,python,ruby
0
2010-03-24T12:11:00.000
A legacy web application written using PHP and utilizing MySql database needs to be rewritten completely. However, the existing database structure must not be changed at all. I'm looking for suggestions on which framework would be most suitable for this task? Language candidates are Python, PHP, Ruby and Java. According to many sources it might be challenging to utilize rails effectively with existing database. Also I have not found a way to automatically generate models out of the database. With Django it's very easy to generate models automatically. However I'd appreciate first hand experience on its suitability to work with legacy DBs. The database in question contains all kinds of primary keys, including lots of composite keys. Also I appreciate suggestions of other frameworks worth considering.
1
2
0.049958
0
false
2,507,492
1
1,525
2
0
0
2,507,463
I have very good experience with Django. Every time I needed it was up to the task for interfacing with existing database. Autogenerated models are the start, as MySQL is not the strictest with its schema. Not that it will not work only that usually some of the db restrictions are held in app itself.
1
0
0
Web framework for an application utilizing existing database?
8
java,php,python,ruby
0
2010-03-24T12:11:00.000
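The "autogenerated models" the answer refers to come from running python manage.py inspectdb against the legacy database; the generated classes can then be pinned to the existing tables. A minimal hand-written sketch of what such a model looks like, with invented table and column names:

    from django.db import models

    class Invoice(models.Model):
        invoice_no = models.IntegerField(primary_key=True, db_column="INVOICE_NO")
        customer = models.CharField(max_length=100, db_column="CUST_NAME")

        class Meta:
            db_table = "LEGACY_INVOICES"   # point at the table that already exists
            managed = False                # never let Django create or alter it

Composite primary keys are the weak spot: a Django model expects a single primary-key column, so tables built around composite keys usually need a surrogate key or hand-written queries.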
I am in need of a lightweight way to store dictionaries of data into a database. What I need is something that: Creates a database table from a simple type description (int, float, datetime etc) Takes a dictionary object and inserts it into the database (including handling datetime objects!) If possible: Can handle basic references, so the dictionary can reference other tables I would prefer something that doesn't do a lot of magic. I just need an easy way to setup and get data into an SQL database. What would you suggest? There seems to be a lot of ORM software around, but I find it hard to evaluate them.
1
3
0.148885
0
false
2,539,235
0
1,069
1
0
0
2,539,147
SQLAlchemy offers an ORM much like django, but does not require that you work within a web framework.
1
0
1
Lightweight Object->Database in Python
4
python,sql,orm
0
2010-03-29T15:30:00.000
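A rough sketch of how that looks with SQLAlchemy Core (no web framework involved): build a table from a simple name-to-type description, then insert plain dictionaries. The table and column names are invented for the example:

    import datetime
    from sqlalchemy import (create_engine, MetaData, Table, Column,
                            Integer, Float, DateTime, String)

    engine = create_engine("sqlite:///data.db")
    metadata = MetaData()

    # simple type description -> table
    spec = {"id": Integer, "value": Float, "taken_at": DateTime, "label": String(50)}
    measurement = Table(
        "measurement", metadata,
        *[Column(name, coltype, primary_key=(name == "id")) for name, coltype in spec.items()]
    )
    metadata.create_all(engine)

    # dictionary -> row (datetime objects are handled by the driver)
    row = {"id": 1, "value": 3.14, "taken_at": datetime.datetime.now(), "label": "probe-a"}
    with engine.begin() as conn:
        conn.execute(measurement.insert(), row)

References between tables are just ForeignKey columns added to the same description.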
Recently i have developed a billing application for my company with Python/Django. For few months everything was fine but now i am observing that the performance is dropping because of more and more users using that applications. Now the problem is that the application is now very critical for the finance team. Now the finance team are after my life for sorting out the performance issue. I have no other option but to find a way to increase the performance of the billing application. So do you guys know any performance optimization techniques in python that will really help me with the scalability issue Guys we are using mysql database and its hosted on apache web server on Linux box. Secondly what i have noticed more is the over all application is slow and not the database transactional part. For example once the application is loaded then it works fine but if they navigate to other link on that application then it takes a whole lot of time. And yes we are using HTML, CSS and Javascript
11
6
1
0
false
2,545,940
1
1,942
3
0
0
2,545,820
ok, not entirely to the point, but before you go and start fixing it, make sure everyone understands the situation. it seems to me that they're putting some pressure on you to fix the "problem". well first of all, when you wrote the application, have they specified the performance requirements? did they tell you that they need operation X to take less than Y secs to complete? Did they specify how many concurrent users must be supported without penalty to the performance? If not, then tell them to back off and that it is iteration (phase, stage, whatever) one of the deployment, and the main goal was the functionality and testing. phase two is performance improvements. let them (with your help obviously) come up with some non functional requirements for the performance of your system. by doing all this, a) you'll remove the pressure applied by the finance team (and i know they can be a real pain in the bum) b) both you and your clients will have a clear idea of what you mean by "performance" c) you'll have a base that you can measure your progress and most importantly d) you'll have some agreed time to implement/fix the performance issues. PS. that aside, look at the indexing... :)
1
0
0
Optimization Techniques in Python
9
python
1
2010-03-30T14:07:00.000
Recently i have developed a billing application for my company with Python/Django. For few months everything was fine but now i am observing that the performance is dropping because of more and more users using that applications. Now the problem is that the application is now very critical for the finance team. Now the finance team are after my life for sorting out the performance issue. I have no other option but to find a way to increase the performance of the billing application. So do you guys know any performance optimization techniques in python that will really help me with the scalability issue Guys we are using mysql database and its hosted on apache web server on Linux box. Secondly what i have noticed more is the over all application is slow and not the database transactional part. For example once the application is loaded then it works fine but if they navigate to other link on that application then it takes a whole lot of time. And yes we are using HTML, CSS and Javascript
11
4
1.2
0
true
2,546,955
1
1,942
3
0
0
2,545,820
A surprising feature of Python is that the pythonic code is quite efficient... So a few general hints: Use built-ins and standard functions whenever possible, they're already quite well optimized. Try to use lazy generators instead one-off temporary lists. Use numpy for vector arithmetic. Use psyco if running on x86 32bit. Write performance critical loops in a lower level language (C, Pyrex, Cython, etc.). When calling the same method of a collection of objects, get a reference to the class function and use it, it will save lookups in the objects dictionaries (this one is a micro-optimization, not sure it's worth) And of course, if scalability is what matters: Use O(n) (or better) algorithms! Otherwise your system cannot be linearly scalable. Write multiprocessor aware code. At some point you'll need to throw more computing power at it, and your software must be ready to use it!
1
0
0
Optimization Techniques in Python
9
python
1
2010-03-30T14:07:00.000
Recently i have developed a billing application for my company with Python/Django. For few months everything was fine but now i am observing that the performance is dropping because of more and more users using that applications. Now the problem is that the application is now very critical for the finance team. Now the finance team are after my life for sorting out the performance issue. I have no other option but to find a way to increase the performance of the billing application. So do you guys know any performance optimization techniques in python that will really help me with the scalability issue Guys we are using mysql database and its hosted on apache web server on Linux box. Secondly what i have noticed more is the over all application is slow and not the database transactional part. For example once the application is loaded then it works fine but if they navigate to other link on that application then it takes a whole lot of time. And yes we are using HTML, CSS and Javascript
11
2
0.044415
0
false
2,546,996
1
1,942
3
0
0
2,545,820
Before you can "fix" something you need to know what is "broken". In software development that means profiling, profiling, profiling. Did I mention profiling? Without profiling you don't know where CPU cycles and wall clock time are going. Like others have said, to get any more useful information you need to post the details of your entire stack: Python version, what you are using to store the data in (MySQL, Postgres, flat files, etc.), what web server interface (CGI, FastCGI, WSGI, Passenger, etc.), and how you are generating the HTML, CSS and, I assume, JavaScript. Then you can get more specific answers for those tiers.
1
0
0
Optimization Techniques in Python
9
python
1
2010-03-30T14:07:00.000
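A minimal sketch of the profiling step that answer recommends, using the standard-library cProfile and pstats modules; billing_report() is a made-up stand-in for whatever slow view or function you want to measure:

    import cProfile
    import pstats

    cProfile.run("billing_report()", "billing.prof")   # run the slow code under the profiler

    stats = pstats.Stats("billing.prof")
    stats.sort_stats("cumulative")
    stats.print_stats(20)                               # the 20 most expensive call paths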
Why do people use SQLAlchemy instead of MySQLdb? What advantages does it offer?
24
6
1
0
false
2,550,578
0
19,410
3
0
0
2,550,292
In addition to what Alex said... "Not wanting to learn SQL" is probably a bad thing. However, if you want to get more non-technical people involved as part of the development process, ORMs do a pretty good job at it because it does push this level of complexity down a level. One of the elements that has made Django successful is its ability to let "newspaper journalists" maintain a website, rather than software engineers. One of the limitations of ORMs is that they are not as scalable as using raw SQL. At a previous job, we wanted to get rid of a lot of manual SQL generation and switched to an ORM for ease-of-use (SQLAlchemy, Elixir, etc.), but months later, I ended up having to write raw SQL again to get around the inefficient or high latency queries that were generated by the ORM system.
1
0
0
Purpose of SQLAlchemy over MySQLdb
3
python,sql,mysql,sqlalchemy
0
2010-03-31T03:19:00.000
Why do people use SQLAlchemy instead of MySQLdb? What advantages does it offer?
24
32
1.2
0
true
2,550,364
0
19,410
3
0
0
2,550,292
You don't use SQLAlchemy instead of MySQLdb—you use SQLAlchemy to access something like MySQLdb, oursql (another MySQL driver that I hear is nicer and has better performance), the sqlite3 module, psycopg2, or whatever other database driver you are using. An ORM (like SQLAlchemy) helps abstract away the details of the database you are using. This allows you to keep from the miry details of the database system you're using, avoiding the possibility of errors some times (and introducing the possibility of others), and making porting trivial (at least in theory).
1
0
0
Purpose of SQLAlchemy over MySQLdb
3
python,sql,mysql,sqlalchemy
0
2010-03-31T03:19:00.000
Why do people use SQLAlchemy instead of MySQLdb? What advantages does it offer?
24
12
1
0
false
2,550,304
0
19,410
3
0
0
2,550,292
Easier portability among different DB engines (say that tomorrow you decide you want to move to sqlite, or PostgreSQL, or...), and higher level of abstraction (and thus potentially higher productivity). Those are some of the good reasons. There are also some bad reasons for using an ORM, such as not wanting to learn SQL, but I suspect SQLAlchemy in particular is not really favored by people for such bad reasons for wanting an ORM rather than bare SQL;-).
1
0
0
Purpose of SQLAlchemy over MySQLdb
3
python,sql,mysql,sqlalchemy
0
2010-03-31T03:19:00.000
I'm going to write a web portal using Cassandra databases. Can you advise me which Python interface to use? Thrift, lazygal or pycassa? Are there any benefits to using the more complicated Thrift over the cleaner pycassa? What about performance - is it the same (all of them are just layers)? Thanks for any advice.
5
4
1.2
0
true
2,567,396
0
1,065
1
0
0
2,561,804
Use pycassa if you don't know what to use. Use lazyboy if you want it to maintain indexes for you. It's significantly more complex.
1
0
0
Cassandra database, which python interface?
1
python,database,cassandra,thrift
0
2010-04-01T16:05:00.000
I currently have a SQL database of passwords stored in MD5. The server needs to generate a unique key, then send it to the client. The client will use the key as a salt, hash it together with the password, and send the result back to the server. The only problem is that the SQL DB has the passwords in MD5 already. Therefore, for this to work, I would have to MD5 the password client side, then MD5 it again with the salt. Am I doing this wrong? It doesn't seem like a proper solution. Any information is appreciated.
2
1
0.099668
0
false
2,564,367
0
260
1
0
0
2,564,312
You should use SSL to encrypt the connection, then send the password over plain text from the client. The server will then md5 and compare with the md5 hash in the database to see if they are the same. If so auth = success. MD5'ing the password on the client buys you nothing because a hacker with the md5 password can get in just as easy as if it was in plain text.
1
0
0
Server authorization with MD5 and SQL
2
python,sql,database,authorization,md5
0
2010-04-01T23:42:00.000
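A sketch of the server-side check that answer describes, using the standard hashlib module; the function and argument names are illustrative. Note that this is only as strong as the unsalted MD5 hashes already sitting in the database:

    import hashlib

    def password_matches(submitted_password, stored_md5_hex):
        # The password arrives in plain text over the SSL-protected connection;
        # hash it here and compare with the MD5 value stored in the SQL table.
        digest = hashlib.md5(submitted_password.encode("utf-8")).hexdigest()
        return digest == stored_md5_hex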
Is there an easy way (without downloading any plugins) to connect to a MySQL database in Python? Also, what would be the difference between calling a PHP script to retrieve the data from the database and hand it over to Python, and importing one of these third-party plugins that requires some additional software on the server? EDIT: the server has PHP and Python installed by default.
0
-1
-0.066568
0
false
2,569,567
0
332
2
0
0
2,569,427
No, there is no way that I've ever heard of or can think of to connect to a MySQL database with vanilla Python. Just install the MySQLdb Python package. You can typically do: sudo easy_install MySQLdb
1
0
0
Python and MySQL
3
php,python,mysql
0
2010-04-02T22:00:00.000
Is there an easy way (without downloading any plugins) to connect to a MySQL database in Python? Also, what would be the difference between calling a PHP script to retrieve the data from the database and hand it over to Python, and importing one of these third-party plugins that requires some additional software on the server? EDIT: the server has PHP and Python installed by default.
0
-1
-0.066568
0
false
2,569,448
0
332
2
0
0
2,569,427
If you don't want to download the python libraries to connect to MySQL, the effective answer is no, not trivially.
1
0
0
Python and MySQL
3
php,python,mysql
0
2010-04-02T22:00:00.000
I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases. The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and, unsurprisingly, it is very slow. I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary and then working with it. My problem is storing and, mostly, reading the data quickly. Many thanks!
15
1
0.022219
0
false
2,981,162
0
11,416
7
0
0
2,577,967
It has been a couple of months since I posted this question and I wanted to let you all know how I solved this problem. I am using Berkeley DB with the bsddb module instead of loading all the data into a Python dictionary. I am not fully happy, but my users are. My next step is trying to get a shared server with Redis, but unless users start complaining about speed, I doubt I will get it. Many thanks to everybody who helped here, and I hope this question and its answers are useful to somebody else.
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases. The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and, unsurprisingly, it is very slow. I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary and then working with it. My problem is storing and, mostly, reading the data quickly. Many thanks!
15
0
0
0
false
2,581,460
0
11,416
7
0
0
2,577,967
Take a look at mongodb.
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases. The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and, unsurprisingly, it is very slow. I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary and then working with it. My problem is storing and, mostly, reading the data quickly. Many thanks!
15
12
1
0
false
2,577,979
0
11,416
7
0
0
2,577,967
You probably do need a full relational DBMS, if not right now, very soon. If you start now while your problems and data are simple and straightforward then when they become complex and difficult you will have plenty of experience with at least one DBMS to help you. You probably don't need MySQL on all desktops, you might install it on a server for example and feed data out over your network, but you perhaps need to provide more information about your requirements, toolset and equipment to get better suggestions. And, while the other DBMSes have their strengths and weaknesses too, there's nothing wrong with MySQL for large and complex databases. I don't know enough about SQLite to comment knowledgeably about it. EDIT: @Eric from your comments to my answer and the other answers I form even more strongly the view that it is time you moved to a database. I'm not surprised that trying to do database operations on a 900MB Python dictionary is slow. I think you have to first convince yourself, then your management, that you have reached the limits of what your current toolset can cope with, and that future developments are threatened unless you rethink matters. If your network really can't support a server-based database than (a) you really need to make your network robust, reliable and performant enough for such a purpose, but (b) if that is not an option, or not an early option, you should be thinking along the lines of a central database server passing out digests/extracts/reports to other users, rather than simultaneous, full RDBMS working in a client-server configuration. The problems you are currently experiencing are problems of not having the right tools for the job. They are only going to get worse. I wish I could suggest a magic way in which this is not the case, but I can't and I don't think anyone else will.
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases. The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and, unsurprisingly, it is very slow. I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary and then working with it. My problem is storing and, mostly, reading the data quickly. Many thanks!
15
1
0.022219
0
false
2,578,659
0
11,416
7
0
0
2,577,967
It sounds like each department has their own feudal database, and this implies a lot of unnecessary redundancy and inefficiency. Instead of transferring hundreds of megabytes to everyone across your network, why not keep your data in MySQL and have the departments upload their data to the database, where it can be normalized and accessible by everyone? As your organization grows, having completely different departmental databases that are unaware of each other, and contain potentially redundant or conflicting data, is going to become very painful.
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases. The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and, unsurprisingly, it is very slow. I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary and then working with it. My problem is storing and, mostly, reading the data quickly. Many thanks!
15
0
0
0
false
2,578,310
0
11,416
7
0
0
2,577,967
If you have that problem with a CSV file, maybe you can just pickle the dictionary and generate a pickled "binary" file with the pickle.HIGHEST_PROTOCOL option. It can be faster to read and you get a smaller file. You can load the CSV file once and then generate the pickled file, allowing faster loads on subsequent accesses. Anyway, with 900 MB of information, you're going to spend some time loading it into memory. Another approach is not to load it into memory in one step, but to load only the information when needed, maybe making different files by date, or any other category (company, type, etc.)
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
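A rough sketch of the caching approach that answer describes - parse the CSV once, then reuse a binary pickle on later runs. The file names and row layout are invented:

    import csv
    import os
    import pickle

    CSV_FILE = "sales_dump.csv"
    CACHE_FILE = "sales_dump.pickle"

    def load_data():
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE, "rb") as f:
                return pickle.load(f)                  # fast path: read the binary cache
        data = {}
        with open(CSV_FILE) as f:
            for row in csv.reader(f):
                data[row[0]] = row[1:]                 # key the records however suits you
        with open(CACHE_FILE, "wb") as f:
            pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
        return data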
I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. Sales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow. I thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly. Many thanks!
15
1
0.022219
0
false
2,578,080
0
11,416
7
0
0
2,577,967
Have you done any benchmarking to confirm that it is the text files that are slowing you down? If you haven't, there's a good chance that tweaking some other part of the code will speed things up so that it's fast enough.
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. Sales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow. I thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time. Update: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly. Many thanks!
15
1
0.022219
0
false
2,578,751
0
11,416
7
0
0
2,577,967
Does the machine this process runs on have sufficient memory and bandwidth to handle this efficiently? Putting MySQL on a slow machine and recoding the tool to use MySQL rather than text files could potentially be far more costly than simply adding memory or upgrading the machine.
1
0
0
Best DataMining Database
9
python,database,nosql,data-mining
0
2010-04-05T10:59:00.000
I'm writing a database of all DVDs I have at home. One of the fields, actors, I would like it to be a set of values from an other table, which is storing actors. So for every film I want to store a list of actors, all of which selected from a list of actors, taken from a different table. Is it possible? How do I do this? It would be a set of foreign keys basically. I'm using a MySQL database for a Django application (python), so any hint in SQL or Python would be much appreciated. I hope the question is clear, many thanks.
3
1
0.099668
0
false
2,579,922
1
85
1
0
0
2,579,866
The answer is clear too. You will need not a field, but another films_actors table. This table would act as your field, but be much more reliable. This is called a many-to-many relation.
1
0
0
Using set with values from a table
2
python,sql,mysql,django-models
0
2010-04-05T17:31:00.000
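Since the question is tagged django-models: a ManyToManyField is exactly the films_actors junction table the answer describes, created and managed for you. A minimal sketch with invented model names:

    from django.db import models

    class Actor(models.Model):
        name = models.CharField(max_length=100)

    class Film(models.Model):
        title = models.CharField(max_length=200)
        actors = models.ManyToManyField(Actor)

    # film.actors.add(actor) and film.actors.all() read and write the hidden
    # junction table, so every actor is always picked from the Actor table.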
I have a set of .csv files that I want to process. It would be far easier to process them with SQL queries. I wonder if there is some way to load a .csv file and use SQL to look into it with a scripting language like Python or Ruby. Loading it with something similar to ActiveRecord would be awesome. The problem is that I don't want to have to run a database somewhere prior to running my script. I shouldn't need additional installations outside of the scripting language and some modules. My question is which language and what modules should I use for this task. I looked around and can't find anything that suits my needs. Is it even possible?
26
3
0.085505
0
false
2,580,542
0
12,118
1
0
0
2,580,497
CSV files are not databases--they have no indices--and any SQL simulation you imposed upon them would amount to little more than searching through the entire thing over and over again.
1
0
0
Database on the fly with scripting languages
7
python,sql,database,sqlite,sqlalchemy
0
2010-04-05T19:10:00.000
I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while as the items were books. Now I want to add non-books. some of the non-books have EANs or ISSNs, some do not. It's in PostgreSQL with django apps for the frontend and JSON api, plus a few supporting python command-line tools for management. the items in question are mostly books and artist prints, some of which are self-published. What is nice about using ISBNs as primary keys is that in on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etcetera, many of which I've taken advantage. some such tools are off-the-shelf (PyISBN, PyAWS etc) and some are hand-rolled -- I tried to keep all of these parts nice and decoupled, but you know how things can get. I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs' but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers. should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? if anyone has any experience with working with these systems, I'd love to hear about it, your advice is most welcome.
2
3
0.148885
0
false
2,610,094
1
1,516
2
0
0
2,610,000
I don't know Postgres, but normally the ISBN would be a unique index key, not the primary key. It's better to have an integer as the primary/foreign key. That way you only need to add a new nullable EAN/ISSN field.
1
0
0
ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN?
4
python,django,postgresql,isbn
0
2010-04-09T18:45:00.000
I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while as the items were books. Now I want to add non-books. some of the non-books have EANs or ISSNs, some do not. It's in PostgreSQL with django apps for the frontend and JSON api, plus a few supporting python command-line tools for management. the items in question are mostly books and artist prints, some of which are self-published. What is nice about using ISBNs as primary keys is that in on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etcetera, many of which I've taken advantage. some such tools are off-the-shelf (PyISBN, PyAWS etc) and some are hand-rolled -- I tried to keep all of these parts nice and decoupled, but you know how things can get. I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs' but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers. should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? if anyone has any experience with working with these systems, I'd love to hear about it, your advice is most welcome.
2
1
0.049958
0
false
2,614,029
1
1,516
2
0
0
2,610,000
A simple solution (although arguably not a good one) would be to use (isbn, title) or (isbn, author), which should pretty much guarantee uniqueness. Ideology is great, but practicality also serves a purpose.
1
0
0
ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN?
4
python,django,postgresql,isbn
0
2010-04-09T18:45:00.000
Does anyone know if Python's shelve module uses memory-mapped IO? Maybe that question is a bit misleading. I realize that shelve uses an underlying dbm-style module to do its dirty work. What are the chances that the underlying module uses mmap? I'm prototyping a datastore, and while I realize premature optimization is generally frowned upon, this could really help me understand the trade-offs involved in my design.
2
3
0.291313
0
false
2,618,963
0
902
2
0
0
2,618,921
I'm not sure what you're trying to learn by asking this question, since you already seem to know the answer: it depends on the actual dbm store being used. Some of them will use mmap -- I expect everything but dumbdbm to use mmap -- but so what? The overhead in shelve is almost certainly not in the mmap-versus-fileIO choice, but in the pickling operation. You can't mmap the dbm file sensibly yourself in either case, as the dbm module may have its own fancy locking (and it may not be a single file anyway, like when it uses bsddb.) If you're just looking for inspiration for your own datastore, well, don't look at shelve, since all it does is pickle-and-pass-along to another datastore.
1
0
0
Does Python's shelve module use memory-mapped IO?
2
python,mmap,shelve,dbm
0
2010-04-11T22:18:00.000
Does anyone know if Python's shelve module uses memory-mapped IO? Maybe that question is a bit misleading. I realize that shelve uses an underlying dbm-style module to do its dirty work. What are the chances that the underlying module uses mmap? I'm prototyping a datastore, and while I realize premature optimization is generally frowned upon, this could really help me understand the trade-offs involved in my design.
2
4
1.2
0
true
2,618,981
0
902
2
0
0
2,618,921
Existing dbm implementations in the Python standard library all use "normal" I/O, not memory mapping. You'll need to code your own dbmish implementation with memory mapping, and integrate it with shelve (directly, or, more productively, through anydbm).
1
0
0
Does Python's shelve module use memory-mapped IO?
2
python,mmap,shelve,dbm
0
2010-04-11T22:18:00.000
I'm from Brazil and study at FATEC (college located in Brazil). I'm trying to learn about AppEngine. Now, I'm trying to load a large database from MySQL to AppEngine to perform some queries, but I don't know how i can do it. I did some testing with CSV files,but is there any way to perform the direct import from MySQL? This database is from Pentaho BI Server (www.pentaho.com). Thank you for your attention. Regards, Daniel Naito
0
0
0
0
false
2,662,880
1
1,278
1
1
0
2,650,499
If you're using Pentaho BI Server as your data source, why don't you consider using Pentaho Data Integration (an ETL tool) to move the data over? At the very least PDI can automate any movement of data between your data source and any AppEngine bulk-loader tool (it can easily trigger any app with a shell step).
1
0
0
MySQL to AppEngine
3
python,mysql,google-app-engine,bulk-load
0
2010-04-16T03:57:00.000
There is an m2m relation in my models, User and Role. I want to merge a role, but I DO NOT want this merge to have any effect on the user-role relationship. Unfortunately, for some complicated reason, role.users is not empty. I tried to set role.users = None, but SA complains that None is not a list. At the moment, I use sqlalchemy.orm.attributes.del_attribute, but I don't know if it's provided for this purpose.
0
0
1.2
0
true
2,667,004
1
396
1
0
0
2,665,253
You'd better fix your code to avoid setting role.users for the item you are going to merge. But there is another way - setting cascade='none' for this relation. Then you lose the ability to save the relationship from the Role side; you'll have to save User with its roles attribute set.
1
0
0
In SqlAlchemy, how to ignore m2m relationship attributes when merge?
1
python,sqlalchemy
0
2010-04-19T05:01:00.000
I am looking for a way to connect to a MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that? Someone with a similar question going pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place. Ideas? Thanks.
7
4
1.2
0
true
2,743,692
0
14,434
1
0
0
2,670,887
I am completely ignorant about Python, but if it can call DLLs then it ought to be able to use Microsoft's ADOMD object. This is the best option I can think of. You could look at Office Web Components (OWC), as that has an OLAP control that can be embedded on a web page. I think you can pass MDX to it, but perhaps you want Python to see the results too, which I don't think it allows. Otherwise perhaps you can build your own 'proxy' in another language. This program/webpage could accept MDX in, and return XML showing the results. Python could then consume this XML.
1
0
0
MS Analysis Services OLAP API for Python
3
python,database,olap
0
2010-04-19T21:05:00.000
I am using Python on Linux to automate Excel. I have finished writing data into Excel using the pyexcelerator package. Now comes the real challenge. I have to add another tab to the existing sheet, and that tab should contain the macro run in the first tab. All these things should be automated. I Googled a lot and found win32com for working with macros, but that is only for Windows. Does anyone have any idea how to do this, or can you guide me with a few suggestions?
1
0
0
0
false
2,697,769
0
1,093
2
0
0
2,697,701
Excel macros are per sheet, so, I am afraid, you need to copy the macros explicitly if you create a new sheet, instead of copying an existing sheet to a new one.
1
0
0
Automating Excel macro using python
3
python,linux,excel,automation
0
2010-04-23T10:08:00.000
I am using Python on Linux to automate Excel. I have finished writing data into Excel using the pyexcelerator package. Now comes the real challenge. I have to add another tab to the existing sheet, and that tab should contain the macro run in the first tab. All these things should be automated. I Googled a lot and found win32com for working with macros, but that is only for Windows. Does anyone have any idea how to do this, or can you guide me with a few suggestions?
1
0
0
0
false
3,596,123
0
1,093
2
0
0
2,697,701
Maybe manipulating your .xls with Openoffice and pyUno is a better way. Way more powerful.
1
0
0
Automating Excel macro using python
3
python,linux,excel,automation
0
2010-04-23T10:08:00.000
Last night I upgraded my machine to Ubuntu 10.04 from 9.10. It seems to have cluttered my python module. Whenever I run python manage.py I get this error: ImportError: No module named postgresql_psycopg2.base Can any one throw any light on this?
2
1
1.2
0
true
4,505,549
0
1,092
1
0
0
2,711,737
Couple of things. I ran into the same kind of error - but for a different thing (i.e. "ImportError: No module named django") when I reinstalled some software. Essentially, it messed up my Python paths. So, your issue is very reminiscent of the one I had. The issue for me ended up being that the installer I used altered my .profile file (.bash_profile on some systems) in my home directory and messed up the PATH environment variable so that it pointed to the incorrect Python binaries. This includes, of course, pointing to the wrong site-packages (where many Python extensions are installed). To verify this, I used two Linux shell commands that saved the day for me: "which python" and "whereis python". The first tells you which Python you are running, and the second tells you where it is located. This is important since you can have multiple versions of Python installed on your machine. Hopefully, this will help you troubleshoot your issue. You may also want to try "echo $PATH" (at the command line / terminal) to see which paths are used to resolve commands. You can fix your issue either by: 1) fixing your PATH variable, and exporting PATH, in .profile (or .bash_profile), or 2) creating a symlink to the appropriate Python binary. Good luck :) ~Aki
1
0
0
Some problem with postgres_psycopg2
2
python,django,postgresql,psycopg2
0
2010-04-26T07:33:00.000
I want to use an SQLite in-memory database for all my testing and PostgreSQL for my development/production server. But the SQL syntax is not the same in both DBs. For example: SQLite has AUTOINCREMENT, and PostgreSQL has SERIAL. Is it easy to port the SQL script from SQLite to PostgreSQL... what are your solutions? If you want me to use standard SQL, how should I go about generating primary keys in both databases?
5
12
1.2
0
true
2,721,100
0
2,892
2
0
0
2,716,847
Don't do it. Don't test in one environment and release and develop in another. You're asking for buggy software with this process.
1
0
0
SQLAlchemy - SQLite for testing and Postgresql for development - How to port?
3
python,sqlite,postgresql,sqlalchemy
0
2010-04-26T20:59:00.000
I want to use an SQLite in-memory database for all my testing and PostgreSQL for my development/production server. But the SQL syntax is not the same in both DBs. For example: SQLite has AUTOINCREMENT, and PostgreSQL has SERIAL. Is it easy to port the SQL script from SQLite to PostgreSQL... what are your solutions? If you want me to use standard SQL, how should I go about generating primary keys in both databases?
5
19
1
0
false
2,717,071
0
2,892
2
0
0
2,716,847
My suggestion would be: don't. The capabilities of Postgresql are far beyond what SQLite can provide, particularly in the areas of date/numeric support, functions and stored procedures, ALTER support, constraints, sequences, other types like UUID, etc., and even using various SQLAlchemy tricks to try to smooth that over will only get you a slight bit further. In particular date and interval arithmetic are totally different beasts on the two platforms, and SQLite has no support for precision decimals (non floating-point) the way PG does. PG is very easy to install on every major OS and life is just easier if you go that route.
1
0
0
SQLAlchemy - SQLite for testing and Postgresql for development - How to port?
3
python,sqlite,postgresql,sqlalchemy
0
2010-04-26T20:59:00.000
I have searched high and low for an answer to why query results are returned in this format and how to convert them to a list. data = cursor.fetchall() When I print data, it results in: (('car',), ('boat',), ('plane',), ('truck',)) I want to have the results in a list as ["car", "boat", "plane", "truck"]
3
1
0.066568
0
false
2,723,548
0
1,007
1
0
0
2,723,432
The result for fetchall() returns an array of rows, where each row is an array with one value per column. Even if you are selecting only one column, you will still get an array of arrays, but only one value for each row.
1
0
0
Why is recordset result being returned in this way for Python database query?
3
python,mysql,list,recordset
0
2010-04-27T17:21:00.000
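Concretely, flattening that "array of rows, one value per row" into the flat list the asker wants is a one-liner:

    rows = cursor.fetchall()            # (('car',), ('boat',), ('plane',), ('truck',))
    names = [row[0] for row in rows]    # ['car', 'boat', 'plane', 'truck']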
I have a django project that uses a sqlite database that can be written to by an external tool. The text is supposed to be UTF-8, but in some cases there will be errors in the encoding. The text is from an external source, so I cannot control the encoding. Yes, I know that I could write a "wrapping layer" between the external source and the database, but I prefer not having to do this, especially since the database already contains a lot of "bad" data. The solution in sqlite is to change the text_factory to something like: lambda x: unicode(x, "utf-8", "ignore") However, I don't know how to tell the Django model driver this. The exception I get is: 'Could not decode to UTF-8 column 'Text' with text' in /var/lib/python-support/python2.5/django/db/backends/sqlite3/base.py in execute Somehow I need to tell the sqlite driver not to try to decode the text as UTF-8 (at least not using the standard algorithm, but it needs to use my fail-safe variant).
6
0
0
0
false
64,263,492
1
4,701
1
0
0
2,744,632
Incompatible Django version. Check your Django version first when solving this error. I was running Django==3.0.8 and it was producing an error. Then I ran a virtualenv where I have Django==3.1.2 and the error went away.
1
0
0
Change text_factory in Django/sqlite
6
python,django,sqlite,pysqlite
0
2010-04-30T13:00:00.000
I'm curious about how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA? What I mean, basically, is that if someone wants to make a change to the database, what are some strategies to doing that? (i.e. I've created a 'Car' model and now I want to apply the appropriate DDL to the database, etc..) We're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way to create the models using our ORM, but we recently ditched this because: We couldn't track changes using the ORM The state of the ORM wasn't in sync with the database (e.g. lots of differences primarily related to indexes and unique constraints) There was no way to audit database changes unless the developer documented the database change via email to the team. Our solution to this problem was to basically have a "gatekeeper" individual who checks every change into the database and applies all accepted database changes to an accepted_db_changes.sql file, whereby the developers who need to make any database changes put their requests into a proposed_db_changes.sql file. We check this file in, and, when it's updated, we all apply the change to our personal database on our development machine. We don't create indexes or constraints on the models, they are applied explicitly on the database. I would like to know what are some strategies to maintain database schemas and if ours seems reasonable. Thanks!
9
2
1.2
0
true
2,768,187
0
930
1
0
0
2,748,946
The solution is rather administrative than technical :) The general rule is easy: there should only be tree-like dependencies in the project. - There should always be a single master source of the schema, stored together with the project source code in version control. - Everything affected by a change in the master source should be automatically re-generated every time the master source is updated, with no manual intervention allowed, ever; if automatic generation does not work -- fix either the master source or the generator, don't manually update the generated code. - All re-generations should be performed by the same person who updated the master source, and all changes including the master source change should be considered a single transaction (a single source control commit, a single build/deployment for every affected environment including DB updates). Being enforced, this gives a 100% reliable result. There are essentially 3 possible choices of the master source: 1) DB metadata, where sources are generated after a DB update by some tool connecting to the live DB; 2) Source code, where some tool generates the SQL schema from the sources, annotated in a special way, and then the SQL is run on the DB; 3) DDL, where both the SQL schema and the source code are generated by some tool; 4) some other description is used (say a text file read by a special Perl script generating both the SQL schema and the source code). 1, 2 and 3 are equally good, providing that the tool you need exists and is not overly expensive. 4 is a universal approach, but it should be applied from the very beginning of the project and has an overhead of a couple of thousand lines of code in a strange language to maintain.
1
0
0
What are some strategies for maintaining a common database schema with a team of developers and no DBA?
4
python,database,postgresql,sqlalchemy,database-schema
0
2010-05-01T04:57:00.000
I'm writing a FastCGI application that makes use of SQLAlchemy & MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But, as soon as I query the DB (and push any changes from memory to storage) I get a 500 Internal Server Error and my error.log records malformed header from script. Bad header=FROM tags : index.py, where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
0
2
1.2
0
true
2,751,989
0
921
1
0
0
2,751,957
Looks like SQLAlchemy is pushing or echoing the query to your output (where FastCGI is instead looking for headers, then a body). Maybe setting sqlalchemy.echo to False can help.
1
0
0
Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py
2
python,mysql,apache,sqlalchemy,fastcgi
0
2010-05-01T23:44:00.000
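If statement echoing is indeed the culprit, it is controlled when the engine is created; a sketch with an illustrative connection string:

    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql://user:password@db.example.com/appdb",   # illustrative DSN
        echo=False,                                     # keep SQL logging off stdout,
    )                                                   # which FastCGI reads as headers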
How would one go about authenticating against a single db using Python and openfire? Is there a simple module that will do this?
0
0
0
0
false
2,766,455
0
94
1
0
0
2,752,047
Openfire uses a SQL database. So talking to the database from python is probably the easiest way. You could also try to connect/authenticate via XMPP - there's probably an xmpp library for python somewhere.
1
0
0
I need to authenticate against one db with python and openfire. How do I do this?
1
python,database,openfire
0
2010-05-02T00:35:00.000
I am trying to access a MySQL database with Python through PyDev Eclipse. I have installed the necessary files to access MySQL from Python, and I can access the database only when I write code in the Python IDLE environment and run it from the command prompt. However, I am not able to run my applications from PyDev. When I use "import MysqlDB" I get an error, but in IDLE there are no errors and my code runs very smoothly. Does anyone know where the problem is? Thanks
2
0
0
0
false
23,798,598
0
6,768
2
1
0
2,775,095
If the connector works in the IDLE but not in PyDev. Open Eclipse preferences, open PyDev directory and go to interpreter screen. Remove the interpreter and add it again from the location on your computer (Usually C drive). Close and reload Eclipse and now it should work.
1
0
0
Using MySQL in Pydev Eclipse
3
python,mysql,eclipse,pydev,mysql-python
0
2010-05-05T16:36:00.000
I am trying to access a MySQL database with Python through PyDev Eclipse. I have installed the necessary files to access MySQL from Python, and I can access the database only when I write code in the Python IDLE environment and run it from the command prompt. However, I am not able to run my applications from PyDev. When I use "import MysqlDB" I get an error, but in IDLE there are no errors and my code runs very smoothly. Does anyone know where the problem is? Thanks
2
0
0
0
false
70,125,088
0
6,768
2
1
0
2,775,095
Posting Answer in case URL changed in future From Eclipse, choose Window / Preferences / PyDev / Interpreters / Python Interpreter, click on Manage with pip and enter the command: install mysql-connector-python
1
0
0
Using MySQL in Pydev Eclipse
3
python,mysql,eclipse,pydev,mysql-python
0
2010-05-05T16:36:00.000
If so, how can I do this?
0
1
1.2
0
true
2,810,300
0
848
2
0
0
2,810,235
When you create a prepared statement, the "template" SQL code is already sent to the DBMS, which compiles it into an expression tree. When you pass the values, the corresponding library (the Python sqlite3 module in your case) doesn't merge the values into the statement. The DBMS does. If you still want to produce a normal SQL string, you can use string replace functions to replace the placeholders with the values (after escaping them). What do you need this for?
1
0
0
Can I get the raw SQL generated by a prepared statement in Python’s sqlite3 module?
2
python,sqlite,pysqlite
0
2010-05-11T11:28:00.000
If so, how can I do this?
0
2
0.197375
0
false
2,810,250
0
848
2
0
0
2,810,235
When executing a prepared statement, no new SQL is generated. The idea of prepared statements is that the SQL query and its data are transmitted separately (that's why you don't have to escape any arguments) - the query is most likely only stored in an optimized form after preparing it.
1
0
0
Can I get the raw SQL generated by a prepared statement in Python’s sqlite3 module?
2
python,sqlite,pysqlite
0
2010-05-11T11:28:00.000
I have a large SQL dump file ... with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into a SQLAlchemy sqlite database at once? I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains: sqlite3.Warning: You can only execute one statement at a time. Is there a way to work around this issue? Perhaps by splitting the file with a regexp or some kind of parser, but I don't know enough SQL to cover all of the cases with a regexp. Any help would be greatly appreciated. Will EDIT: Since this seems important ... The dump file was created from a MySQL database, so it has quite a few commands/syntax that sqlite3 does not understand correctly.
5
2
1.2
0
true
2,828,580
0
3,406
2
0
0
2,824,244
"or some kind of parser" I've found MySQL to be a great parser for MySQL dump files :) You said it yourself: "so it has quite a few commands/syntax that sqlite3 does not understand correctly." Clearly then, SQLite is not the tool for this task. As for your particular error: without context (i.e. a traceback) there's nothing I can say about it. Martelli or Skeet could probably reach across time and space and read your interpreter's mind, but me, not so much.
1
0
0
How can I load an SQL "dump" file into SQLAlchemy
2
python,sql,sqlalchemy
0
2010-05-13T03:23:00.000
I have a large SQL dump file ... with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into a SQLAlchemy sqlite database at once? I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains: sqlite3.Warning: You can only execute one statement at a time. Is there a way to work around this issue? Perhaps by splitting the file with a regexp or some kind of parser, but I don't know enough SQL to cover all of the cases with a regexp. Any help would be greatly appreciated. Will EDIT: Since this seems important ... The dump file was created from a MySQL database, so it has quite a few commands/syntax that sqlite3 does not understand correctly.
5
0
0
0
false
2,828,621
0
3,406
2
0
0
2,824,244
The SQL recognized by MySQL and the SQL in SQLite are quite different. I suggest dumping the data of each table individually, then loading the data into equivalent tables in SQLite. Create the tables in SQLite manually, using a subset of the "CREATE TABLE" commands given in your raw-dump file.
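A rough sketch of that per-table approach, assuming each table's data has already been exported to CSV (the file name, table, and columns are placeholders):

    import csv
    import sqlite3

    conn = sqlite3.connect('target.db')
    conn.execute("CREATE TABLE IF NOT EXISTS tags (id INTEGER PRIMARY KEY, name TEXT)")

    with open('tags.csv', newline='') as f:
        rows = list(csv.reader(f))

    # Load one table's dumped rows in a single call
    conn.executemany("INSERT INTO tags (id, name) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()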
1
0
0
How can I load an SQL "dump" file into SQLAlchemy
2
python,sql,sqlalchemy
0
2010-05-13T03:23:00.000
I want to delete all records in a MySQL db except the record ids I have in a list. The length of that list can vary, and it could easily contain 2000+ ids, ... Currently I convert my list to a string so it fits in something like this: cursor.execute("""delete from table where id not in (%s)""",(list)) which doesn't feel right, and I have no idea how long the list is allowed to be, .... What's the most efficient way of doing this from Python? Altering the structure of the table with an extra field to mark/unmark records for deletion would be great but is not an option. Having a dedicated table storing the ids would indeed be helpful, since this could then just be done through an SQL query... but I would really like to avoid these options if possible. Thanks,
3
0
0
0
false
2,827,845
0
3,327
1
0
0
2,826,387
I'd add a "todelete tinyint(1) not null default 1" column to the table, update it to 0 for those id's which have to be kept, then delete from table where todelete;. It's faster than not in. Or, create a table with the same structure as yours, insert the kept rows there and rename tables. Then, drop the old one.
1
0
0
delete all records except the id I have in a python list
4
python,mysql
0
2010-05-13T11:37:00.000
Currently, I am querying with this code: meta.Session.query(Label).order_by(Label.name).all() and it returns objects sorted by Label.name in this manner: ['1','7','1a','5c']. Is there a way I can have the objects returned with their Label.name sorted like this: ['1','1a','5c','7']? Thanks!
1
1
0.197375
0
false
2,863,830
1
1,746
1
0
0
2,863,748
Sorting is done by the database. If your database doesn't support natural sorting, you are out of luck and have to sort your rows manually after retrieving them via SQLAlchemy.
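A rough sketch of doing that natural sort in Python after the query, reusing the meta.Session and Label objects from your existing code and assuming Label.name mixes digits and letters as in the example:

    import re

    def natural_key(s):
        # Split into digit and non-digit runs so '1a' sorts before '7'
        return [int(part) if part.isdigit() else part
                for part in re.split(r'(\d+)', s)]

    labels = meta.Session.query(Label).all()
    labels.sort(key=lambda label: natural_key(label.name))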
1
0
0
sqlalchemy natural sorting
1
python,sqlalchemy
0
2010-05-19T07:59:00.000
I have a relatively extensive sqlite database that I'd like to import into my Google App Engine python app. I've created my models using the appengine API which are close, but not quite identical to the existing schema. I've written an import script to load the data from sqlite and create/save new appengine objects, but the appengine environment blocks me from accessing the sqlite library. This script is only to be run on my local app engine instance, and from there I hope to push the data to google. Am I approaching this problem the wrong way, or is there a way to import the sqlite library while running in the local instance's environment?
5
0
0
0
false
2,873,946
1
2,231
1
1
0
2,870,379
I have not had any trouble importing pysqlite2, reading data, then transforming it and writing it to AppEngine using the remote_api. What error are you seeing?
1
0
0
Importing Sqlite data into Google App Engine
4
python,google-app-engine,sqlite
0
2010-05-20T00:47:00.000
I am working on a personal project where I need to manipulate values in a database-like format. Up until now I have been using dictionaries, tuples, and lists to store and consult those values. I am thinking about starting to use SQL to manipulate those values, but I don't know if it's worth the effort, because I don't know anything about SQL and I don't want to use something that won't bring me any benefits (if I can do it in a simpler way, I don't want to complicate things). If I am only storing and consulting values, what would be the benefit of using SQL? PS: the number of rows is between 3 and 100, and the number of columns is around 10 (some may have 5, some may have 10, etc.)
1
2
0.132549
0
false
2,870,821
0
291
3
0
0
2,870,815
No, I think you should just stick to dictionaries or tuples if you only have around 100 rows.
1
0
0
Python and database
3
python,sql,database
0
2010-05-20T03:24:00.000
I am working on a personal project where I need to manipulate values in a database-like format. Up until now I have been using dictionaries, tuples, and lists to store and consult those values. I am thinking about starting to use SQL to manipulate those values, but I don't know if it's worth the effort, because I don't know anything about SQL and I don't want to use something that won't bring me any benefits (if I can do it in a simpler way, I don't want to complicate things). If I am only storing and consulting values, what would be the benefit of using SQL? PS: the number of rows is between 3 and 100, and the number of columns is around 10 (some may have 5, some may have 10, etc.)
1
7
1.2
0
true
2,870,832
0
291
3
0
0
2,870,815
SQL is nice and practical for many kinds of problems, it is not that hard to learn at a simple "surface" level, and it can be very handy to use in Python with its embedded sqlite. But if you don't know SQL, have no intrinsic motivation to learn it right now, and are already doing all you need to do to/with your data without problems, then the immediate return on the investment of learning SQL (relatively small as that investment may be) seems like it would be pretty meager indeed for you.
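For a sense of scale, a minimal sketch of the embedded sqlite3 module mentioned above (the table and data are made up):

    import sqlite3

    conn = sqlite3.connect(':memory:')  # or a file path for a persistent database
    conn.execute("CREATE TABLE scores (name TEXT, value INTEGER)")
    conn.executemany("INSERT INTO scores VALUES (?, ?)",
                     [("alpha", 10), ("beta", 7)])
    for row in conn.execute("SELECT name, value FROM scores ORDER BY value DESC"):
        print(row)
    conn.close()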
1
0
0
Python and database
3
python,sql,database
0
2010-05-20T03:24:00.000