Question (string, len 25 to 7.47k) | Q_Score (int64, 0 to 1.24k) | Users Score (int64, -10 to 494) | Score (float64, -1 to 1.2) | Data Science and Machine Learning (int64, 0 to 1) | is_accepted (bool, 2 classes) | A_Id (int64, 39.3k to 72.5M) | Web Development (int64, 0 to 1) | ViewCount (int64, 15 to 1.37M) | Available Count (int64, 1 to 9) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Q_Id (int64, 39.1k to 48M) | Answer (string, len 16 to 5.07k) | Database and SQL (int64, 1 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Title (string, len 15 to 148) | AnswerCount (int64, 1 to 32) | Tags (string, len 6 to 90) | Other (int64, 0 to 1) | CreationDate (string, len 23) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is it even possible to create an abstraction layer that can accommodate relational and non-relational databases? The purpose of this layer is to minimize repetition and allows a web application to use any kind of database by just changing/modifying the code in one place (ie, the abstraction layer). The part that sits on top of the abstraction layer must not need to worry whether the underlying database is relational (SQL) or non-relational (NoSQL) or whatever new kind of database that may come out later in the future. | 2 | 0 | 1.2 | 0 | true | 3,649,176 | 1 | 1,721 | 3 | 0 | 0 | 3,606,215 | Thank you for all the answers. To summarize the answers, currently only web2py and Django supports this kind of abstraction.
It is not about a SQL-NoSQL holy grail; using abstraction can make the apps more flexible. Let's assume that you started a project using NoSQL, and then later on you need to switch over to SQL. It is desirable that you only make changes to the code in a few spots instead of all over the place. In some cases, it does not really matter whether you store the data in a relational or non-relational db. For example, storing user profiles, text content for dynamic pages, or blog entries.
I know there must be a trade-off in using the abstraction, but my question is more about existing solutions or technical insight, instead of the consequences. | 1 | 0 | 0 | Is there any python web app framework that provides database abstraction layer for SQL and NoSQL? | 5 | python,sql,database,google-app-engine,nosql | 0 | 2010-08-31T05:18:00.000 |
Is it even possible to create an abstraction layer that can accommodate relational and non-relational databases? The purpose of this layer is to minimize repetition and allows a web application to use any kind of database by just changing/modifying the code in one place (ie, the abstraction layer). The part that sits on top of the abstraction layer must not need to worry whether the underlying database is relational (SQL) or non-relational (NoSQL) or whatever new kind of database that may come out later in the future. | 2 | 1 | 0.039979 | 0 | false | 3,609,648 | 1 | 1,721 | 3 | 0 | 0 | 3,606,215 | Regarding App Engine, all existing attempts limit you in some way (web2py doesn't support transactions or namespaces and probably many other stuff, for example). If you plan to work with GAE, use what GAE provides and forget looking for a SQL-NoSQL holy grail. Existing solutions are inevitably limited and affect performance negatively. | 1 | 0 | 0 | Is there any python web app framework that provides database abstraction layer for SQL and NoSQL? | 5 | python,sql,database,google-app-engine,nosql | 0 | 2010-08-31T05:18:00.000 |
Is it even possible to create an abstraction layer that can accommodate relational and non-relational databases? The purpose of this layer is to minimize repetition and allows a web application to use any kind of database by just changing/modifying the code in one place (ie, the abstraction layer). The part that sits on top of the abstraction layer must not need to worry whether the underlying database is relational (SQL) or non-relational (NoSQL) or whatever new kind of database that may come out later in the future. | 2 | 1 | 0.039979 | 0 | false | 3,606,610 | 1 | 1,721 | 3 | 0 | 0 | 3,606,215 | Yo may also check web2py, they support relational databases and GAE on the core. | 1 | 0 | 0 | Is there any python web app framework that provides database abstraction layer for SQL and NoSQL? | 5 | python,sql,database,google-app-engine,nosql | 0 | 2010-08-31T05:18:00.000 |
Can you recommend a high-performance, thread-safe and stable ORM for Python? The data I need to work with isn't complex, so SQLAlchemy is probably overkill. | 3 | 6 | 1.2 | 0 | true | 3,609,616 | 0 | 3,631 | 1 | 0 | 0 | 3,607,285 | If you are looking for something that's high-performance (based on one of your comments, "something that can handle >5k queries per second"), you need to keep in mind that an ORM is not built specifically for speed and performance; it is built for maintainability and ease of use. If the data is so basic that even SQLAlchemy might be overkill, and you're mostly doing writes, it might be easier to just do straight inserts and skip the ORM altogether. | 1 | 0 | 1 | Fast, thread-safe Python ORM? | 4 | python,orm | 0 | 2010-08-31T08:25:00.000 |
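As a minimal illustration of the "straight inserts, skip the ORM" suggestion in the answer above: plain DB-API calls with parameter placeholders are all that is needed. The table, columns, and data below are invented for the example, and sqlite3 is used only because it ships with Python.

```python
import sqlite3

# Plain DB-API usage: no ORM layer, just parameterized SQL.
conn = sqlite3.connect("example.db")  # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS events (name TEXT, value INTEGER)")

rows = [("ping", 1), ("pong", 2), ("ping", 3)]  # made-up data
conn.executemany("INSERT INTO events (name, value) VALUES (?, ?)", rows)
conn.commit()
conn.close()
```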
I'm using IronPython 2.6 for .NET 4 to build a GUI logging application.
This application receives data via the serial port and stores it in an sqlite3 database while showing the last 100 received items in a listview. The listview gathers its data via an SQL SELECT from the database every 100 ms. It only queries data that is not already visible in the listview.
At first, the sqlite3 module was good and solid, but I'm now stuck with several issues that I can't solve.
After a while, the sqlite3 module throws exceptions like:
database disk image is malformed
database or disk is full.
These errors occur sporadically and never under high system load.
I have been stuck with this kind of issue for some weeks now and I'm looking for an alternative way to store binary and ASCII data in a database-like object.
Please, does somebody know a good database solution I could use with IronPython 2.6 for .NET 4?
Thanks | 0 | 0 | 0 | 0 | false | 3,616,111 | 1 | 495 | 1 | 0 | 0 | 3,616,078 | good
That is highly subjective without far more detailed requirements.
You should be able to use any database with .NET support, whether out of the box (notably SQL Server Express and Compact) or installed separately (SQL Server-other editions, DB2, MySQL, Oracle, ...).
Ten select commands per second should be easily achievable in any of the databases above, unless there is some performance issue (e.g. a huge amount of data and not being able to use an index). | 1 | 0 | 0 | IronPython - What kind of database is useable | 2 | database,ironpython | 0 | 2010-09-01T08:04:00.000 |
I use Python and MySQLdb to download web pages and store them into database. The problem I have is that I can't save complicated strings in the database because they are not properly escaped.
Is there a function in Python that I can use to escape a string for MySQL? I tried with ''' (triple single quotes) and """, but it didn't work. I know that PHP has mysql_escape_string(); is there something similar in Python?
Thanks. | 77 | 0 | 0 | 0 | false | 61,042,304 | 0 | 144,313 | 1 | 0 | 0 | 3,617,052 | One other way to work around this is using something like this when using mysqlclient in python.
suppose the data you want to enter is like this <ol><li><strong style="background-color: rgb(255, 255, 0);">Saurav\'s List</strong></li></ol>. It contains both double quotes and single quotes.
You can use the following method to escape the quotes:
statement = """ Update chats set html='{}' """.format(html_string.replace("'","\\\'"))
Note: three \ characters are needed to escape the single quote that appears in the unformatted Python string. | 1 | 0 | 0 | Escape string Python for MySQL | 7 | python,mysql,escaping | 0 | 2010-09-01T10:23:00.000 |
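For contrast with the manual-escaping workaround above, the usual way to avoid quoting problems is to let the driver escape values by passing them as query parameters. The sketch below assumes MySQLdb; the connection details and the id value are made up, while the chats table and html column come from the answer's own example.

```python
import MySQLdb

# Connection details are placeholders for the example.
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="testdb")
cur = conn.cursor()

html_string = """<ol><li><strong style="x">Saurav's List</strong></li></ol>"""

# The driver escapes the values itself because they are passed
# separately from the SQL text, so quotes in html_string are safe.
cur.execute("UPDATE chats SET html = %s WHERE id = %s", (html_string, 42))
conn.commit()
```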
I'm having a right old nightmare with JPype. I have got my dev env on Windows and so tried installing it there with no luck. I then tried on Ubuntu, also with no luck. I'm getting a bit desperate now. I am using Mingw32 since I tried installing VS2008 but it told me I had to install XP SP2, and I am on Vista. I tried VS2010 but had no luck; I got the 'error: Unable to find vcvarsall.bat' error. Anyway, I am now on Mingw32
Ultimately I am trying to use Neo4j and Python hence my need to use JPype. I have found so many references to the problem on the net for MySQL etc but they don't help me with JPype.
If I could fix either Unix or Windows I could get going, so help on either will be really appreciated.
Here's the versions..
Windows: Vista 64
Python: 2.6
Compiler Mingw32: latest version
Jpype: 0.5.4.1
Java info:
java version "1.6.0_13"
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02, mixed mode)
I run:
python setup.py install --compiler=wingw32
and get the following output.
Choosing the Windows profile
running install
running build
running build_py
running build_ext
building '_jpype' extension
C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -DWIN32=1 "-IC:\Program Files (x86)\Java\jdk1.6.0_21/include" "-IC:\Program Files (x86)\Java\jdk1.6.0_21/include/win32" -Isrc/native/common/include -Isrc/native/python/include -Ic:\Python26\include -Ic:\Python26\PC -c src/native/common/jp_array.cpp -o build\temp.win32-2.6\Release\src\native\common\jp_array.o /EHsc
src/native/common/jp_array.cpp: In member function 'void JPArray::setRange(int, int, std::vector&)':
src/native/common/jp_array.cpp:56:13: warning: comparison between signed and unsigned integer expressions
src/native/common/jp_array.cpp:68:4: warning: deprecated conversion from string constant to 'char*'
src/native/common/jp_array.cpp: In member function 'void JPArray::setItem(int, HostRef*)':
src/native/common/jp_array.cpp:80:3: warning: deprecated conversion from string constant to 'char*'
gcc: /EHsc: No such file or directory
error: command 'gcc' failed with exit status 1
So on Ubuntu the problem is as follows:
Java version: 1.6.0_18
JPype: 0.5.4.1
Python: 2.6
Java is in the path and I did apt-get install build-essentials just now, so I have the latest GCC etc.
I won't paste all the output as it's massive. There are so many errors that it looks like I have missed the Java install or similar, but I haven't; typing java gives the version shown above. This is the beginning:
running install
running build
running build_py
running build_ext
building '_jpype' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/jvm/java-1.5.0-sun-1.5.0.08/include -I/usr/lib/jvm/java-1.5.0-sun-1.5.0.08/include/linux -Isrc/native/common/include -Isrc/native/python/include -I/usr/include/python2.6 -c src/native/common/jp_javaenv_autogen.cpp -o build/temp.linux-i686-2.6/src/native/common/jp_javaenv_autogen.o
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
In file included from src/native/common/jp_javaenv_autogen.cpp:21:
src/native/common/include/jpype.h:45:17: error: jni.h: No such file or directory
In file included from src/native/common/jp_javaenv_autogen.cpp:21:
src/native/common/include/jpype.h:77: error: ISO C++ forbids declaration of ‘jchar’ with no type
src/native/common/include/jpype.h:77: error: expected ‘,’ or ‘...’ before ‘’ token
src/native/common/include/jpype.h:82: error: ISO C++ forbids declaration of ‘jchar’ with no type
src/native/common/include/jpype.h:82: error: expected ‘;’ before ‘’ token
src/native/common/include/jpype.h:86: error: ISO C++ forbids declaration of ‘jchar’ with no type
src/native/common/include/jpype.h:86: error: expected ‘;’ before ‘&’ token
src/native/common/include/jpype.h:88: error: expected ‘;’ before ‘private’
src/native/common/include/jpype.h:89: error: ISO C++ forbids declaration of ‘jchar’ with no type
src/native/common/include/jpype.h:89: error: expected ‘;’ before ‘*’ token
In file included from src/native/common/include/jpype.h:96,
from src/native/common/jp_javaenv_autogen.cpp:21:
And this is the end:
src/native/common/include/jp_monitor.h:27: error: ‘jobject’ does not name a type
src/native/common/jp_javaenv_autogen.cpp:30: error: ‘jbyte’ does not name a type
src/native/common/jp_javaenv_autogen.cpp:38: error: ‘jbyte’ does not name a type
src/native/common/jp_javaenv_autogen.cpp:45: error: variable or field ‘SetStaticByteField’ declared void
src/native/common/jp_javaenv_autogen.cpp:45: error: ‘jclass’ was not declared in this scope
src/native/common/jp_javaenv_autogen.cpp:45: error: ‘jfieldID’ was not declared in this scope
src/native/common/jp_javaenv_autogen.cpp:45: error: ‘jbyte’ was not declared in this scope
error: command 'gcc' failed with exit status 1 | 3 | 1 | 0.066568 | 0 | false | 6,258,169 | 1 | 3,736 | 1 | 0 | 0 | 3,649,577 | Edit the Setup.py and remove the /EHsc option. | 1 | 1 | 0 | JPype compile problems | 3 | java,python | 0 | 2010-09-06T06:54:00.000 |
I have a Twisted application that runs in an x86 64bit machine with Win 2008 server.
It needs to be connected to a SQL Server database that runs in another machine (in a cloud actually but I have IP, port, db name, credentials).
Do I need to install anything more that Twisted to my machine?
And which API should be used? | 0 | 1 | 0.099668 | 0 | false | 4,059,366 | 0 | 1,128 | 1 | 1 | 0 | 3,657,271 | If you want to have portable mssql server library, you can try the module from www.pytds.com.
It works with 2.5+ and 3.1, have a good stored procedure support. It's api is more "functional", and has some good features you won't find anywhere else. | 1 | 0 | 0 | Twisted and connection to SQL Server | 2 | python,sql-server,twisted | 0 | 2010-09-07T09:07:00.000 |
I'm creating a basic database utility class in Python. I'm refactoring an old module into a class. I'm now working on an executeQuery() function, and I'm unsure of whether to keep the old design or change it. Here are the 2 options:
(The old design:) Have one generic executeQuery method that takes the query to execute and a boolean commit parameter that indicates whether to commit (insert, update, delete) or not (select), and determines with an if statement whether to commit or to select and return.
(This is the way I'm used to, but that might be because you can't have a function that sometimes returns something and sometimes doesn't in the languages I've worked with:) Have 2 functions, executeQuery and executeUpdateQuery (or something equivalent). executeQuery will execute a simple query and return a result set, while executeUpdateQuery will make changes to the DB (insert, update, delete) and return nothing.
Is it accepted to use the first way? It seems unclear to me, but maybe it's more Pythonistic...? Python is very flexible, maybe I should take advantage of this feature that can't really be accomplished in this way in more strict languages...
And a second part of this question, unrelated to the main idea - what is the best way to return query results in Python? Using which function to query the database, in what format...? | 3 | 4 | 0.26052 | 0 | false | 3,662,258 | 0 | 174 | 1 | 0 | 0 | 3,662,134 | It's probably just me and my FP fetish, but I think a function executed solely for side effects is very different from a non-destructive function that fetches some data, and they should therefore have different names. Especially if the generic function would do something different depending on exactly that (the part on the commit parameter seems to imply that).
As for how to return results... I'm a huge fan of generators, but if the library you use for database connections returns a list anyway, you might as well pass this list on - a generator wouldn't buy you anything in this case. But if it allows you to iterate over the results (one at a time), seize the opportunity to save a lot of memory on larger queries. | 1 | 0 | 0 | Design question in Python: should this be one generic function or two specific ones? | 3 | python,oop | 0 | 2010-09-07T19:47:00.000 |
AFAIK SQLite returns unicode objects for TEXT in Python. Is it possible to get SQLite to return string objects instead? | 3 | 0 | 0 | 0 | false | 25,273,292 | 0 | 7,275 | 2 | 0 | 0 | 3,666,328 | Use Python 3.2+. It will automatically return string instead of unicode (as in Python 2.7) | 1 | 0 | 0 | Can I get SQLite to string instead of unicode for TEXT in Python? | 3 | python,string,sqlite,unicode | 0 | 2010-09-08T09:31:00.000 |
AFAIK SQLite returns unicode objects for TEXT in Python. Is it possible to get SQLite to return string objects instead? | 3 | 4 | 0.26052 | 0 | false | 3,666,433 | 0 | 7,275 | 2 | 0 | 0 | 3,666,328 | TEXT is intended to store text. Use BLOB if you want to store bytes. | 1 | 0 | 0 | Can I get SQLite to string instead of unicode for TEXT in Python? | 3 | python,string,sqlite,unicode | 0 | 2010-09-08T09:31:00.000 |
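Beyond the two answers above, the sqlite3 module also exposes a connection-level knob for this: the text_factory attribute controls what Python type TEXT values come back as. A small sketch, written for Python 3; on Python 2 the equivalent would be conn.text_factory = str.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (txt TEXT)")
conn.execute("INSERT INTO t VALUES ('hello')")

# Default behaviour: TEXT comes back as a unicode string.
print(type(conn.execute("SELECT txt FROM t").fetchone()[0]))

# Ask the driver for raw byte strings instead.
conn.text_factory = bytes
print(type(conn.execute("SELECT txt FROM t").fetchone()[0]))
```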
I created a new Pylons project, and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use cassandra 0.7beta.
Unfortunately, I don't know where to instantiate the connection to make it available in my application.
The goal would be to :
Create a pool when the application is launched
Get a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). The best would be to get a connection from the pool "lazily", i.e. only if needed
If a connection has been used, release it when the request has been processed
Additionally, is there something important I should know about it ? When I see some comments like "Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.", what does it mean exactly ?
Thanks.
--
Pierre | 10 | 2 | 1.2 | 0 | true | 3,687,133 | 0 | 885 | 1 | 1 | 0 | 3,671,535 | Well. I worked a little more. In fact, using a connection manager was probably not a good idea as this should be the template context. Additionally, opening a connection for each thread is not really a big deal. Opening a connection per request would be.
I ended up with just pycassa.connect_thread_local() in app_globals, and there I go. | 1 | 0 | 0 | How to connect to Cassandra inside a Pylons app? | 2 | python,pylons,cassandra | 0 | 2010-09-08T20:14:00.000 |
We have a Django project which runs on Google App Engine and used db.UserProperty in several models. We don't have an own User model.
My boss would like to use RPXNow (Janrain) for authentication, but after I integrated it, the users.get_current_user() method returned None. It makes sense, because not Google authenticated me. But what should I use for db.UserProperty attributes? Is it possible to use rpxnow and still can have Google's User object as well?
After this I tried to use OpenID authentication (with federated login) in my application, and it works pretty good: I still have users.get_current_user() object. As far as I know, rpxnow using openID as well, which means (for me) that is should be possible to get User objects with rpxnow. But how?
Cheers,
psmith | 0 | 1 | 1.2 | 0 | true | 3,707,639 | 1 | 389 | 1 | 1 | 0 | 3,699,751 | You can only get a User object if you're using one of the built-in authentication methods. User objects provide an interface to the Users API, which is handled by the App Engine infrastructure. If you're using your own authentication library, regardless of what protocol it uses, you will have to store user information differently. | 1 | 0 | 0 | Google App Engine's db.UserProperty with rpxnow | 2 | python,google-app-engine,rpxnow | 0 | 2010-09-13T10:55:00.000 |
Short story
I have a technical problem with a third-party library at my hands that I seem to be unable to easily solve in a way other than creating a surrogate key (despite the fact that I'll never need it). I've read a number of articles on the Net discouraging the use of surrogate keys, and I'm a bit at a loss if it is okay to do what I intend to do.
Long story
I need to specify a primary key, because I use SQLAlchemy ORM (which requires one), and I cannot just set it in __mapper_args__, since the class is being built with classobj, and I have yet to find a way to reference the field of a not-yet-existing class in the appropriate PK definition argument. Another problem is that the natural equivalent of the PK is a composite key that is too long for the version of MySQL I use (and it's generally a bad idea to use such long primary keys anyway). | 2 | 2 | 1.2 | 0 | true | 3,713,061 | 0 | 698 | 3 | 0 | 0 | 3,712,949 | I always make surrogate keys when using ORMs (or rather, I let the ORMs make them for me). They solve a number of problems, and don't introduce any (major) problems.
So, you've done your job by acknowledging that there are "papers on the net" with valid reasons to avoid surrogate keys, and that there's probably a better way to do it.
Now, write "# TODO: find a way to avoid surrogate keys" somewhere in your source code and go get some work done. | 1 | 0 | 0 | How badly should I avoid surrogate primary keys in SQL? | 3 | python,sqlalchemy,primary-key | 0 | 2010-09-14T21:11:00.000 |
Short story
I have a technical problem with a third-party library at my hands that I seem to be unable to easily solve in a way other than creating a surrogate key (despite the fact that I'll never need it). I've read a number of articles on the Net discouraging the use of surrogate keys, and I'm a bit at a loss if it is okay to do what I intend to do.
Long story
I need to specify a primary key, because I use SQLAlchemy ORM (which requires one), and I cannot just set it in __mapper_args__, since the class is being built with classobj, and I have yet to find a way to reference the field of a not-yet-existing class in the appropriate PK definition argument. Another problem is that the natural equivalent of the PK is a composite key that is too long for the version of MySQL I use (and it's generally a bad idea to use such long primary keys anyway). | 2 | 0 | 0 | 0 | false | 4,160,811 | 0 | 698 | 3 | 0 | 0 | 3,712,949 | I use surrogate keys in a db that I use reflection on with sqlalchemy. The pro is that you can more easily manage the foreign keys / relationships that exists in your tables / models. Also, the rdbms is managing the data more efficiently. The con is the data inconsistency: duplicates. To avoid this - always use the unique constraint on your natural key.
Now, I understand from your long story that you can't enforce this uniqueness because of your mysql limitations. For long composite keys mysql causes problems. I suggest you move to postgresql. | 1 | 0 | 0 | How badly should I avoid surrogate primary keys in SQL? | 3 | python,sqlalchemy,primary-key | 0 | 2010-09-14T21:11:00.000 |
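A small sketch of the compromise described in the answer above: keep the surrogate integer primary key that the ORM wants, but declare a unique constraint over the natural (composite) key so duplicates are still rejected. The model, table, and column names below are invented for the example.

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Measurement(Base):
    # Hypothetical model: the ORM gets a simple surrogate PK, while the
    # unique constraint on the natural composite key still rejects duplicates.
    __tablename__ = "measurements"
    __table_args__ = (UniqueConstraint("station", "taken_at"),)

    id = Column(Integer, primary_key=True)          # surrogate key
    station = Column(String(32), nullable=False)    # natural key, part 1
    taken_at = Column(String(23), nullable=False)   # natural key, part 2
    value = Column(Integer)
```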
Short story
I have a technical problem with a third-party library at my hands that I seem to be unable to easily solve in a way other than creating a surrogate key (despite the fact that I'll never need it). I've read a number of articles on the Net discouraging the use of surrogate keys, and I'm a bit at a loss if it is okay to do what I intend to do.
Long story
I need to specify a primary key, because I use SQLAlchemy ORM (which requires one), and I cannot just set it in __mapper_args__, since the class is being built with classobj, and I have yet to find a way to reference the field of a not-yet-existing class in the appropriate PK definition argument. Another problem is that the natural equivalent of the PK is a composite key that is too long for the version of MySQL I use (and it's generally a bad idea to use such long primary keys anyway). | 2 | 0 | 0 | 0 | false | 3,713,270 | 0 | 698 | 3 | 0 | 0 | 3,712,949 | "Using a surrogate key allows duplicates to be created when using a natural key would have prevented such problems" Exactly, so you should have both keys, not just a surrogate. The error you seem to be making is not that you are using a surrogate, it's that you are assuming the table only needs one key. Make sure you create all the keys you need to ensure the integrity of your data.
Having said that, in this case it seems like a deficiency of the ORM software (apparently not being able to use a composite key) is the real cause of your problems. It's unfortunate that a software limitation like that should force you to create keys you don't otherwise need. Maybe you could consider using different software. | 1 | 0 | 0 | How badly should I avoid surrogate primary keys in SQL? | 3 | python,sqlalchemy,primary-key | 0 | 2010-09-14T21:11:00.000 |
I have created a database in PostgreSQL, let's call it testdb.
I have a generic set of tables inside this database, xxx_table_one, xxx_table_two and xxx_table_three.
Now, I have Python code where I want to dynamically create and remove "sets" of these 3 tables to my database with a unique identifier in the table name distinguishing different "sets" from each other, e.g.
Set 1
testdb.aaa_table_one
testdb.aaa_table_two
testdb.aaa_table_three
Set 2
testdb.bbb_table_one
testdb.bbb_table_two
testdb.bbb_table_three
The reason I want to do it this way is to keep multiple LARGE data collections of related data separate from each other. I need to regularly overwrite individual data collections, and it's easy if we can just drop the data collections table and recreate a complete new set of tables. Also, I have to mention, the different data collections fit into the same schemas, so I could save all the data collections in 1 set of tables using an identifier to distinguish data collections instead of separating them by using different tables.
I want to know, a few things
Does PostgreSQL limit the number of tables per database?
What is the effect on performance, if any, of having a large number of tables in 1 database?
What is the effect on performance of saving the data collections in different sets of tables compared to saving them all in the same set? E.g., I guess I would need to write more queries if I want to query multiple data collections at once when the data is spread across tables as compared to just 1 set of tables. | 7 | 3 | 0.197375 | 0 | false | 3,715,621 | 0 | 5,102 | 2 | 0 | 0 | 3,715,456 | PostgreSQL doesn't impose a direct limit on this; your OS does (it depends on maximum directory size)
This may depend on your OS as well. Some filesystems get slower with large directories.
PostgreSQL won't be able to optimize queries if they're across different tables. So using fewer tables (or a single table) should be more efficient. | 1 | 0 | 0 | Is there a limitation on the number of tables a PostgreSQL database can have? | 3 | python,mysql,database,database-design,postgresql | 0 | 2010-09-15T07:15:00.000 |
I have created a database in PostgreSQL, let's call it testdb.
I have a generic set of tables inside this database, xxx_table_one, xxx_table_two and xxx_table_three.
Now, I have Python code where I want to dynamically create and remove "sets" of these 3 tables to my database with a unique identifier in the table name distinguishing different "sets" from each other, e.g.
Set 1
testdb.aaa_table_one
testdb.aaa_table_two
testdb.aaa_table_three
Set 2
testdb.bbb_table_one
testdb.bbb_table_two
testdb.bbb_table_three
The reason I want to do it this way is to keep multiple LARGE data collections of related data separate from each other. I need to regularly overwrite individual data collections, and it's easy if we can just drop the data collections table and recreate a complete new set of tables. Also, I have to mention, the different data collections fit into the same schemas, so I could save all the data collections in 1 set of tables using an identifier to distinguish data collections instead of separating them by using different tables.
I want to know, a few things
Does PostgreSQL limit the number of tables per database?
What is the effect on performance, if any, of having a large number of tables in 1 database?
What is the effect on performance of saving the data collections in different sets of tables compared to saving them all in the same set? E.g., I guess I would need to write more queries if I want to query multiple data collections at once when the data is spread across tables as compared to just 1 set of tables. | 7 | 0 | 0 | 0 | false | 5,603,789 | 0 | 5,102 | 2 | 0 | 0 | 3,715,456 | If your data were not related, I think your tables could be in different schemas, and then you would use SET search_path TO schema1, public, for example; this way you wouldn't have to dynamically generate table names in your queries. I am planning to try this structure on a large database which stores logs and other tracking information.
You can also change your tablespace if your OS has a limit or suffers from large directory sizes. | 1 | 0 | 0 | Is there a limitation on the number of tables a PostgreSQL database can have? | 3 | python,mysql,database,database-design,postgresql | 0 | 2010-09-15T07:15:00.000 |
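A minimal sketch of the schema-per-collection idea with SET search_path mentioned above, using psycopg2; the connection string, schema name, and table name are assumptions made up for the example.

```python
import psycopg2

# Connection string, schema name, and table name are all invented here.
conn = psycopg2.connect("dbname=testdb user=postgres")
cur = conn.cursor()

cur.execute("CREATE SCHEMA aaa")  # fails if the schema already exists
cur.execute("CREATE TABLE aaa.table_one (id serial PRIMARY KEY, payload text)")

# Point unqualified table names at the chosen data collection.
cur.execute("SET search_path TO aaa, public")
cur.execute("SELECT count(*) FROM table_one")  # resolves to aaa.table_one
print(cur.fetchone()[0])
conn.commit()
```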
I am working on a project where the Zope web server is used, together with a PostgreSQL database. However, I am not able to add a new PostgreSQL connection via Zope. Actually, I am not aware of what else I need to install so that I can use a PostgreSQL DB with Zope. From what I have explored so far, I understand that I will require a Zope Database Adapter to use a PostgreSQL DB with Zope, but I am still not sure about this. I also don't know which version of the Zope Database Adapter I will need to install. The Zope version I am using is 2.6, the PostgreSQL DB version is 7.4.13 and the Python version is 2.1.3. Also, from where should I download that Zope Database Adapter? | 3 | 0 | 0 | 0 | false | 3,719,408 | 0 | 110 | 1 | 0 | 0 | 3,719,145 | Look at psycopg; it ships with a Zope Database Adapter. | 1 | 0 | 0 | What are the essentials I need to install if I want to use PostgreSQL DB with zope? for eg: Zope Database Adapter? | 2 | python,zope | 0 | 2010-09-15T15:24:00.000 |
I'm running a web crawler that gets called as a separate thread via Django. When it tries to store the scraped information I get this error:
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 147, in execute
charset = db.character_set_name()
InterfaceError: (0, '')
If I manually run the script from the command line I don't get this error. Any ideas?
My guess is that I do about 4 cursor.execute()s in one iteration of a loop. Could this be throwing something off?
Thanks! | 0 | 0 | 1.2 | 0 | true | 3,722,799 | 1 | 1,854 | 1 | 0 | 0 | 3,722,120 | Since it mentions the character set, my gut says you are running a different Django/Python/something from the command line than you are from the webserver. In your settings file, turn on DEBUG=True, restart the server, and then run this again. In particular, look at the list of paths shown. If they are not exactly what you expect them to be, then this is a Red Flag. | 1 | 0 | 0 | mySQL interface error only occuring if ran in Django | 2 | python,mysql,django,multithreading | 0 | 2010-09-15T21:49:00.000 |
I have Zope 2.11 installed. Now I want to use a PostgreSQL 7.4.13 DB with it, so I know I need to install the psycopg2 database adapter. Can anyone tell me whether psycopg2 is compatible with Zope 2? | 1 | 1 | 0.197375 | 0 | false | 4,018,666 | 0 | 142 | 1 | 0 | 0 | 3,725,699 | Yes, you can use psycopg2 with Zope2.
Just install it in your Python with easy_install or setup.py. You will also need a matching ZPsycopgDA Product in Zope. You find the ZPsycopgDA folder in the psycopg2 source distribution tarball. | 1 | 0 | 0 | Is Zpsycopg2 compatible with zope 2? | 1 | python,database,zope | 0 | 2010-09-16T10:19:00.000 |
I am building an application with objects which have their data stored in mysql tables (across multiple tables). When I need to work with the object (retrieve object attributes / change the attributes) I am querying the sql database using mysqldb (select / update). However, since the application is quite computation intensive, the execution time is killing me.
Wanted to understand if there are approaches where all of the data is loaded into python, the computations / modifications are done on those objects and then subsequently a full data update is done to the mysql database? Will loading the data initially into lists of those objects in one go from the database improve the performance? Also since the db size is close to around 25 mb, will it cause any memory problems.
Thanks in advance. | 2 | 5 | 0.462117 | 0 | false | 3,770,439 | 0 | 1,287 | 1 | 0 | 0 | 3,770,394 | 25Mb is tiny. Microscopic. SQL is slow. Glacial.
Do not waste time on SQL unless you have transactions (with locking and multiple users).
If you're doing "analysis", especially computationally-intensive analysis, load all the data into memory.
In the unlikely event that data doesn't fit into memory, then do this.
Query data into flat files. This can be fast. It's fastest if you don't use Python, but use the database native tools to extract data into CSV or something small.
Read flat files and do computations, writing flat files. This is really fast.
Do bulk updates from the flat files. Again, this is fastest if you use database native toolset for insert or update.
If you didn't need SQL in the first place, consider the data as you originally received it and what you're going to do with it.
Read the original file once, parse it, create your Python objects and pickle the entire list or dictionary. This means that each subsequent program can simply load the pickled file and start doing analysis. However. You can't easily update the pickled file. You have to create a new one. This is not a bad thing. It gives you complete processing history.
Read the original file once, parse it, create your Python objects using shelve. This means you can
update the file.
Read the original file once, parse it, create your Python objects and save the entire list or dictionary as a JSON or YAML file. This means that each subsequent program can simply load the JSON (or YAML) file and start doing analysis. However. You can't easily update the file. You have to create a new one. This is not a bad thing. It gives you complete processing history.
This will probably be slightly slower than pickling. And it will require that you write some helpers so that the JSON objects are dumped and loaded properly. However, you can read JSON (and YAML) giving you some advantages in working with the file. | 1 | 0 | 0 | Optimizing Python Code for Database Access | 2 | python,mysql,optimization | 0 | 2010-09-22T14:43:00.000 |
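A small sketch of the "read once, pickle, then compute in memory" option described above. The MySQL connection details and the items table are invented for the example; any DB-API driver would do.

```python
import pickle
import MySQLdb

# One-off export: pull the rows once and keep them as plain Python objects.
# Connection details and the "items" table are invented for the example.
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SELECT id, name, value FROM items")
items = [{"id": i, "name": n, "value": v} for (i, n, v) in cur.fetchall()]
conn.close()

with open("items.pickle", "wb") as f:
    pickle.dump(items, f)

# Later runs: load and compute entirely in memory, no SQL round-trips.
with open("items.pickle", "rb") as f:
    items = pickle.load(f)
total = sum(row["value"] for row in items)
```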
We've worked hard to work up a full dimensional database model of our problem, and now it's time to start coding. Our previous projects have used hand-crafted queries constructed by string manipulation.
Is there any best/standard practice for interfacing between python and a complex database layout?
I've briefly evaluated SQLAlchemy, SQLObject, and Django-ORM, but (I may easily be missing something) they seem tuned for tiny web-type (OLTP) transactions, where I'm doing high-volume analytical (OLAP) transactions.
Some of my requirements, that may be somewhat different than usual:
load large amounts of data relatively quickly
update/insert small amounts of data quickly and easily
handle large numbers of rows easily (300 entries per minute over 5 years)
allow for modifications in the schema, for future requirements
Writing these queries is easy, but writing the code to get the data all lined up is tedious, especially as the schema evolves. This seems like something that a computer might be good at? | 10 | 3 | 0.197375 | 0 | false | 3,782,627 | 0 | 3,376 | 3 | 0 | 0 | 3,782,386 | I'm using SQLAlchemy with a pretty big data warehouse and I'm using it for the full ETL process with success, especially for sources where I have some complex transformation rules or for heterogeneous sources (such as web services). I'm not using the SQLAlchemy ORM but rather its SQL Expression Language, because I don't really need to map anything to objects in the ETL process. Worth noticing that when I'm bringing in a verbatim copy of some of the sources I'd rather use the db tools for that, such as the PostgreSQL dump utility. You can't beat that.
The SQL Expression Language is the closest you will get with SQLAlchemy (or any ORM for that matter) to handwriting SQL, but since you can programmatically generate the SQL from Python you will save time, especially if you have some really complex transformation rules to follow.
One thing though: I'd rather modify my schema by hand. I don't trust any tool for that job. | 1 | 0 | 0 | Python: interact with complex data warehouse | 3 | python,django-models,sqlalchemy,data-warehouse,olap | 0 | 2010-09-23T20:40:00.000 |
We've worked hard to work up a full dimensional database model of our problem, and now it's time to start coding. Our previous projects have used hand-crafted queries constructed by string manipulation.
Is there any best/standard practice for interfacing between python and a complex database layout?
I've briefly evaluated SQLAlchemy, SQLObject, and Django-ORM, but (I may easily be missing something) they seem tuned for tiny web-type (OLTP) transactions, where I'm doing high-volume analytical (OLAP) transactions.
Some of my requirements, that may be somewhat different than usual:
load large amounts of data relatively quickly
update/insert small amounts of data quickly and easily
handle large numbers of rows easily (300 entries per minute over 5 years)
allow for modifications in the schema, for future requirements
Writing these queries is easy, but writing the code to get the data all lined up is tedious, especially as the schema evolves. This seems like something that a computer might be good at? | 10 | 2 | 0.132549 | 0 | false | 3,782,432 | 0 | 3,376 | 3 | 0 | 0 | 3,782,386 | SQLAlchemy, definitely. Compared to SQLAlchemy, all other ORMs look like a child's toy, especially the Django ORM. What Hibernate is to Java, SQLAlchemy is to Python. | 1 | 0 | 0 | Python: interact with complex data warehouse | 3 | python,django-models,sqlalchemy,data-warehouse,olap | 0 | 2010-09-23T20:40:00.000 |
We've worked hard to work up a full dimensional database model of our problem, and now it's time to start coding. Our previous projects have used hand-crafted queries constructed by string manipulation.
Is there any best/standard practice for interfacing between python and a complex database layout?
I've briefly evaluated SQLAlchemy, SQLObject, and Django-ORM, but (I may easily be missing something) they seem tuned for tiny web-type (OLTP) transactions, where I'm doing high-volume analytical (OLAP) transactions.
Some of my requirements, that may be somewhat different than usual:
load large amounts of data relatively quickly
update/insert small amounts of data quickly and easily
handle large numbers of rows easily (300 entries per minute over 5 years)
allow for modifications in the schema, for future requirements
Writing these queries is easy, but writing the code to get the data all lined up is tedious, especially as the schema evolves. This seems like something that a computer might be good at? | 10 | 6 | 1 | 0 | false | 3,782,509 | 0 | 3,376 | 3 | 0 | 0 | 3,782,386 | Don't get confused by your requirements. One size does not fit all.
load large amounts of data relatively quickly
Why not use the database's native loaders for this? Use Python to prepare files, but use database tools to load. You'll find that this is amazingly fast.
update/insert small amounts of data quickly and easily
That starts to bend the rules of a data warehouse. Unless you're talking about Master Data Management to update reporting attributes of a dimension.
That's what ORM's and web frameworks are for.
handle large numbers of rows easily (300 entries per minute over 5 years)
Again, that's why you use a pipeline of Python front-end processing, but the actual INSERT's are done by database tools. Not Python.
alter schema (along with python interface) easily, for future requirements
You have almost no use for automating this. It's certainly your lowest priority task for "programming". You'll often do this manually in order to preserve data properly.
BTW, "hand-crafted queries constructed by string manipulation" is probably the biggest mistake ever. These are hard for the RDBMS parser to handle -- they're slower than using queries that have bind variables inserted. | 1 | 0 | 0 | Python: interact with complex data warehouse | 3 | python,django-models,sqlalchemy,data-warehouse,olap | 0 | 2010-09-23T20:40:00.000 |
Howdie stackoverflow people!
So I've been doing some digging regarding these NoSQL databases, MongoDB, CouchDB etc. I am still not sure about the real-time-ish stuff, though, so I thought I'd ask around to see if someone has any practical experience.
Let's think about web stuff, let's say we've got a very dynamic super ajaxified webapp that asks for various types of data every 5-20 seconds, our backend is python or php or anything other than java really... in cases such as these obviously a MySQL or similar db would be under heavy pressure (with lots of users), would MongoDB / CouchDB run this without breaking a sweat and without the need to create some super ultra complex cluster/caching etc solution?
Yes, that's basically my question, if you think that no.. then yes I know there are several types of solutions for this, nodeJS/websockets/antigravity/worm-hole super tech, but I am just interested in these NoSQL things atm and more specifically if they can handle this type of thing.
Let's say we have 5000 users at the same time, with AJAX requests every 5, 10 or 20 seconds that update various interfaces.
Shoot ;] | 2 | 0 | 0 | 0 | false | 3,799,207 | 1 | 1,738 | 2 | 0 | 0 | 3,798,728 | It depends heavily on the server running said NoSQL solution, amount of data etc... I have played around with Mongo a bit and it is very easy to setup multiple servers to run simultaneously and you would most likely be able to accomplish high concurrency by starting multiple instances on the same box and having them act like a cluster. Luckily Mongo, at least, handles all the specifics so servers can be killed and introduced without skipping a beat (depending on version). By default I believe the max connections is 1000 so starting 5 servers with said configuration would suffice (if your server can handle it obviously) but realistically you would most likely never be hitting 5000 users at the exact same time.
I hope for your hardware's sake you would at least come up with a solution that can check to see if new data is available before a full-on fetch. Either via timestamps or Memcache etc...
Overall I would tend to believe NoSQL would be much faster than traditional databases assuming you are fetching data and not running reports etc... and your datastore design is intelligent enough to compensate for the lack of complex joins. | 1 | 0 | 0 | MongoDB for realtime ajax stuff? | 2 | php,python,ajax,mongodb,real-time | 0 | 2010-09-26T16:31:00.000 |
Howdie stackoverflow people!
So I've been doing some digging regarding these NoSQL databases, MongoDB, CouchDB etc. I am still not sure about the real-time-ish stuff, though, so I thought I'd ask around to see if someone has any practical experience.
Let's think about web stuff, let's say we've got a very dynamic super ajaxified webapp that asks for various types of data every 5-20 seconds, our backend is python or php or anything other than java really... in cases such as these obviously a MySQL or similar db would be under heavy pressure (with lots of users), would MongoDB / CouchDB run this without breaking a sweat and without the need to create some super ultra complex cluster/caching etc solution?
Yes, that's basically my question, if you think that no.. then yes I know there are several types of solutions for this, nodeJS/websockets/antigravity/worm-hole super tech, but I am just interested in these NoSQL things atm and more specifically if they can handle this type of thing.
Let's say we have 5000 users at the same time, with AJAX requests every 5, 10 or 20 seconds that update various interfaces.
Shoot ;] | 2 | 2 | 1.2 | 0 | true | 3,801,074 | 1 | 1,738 | 2 | 0 | 0 | 3,798,728 | Let's say we have 5000 users at the
same time, every 5, 10 or 20 seconds
ajax requests that updates various
interfaces.
OK, so to get this right, you're talking about 250 to 1000 writes per second? Yeah, MongoDB can handle that.
The real key on performance is going to be whether or not these are queries, updates or inserts.
For queries, Mongo can probably handle this load. It's really going to be about data size to memory size ratios. If you have a server with 1GB of RAM and 150GB of data, then you're probably not going to get 250 queries / second (with any DB technology). But with reasonable hardware specs, Mongo can hit this speed on a single 64-bit server.
If you have 5,000 active users and you're constantly updating existing records then Mongo will be really fast (on par with updating memcached on a single machine). The reason here is simply that Mongo will likely keep the record in memory. So a user will send updates every 5 seconds and the in-memory object will be updated.
If you are constantly inserting new records, then the limitation is really going to be one of throughput. When you're writing lots of new data, you're also forcing the index to expand. So if you're planning to pump in Gigs of new data, then you risk saturating the disk throughput and you'll need to shard.
So based on your questions, it looks like you're mostly querying/updating. You'll be writing new records, but not 1000 new records / second. If this is the case, then MongoDB is probably right for you. It will definitely get around a lot of caching concerns. | 1 | 0 | 0 | MongoDB for realtime ajax stuff? | 2 | php,python,ajax,mongodb,real-time | 0 | 2010-09-26T16:31:00.000 |
I've heard of redis-cache but how exactly does it work? Is it used as a layer between django and my rdbms, by caching the rdbms queries somehow?
Or is it supposed to be used directly as the database? Which I doubt, since that github page doesn't cover any login details, no setup.. just tells you to set some config property. | 107 | 61 | 1 | 0 | false | 7,722,260 | 1 | 75,541 | 1 | 0 | 0 | 3,801,379 | Just because Redis stores things in-memory does not mean that it is meant to be a cache. I have seen people using it as a persistent store for data.
That it can be used as a cache is a hint that it is useful as high-performance storage. If your Redis system goes down, though, you might lose data that has not been written back to disk yet. There are some ways to mitigate such dangers, e.g. a hot-standby replica.
If your data is 'mission-critical', like if you run a bank or a shop, Redis might not be the best pick for you. But if you write a high-traffic game with persistent live data or some social-interaction stuff and manage the probability of data-loss to be quite acceptable, then Redis might be worth a look.
Anyway, the point remains, yes, Redis can be used as a database. | 1 | 0 | 0 | How can I use redis with Django? | 5 | python,django,redis | 0 | 2010-09-27T05:48:00.000 |
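For the practical side of the question, talking to Redis from Python is usually done with the redis-py client; whether it then backs a Django cache, a session store, or plain application data is a separate choice. A minimal sketch with default host/port and made-up keys:

```python
import redis

# Default host/port; pass decode_responses=True to get str instead of bytes.
r = redis.Redis(host="localhost", port=6379, db=0)

r.set("greeting", "hello")            # simple key/value
print(r.get("greeting"))

r.hset("user:1", "name", "alice")     # a hash used as a small persistent record
print(r.hgetall("user:1"))
```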
FTS3/FTS4 doesn't work in python by default (up to 2.7). I get the error: sqlite3.OperationalError: no such module: fts3
or
sqlite3.OperationalError: no such module: fts4
How can this be resolved? | 13 | 0 | 0 | 0 | false | 12,372,189 | 0 | 6,571 | 2 | 0 | 0 | 3,823,659 | What Naveen said but =>
For Windows installations:
While running setup.py for package installations, Python 2.7 searches for an installed Visual Studio 2008. You can trick Python into using your installed Visual Studio by setting
SET VS90COMNTOOLS=%VS100COMNTOOLS%
before calling setup.py. | 1 | 0 | 0 | How to setup FTS3/FTS4 with python2.7 on Windows | 4 | python,sqlite,full-text-search,fts3,fts4 | 0 | 2010-09-29T16:22:00.000 |
FTS3/FTS4 doesn't work in python by default (up to 2.7). I get the error: sqlite3.OperationalError: no such module: fts3
or
sqlite3.OperationalError: no such module: fts4
How can this be resolved? | 13 | 2 | 0.099668 | 0 | false | 3,826,412 | 0 | 6,571 | 2 | 0 | 0 | 3,823,659 | never mind.
installing pysqlite from source was easy and sufficient.
Run python setup.py build_static install; fts3 is enabled by default when installing from source. | 1 | 0 | 0 | How to setup FTS3/FTS4 with python2.7 on Windows | 4 | python,sqlite,full-text-search,fts3,fts4 | 0 | 2010-09-29T16:22:00.000 |
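Once a build with FTS is in place, a quick way to verify it from the interpreter is to try creating an FTS virtual table; the table and column names below are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # Raises "no such module: fts4" if this sqlite3 build lacks FTS.
    conn.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")
except sqlite3.OperationalError as exc:
    print("FTS not available:", exc)
else:
    conn.execute("INSERT INTO docs (body) VALUES ('full text search in sqlite')")
    print(conn.execute("SELECT body FROM docs WHERE docs MATCH 'search'").fetchall())
```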
I have a 100-megabyte sqlite db file that I would like to load into memory before performing SQL queries. Is it possible to do that in Python?
Thanks | 8 | 2 | 0.099668 | 0 | false | 25,521,707 | 0 | 12,928 | 1 | 0 | 0 | 3,826,552 | If you are using Linux, you can try tmpfs which is a memory-based file system.
It's very easy to use it:
mount tmpfs to a directory.
copy sqlite db file to the directory.
open it as normal sqlite db file.
Remember, anything in tmpfs will be lost after reboot. So, you may copy db file back to disk if it changed. | 1 | 0 | 0 | In python, how can I load a sqlite db completely to memory before connecting to it? | 4 | python,sql,memory,sqlite | 0 | 2010-09-29T23:10:00.000 |
I have been working on developing this analytical tool to help interpret and analyze a database that is bundled within the package. It is very important for us to secure the database in a way that can only be accessed with our software. What is the best way of achieving it in Python?
I am aware that there may not be a definitive solution, but deterrence is what really matters here.
Thank you very much. | 4 | 3 | 1.2 | 0 | true | 3,850,560 | 0 | 4,184 | 1 | 0 | 0 | 3,848,658 | This question comes up on the SQLite users mailing list about once a month.
No matter how much encryption etc you do, if the database is on the client machine then the key to decrypt will also be on the machine at some point. An attacker will be able to get that key since it is their machine.
A better way of looking at this is in terms of money - how much would a bad guy need to spend in order to get the data. This will generally be a few hundred dollars at most. And all it takes is any one person to get the key and they can then publish the database for everyone.
So either go for a web service as mentioned by Donal or just spend a few minutes obfuscating the database. For example if you use APSW then you can write a VFS in a few lines that XORs the database content so regular SQLite will not open it, nor will a file viewer show the normal SQLite header. (There is example code in APSW showing how to do this.)
Consequently anyone who does have the database content had to knowingly do so. | 1 | 0 | 1 | Encrypting a Sqlite db file that will be bundled in a pyexe file | 2 | python,database,sqlite,encryption | 0 | 2010-10-03T04:55:00.000 |
I have an existing sqlite3 db file, on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), so there should be no problem to load it into memory.
Is there a Pythonic way to load the existing file into memory in order to speed up the calculations? | 72 | -1 | -0.019997 | 0 | false | 3,850,164 | 0 | 46,619 | 2 | 0 | 0 | 3,850,022 | sqlite supports in-memory databases.
In python, you would use a :memory: database name for that.
Perhaps you could open two databases (one from the file, an empty one in-memory), migrate everything from the file database into memory, then use the in-memory database further to do calculations. | 1 | 0 | 0 | How to load existing db file to memory in Python sqlite3? | 10 | python,performance,sqlite | 0 | 2010-10-03T13:55:00.000 |
I have an existing sqlite3 db file, on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), so there should be no problem to load it into memory.
Is there a Pythonic way to load the existing file into memory in order to speed up the calculations? | 72 | 0 | 0 | 0 | false | 57,569,063 | 0 | 46,619 | 2 | 0 | 0 | 3,850,022 | With the solution of Cenk Alti, I always had a MemoryError with Python 3.7, when the process reached 500MB. Only with the use of the backup functionality of sqlite3 (mentioned by thinwybk), I was able to to load and save bigger SQLite databases. Also you can do the same with just 3 lines of code, both ways. | 1 | 0 | 0 | How to load existing db file to memory in Python sqlite3? | 10 | python,performance,sqlite | 0 | 2010-10-03T13:55:00.000 |
Looking around for a noSQL database implementation that has an ORM syntax (pref. like Django's), lets me store and retrieve nested dictionary attributes but written entirely in Python to ease deployment and avoids Javascript syntax for map/reduce. Even better if it has a context-aware (menus), python-based console, as well as being able to run as a separate daemon task. Is there such an initiative already (I can't find it) or should I start one? | 4 | 2 | 0.099668 | 0 | false | 3,865,523 | 0 | 1,830 | 1 | 0 | 0 | 3,865,283 | I don't know about a noSQL solution, but sqlite+sqlalchemy's ORM works pretty well for me. As long as it gives you the interface and features you need, I don't see a reason to care whether it uses sql internally. | 1 | 0 | 1 | Pure Python implementation of MongoDB? | 4 | python,mongodb,nosql | 0 | 2010-10-05T15:39:00.000 |
I'm using Python and SQLAlchemy to query a SQLite FTS3 (full-text) store and I would like to prevent my users from using the - as an operator. How should I escape the - so users can search for a term containing the - (enabled by changing the default tokenizer) instead of it signifying "does not contain the term following the -"? | 2 | 1 | 0.099668 | 0 | false | 3,942,449 | 0 | 1,200 | 1 | 0 | 0 | 3,865,733 | From elsewhere on the internet it seems it may be possible to surround each search term with double quotes "some-term". Since we do not need the subtraction operation, my solution was to replace hyphens - with underscores _ when populating the search index and when performing searches. | 1 | 0 | 0 | How do I escape the - character in SQLite FTS3 queries? | 2 | python,sqlite,sqlalchemy,fts3 | 0 | 2010-10-05T16:32:00.000 |
I'm trying to restore the current working database to the data stored in a .sql file from within Django. What's the best way to do this? Does Django have a good way to do this, or do I need to grab the connection string from the settings.py file and send command-line mysql commands to do this?
Thanks for your help. | 0 | 1 | 1.2 | 0 | true | 3,868,544 | 1 | 182 | 1 | 0 | 0 | 3,866,989 | You can't import sql dumps through django; import it through mysql directly, if you run mysql locally you can find various graphical mysql clients that can help you with doing so; if you need to do it remotely, find out if your server has any web interfaces for that installed! | 1 | 0 | 0 | How do I replace the current working MySQL database with a .sql file? | 2 | python,mysql,django | 0 | 2010-10-05T19:28:00.000 |
I just want to use an entity and modify it to show something, but I don't want the change written to the db.
However, after I use it, somewhere else in the code session.commit() is called,
and that adds this entity's changes to the db, which I don't want to happen.
Can anyone help me? | 0 | 1 | 1.2 | 0 | true | 3,896,280 | 1 | 85 | 1 | 0 | 0 | 3,881,364 | You can expunge it from the session before modifying the object; then these changes won't be included in subsequent commits unless you add the object back to the session. Just call session.expunge(obj). | 1 | 0 | 0 | use sqlalchemy entity isolately | 1 | python,sqlalchemy,entity | 0 | 2010-10-07T11:56:00.000 |
I am trying out Sphinx search in my Django project. All setup done & it works but need some clarification from someone who has actually used this setup.
In my Sphinx search while indexing, I have used 'name' as the field in my MySQL to be searchable & all other fields in sql_query to be as attributes (according to Sphinx lingo).
So when I search from my Model instance in Django, I get the search results alright but it does not have the 'name' field in the search results. I get all the other attributes.
However, I get the 'id' of the search term. Technically, I could get the 'name' by again querying MySQL but I want to avoid this. Is there anything I am not doing here? | 0 | 1 | 1.2 | 0 | true | 4,121,651 | 1 | 395 | 1 | 0 | 0 | 3,897,650 | Here's a shot in the dark -
Try to make the name of your index in sphinx.conf the same as the table name you are trying to index. This is a quirk which is missed by a lot of people. | 1 | 0 | 0 | Django Sphinx Text Search | 1 | python,django,search,full-text-search,django-sphinx | 0 | 2010-10-09T20:02:00.000 |
I'm currently busy making a Python ORM which gets all of its information from a RDBMS via introspection (I would go with XRecord if I was happy with it in other respects) — meaning, the end-user only tells which tables/views to look at, and the ORM does everything else automatically (if it makes you actually write something and you're not looking for weird things and dangerous adventures, it's a bug).
The major part of that is detecting relationships, provided that the database has all relevant constraints in place and you have no naming conventions at all — I want to be able to have this ORM work with a database made by any crazy DBA which has his own views on what the columns and tables should be named like. And I'm stuck at many-to-many relationships.
First, there can be compound keys. Then, there can be MTM relationships with three or more tables. Then, a MTM intermediary table might have its own data apart from keys — some data common to all tables it ties together.
What I want is a method to programmatically detect that a table X is an intermediary table tying tables A and B, and that any non-key data it has must belong to both A and B (and if I change a common attribute from within A, it should affect the same attribute in B). Are there common algorithms to do that? Or at least to make guesses which are right in 80% of the cases (provided the DBA is sane)? | 2 | 0 | 1.2 | 0 | true | 3,902,410 | 0 | 241 | 3 | 0 | 0 | 3,901,961 | So far, I see only one technique that covers more than two tables in a relation. A table X is assumed to be related to table Y if and only if X is linked to Y no more than one table away. That is:
"Zero tables away" means X contains the foreign key to Y. No big deal, that's how we detect many-to-ones.
"One table away" means there is a table Z which itself has a foreign key referencing table X (these are easy to find), and a foreign key referencing table Y.
This reduces the scope of traits to look for a lot (we don't have to care if the intermediary table has any other attributes), and it covers any number of tables tied together in a MTM relation.
If there are some interesting links or other methods, I'm willing to hear them. | 1 | 0 | 0 | What are methods of programmatically detecting many-to-many relationships in a RDMBS? | 3 | python,orm,metaprogramming,introspection,relationships | 0 | 2010-10-10T19:59:00.000 |
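A rough sketch of this heuristic using SQLAlchemy's schema inspector; the helper name and the "pure link table" test are assumptions added for illustration, not part of the original answer.

```python
from sqlalchemy import create_engine, inspect

def candidate_link_tables(db_url):
    # A table with two or more foreign keys is a candidate intermediary;
    # if every column takes part in a foreign key it is a "pure" link table.
    insp = inspect(create_engine(db_url))
    candidates = {}
    for name in insp.get_table_names():
        fks = insp.get_foreign_keys(name)
        if len(fks) < 2:
            continue
        fk_cols = {c for fk in fks for c in fk['constrained_columns']}
        all_cols = {c['name'] for c in insp.get_columns(name)}
        candidates[name] = (fk_cols == all_cols)  # True -> no extra data columns
    return candidates
```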
I'm currently busy making a Python ORM which gets all of its information from a RDBMS via introspection (I would go with XRecord if I was happy with it in other respects) — meaning, the end-user only tells which tables/views to look at, and the ORM does everything else automatically (if it makes you actually write something and you're not looking for weird things and dangerous adventures, it's a bug).
The major part of that is detecting relationships, provided that the database has all relevant constraints in place and you have no naming conventions at all — I want to be able to have this ORM work with a database made by any crazy DBA which has his own views on what the columns and tables should be named like. And I'm stuck at many-to-many relationships.
First, there can be compound keys. Then, there can be MTM relationships with three or more tables. Then, a MTM intermediary table might have its own data apart from keys — some data common to all tables it ties together.
What I want is a method to programmatically detect that a table X is an intermediary table tying tables A and B, and that any non-key data it has must belong to both A and B (and if I change a common attribute from within A, it should affect the same attribute in B). Are there common algorithms to do that? Or at least to make guesses which are right in 80% of the cases (provided the DBA is sane)? | 2 | 1 | 0.066568 | 0 | false | 3,902,041 | 0 | 241 | 3 | 0 | 0 | 3,901,961 | If you have to ask, you shouldn't be doing this. I'm not saying that to be cruel, but Python already has several excellent ORMs that are well-tested and widely used. For example, SQLAlchemy supports the autoload=True attribute when defining tables that makes it read the table definition - including all the stuff you're asking about - directly from the database. Why re-invent the wheel when someone else has already done 99.9% of the work?
My answer is to pick a Python ORM (such as SQLAlchemy) and add any "missing" functionality to that instead of starting from scratch. If it turns out to be a good idea, release your changes back to the main project so that everyone else can benefit from them. If it doesn't work out like you hoped, at least you'll already be using a common ORM that many other programmers can help you with. | 1 | 0 | 0 | What are methods of programmatically detecting many-to-many relationships in a RDMBS? | 3 | python,orm,metaprogramming,introspection,relationships | 0 | 2010-10-10T19:59:00.000 |
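For reference, a minimal reflection example in the style the answer mentions; autoload=True is the historical spelling, and newer SQLAlchemy versions use autoload_with alone. Table and file names are placeholders.

```python
from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('sqlite:///app.db')
meta = MetaData()
# Read the table definition, including keys and constraints, from the DB itself.
users = Table('users', meta, autoload=True, autoload_with=engine)
print(users.columns.keys())
```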
I'm currently busy making a Python ORM which gets all of its information from a RDBMS via introspection (I would go with XRecord if I was happy with it in other respects) — meaning, the end-user only tells which tables/views to look at, and the ORM does everything else automatically (if it makes you actually write something and you're not looking for weird things and dangerous adventures, it's a bug).
The major part of that is detecting relationships, provided that the database has all relevant constraints in place and you have no naming conventions at all — I want to be able to have this ORM work with a database made by any crazy DBA which has his own views on what the columns and tables should be named like. And I'm stuck at many-to-many relationships.
First, there can be compound keys. Then, there can be MTM relationships with three or more tables. Then, a MTM intermediary table might have its own data apart from keys — some data common to all tables it ties together.
What I want is a method to programmatically detect that a table X is an intermediary table tying tables A and B, and that any non-key data it has must belong to both A and B (and if I change a common attribute from within A, it should affect the same attribute in B). Are there common algorithms to do that? Or at least to make guesses which are right in 80% of the cases (provided the DBA is sane)? | 2 | 0 | 0 | 0 | false | 3,902,030 | 0 | 241 | 3 | 0 | 0 | 3,901,961 | Theoretically, any table with multiple foreign keys is in essence a many-to-many relation, which makes your question trivial. I suspect that what you need is a heuristic of when to use MTM patterns (rather than standard classes) in the object model. In that case, examine what are the limitations of the patterns you chose.
For example, you can model a simple MTM relationship (two tables, no attributes) by having lists as attributes on both types of objects. However, lists will not be enough if you have additional data on the relationship itself. So only invoke this pattern for tables with two columns, both with foreign keys. | 1 | 0 | 0 | What are methods of programmatically detecting many-to-many relationships in a RDMBS? | 3 | python,orm,metaprogramming,introspection,relationships | 0 | 2010-10-10T19:59:00.000 |
Well, the question pretty much summarises it. My db activity is very update intensive, and I want to programmatically issue a Vacuum Analyze. However I get an error that says that the query cannot be executed within a transaction. Is there some other way to do it? | 9 | 14 | 1.2 | 0 | true | 3,932,055 | 0 | 5,030 | 1 | 0 | 0 | 3,931,951 | This is a flaw in the Python DB-API: it starts a transaction for you. It shouldn't do that; whether and when to start a transaction should be up to the programmer. Low-level, core APIs like this shouldn't babysit the developer and do things like starting transactions behind our backs. We're big boys--we can start transactions ourselves, thanks.
With psycopg2, you can disable this unfortunate behavior with an API extension: run connection.autocommit = True. There's no standard API for this, unfortunately, so you have to depend on nonstandard extensions to issue commands that must be executed outside of a transaction.
No language is without its warts, and this is one of Python's. I've been bitten by this before too. | 1 | 0 | 0 | Is it possible to issue a "VACUUM ANALYZE " from psycopg2 or sqlalchemy for PostgreSQL? | 2 | python,postgresql,sqlalchemy,psycopg2,vacuum | 0 | 2010-10-14T09:49:00.000 |
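A short sketch of the autocommit workaround; the DSN and table name are placeholders.

```python
import psycopg2

conn = psycopg2.connect('dbname=mydb user=me')
conn.autocommit = True                 # no implicit transaction from now on
cur = conn.cursor()
cur.execute('VACUUM ANALYZE my_table')  # now allowed, since we're outside a transaction
cur.close()
conn.close()
```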
Hi, so this is what I understand about how OpenID works:
The user enters his OpenID URL on the site, say "hii.com".
The app redirects to the OpenID provider, which either performs the login or denies it, and the response is sent back to the site, i.e. "hii.com".
If authentication was successful, then the response object provided by the OpenID provider can contain other data too, like email etc., if "hii.com" had requested it.
I can save this data in the database.
Please correct me if I am wrong. However, what I am not understanding here is the concept of stores. I see openid.store.filestore, nonce, and sqlstore. Could someone please provide some clarity on this? What role does the store play here?
I have gone through the python-openid docs but end up feeling clueless.
Thanks | 3 | 1 | 1.2 | 0 | true | 3,937,506 | 0 | 488 | 1 | 0 | 0 | 3,937,456 | upd.: my previous answer was wrong
The store you are referring to is where your app stores the data during auth.
Storing it in a shared memcached instance should be the best option (faster than db and reliable enough). | 1 | 0 | 0 | what is the concept of store in OpenID | 1 | python,openid,store | 0 | 2010-10-14T20:51:00.000 |
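For illustration only, this is roughly how a store gets handed to the python-openid consumer; the module paths follow the python-openid package layout, while the store path, session_dict and openid_url are assumptions standing in for your application's own values.

```python
from openid.store.filestore import FileOpenIDStore
from openid.consumer.consumer import Consumer

store = FileOpenIDStore('/tmp/openid-store')   # shared persistent state for the auth dance
consumer = Consumer(session_dict, store)       # session_dict comes from your web framework
auth_request = consumer.begin(openid_url)      # openid_url is what the user typed in
```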
... vs declarative sqlalchemy ? | 7 | 1 | 0.066568 | 0 | false | 3,975,114 | 0 | 4,024 | 1 | 0 | 0 | 3,957,938 | The Elixir syntax is something I find useful when building a database for a given app from scratch and everything is all figured out beforehand.
I have had my best luck with SQLAlchemy when using it on legacy databases (and on other similarly logistically immutable schemas). Particularly useful is the plugin SQLSoup, for read-only one-time extractions of data in preparation for migrating it elsewhere.
YMMV but Elixir isn't really designed to adapt to older schemas -- and SQLAlchemy proper is overkill for most small- to mid-size projects (in my opinion of course). | 1 | 0 | 0 | What are the benefits of using Elixir | 3 | python,sqlalchemy,python-elixir | 0 | 2010-10-18T09:36:00.000 |
I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc, I will import the data into a mySQL database on Linux.
I intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment.
The db has IIRC well over half a million rows of data. My questions are:
Is the number of records a cause for concern? (i.e. Will I hit some limits)?
Is there a better file format for the transitory data (instead of CSV)?
I chose CSV because it is quite simple and straightforward (and I am a Python newbie) - but
I would like to hear from someone who may have done something similar before. | 1 | 5 | 1.2 | 0 | true | 3,964,635 | 0 | 8,978 | 3 | 0 | 0 | 3,964,378 | Memory usage for csv.reader and csv.writer isn't proportional to the number of records, as long as you iterate correctly and don't try to load the whole file into memory. That's one reason the iterator protocol exists. Similarly, csv.writer writes directly to disk; it's not limited by available memory. You can process any number of records with these without memory limitations.
For simple data structures, CSV is fine. It's much easier to get fast, incremental access to CSV than more complicated formats like XML (tip: pulldom is painfully slow). | 1 | 0 | 0 | is there a limit to the (CSV) filesize that a Python script can read/write? | 4 | python,ms-access,csv,odbc | 0 | 2010-10-18T23:49:00.000 |
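A sketch of the streaming pattern the answer describes; the file name is a placeholder and 'rb' matches the Python 2 csv module.

```python
import csv

def rows(path):
    # Yield one record at a time; nothing forces the whole file into memory.
    with open(path, 'rb') as f:
        for row in csv.reader(f):
            yield row

for row in rows('access_export.csv'):
    pass  # cleanse/munge here, then insert into MySQL
```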
I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc, I will import the data into a mySQL database on Linux.
I intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment.
The db has IIRC well over half a million rows of data. My questions are:
Is the number of records a cause for concern? (i.e. Will I hit some limits)?
Is there a better file format for the transitory data (instead of CSV)?
I chose CSV because it is quite simple and straightforward (and I am a Python newbie) - but
I would like to hear from someone who may have done something similar before. | 1 | 1 | 0.049958 | 0 | false | 3,964,404 | 0 | 8,978 | 3 | 0 | 0 | 3,964,378 | I wouldn't bother using an intermediate format. Pulling from Access via ADO and inserting right into MySQL really shouldn't be an issue. | 1 | 0 | 0 | is there a limit to the (CSV) filesize that a Python script can read/write? | 4 | python,ms-access,csv,odbc | 0 | 2010-10-18T23:49:00.000 |
I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc, I will import the data into a mySQL database on Linux.
I intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment.
The db has IIRC well over half a million rows of data. My questions are:
Is the number of records a cause for concern? (i.e. Will I hit some limits)?
Is there a better file format for the transitory data (instead of CSV)?
I chose CSV because it is quite simple and straightforward (and I am a Python newbie) - but
I would like to hear from someone who may have done something similar before. | 1 | 0 | 0 | 0 | false | 3,964,398 | 0 | 8,978 | 3 | 0 | 0 | 3,964,378 | The only limit should be operating system file size.
That said, make sure when you send the data to the new database, you're writing it a few records at a time; I've seen people do things where they try to load the entire file first, then write it. | 1 | 0 | 0 | is there a limit to the (CSV) filesize that a Python script can read/write? | 4 | python,ms-access,csv,odbc | 0 | 2010-10-18T23:49:00.000 |
As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || " " || second_string, but it looks very ugly. | 0 | 0 | 1.2 | 0 | true | 3,976,347 | 0 | 2,632 | 2 | 0 | 0 | 3,976,313 | There isn't one. | 1 | 0 | 0 | SQLite equivalent of Python's "'%s %s' % (first_string, second_string)" | 5 | python,sqlite,string | 0 | 2010-10-20T09:16:00.000 |
As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || " " || second_string, but it looks very ugly. | 0 | 2 | 0.07983 | 0 | false | 3,976,353 | 0 | 2,632 | 2 | 0 | 0 | 3,976,313 | I can understand not liking first_string || ' ' || second_string, but that's the equivalent. Standard SQL (which SQLite speaks in this area) just isn't the world's prettiest string manipulation language. You could try getting the results of the query back into some other language (e.g., Python which you appear to like) and doing the concatenation there; it's usually best to not do "presentation" in the database layer (and definitely not a good idea to use the result of concatenation as something to search against; that makes it impossible to optimize with indices!) | 1 | 0 | 0 | SQLite equivalent of Python's "'%s %s' % (first_string, second_string)" | 5 | python,sqlite,string | 0 | 2010-10-20T09:16:00.000 |
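A tiny sketch of doing the formatting on the Python side, as the answer suggests; the file and column names are placeholders.

```python
import sqlite3

conn = sqlite3.connect('example.db')
cur = conn.execute('SELECT first_name, last_name FROM people')
for first, last in cur:
    print('%s %s' % (first, last))   # presentation handled outside SQL
```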
I've been asked to encrypt various db fields within the db.
The problem is that these fields need to be decrypted after being read.
I'm using Django and SQL Server 2005.
Any good ideas? | 17 | 2 | 0.099668 | 0 | false | 3,979,447 | 1 | 13,590 | 2 | 0 | 0 | 3,979,385 | If you are storing things like passwords, you can do this:
store users' passwords as their SHA256 hashes
get the user's password
hash it
check it against the stored password
You can create a SHA-256 hash in Python by using the hashlib module.
Hope this helps | 1 | 0 | 0 | A good way to encrypt database fields? | 4 | python,sql,sql-server,django,encryption | 0 | 2010-10-20T15:12:00.000 |
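A minimal sketch of the hash-and-compare flow from the answer above; in practice you would also add a per-user salt.

```python
import hashlib

def hash_password(password):
    return hashlib.sha256(password.encode('utf-8')).hexdigest()

stored = hash_password('s3cret')            # saved at registration time
assert hash_password('s3cret') == stored    # login: hash the input and compare
```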
I've been asked to encrypt various db fields within the db.
The problem is that these fields need to be decrypted after being read.
I'm using Django and SQL Server 2005.
Any good ideas? | 17 | 6 | 1.2 | 0 | true | 3,979,446 | 1 | 13,590 | 2 | 0 | 0 | 3,979,385 | Yeah. Tell whoever told you to get real. Makes no / little sense. If it is about the stored values - enterprise edition 2008 can store encrypted DB files.
Otherwise, if you really need to (with all disadvantages) just encrypt them and store them as byte fields. | 1 | 0 | 0 | A good way to encrypt database fields? | 4 | python,sql,sql-server,django,encryption | 0 | 2010-10-20T15:12:00.000 |
I'd like to build a "feed" for recent activity related to a specific section of my site. I haven't used memcache before, but I'm thinking of something like this:
When a new piece of information is submitted to the site, assign a unique key to it and also add it to memcache.
Add this key to the end of an existing list in memcache, so it can later be referenced.
When retrieving, first retrieve the list of keys from memcache
For each key retrieved, retrieve the individual piece of information
String the pieces together and return them as the "feed"
E.g., user comments: user writes, "Nice idea"
Assign a unique key to "Nice idea," let's say key "1234"
Insert a key/data pair into memcache, 1234 -> "Nice Idea"
Append "1234" to an existing list of keys: key_list -> {2341,41234,124,341,1234}
Now when retrieving, first query the key list: {2341,41234,124,341,1234}
For each key in the key list, retrieve the data:
2341 -> "Yes"
41234 -> "Good point"
124 -> "That's funny"
341 -> "I don't agree"
1234 -> "Nice Idea"
Is this a good approach?
Thanks! | 0 | 0 | 0 | 0 | false | 4,006,612 | 1 | 351 | 1 | 0 | 0 | 3,999,496 | If the list of keys is bounded in size then it should be ok. memcache by default has a 1MB item size limit.
Sounds like memcache is the only storage for the data, is it a good idea? | 1 | 0 | 0 | Best way to keep an activity log in memcached | 1 | python,memcached,feed | 0 | 2010-10-22T17:43:00.000 |
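A rough sketch of the key-list pattern with python-memcached; the function names are made up, and note that the read-modify-write on key_list is not atomic under concurrent writers.

```python
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def add_comment(key, text):
    mc.set(key, text)
    keys = mc.get('key_list') or []
    keys.append(key)
    mc.set('key_list', keys)          # not atomic: two writers can race here

def build_feed():
    keys = mc.get('key_list') or []
    return [mc.get(k) for k in keys if mc.get(k) is not None]
```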
I am trying to implement a Python script which writes to and reads from a database to track changes within a 3D game (Minecraft). These changes are made by various clients and can be represented by player name, coordinates (x,y,z), and a description. I am storing a high volume of changes and would like to know what would be an easy and preferably fast way to store and retrieve these changes. What kinds of databases would be suited to this job? | 0 | 0 | 0 | 0 | false | 4,000,101 | 0 | 144 | 1 | 0 | 0 | 4,000,072 | Any kind. A NoSQL option like MongoDB might be especially interesting. | 1 | 0 | 0 | Suitable kind of database to track a high volume of changes | 2 | python,database,change-tracking | 0 | 2010-10-22T19:02:00.000
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives? | 1 | 2 | 0.099668 | 0 | false | 4,001,358 | 0 | 1,335 | 4 | 0 | 0 | 4,001,314 | I don't see why not. As a related real-world example, WordPress stores serialized PHP arrays as a single value in many instances. | 1 | 0 | 1 | Storing JSON in MySQL? | 4 | python,mysql,json | 0 | 2010-10-22T22:05:00.000 |
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives? | 1 | 0 | 0 | 0 | false | 4,008,102 | 0 | 1,335 | 4 | 0 | 0 | 4,001,314 | I think it's better to serialize your data. If you are using the Python language, cPickle is a good choice. | 1 | 0 | 1 | Storing JSON in MySQL? | 4 | python,mysql,json | 0 | 2010-10-22T22:05:00.000
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives? | 1 | 5 | 1.2 | 0 | true | 4,001,338 | 0 | 1,335 | 4 | 0 | 0 | 4,001,314 | If you need to query based on the values within the JSON, it would be better to store the values separately.
If you are just loading a set of configurations like you say you are doing, storing the JSON directly in the database works great and is a very easy solution. | 1 | 0 | 1 | Storing JSON in MySQL? | 4 | python,mysql,json | 0 | 2010-10-22T22:05:00.000 |
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives? | 1 | 2 | 0.099668 | 0 | false | 4,001,334 | 0 | 1,335 | 4 | 0 | 0 | 4,001,314 | No different than people storing XML snippets in a database (that doesn't have XML support). Don't see any harm in it, if it really doesn't need to be searched at the DB level. And the great thing about JSON is how parseable it is. | 1 | 0 | 1 | Storing JSON in MySQL? | 4 | python,mysql,json | 0 | 2010-10-22T22:05:00.000 |
A new requirement has come down from the top: implement 'proprietary business tech' with the awesome, resilient Elixir database I have set up. I've tried a lot of different things, such as creating an implib from the provided interop DLL (which apparently doesn't work like COM dlls) which didn't work at all. CPython doesn't like the MFC stuff either, so all attempts to create a Python lib have failed (using C anyway, not sure you can create a python library from .NET directly).
The only saving grace is the developer saw fit to provide VBA, .NET and MFC Interop C++ hooks into his library, so there are "some" choices, though they all ultimately lead back to the same framework. What would be the best method to:
A) Keep my model definitions in one place, in one language (Python/Elixir/SQLAlchemy)
B) Have this new .NET access the models without resorting to brittle, hard-coded SQL.
Any and all suggestions are welcome. | 0 | 0 | 1.2 | 0 | true | 4,025,154 | 0 | 502 | 1 | 0 | 0 | 4,017,164 | After a day or so of deliberation, I'm attempting to load the new business module in IronPython. Although I don't really want to introduce two Python interpreters into my environment, I think that this will be the glue I need to get this done efficiently. | 1 | 0 | 0 | Loading Elixir/SQLAlchemy models in .NET? | 1 | sqlalchemy,python-elixir | 0 | 2010-10-25T17:23:00.000
Django: If I added new tables to database, how can I query them?
Do I need to create the relevant models first? Or django creates it by itself?
More specifically, I installed another django app, it created several database tables in database, and now I want to get some specific data from them? What are the correct approaches? Thank you very much! | 0 | 0 | 0 | 0 | false | 4,042,305 | 1 | 78 | 2 | 0 | 0 | 4,042,286 | Django doen't follow convention over configuration philosophy. you have to explicitly create the backing model for the table and in the meta tell it about the table name... | 1 | 0 | 0 | Django: If I added new tables to database, how can I query them? | 2 | python,django,django-models,django-admin | 0 | 2010-10-28T11:11:00.000 |
Django: If I added new tables to database, how can I query them?
Do I need to create the relevant models first? Or django creates it by itself?
More specifically, I installed another django app, it created several database tables in database, and now I want to get some specific data from them? What are the correct approaches? Thank you very much! | 0 | 1 | 1.2 | 0 | true | 4,042,337 | 1 | 78 | 2 | 0 | 0 | 4,042,286 | I suppose another django app has all model files needed to access those tables, you should just try importing those packages and use this app's models. | 1 | 0 | 0 | Django: If I added new tables to database, how can I query them? | 2 | python,django,django-models,django-admin | 0 | 2010-10-28T11:11:00.000 |
I'm having a problem with file uploading. I'm using FastCGI on Apache2 (unix) to run a WSGI-compliant application. File uploads, in the form of images, are being saved in a MySQL database. However, larger images are being truncated at 65535 bytes. As far as I can tell, nothing should be limiting the size of the files and I'm not sure which one of the pieces in my solution would be causing the problem.
Is it FastCGI; can it limit file upload sizes?
Is it Python? The cgi.FieldStorage object gives me a file handle to the uploaded file which I then read: file.read(). Does this limit file sizes in any way?
Is it MySQL? The type of the column for saving the image data is a longblob. I figured this could store a couple of GB worth of data. So a few MB shouldn't be a problem, right?
Is it the flups WSGIServer? I can't find any information regarding this.
My file system can definitely handle huge files, so that's not a problem. Any ideas?
UPDATE:
It is MySQL. I got python to output the number of bytes uploaded and it's greater than 65535. So I looked into max_allowed_packet for mysqld and set it to 128M. Overkill, but wanting to be sure for the moment.
My only problem now is getting python's MySQLdb to allow the transfer of more than 65535 bytes. Does anyone know how to do this? Might post as a separate question. | 3 | 2 | 1.2 | 0 | true | 4,047,955 | 1 | 1,279 | 1 | 0 | 0 | 4,047,899 | If the web server/gateway layer were truncating incoming form submissions I'd expect an error from FieldStorage, since the truncation would not just interrupt the file upload but also the whole multipart/form-data structure. Even if cgi.py tolerated this, it would be very unlikely to have truncated the multipart at just the right place to leave exactly 2**16-1 bytes of file upload.
So I would suspect MySQL. LONGBLOB should be fine up to 2**32-1, but 65535 would be the maximum length of a normal BLOB. Are you sure the types are what you think? Check with SHOW CREATE TABLE x. Which database layer are you using to get the data in? | 1 | 0 | 0 | Does FastCGI or Apache2 limit upload sizes? | 1 | python,mysql,file-upload,apache2,fastcgi | 0 | 2010-10-28T23:16:00.000 |
It seems as if MySQLdb is restricting the maximum transfer size for SQL statements. I have set the max_allowed_packet to 128M for mysqld. MySQL documentation says that this needs to be done for the client as well. | 3 | 1 | 1.2 | 0 | true | 4,051,531 | 0 | 1,923 | 1 | 0 | 0 | 4,050,257 | You need to put max_allowed_packet into the [client] section of my.cnf on the machine where the client runs. If you want to, you can specify a different file or group in MySQLdb.connect. | 1 | 0 | 0 | How do I set max_allowed_packet or equivalent for MySQLdb in python? | 1 | python,mysql | 0 | 2010-10-29T08:36:00.000
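A sketch of the second suggestion in the answer, pointing MySQLdb at a config file whose [client] section carries max_allowed_packet; the file path and database name are placeholders.

```python
import MySQLdb

# The [client] section of this file should contain: max_allowed_packet = 128M
conn = MySQLdb.connect(read_default_file='/etc/mysql/my.cnf',
                       read_default_group='client',
                       db='mydb')
```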
When I am connected to a PostgreSQL database using psycopg and I pull the network cable, I get no errors. How can I detect this in code to notify the user? | 2 | 0 | 0 | 0 | false | 4,061,641 | 0 | 214 | 2 | 0 | 0 | 4,061,635 | You will definitely get an error the next time you try to execute a query, so I wouldn't worry if you can't alert the user at the exact instant they lose their network connection. | 1 | 0 | 0 | Python and psycopg detect network error | 2 | python,postgresql,psycopg | 0 | 2010-10-31T02:54:00.000
When I am connected to a PostgreSQL database using psycopg and I pull the network cable, I get no errors. How can I detect this in code to notify the user? | 2 | 0 | 1.2 | 0 | true | 4,069,833 | 0 | 214 | 2 | 0 | 0 | 4,061,635 | psycopg can't detect what happens with the network. For example, if you unplug your ethernet cable, replug it and execute a query, everything will work OK. You should definitely get an exception when psycopg tries to send some SQL to the backend and there is no network connection, but depending on the exact network problem it can take some time. In the worst case you'll have to wait for a TCP timeout on the connection (several tens of seconds). | 1 | 0 | 0 | Python and psycopg detect network error | 2 | python,postgresql,psycopg | 0 | 2010-10-31T02:54:00.000
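A sketch of catching the failure once it finally surfaces; the wrapper and the on_lost callback are hypothetical hooks in your application.

```python
import psycopg2

def run_query(cur, sql, on_lost):
    # on_lost is whatever your UI uses to tell the user the DB went away.
    try:
        cur.execute(sql)
        return cur.fetchall()
    except psycopg2.OperationalError:
        # Only raised once psycopg gives up on the dead connection,
        # which may take until the TCP timeout expires.
        on_lost('database connection lost')
        return None
```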
I am using Redis database where we store the navigational information. These data must be persistent and should be fetched faster. I don't have more than 200 MB data in this data set.
I face problems when writing admin modules for the Redis DB, and I really miss the SQL schema and the power of Django-style admin modules.
Now I am thinking of using MySQL. The requirement is, I want the persistent database but the data can be loaded into the memory like redis so that I can do the SQL queries REALLY faster.
Is it possible to use MySQL in persistent mode and instruct MySQL to use the memory for querying purpose? What is the best suitable MySQL DB where I do not worry much on consistencies where our writes are very few. | 1 | 1 | 1.2 | 0 | true | 4,061,902 | 1 | 1,165 | 2 | 0 | 0 | 4,061,828 | I would create a read only slave to your mysql database and force its database engines to memory. You'd have to handle failures by re-initializing the read only database, but that can be scripted rather easily.
This way you still have your persistence in the regular mysql database and your read speed in the read only memory tables. | 1 | 0 | 0 | fit mysql db in memory | 3 | python,mysql,sqlalchemy,performance | 0 | 2010-10-31T04:18:00.000 |
I am using Redis database where we store the navigational information. These data must be persistent and should be fetched faster. I don't have more than 200 MB data in this data set.
I face problems when writing admin modules for the Redis DB, and I really miss the SQL schema and the power of Django-style admin modules.
Now I am thinking of using MySQL. The requirement is, I want the persistent database but the data can be loaded into the memory like redis so that I can do the SQL queries REALLY faster.
Is it possible to use MySQL in persistent mode and instruct MySQL to use the memory for querying purpose? What is the best suitable MySQL DB where I do not worry much on consistencies where our writes are very few. | 1 | 0 | 0 | 0 | false | 4,061,848 | 1 | 1,165 | 2 | 0 | 0 | 4,061,828 | I would think you could have a persistent table, copy all of the data into a MEMORY engine table whenever the server starts, and have triggers on the memory table for INSERT, UPDATE and DELETE that write to the persistent table, so it is hidden from the user. Correct me if I'm wrong though, it's just the approach I would first try. | 1 | 0 | 0 | fit mysql db in memory | 3 | python,mysql,sqlalchemy,performance | 0 | 2010-10-31T04:18:00.000
I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.
When I open the csv file with Ms Excel, a blank row appears below every row, but the file on notepad is fine(without any blanks).
Does anyone know why this is happening and how I can fix it?
Edit: I used the strip() function for all the attributes before writing a row.
Thanks. | 15 | 34 | 1.2 | 1 | true | 4,122,980 | 0 | 7,209 | 2 | 0 | 0 | 4,122,794 | You're using open('file.csv', 'w')--try open('file.csv', 'wb').
The Python csv module requires output files be opened in binary mode. | 1 | 0 | 0 | Csv blank rows problem with Excel | 2 | python,excel,csv | 0 | 2010-11-08T10:00:00.000 |
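A minimal Python 2 example of the fix from the accepted answer:

```python
import csv

with open('file.csv', 'wb') as f:      # 'wb', not 'w', avoids the extra \r on Windows
    writer = csv.writer(f)
    writer.writerow(['id', 'name'])
    writer.writerow([1, 'example'])
```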
I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.
When I open the csv file with Ms Excel, a blank row appears below every row, but the file on notepad is fine(without any blanks).
Does anyone know why this is happening and how I can fix it?
Edit: I used the strip() function for all the attributes before writing a row.
Thanks. | 15 | 0 | 0 | 1 | false | 4,122,816 | 0 | 7,209 | 2 | 0 | 0 | 4,122,794 | the first that comes into my mind (just an idea) is that you might have used "\r\n" as row delimiter (which is shown as one linebrak in notepad) but excel expects to get only "\n" or only "\r" and so it interprets this as two line-breaks. | 1 | 0 | 0 | Csv blank rows problem with Excel | 2 | python,excel,csv | 0 | 2010-11-08T10:00:00.000 |
I have a simple question. I'm doing some light crawling so new content arrives every few days. I've written a tokenizer and would like to use it for some text mining purposes. Specifically, I'm using Mallet's topic modeling tool and one of the pipes is to tokenize the text into tokens before further processing can be done. With the amount of text in my database, it takes a substantial amount of time tokenizing the text (I'm using regex here).
As such, is it a norm to store the tokenized text in the db so that tokenized data can be readily available and tokenizing can be skipped if I need them for other text mining purposes such as Topic modeling, POS tagging? What are the cons of this approach? | 2 | 1 | 0.099668 | 0 | false | 4,151,273 | 0 | 894 | 1 | 0 | 0 | 4,122,940 | I store tokenized text in a MySQL database. While I don't always like the overhead of communication with the database, I've found that there are lots of processing tasks that I can ask the database to do for me (like search the dependency parse tree for complex syntactic patterns). | 1 | 0 | 0 | Storing tokenized text in the db? | 2 | python,caching,postgresql,nlp,tokenize | 0 | 2010-11-08T10:17:00.000 |
I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact on my code reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about? | 4 | 5 | 1.2 | 0 | true | 4,136,841 | 0 | 903 | 3 | 0 | 0 | 4,136,800 | SQLite does not run in a separate process. So you don't actually have any extra overhead from IPC. But IPC overhead isn't that big, anyway, especially over e.g., UNIX sockets. If you need multiple writers (more than one process/thread writing to the database simultaneously), the locking overhead is probably worse, and MySQL or PostgreSQL would perform better, especially if running on the same machine. The basic SQL supported by all three of these databases is the same, so benchmarking isn't that painful.
You generally don't have to do the same type of debugging on SQL statements as you do on your own implementation. SQLite works, and is fairly well debugged already. It is very unlikely that you'll ever have to debug "OK, that row exists, why doesn't the database find it?" and track down a bug in index updating. Debugging SQL is completely different than procedural code, and really only ever happens for pretty complicated queries.
As for debugging your code, you can fairly easily centralize your SQL calls and add tracing to log the queries you are running, the results you get back, etc. The Python SQLite interface may already have this (not sure, I normally use Perl). It'll probably be easiest to just make your existing Table class a wrapper around SQLite.
I would strongly recommend not reinventing the wheel. SQLite will have far fewer bugs, and save you a bunch of time. (You may also want to look into Firefox's fairly recent switch to using SQLite to store history, etc., I think they got some pretty significant speedups from doing so.)
Also, SQLite's well-optimized C implementation is probably quite a bit faster than any pure Python implementation. | 1 | 0 | 0 | Pros and cons of using sqlite3 vs custom table implementation | 3 | python,performance,sqlite | 0 | 2010-11-09T17:49:00.000 |
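A bare-bones sketch of wrapping the existing Table interface around sqlite3, as the answer suggests; the method names mirror the question's class and are otherwise made up.

```python
import sqlite3

class Table(object):
    def __init__(self, name, columns):
        self.name = name
        self.conn = sqlite3.connect(':memory:')
        self.conn.execute('CREATE TABLE %s (%s)' % (name, ', '.join(columns)))

    def insert(self, values):
        marks = ', '.join('?' * len(values))
        self.conn.execute('INSERT INTO %s VALUES (%s)' % (self.name, marks), values)

    def query(self, where, params=()):
        return self.conn.execute(
            'SELECT * FROM %s WHERE %s' % (self.name, where), params).fetchall()

t = Table('people', ['id INTEGER PRIMARY KEY', 'name TEXT'])
t.insert((1, 'Ada'))
print(t.query('name = ?', ('Ada',)))
```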
I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact on my code reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about? | 4 | 4 | 0.26052 | 0 | false | 4,136,876 | 0 | 903 | 3 | 0 | 0 | 4,136,800 | You could try to make a sqlite wrapper with the same interface as your class Table, so that you keep your code clean and you get the sqlite performance. | 1 | 0 | 0 | Pros and cons of using sqlite3 vs custom table implementation | 3 | python,performance,sqlite | 0 | 2010-11-09T17:49:00.000
I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact on my code reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about? | 4 | 0 | 0 | 0 | false | 4,136,862 | 0 | 903 | 3 | 0 | 0 | 4,136,800 | If you're doing database work, use a database; if you're not, then don't. Since you're using tables, it sounds like you are. I'd recommend using an ORM to make it more Pythonic. SQLAlchemy is the most flexible (though it's not strictly just an ORM). | 1 | 0 | 0 | Pros and cons of using sqlite3 vs custom table implementation | 3 | python,performance,sqlite | 0 | 2010-11-09T17:49:00.000
The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually.
The Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states:
sh: /etc/mysql/my.cnf: Permission denied
along with a Traceback that says setup.py couldn't find the file.
Note that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is /etc/mysql/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error.
Any suggestions? | 3 | 0 | 0 | 0 | false | 4,139,191 | 0 | 429 | 2 | 1 | 0 | 4,138,504 | Are you sure that file isn't hardcoded in some other portion of the build process? Why not just add it to you $PATH for the duration of the build?
Does the script need to write that file for some reason? Does the build script use su or sudo to attempt to become some other user? Are you absolutely sure about both the permissions and the fact that you ran the script as root?
It's a really weird thing if you still can't get to it. Are you using a chroot or a virtualenv? | 1 | 0 | 0 | Trouble installing MySQLdb for second version of Python | 2 | python,mysql,permissions,configuration-files | 0 | 2010-11-09T20:52:00.000 |
The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually.
The Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states:
sh: /etc/mysql/my.cnf: Permission denied
along with a Traceback that says setup.py couldn't find the file.
Note that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is /etc/mysql/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error.
Any suggestions? | 3 | 2 | 0.197375 | 0 | false | 4,139,563 | 0 | 429 | 2 | 1 | 0 | 4,138,504 | As far as I'm aware, there is a very significant difference between "mysql_config" and "my.cnf".
"mysql_config" is usually located in the "bin" folder of your MySQL install and when executed, spits out various filesystem location information about your install.
"my.cnf" is a configuration script used by MySQL itself.
In short, when the script asks for "mysql_config", it should be taken to literally mean the executable file with a name of "mysql_config" and not the textual configuration file you're feeding it. MYSQLdb needs the "mysql_config" file so that it knows which libraries to use. That's it. It does not read your MySQL configuration directly.
The errors you are experiencing can be put down to:
It's trying to open the wrong file and running into permission trouble.
Even after it has tried to open that file, it still can't find the "mysql_config" file.
From here, you need to locate your MySQL installation's "bin" folder and check it contains "mysql_config". Then you can edit the folder path into the "site.cfg" file and you should be good to go. | 1 | 0 | 0 | Trouble installing MySQLdb for second version of Python | 2 | python,mysql,permissions,configuration-files | 0 | 2010-11-09T20:52:00.000
I'd like to develop a small/medium-size cross-platform application (including GUI).
My background: mostly web applications with MVC architectures, both Python (Pylons + SqlAlchemy) and Java (know the language well, but don't like it that much). I also know some C#. So far, I have no GUI programming experience (neither Windows Forms, Swing nor QT).
I plan to use SQLite for data storage: It seems to be a nice cross-platform solution and has some powerful features (e.g. full text search, which SQL Server Compact lacks).
I have done some research and these are my favorite options:
1) QT, Python (PyQT or PySide), and SQLAlchemy
pros:
Python the language
open source is strong in the Python world (lots of libraries and users)
SQLAlchemy: A fantastic way to interact with a DB and incredibly well documented!
cons:
compilation, distribution and deployment more difficult?
no QT experience
QT Designer not as nice as the Visual Studio Winforms designer
2) .NET/Mono, Windows Forms, C#, (Fluent) NHibernate, System.Data.SQLite
pros:
C# (I like it, especially compared to Java and would like to get more experience in it)
The Winforms GUI designer in Visual Studio seems really slick
IntelliSense
ClickOnce Deployment(?)
Windows Forms look and feel good on Windows
cons:
(Fluent) NHibernate far less documented than SQLAlchemy; also annoying: Fluent docs refer to NHibernate docs which refer to Hibernate (aargh!). But plain NHibernate + XML does not look very comfortable.
Windows Forms will not look + behave native on Linux/Mac OS (correct?)
fewer open source libraries in the .NET world, fewer OSS users, less documentation in general
no WinForms and NHibernate experience
3) JVM, Java + Jython, Swing, SQLAlchemy
(I'm emotionally biased against this one, but listed for completeness sake)
pros:
JVM/Swing work well as cross-platform basis
Jython
SQLAlchemy
lots of open source libraries
cons:
Swing seems ugly and difficult to layout
lacks a good GUI designer
Guessing that I won't be able to avoid Java for UI stuff
Not sure how stable the Jython/Java integration is
(Options that I have ruled out... just to avoid discussion on these):
- wxWidgets/wxPython (now that QT is LGPLed)
- GTK/PyGTK
The look and feel of the final application is very important to me. The above technology stacks are very different (PyQT, .NET WinForms, JVM Swing) and require some time to get proficient, so:
Which alternative would you recommend and why? | 11 | 5 | 0.761594 | 0 | false | 4,145,581 | 0 | 3,111 | 1 | 0 | 0 | 4,145,350 | I'm a Python guy and use PyQt myself, and I can wholly recommend it. Concerning your cons:
compilation, distribution and deployment more difficult?
No, not really. For many projects, a full setup.py for e.g. cx_Freeze can be less than 30 lines that rarely need to change (most import dependencies are detected automatically, only need to specify the few modules that are not recognized), and then python setup.py build will produce a standalone executable. Then you can distribute it just like e.g. a C++ .exe.
no QT experience
I didn't have notable GUI experience either when I started out with Qt (only a bit of fiddling with Tkinter), but I grew to love Qt. Most of the time, all widgets work seamlessly and do what they're supposed to do - and there's a lot of them for many purposes. You name it, there's probably a widget that does it, and doesn't annoy the user by being half-assed. All the nice things we've been spoiled with are there.
Qt is huge, but the PyQt documentation answer most question with reasonable search effort. And if all else fails and you know a bit of C++, you can also look at Qt resources.
QT Designer not as nice as the Visual Studio Winforms designer
I don't know the VS Winforms designer, but I must admit that the Qt Designer is lacking. I ended up making a sketch of the UI in the designer, generating the code, cleaning that up and taking care all remaining details by hand. It works out okay so far, but my projects are rather small.
PS:
(now that QT is LGPLed)
PyQt is still GPL only. PySide is LGPL, yes, but it's not that mature, if that's a concern. The project website states that "starting development on PySide should be pretty safe now" though. | 1 | 1 | 0 | Python + QT, Windows Forms or Swing for a cross-platform application? | 1 | c#,java,python,user-interface,cross-platform | 0 | 2010-11-10T14:11:00.000 |
I'm programming a web application using sqlalchemy. Everything was smooth during the first phase of development when the site was not in production. I could easily change the database schema by simply deleting the old sqlite database and creating a new one from scratch.
Now the site is in production and I need to preserve the data, but I still want to keep my original development speed by easily converting the database to the new schema.
So let's say that I have model.py at revision 50 and model.py a revision 75, describing the schema of the database. Between those two schema most changes are trivial, for example a new column is declared with a default value and I just want to add this default value to old records.
Eventually a few changes may not be trivial and require some pre-computation.
How do (or would) you handle fast changing web applications with, say, one or two new version of the production code per day ?
By the way, the site is written in Pylons if this makes any difference. | 63 | 16 | 1 | 0 | false | 4,165,496 | 1 | 32,073 | 1 | 0 | 0 | 4,165,452 | What we do.
Use "major version"."minor version" identification of your applications. Major version is the schema version number. The major number is no some random "enough new functionality" kind of thing. It's a formal declaration of compatibility with database schema.
Release 2.3 and 2.4 both use schema version 2.
Release 3.1 uses the version 3 schema.
Make the schema version very, very visible. For SQLite, this means keep the schema version number in the database file name. For MySQL, use the database name.
Write migration scripts. 2to3.py, 3to4.py. These scripts work in two phases. (1) Query the old data into the new structure creating simple CSV or JSON files. (2) Load the new structure from the simple CSV or JSON files with no further processing. These extract files -- because they're in the proper structure, are fast to load and can easily be used as unit test fixtures. Also, you never have two databases open at the same time. This makes the scripts slightly simpler. Finally, the load files can be used to move the data to another database server.
It's very, very hard to "automate" schema migration. It's easy (and common) to have database surgery so profound that an automated script can't easily map data from old schema to new schema. | 1 | 0 | 0 | How to efficiently manage frequent schema changes using sqlalchemy? | 4 | python,sqlalchemy,pylons,data-migration,migrate | 0 | 2010-11-12T14:08:00.000 |
Can anyone help me install Apache with mod_wsgi to run Python for implementing RESTful web services? We're trying to replace our existing Java REST services running on Apache Tomcat.
The installation platform is SUSE Linux Enterprise. Please provide a step-by-step installation procedure with the required modules, as I tried it and every time I was missing one module or another, either in the Python installation or the Apache installation.
I followed the standard installation steps for all three (Apache, Python and mod_wsgi), but it didn't work out for me.
Would this work at all? Do you have any other suggestions? | 0 | 0 | 0 | 0 | false | 4,168,054 | 1 | 2,568 | 1 | 0 | 0 | 4,167,684 | Check if mod_wsgi is loaded as a module into the httpd.conf
Add an Apache host entry that points to a Python/WSGI module which contains the 'def application' definition for your web service.
Resolve any path issues that may arise from your import handling.
If this doesn't work, drop some error-dump here and we'll check. | 1 | 0 | 0 | Install Apache with mod_wsgi to use Python for RESTful web services and Apache for web pages | 2 | python,apache,rest,mod-wsgi,mod-python | 0 | 2010-11-12T18:07:00.000 |
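For reference, the 'def application' the answer refers to is just the standard WSGI entry point mod_wsgi looks for; a minimal sketch:

```python
def application(environ, start_response):
    body = b'{"status": "ok"}'
    start_response('200 OK', [('Content-Type', 'application/json'),
                              ('Content-Length', str(len(body)))])
    return [body]
```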
I want to create an application on Windows, and I need to use a database. Which would be the best choice for a PyQt application,
like
sqlalchemy
mysql
etc. | 0 | 0 | 0 | 0 | false | 4,208,750 | 0 | 535 | 3 | 0 | 0 | 4,168,020 | I guess it's totally up to you, but as far as I am concerned I personally use SQLite, because it is easy to use and has an amazingly simple syntax, whereas MySQL can be used for complex apps and has options for performance tuning. In the end it's totally up to you and what your app requires. | 1 | 1 | 0 | which databases can be used better for pyqt application | 4 | python,database,pyqt | 0 | 2010-11-12T18:49:00.000
I want to create an application on Windows, and I need to use a database. Which would be the best choice for a PyQt application,
like
sqlalchemy
mysql
etc. | 0 | 1 | 0.049958 | 0 | false | 4,294,636 | 0 | 535 | 3 | 0 | 0 | 4,168,020 | SQLite is fine for a single user.
If you are going over a network to talk to a central database, then you need a database with a decent Python library.
Take a serious look at MySQL if you need/want SQL.
Otherwise, there is CouchDB in the NoSQL camp, which is great if you are storing documents and can express searches as map/reduce functions. Poor for ad hoc queries.
I want to create an application on Windows, and I need to use a database. Which would be the best choice for a PyQt application,
like
sqlalchemy
mysql
etc. | 0 | 1 | 0.049958 | 0 | false | 4,512,428 | 0 | 535 | 3 | 0 | 0 | 4,168,020 | If you want a relational database I'd recommend you use SQLAlchemy, as you then get a choice of backends as well as an ORM. By default go with SQLite, as per the other recommendations here.
If you don't need a relational database, take a look at ZODB. It's an awesome Python-only object-oriented database. | 1 | 1 | 0 | which databases can be used better for pyqt application | 4 | python,database,pyqt | 0 | 2010-11-12T18:49:00.000 |
I have my own unit testing suite based on the unittest library. I would like to track the history of each test case being run. I would also like to identify after each run tests which flipped from PASS to FAIL or vice versa.
I have very little knowledge about databases, but it seems that I could utilize sqlite3 for this task.
Are there any existing solutions which integrate unittest and a database? | 1 | 0 | 1.2 | 0 | true | 4,170,458 | 0 | 280 | 1 | 0 | 0 | 4,170,442 | Technically, yes. The only thing that you need is some kind of scripting language or shell script that can talk to sqlite.
You should think of a database like a file in a file system where you don't have to care about the file format. You just say, here are tables of data, with columns. And each row of that is one record. Much like in an Excel table.
So if you are familiar with shell scripts or calling command-line tools, you can install SQLite and use the sqlite3 command to interact with the database.
Although I think the first thing you should do is to learn basic SQL. There are a lot of SQL tutorials out there. | 1 | 0 | 0 | Using sqlite3 to track unit test results | 1 | python,unit-testing,sqlite | 1 | 2010-11-13T01:33:00.000 |
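One possible shape for the results table and the recording step; the schema and helper name are assumptions, and this sketch only records failures, so a PASS is simply the absence of a FAIL row for that run.

```python
import sqlite3

conn = sqlite3.connect('results.db')
conn.execute('CREATE TABLE IF NOT EXISTS results '
             '(run_ts TEXT, test_id TEXT, outcome TEXT)')

def record_failures(run_ts, result):
    # result is the unittest.TestResult returned by a TestRunner.
    for test, _traceback in result.failures + result.errors:
        conn.execute('INSERT INTO results VALUES (?, ?, ?)',
                     (run_ts, test.id(), 'FAIL'))
    conn.commit()
```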
I'm trying to use a MongoDB database from a Google App Engine service; is that possible? How do I install the PyMongo driver on Google App Engine? Thanks | 4 | 1 | 0.066568 | 0 | false | 4,179,091 | 1 | 1,355 | 1 | 1 | 0 | 4,178,742 | It's not possible because you don't have access to network sockets in App Engine. As long as you cannot access the database via HTTP, it's impossible. | 1 | 0 | 0 | is it possible to use PyMongo in Google App Engine? | 3 | python,google-app-engine,mongodb,pymongo | 0 | 2010-11-14T17:42:00.000
We're rewriting a website used by one of our clients. The user traffic on it is very low, less than 100 unique visitors a week. It's basically just a nice interface to their data in our databases. It allows them to query and filter on different sets of data of theirs.
We're rewriting the site in Python, re-using the same Oracle database that the data is currently on. The current version is written in an old, old version of Coldfusion. One of the things that Coldfusion does well, though, is display tons of database records on a single page. It's capable of displaying hundreds of thousands of rows at once without crashing the browser. It uses a Java applet, and it looks like the contents of the rows are perhaps compressed and passed in through the HTML or something. There is a large block of data in the HTML but it's not displayed - it's just rendered by the Java applet.
I've tried several JavaScript solutions but they all hinge on the fact that the data will be present in an HTML table or something along those lines. This causes browsers to freeze and run out of memory.
Does anyone know of any solutions to this situation? Our client loves the ability to scroll through all of this data without clicking a "next page" link. | 7 | 1 | 0.033321 | 0 | false | 4,186,505 | 1 | 3,442 | 1 | 0 | 0 | 4,186,384 | Most people, in this case, would use a framework. The best documented and most popular framework in Python is Django. It has good database support (including Oracle), and you'll have the easiest time getting help using it since there's such an active Django community.
You can try some other frameworks, but if you're tied to Python I'd recommend Django.
Of course, Jython (if it's an option) would make your job very easy. You could take the existing Java framework you have and just use Jython to build a frontend (and continue to use your Java applet and Java classes and Java server).
The memory problem is an interesting one; I'd be curious to see what you come up with. | 1 | 0 | 0 | How to display database query results of 100,000 rows or more with HTML? | 6 | python,html,oracle,coldfusion | 0 | 2010-11-15T16:18:00.000 |
I am implementing a class that resembles a typical database table:
has named columns and unnamed rows
has a primary key by which I can refer to the rows
supports retrieval and assignment by primary key and column title
can be asked to add a unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column
removal of a row is fast and is implemented as "soft-delete": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations
addition of a column is fast
rows are rarely added
columns are rarely deleted
I decided to implement the class directly rather than use a wrapper around sqlite.
What would be a good data structure to use?
Just as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways:
As lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list.
As dictionaries. Column titles are the keys of this dictionary.
Not sure about the pros/cons of the two.
The reasons I want to write my own code are:
I need to track row deletions. That is, at any time I want to be able to report which rows were deleted and for what "reason" (the "reason" is passed to my delete method).
I need some reporting during indexing (e.g., while a non-unique index is being built, I want to check certain conditions and report if they are violated) | 6 | 2 | 0.132549 | 0 | false | 4,188,260 | 0 | 1,361 | 2 | 0 | 0 | 4,188,202 | I would consider building a dictionary with keys that are tuples (lists won't work as keys, since they aren't hashable). E.g., my_dict[("col_2", "row_24")] would get you that element. Starting from there, it would be pretty easy (if not extremely fast for very large databases) to write 'get_col' and 'get_row' methods, as well as 'get_row_slice' and 'get_col_slice' built from the two preceding ones, to give access to your data.
Using a whole dictionary like that has two advantages: 1) getting a single element will be faster than with your two proposed methods; 2) if you want a different number of elements (or missing elements) in your columns, this makes it extremely easy and memory efficient.
Just a thought :) I'll be curious to see what packages people will suggest!
Cheers | 1 | 0 | 0 | How to implement database-style table in Python | 3 | python,performance,data-structures,implementation | 0 | 2010-11-15T19:48:00.000 |
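A rough sketch of the tuple-keyed dictionary described above; the class name and everything beyond the get_row/get_col helpers the answer mentions are invented for illustration:

class Table:
    """Cells stored in a single dict keyed by (row_key, column_name)."""

    def __init__(self):
        self.cells = {}

    def set(self, row, col, value):
        self.cells[(row, col)] = value

    def get(self, row, col):
        return self.cells[(row, col)]

    def get_row(self, row):
        # All columns present for one row; missing cells simply don't appear.
        return {c: v for (r, c), v in self.cells.items() if r == row}

    def get_col(self, col):
        return {r: v for (r, c), v in self.cells.items() if c == col}

t = Table()
t.set("row_24", "col_2", 42)
print(t.get("row_24", "col_2"))   # 42
print(t.get_row("row_24"))        # {'col_2': 42}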
I am implementing a class that resembles a typical database table:
has named columns and unnamed rows
has a primary key by which I can refer to the rows
supports retrieval and assignment by primary key and column title
can be asked to add a unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column
removal of a row is fast and is implemented as "soft-delete": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations
addition of a column is fast
rows are rarely added
columns are rarely deleted
I decided to implement the class directly rather than use a wrapper around sqlite.
What would be a good data structure to use?
Just as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways:
As lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list.
As dictionaries. Column titles are the keys of this dictionary.
Not sure about the pros/cons of the two.
The reasons I want to write my own code are:
I need to track row deletions. That is, at any time I want to be able to report which rows were deleted and for what "reason" (the "reason" is passed to my delete method).
I need some reporting during indexing (e.g., while a non-unique index is being built, I want to check certain conditions and report if they are violated) | 6 | 0 | 0 | 0 | false | 4,231,416 | 0 | 1,361 | 2 | 0 | 0 | 4,188,202 | You really should use SQLite.
For your first reason (tracking deletion reasons), you can easily implement this by having a second table that you "move" rows to on deletion. The reason can be tracked in an additional column in that table, or in another table you can join. If a deletion reason isn't always required, then you can even use triggers on your source table to copy rows about to be deleted, and/or have a user-defined function that can get the reason.
The indexing reason is somewhat covered by constraints etc but I can't directly address it without more details. | 1 | 0 | 0 | How to implement database-style table in Python | 3 | python,performance,data-structures,implementation | 0 | 2010-11-15T19:48:00.000 |
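A small illustration of the "second table plus reason" idea, using sqlite3 directly; the schema, table names, and reason text are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE deleted_items (id INTEGER, name TEXT, reason TEXT, deleted_at TEXT);
""")

def soft_delete(conn, item_id, reason):
    # Copy the row (plus the caller-supplied reason) before removing it.
    conn.execute(
        "INSERT INTO deleted_items SELECT id, name, ?, datetime('now') FROM items WHERE id = ?",
        (reason, item_id))
    conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
    conn.commit()

conn.execute("INSERT INTO items VALUES (1, 'widget')")
soft_delete(conn, 1, "duplicate entry")
print(conn.execute("SELECT id, reason FROM deleted_items").fetchall())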
I am using the mysql connector (https://launchpad.net/myconnpy) with SQLAlchemy and, though the table is definitely UTF8, any string columns returned are just normal strings not unicode. The documentation doesn't list any specific parameters for UTF8/unicode support for the mysql connector driver so I borrowed from the mysqldb driver. Here is my connect string:
mysql+mysqlconnector://user:[email protected]/mydbname?charset=utf8&use_unicode=0
I'd really prefer to keep using this all-Python MySQL driver. Any suggestions? | 1 | -3 | -0.291313 | 0 | false | 4,192,633 | 0 | 1,239 | 1 | 0 | 0 | 4,191,370 | Sorry, I don't know about that connector; I use MySQLdb and it works quite nicely. I work in UTF8 as well and I haven't had any problems.
What is the difference between flush() and commit() in SQLAlchemy?
I've read the docs, but am none the wiser - they seem to assume a pre-understanding that I don't have.
I'm particularly interested in their impact on memory usage. I'm loading some data into a database from a series of files (around 5 million rows in total) and my session is occasionally falling over - it's a large database and a machine with not much memory.
I'm wondering if I'm using too many commit() and not enough flush() calls - but without really understanding what the difference is, it's hard to tell! | 569 | 0 | 0 | 0 | false | 65,843,088 | 0 | 180,674 | 1 | 0 | 0 | 4,201,455 | commit() records the pending changes in the database. flush() is always called as part of commit(). When you use a Session object to query the database, the query returns results both from the database and from the flushed, not-yet-committed parts of the transaction it holds.
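A rough illustration of the difference, assuming a recent SQLAlchemy; the User model is invented for the example:

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

u = User(name="alice")
session.add(u)

session.flush()    # SQL is emitted inside the open transaction; u.id is now populated,
print(u.id)        # but nothing is permanent yet - a rollback would still undo it.

session.commit()   # the transaction is committed; the row is now durable.

For a large load like the one described, committing in batches of a few thousand rows is a common way to keep the session's memory use in check.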
I'm using sqlite with python. I'm implementing the POP3 protocol. I have a table
msg_id text
date text
from_sender text
subject text
body text
hashkey text
Now I need to check for duplicate messages by checking the message id of the retrieved message against the existing msg_ids in the table. I hashed the msg_id with md5 and put it in the hashkey column. Whenever I retrieve mail, I hash the message id and check it against the table values. Here's what I do:
def check_duplicate(new):
    conn = sql.connect("mail")
    c = conn.cursor()
    m = hashlib.md5()
    m.update(new)
    c.execute("select hashkey from mail")
    for row in c:
        if m.hexdigest() == row:
            return 0
        else:
            continue
    return 1
It just refuses to work correctly. I tried printing the row value; it shows up as unicode, and that's where I think the problem lies, as the comparison never matches.
Is there a better way to do this, or to improve my method? | 0 | 0 | 0 | 0 | false | 4,208,359 | 0 | 481 | 1 | 0 | 0 | 4,208,146 | The main issue is that you're trying to compare a Python string (m.hexdigest()) with a tuple.
Additionally, another poster's suggestion that you use SQL for the comparison is probably good advice. Another SQL suggestion would be to fix your columns -- TEXT for everything probably isn't what you want; an index on your hashkey column is very likely a good thing. | 1 | 0 | 0 | Comparing sql values | 3 | python,sql,sqlite | 0 | 2010-11-17T19:08:00.000 |
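A hedged sketch of pushing the comparison into SQL, reusing the question's "mail" table and hashkey column (the connection call mirrors the question's sqlite usage):

import hashlib
import sqlite3

def check_duplicate(msg_id):
    conn = sqlite3.connect("mail")
    digest = hashlib.md5(msg_id.encode("utf-8")).hexdigest()
    # Let the database do the lookup instead of scanning every row in Python.
    row = conn.execute(
        "SELECT 1 FROM mail WHERE hashkey = ? LIMIT 1", (digest,)).fetchone()
    conn.close()
    return 0 if row is not None else 1   # keep the question's 0 = duplicate convention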
Which is more expensive to do in terms of resources and efficiency, File read/write operation or Database Read/Write operation?
I'm using MongoDB, with Python. I'll be performing about 100k requests on the db/file per minute. Also, there are about 15,000 documents in the database / file.
Which would be faster? Thanks in advance. | 4 | 6 | 1 | 0 | false | 4,210,090 | 0 | 3,529 | 5 | 0 | 0 | 4,210,057 | It depends. If you need to read sequential data, a file might be faster; if you need to read random data, a database has a better chance of being optimized for your needs.
(after all, a database reads its records from a file as well, but it has internal structures and algorithms to enhance performance; it can use memory in a smarter way and do a lot in the background, so the results come back faster)
For an intensive random-read workload, I would go with the database option. | 1 | 0 | 1 | Is a file read faster than reading data from the database? | 5 | python,performance,mongodb | 0 | 2010-11-17T22:58:00.000
Which is more expensive to do in terms of resources and efficiency, File read/write operation or Database Read/Write operation?
I'm using MongoDB, with Python. I'll be performing about 100k requests on the db/file per minute. Also, there are about 15,000 documents in the database / file.
Which would be faster? Thanks in advance. | 4 | 1 | 0.039979 | 0 | false | 49,248,435 | 0 | 3,529 | 5 | 0 | 0 | 4,210,057 | Reading from a database can be more efficient, because you can access records directly and make use of indexes etc. With normal flat files you basically have to read them sequentially. (Mainframes support direct-access files, but these are sort of halfway between flat files and databases.)
If you are in a multi-user environment, you must make sure that your data remain consistent even if multiple users try updates at the same time. With flat files, you have to lock the file for all but one user until she has finished her update, and then lock it for the next. Databases can lock at the row level.
You can make a file based system as efficient as a database, but that effort amounts to writing a database system yourself. | 1 | 0 | 1 | Is a file read faster than reading data from the database? | 5 | python,performance,mongodb | 0 | 2010-11-17T22:58:00.000 |
Which is more expensive to do in terms of resources and efficiency, File read/write operation or Database Read/Write operation?
I'm using MongoDB, with Python. I'll be performing about 100k requests on the db/file per minute. Also, there are about 15,000 documents in the database / file.
Which would be faster? Thanks in advance. | 4 | 3 | 0.119427 | 0 | false | 4,210,106 | 0 | 3,529 | 5 | 0 | 0 | 4,210,057 | There are too many factors to offer a concrete answer, but here's a list for you to consider:
Disk bandwidth
Disk latency
Disk cache
Network bandwidth
MongoDB cluster size
Volume of MongoDB client activity (the disk only has one "client" unless your machine is busy with other workloads) | 1 | 0 | 1 | Is a file read faster than reading data from the database? | 5 | python,performance,mongodb | 0 | 2010-11-17T22:58:00.000 |
Which is more expensive to do in terms of resources and efficiency, File read/write operation or Database Read/Write operation?
I'm using MongoDB, with Python. I'll be performing about 100k requests on the db/file per minute. Also, there are about 15,000 documents in the database / file.
Which would be faster? Thanks in advance. | 4 | 0 | 0 | 0 | false | 4,210,113 | 0 | 3,529 | 5 | 0 | 0 | 4,210,057 | If caching is not used, sequential IO operations are faster with files by definition. Databases eventually use files too, but they have more layers to pass through before the data hits the file. However, if you want to query the data, a database is more efficient, because with plain files you would have to implement the querying yourself. For your task I recommend researching clustering options for different databases; they can scale to your request rate.
Which is more expensive to do in terms of resources and efficiency, File read/write operation or Database Read/Write operation?
I'm using MongoDB, with Python. I'll be performing about 100k requests on the db/file per minute. Also, there are about 15,000 documents in the database / file.
Which would be faster? Thanks in advance. | 4 | 4 | 0.158649 | 0 | false | 4,210,368 | 0 | 3,529 | 5 | 0 | 0 | 4,210,057 | Try it and tell us the answer. | 1 | 0 | 1 | Is a file read faster than reading data from the database? | 5 | python,performance,mongodb | 0 | 2010-11-17T22:58:00.000
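In that spirit, a rough timing sketch, assuming a local mongod, a recent PyMongo, and an existing collection and file whose names are invented here:

import time
import pymongo

def time_file(path, n=1000):
    # Repeatedly read the whole file; crude, but mirrors a naive file cache.
    start = time.time()
    for _ in range(n):
        with open(path, "rb") as f:
            f.read()
    return time.time() - start

def time_mongo(n=1000):
    coll = pymongo.MongoClient().test.docs   # hypothetical db/collection names
    start = time.time()
    for _ in range(n):
        coll.find_one({"some_key": "some_value"})
    return time.time() - start

print("file :", time_file("data.txt"))
print("mongo:", time_mongo())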
I have a noob question.
I have a record in a table that looks like '\1abc'
I then use this string as a regex replacement in re.sub("([0-9])",thereplacement,"2")
I'm a little confused with the backslashes. The string I got back was "\\1abc" | 0 | 2 | 0.197375 | 0 | false | 4,226,375 | 0 | 191 | 1 | 0 | 0 | 4,224,400 | Note that you can make \ stop being an escape character by setting standard_conforming_strings to on.
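For the Python side of the confusion, a tiny illustration: the doubled backslash is only how repr() displays the value; the stored string really contains a single backslash followed by "1abc".

s = "\\1abc"         # one literal backslash followed by 1abc
print(repr(s))       # '\\1abc'  <- repr escapes the backslash
print(s)             # \1abc     <- the actual five characters
print(len(s))        # 5

import re
# Used as a replacement string, \1 is a group backreference, so the matched digit is kept:
print(re.sub("([0-9])", s, "2"))   # 2abc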
Short Question:
Is there any NoSQL flat-file database available, like SQLite?
Explanation:
A flat-file database can be opened by multiple processes for reading while keeping a single process for writing. I think it's perfect as a read cache if no strict consistency is needed: say, writes go to the file (or even a memory block) every 1-2 seconds and the readers get the updated data after that.
So I almost chose to use SQLite as my Python server's read cache. But there's still one problem: I don't want to rewrite my SQL in another place and build another copy of my data tables in SQLite, duplicating what I already have in PostgreSQL, which is used as the back-end database.
So is there any other choice? Thanks! | 49 | 0 | 0 | 0 | false | 15,588,028 | 0 | 28,380 | 1 | 0 | 0 | 4,245,438 | Something trivial but workable: if you are looking for a storage-backed key-value data structure, use a pickled dictionary. Use cPickle for better performance if needed.
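A minimal sketch of the pickled-dictionary idea; the file name and keys are illustrative, and this is only safe with a single writer process:

import pickle

CACHE_FILE = "cache.pkl"

def save(cache):
    with open(CACHE_FILE, "wb") as f:
        pickle.dump(cache, f, pickle.HIGHEST_PROTOCOL)

def load():
    try:
        with open(CACHE_FILE, "rb") as f:
            return pickle.load(f)
    except IOError:            # no cache written yet
        return {}

cache = load()
cache["user:42"] = {"name": "alice"}
save(cache)

On Python 2, "import cPickle as pickle" gives the faster C implementation; Python 3's pickle module uses its C accelerator automatically.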
Is there a Python module that writes Excel 2007+ files?
I'm interested in writing a file longer than 65,535 rows, and only Excel 2007+ supports that. | 14 | 1 | 0.024995 | 0 | false | 4,258,896 | 0 | 21,620 | 1 | 0 | 0 | 4,257,771 | If you are on Windows and have Excel 2007+ installed, you should be able to use pywin32 and COM to write XLSX files using almost the same code as you would to write XLS files ... just change the "save as ...." part at the end.
You can probably also write XLSX files using Excel 2003 with the freely downloadable add-on kit, but the number of rows per sheet would still be limited to 64K.
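A hedged sketch of the pywin32/COM route described above; it assumes Excel 2007+ is installed, and the output path is illustrative. FileFormat=51 is Excel's xlOpenXMLWorkbook constant for .xlsx files:

import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False
wb = excel.Workbooks.Add()
ws = wb.Worksheets(1)

# Writing cell-by-cell over COM is slow; assigning a whole Range at once is
# much faster in practice, but this shows the idea for >65,535 rows.
for i in range(1, 100001):
    ws.Cells(i, 1).Value = i

wb.SaveAs(r"C:\temp\big.xlsx", FileFormat=51)   # 51 = xlOpenXMLWorkbook (.xlsx)
wb.Close()
excel.Quit()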
I've built a number of Python-driven sites that use MongoDB as a database backend and am very happy with its ObjectId system; however, I'd love to be able to encode the ids in a shorter fashion without building a mapping collection or using a URL-shortener service.
Suggestions? Success stories? | 14 | 0 | 0 | 0 | false | 8,654,689 | 0 | 4,165 | 2 | 0 | 0 | 4,261,129 | If you can generate auto-incrementing unique numbers, there's absolutely no need to use ObjectId for _id. Doing this in a distributed environment will most likely be more expensive than using ObjectId. That's your tradeoff. | 1 | 0 | 0 | How can one shorten mongo ids for better use in URLs? | 5 | python,mongodb | 0 | 2010-11-23T21:26:00.000 |
I've built a number of Python-driven sites that use MongoDB as a database backend and am very happy with its ObjectId system; however, I'd love to be able to encode the ids in a shorter fashion without building a mapping collection or using a URL-shortener service.
Suggestions? Success stories? | 14 | 1 | 0.039979 | 0 | false | 4,261,319 | 0 | 4,165 | 2 | 0 | 0 | 4,261,129 | If you are attempting to retain the original value, then there really is not a good way. You could encode it, but the likelihood of it ending up smaller is minimal. You could hash it, but then it's not reversible.
If this is a REQUIREMENT, I'd probably recommend creating a lookup table or collection where a small incremental number references entries in a Mongo Collection. | 1 | 0 | 0 | How can one shorten mongo ids for better use in URLs? | 5 | python,mongodb | 0 | 2010-11-23T21:26:00.000 |
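To illustrate the "encode it" option mentioned above: re-encoding the 24-character hex ObjectId in base62 is reversible and shortens it to roughly 16-17 characters, without a lookup table. The alphabet and helper names here are invented for the example:

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shorten(oid_hex):
    # Treat the 24 hex chars (96 bits) as one integer and re-encode in base62.
    n = int(oid_hex, 16)
    out = ""
    while n:
        n, r = divmod(n, 62)
        out = ALPHABET[r] + out
    return out or ALPHABET[0]

def expand(short):
    n = 0
    for ch in short:
        n = n * 62 + ALPHABET.index(ch)
    return format(n, "024x")   # pad back to the 24-char hex form

s = shorten("4ecc05e55dd98a436ddcc47c")
print(s, expand(s))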
When creating a virtual environment with --no-site-packages, do I need to install MySQL and the MySQLdb adapter (which is in my global site-packages) in order to use them in my virtual project environment?