Dataset columns (type, observed range):
Question: string, length 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, length 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: string, length 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, length 6 to 90
Other: int64, 0 to 1
CreationDate: string, length 23
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into that mix in the future, so I'm debating whether to store them now as integers or strings. Are there performance or other disadvantages to saving the values as strings?
22
3
0.059928
0
false
1,090,390
0
10,684
8
0
0
1,090,022
I've just spent the last year dealing with a database that has almost all IDs as strings, some with digits only, and others mixed. These are the problems:
Grossly restricted ID space. A 4-character (digit-only) ID has capacity for 10,000 unique values; a 4-byte numeric has capacity for over 4 billion.
Unpredictable ID space coverage. Once IDs start including non-digits, it becomes hard to predict where you can create new IDs without collisions.
Conversion and display problems in certain circumstances, when scripting or on export for instance. If the ID gets interpreted as a number and there is a leading zero, the ID gets altered.
Sorting problems. You can't rely on the natural order being helpful.
Of course, if you run out of IDs, or don't know how to create new IDs, your app is dead. I suggest that if you can't control the format of your incoming IDs then you need to create your own (numeric) IDs and relate the user-provided ID to that. You can then ensure that your own ID is reliable and unique (and numeric) but provide a user-viewable ID that can have whatever format your users want, and that doesn't even have to be unique across the whole app. This is more work, but if you'd been through what I have, you'd know which way to go. Anil G
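A minimal sketch of that suggestion, using sqlite3 and hypothetical table and column names: the app keys everything off its own numeric ID, while the provider-supplied ID is stored as an ordinary string column.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE product (
            id INTEGER PRIMARY KEY AUTOINCREMENT,  -- our own reliable numeric ID
            provider_id TEXT NOT NULL UNIQUE       -- whatever format the provider sends
        )
    """)
    conn.execute("INSERT INTO product (provider_id) VALUES (?)", ("0042-AB",))
    row = conn.execute(
        "SELECT id, provider_id FROM product WHERE provider_id = ?", ("0042-AB",)
    ).fetchone()
    print(row)  # (1, '0042-AB')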
1
0
1
Drawbacks of storing an integer as a string in a database
10
python,mysql,database,database-design
0
2009-07-07T01:58:00.000
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into that mix in the future, so I'm debating whether to store them now as integers or strings. Are there performance or other disadvantages to saving the values as strings?
22
37
1.2
0
true
1,090,065
0
10,684
8
0
0
1,090,022
Unless you really need the features of an integer (that is, the ability to do arithmetic), then it is probably better for you to store the product IDs as strings. You will never need to do anything like add two product IDs together, or compute the average of a group of product IDs, so there is no need for an actual numeric type. It is unlikely that storing product IDs as strings will cause a measurable difference in performance. While there will be a slight increase in storage size, the size of a product ID string is likely to be much smaller than the data in the rest of your database row anyway. Storing product IDs as strings today will save you much pain in the future if the data provider decides to start using alphabetic or symbol characters. There is no real downside.
1
0
1
Drawbacks of storing an integer as a string in a database
10
python,mysql,database,database-design
0
2009-07-07T01:58:00.000
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into that mix in the future, so I'm debating whether to store them now as integers or strings. Are there performance or other disadvantages to saving the values as strings?
22
3
0.059928
0
false
1,090,057
0
10,684
8
0
0
1,090,022
It really depends on what kind of ID you are talking about. If it's a code like a phone number, it would actually be better to use a varchar for the ID and then have your own ID be a serial for the DB, used as the primary key. In cases where the integer has no numerical value, varchars are generally preferred.
1
0
1
Drawbacks of storing an integer as a string in a database
10
python,mysql,database,database-design
0
2009-07-07T01:58:00.000
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into that mix in the future, so I'm debating whether to store them now as integers or strings. Are there performance or other disadvantages to saving the values as strings?
22
0
0
0
false
1,090,035
0
10,684
8
0
0
1,090,022
Integers are more efficient from a storage and performance perspective. However, if there is a remote chance that alpha characters may be introduced, then you should use a string. In my opinion, the efficiency and performance benefits are likely to be negligible, whereas the time it takes to modify your code may not be.
1
0
1
Drawbacks of storing an integer as a string in a database
10
python,mysql,database,database-design
0
2009-07-07T01:58:00.000
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into that mix in the future, so I'm debating whether to store them now as integers or strings. Are there performance or other disadvantages to saving the values as strings?
22
1
0.019997
0
false
1,090,132
0
10,684
8
0
0
1,090,022
The space an integer takes up is much less than that of a string. For example, 2^32-1 = 4,294,967,295. This would take 10 bytes to store as a string, whereas the integer would take 4 bytes. For a single entry this is not very much space, but when you start getting into the millions... As many other posts suggest, there are several other issues to consider, but this is one drawback of the string representation.
1
0
1
Drawbacks of storing an integer as a string in a database
10
python,mysql,database,database-design
0
2009-07-07T01:58:00.000
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into that mix in the future, so I'm debating whether to store them now as integers or strings. Are there performance or other disadvantages to saving the values as strings?
22
18
1
0
false
1,090,100
0
10,684
8
0
0
1,090,022
Do NOT consider performance. Consider meaning. ID "numbers" are not numeric except that they are written with an alphabet of all digits. If I have part number 12 and part number 14, what is the difference between the two? Is part number 2 or -2 meaningful? No. Part numbers (and anything that doesn't have units of measure) are not "numeric". They're just strings of digits. Zip codes in the US, for example. Phone numbers. Social security numbers. These are not numbers. In my town the difference between zip code 12345 and 12309 isn't the distance from my house to downtown. Do not conflate numbers -- with units -- where sums and differences mean something with strings of digits without sums or differences. Part ID numbers are -- properly -- strings. Not integers. They'll never be integers because they don't have sums, differences or averages.
1
0
1
Drawbacks of storing an integer as a string in a database
10
python,mysql,database,database-design
0
2009-07-07T01:58:00.000
I am building an application which will use multiple sqlite3 databases, prepopulated with data from an external application. Each database will have exactly the same tables, but with different data. I want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2?
1
1
0.066568
0
false
1,422,838
0
163
2
0
0
1,093,589
I am using two databases for a read-only application. The second database is a cache in case the primary database is down. I use two objects to hold the connection, metadata, and compatible Table instances. The top of the view function assigns db = primary or db = secondary, and the rest is just queries against db.tableA.join(db.tableB). I am not using the ORM. The schemata are not strictly identical. The primary database needs a schema prefix (Table(..., schema='schema')) and the cache database does not. To get around this, I create my table objects in a function that takes the schema name as an argument. By calling the function once for each database, I wind up with compatible prefixed and non-prefixed Table objects. At least in Pylons, the SQLAlchemy meta.Session is a ScopedSession. The application's BaseController in appname/lib/base.py calls Session.remove() after each request. It's probably better to have a single Session that talks to both databases, but if you don't, you may need to modify your BaseController to call .remove() on each Session.
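A rough sketch of the table-factory idea described above (hypothetical names; not the answer's actual code):

    from sqlalchemy import MetaData, Table, Column, Integer, String

    def build_tables(schema=None):
        # Build compatible Table objects, with or without a schema prefix.
        metadata = MetaData()
        table_a = Table(
            "table_a", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(50)),
            schema=schema,  # 'someschema' for the primary DB, None for the cache
        )
        return metadata, table_a

    primary_meta, primary_a = build_tables(schema="someschema")
    cache_meta, cache_a = build_tables(schema=None)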
1
0
0
Switching databases in TG2 during runtime
3
python,sqlite,turbogears,turbogears2
0
2009-07-07T17:14:00.000
I am building an application which will use multiple sqlite3 databases, prepopulated with data from an external application. Each database will have exactly the same tables, but with different data. I want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2?
1
1
0.066568
0
false
1,387,164
0
163
2
0
0
1,093,589
If ALL databases have the same schema, then you should be able to create several Sessions using the same model, each pointing at a different DB.
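A sketch of that approach with plain SQLAlchemy sessions (hypothetical database names; TurboGears' own session setup may differ):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    Session = sessionmaker()
    engines = {
        "east": create_engine("sqlite:///east.db"),
        "west": create_engine("sqlite:///west.db"),
    }

    def session_for(db_name):
        # Same models and tables, different database chosen from user input.
        return Session(bind=engines[db_name])

    session = session_for("east")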
1
0
0
Switching databases in TG2 during runtime
3
python,sqlite,turbogears,turbogears2
0
2009-07-07T17:14:00.000
My specific situation: a property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. For photos, there will be thumbnails of each. My question: my #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. Should I store the images inside the database or the file system, or doesn't it matter? Do I need to be caching anything? Thanks in advance!
11
1
0.033321
0
false
1,105,534
1
7,730
4
0
0
1,105,429
A DB might be faster than a filesystem on some operations, but loading a well-identified chunk of data hundreds of KB in size is not one of them. Also, a good frontend web server (like nginx) is way faster than any webapp layer you'd have to write to read the blob from the DB. In some tests, nginx is roughly on par with memcached for raw data serving of medium-sized files (like big HTML pages or medium-sized images). Go FS. No contest.
1
0
0
storing uploaded photos and documents - filesystem vs database blob
6
python,postgresql,storage,photos,photo-management
0
2009-07-09T17:39:00.000
My specific situation: a property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. For photos, there will be thumbnails of each. My question: my #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. Should I store the images inside the database or the file system, or doesn't it matter? Do I need to be caching anything? Thanks in advance!
11
9
1
0
false
1,105,444
1
7,730
4
0
0
1,105,429
File system. No contest. The data has to go through a lot more layers when you store it in the DB. Edit on caching: If you want to cache the file while the user uploads it to ensure the operation finishes as soon as possible, dumping it straight to disk (i.e. the file system) is about as quick as it gets. As long as the files aren't too big and you don't have too many concurrent users, you can 'cache' the file in memory, return to the user, then save to disk. To be honest, I wouldn't bother. If you are making the files available on the web after they have been uploaded and want to cache to improve performance, the file system is still the best option. You'll get caching for free (you may have to adjust a setting or two) from your web server. You won't get this if the files are in the database. After all that, it sounds like you should never store files in the database. That's not the case; you just need a good reason to do so.
1
0
0
storing uploaded photos and documents - filesystem vs database blob
6
python,postgresql,storage,photos,photo-management
0
2009-07-09T17:39:00.000
My specific situation: a property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. For photos, there will be thumbnails of each. My question: my #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. Should I store the images inside the database or the file system, or doesn't it matter? Do I need to be caching anything? Thanks in advance!
11
3
0.099668
0
false
1,105,479
1
7,730
4
0
0
1,105,429
Definitely store your images on the filesystem. One concern that folks don't consider enough when considering these types of things is bloat; cramming images as binary blobs into your database is a really quick way to bloat your DB way up. With a large database comes higher hardware requirements, more difficult replication and backup requirements, etc. Sticking your images on a filesystem means you can back them up / replicate them with many existing tools easily and simply. Storage space is far easier to increase on filesystem than in database, as well.
1
0
0
storing uploaded photos and documents - filesystem vs database blob
6
python,postgresql,storage,photos,photo-management
0
2009-07-09T17:39:00.000
My specific situation: a property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. For photos, there will be thumbnails of each. My question: my #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. Should I store the images inside the database or the file system, or doesn't it matter? Do I need to be caching anything? Thanks in advance!
11
10
1
0
false
1,105,453
1
7,730
4
0
0
1,105,429
While there are exceptions to everything, the general case is that storing images in the file system is your best bet. You can easily provide caching services to the images, you don't need to worry about additional code to handle image processing, and you can easily do maintenance on the images if needed through standard image editing methods. It sounds like your business model fits nicely into this scenario.
1
0
0
storing uploaded photos and documents - filesystem vs database blob
6
python,postgresql,storage,photos,photo-management
0
2009-07-09T17:39:00.000
We've got a healthy debate going on in the office this week. We're creating a DB to store proxy information; for the most part we have the schema worked out, except for how we should store IPs. One camp wants to use 4 smallints, one for each octet, and the other wants to use 1 big int, via INET_ATON. These tables are going to be huge, so performance is key. I am in the middle here, as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with this type of volume storing IPs. We'll be using Perl and Python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic, etc. I am sure there are some here in the community that have done something similar to what we are doing, and I am interested in hearing about their experiences and which route is best: 1 big int, or 4 small ints for IP addresses. EDIT: One of our concerns is space; this database is going to be huge, as in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue. EDIT 2: Some of the conversation has turned to the volume of data we are going to store... that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large Fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools.
25
2
0.057081
0
false
59,109,834
0
14,213
3
0
0
1,108,918
For both IPv4 and IPv6 compatibility, use VARBINARY(16); IPv4 addresses will always be BINARY(4) and IPv6 addresses will always be BINARY(16), so VARBINARY(16) seems like the most efficient way to support both. To convert them from the normal readable format to binary, use INET6_ATON('127.0.0.1'), and to reverse that, use INET6_NTOA(binary).
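For illustration, the same scheme driven from Python with MySQLdb (hypothetical table name; INET6_ATON/INET6_NTOA require MySQL 5.6 or later):

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="logs")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS hits (ip VARBINARY(16) NOT NULL)")

    # One column stores both address families.
    cur.execute("INSERT INTO hits (ip) VALUES (INET6_ATON(%s))", ("127.0.0.1",))
    cur.execute("INSERT INTO hits (ip) VALUES (INET6_ATON(%s))", ("2001:db8::1",))

    cur.execute("SELECT INET6_NTOA(ip) FROM hits")
    print(cur.fetchall())
    conn.commit()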
1
0
0
How to store an IP in mySQL
7
python,mysql,perl,ip-address
0
2009-07-10T10:58:00.000
We've got a healthy debate going on in the office this week. We're creating a DB to store proxy information; for the most part we have the schema worked out, except for how we should store IPs. One camp wants to use 4 smallints, one for each octet, and the other wants to use 1 big int, via INET_ATON. These tables are going to be huge, so performance is key. I am in the middle here, as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with this type of volume storing IPs. We'll be using Perl and Python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic, etc. I am sure there are some here in the community that have done something similar to what we are doing, and I am interested in hearing about their experiences and which route is best: 1 big int, or 4 small ints for IP addresses. EDIT: One of our concerns is space; this database is going to be huge, as in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue. EDIT 2: Some of the conversation has turned to the volume of data we are going to store... that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large Fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools.
25
0
0
0
false
56,818,264
0
14,213
3
0
0
1,108,918
Old thread, but for the benefit of readers, consider using ip2long. It translates an IP into an integer. Basically, you will be converting with ip2long when storing into the DB, then converting back with long2ip when retrieving from the DB. The field type in the DB will be INT, so you will save space and gain better performance compared to storing the IP as a string.
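ip2long/long2ip are PHP functions; since this question is tagged python, here is the equivalent conversion with the Python standard library (IPv4 only):

    import socket
    import struct

    def ip_to_int(ip):
        # '10.0.0.1' -> 167772161 (unsigned 32-bit int, network byte order)
        return struct.unpack("!I", socket.inet_aton(ip))[0]

    def int_to_ip(n):
        # 167772161 -> '10.0.0.1'
        return socket.inet_ntoa(struct.pack("!I", n))

    print(ip_to_int("10.0.0.1"))  # 167772161
    print(int_to_ip(167772161))   # 10.0.0.1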
1
0
0
How to store an IP in mySQL
7
python,mysql,perl,ip-address
0
2009-07-10T10:58:00.000
We've got a healthy debate going on in the office this week. We're creating a DB to store proxy information; for the most part we have the schema worked out, except for how we should store IPs. One camp wants to use 4 smallints, one for each octet, and the other wants to use 1 big int, via INET_ATON. These tables are going to be huge, so performance is key. I am in the middle here, as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with this type of volume storing IPs. We'll be using Perl and Python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic, etc. I am sure there are some here in the community that have done something similar to what we are doing, and I am interested in hearing about their experiences and which route is best: 1 big int, or 4 small ints for IP addresses. EDIT: One of our concerns is space; this database is going to be huge, as in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue. EDIT 2: Some of the conversation has turned to the volume of data we are going to store... that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large Fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools.
25
3
0.085505
0
false
1,109,278
0
14,213
3
0
0
1,108,918
Having separate fields doesn't sound particularly sensible to me, much like splitting a zip code into sections or a phone number. It might be useful if you wanted specific info on the sections, but I see no real reason not to use a 32-bit int.
1
0
0
How to store an IP in mySQL
7
python,mysql,perl,ip-address
0
2009-07-10T10:58:00.000
Hi, I made an ICAP server (similar to an HTTP server) for which performance is very important. The DB module is SQLAlchemy. I then ran a test of SQLAlchemy's performance and found that it takes about 30 ms for SQLAlchemy to write <50 KB of data to the DB (Oracle). I don't know if that result is normal or whether I did something wrong. But no matter which, it seems the bottleneck comes from the DB part. How can I improve the performance of SQLAlchemy, or is it up to the DBA to tune Oracle? By the way, the ICAP server and Oracle are on the same PC, and I used SQLAlchemy in the most basic way.
1
1
1.2
0
true
1,110,990
0
4,462
2
0
0
1,110,805
I had some issues with SQLAlchemy's performance as well. I think you should first figure out in which ways you are using it; they recommend that for big data sets it is better to use the SQL expression language. Either way, try to optimize the SQLAlchemy code and have the Oracle database optimized as well, so you can better figure out what's wrong. Also, do some tests directly on the database.
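As a sketch of what "use the SQL expression language" means in practice, bulk inserts through SQLAlchemy Core avoid per-object ORM overhead (the SQLite URL and table name are stand-ins for the Oracle setup):

    from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

    engine = create_engine("sqlite:///demo.db")  # stand-in for the Oracle URL
    metadata = MetaData()
    items = Table("items", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("payload", String(50)))
    metadata.create_all(engine)

    # One executemany-style statement instead of one ORM flush per object.
    with engine.begin() as conn:
        conn.execute(items.insert(), [{"payload": "row %d" % i} for i in range(1000)])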
1
0
0
python sqlalchemy performance?
3
python,sqlalchemy
0
2009-07-10T17:16:00.000
Hi, I made an ICAP server (similar to an HTTP server) for which performance is very important. The DB module is SQLAlchemy. I then ran a test of SQLAlchemy's performance and found that it takes about 30 ms for SQLAlchemy to write <50 KB of data to the DB (Oracle). I don't know if that result is normal or whether I did something wrong. But no matter which, it seems the bottleneck comes from the DB part. How can I improve the performance of SQLAlchemy, or is it up to the DBA to tune Oracle? By the way, the ICAP server and Oracle are on the same PC, and I used SQLAlchemy in the most basic way.
1
1
0.066568
0
false
1,110,888
0
4,462
2
0
0
1,110,805
You can only push SQLAlchemy so far as a programmer. I would agree with you that the rest of the performance is up to your DBA, including creating proper indexes on tables, etc.
1
0
0
python sqlalchemy performance?
3
python,sqlalchemy
0
2009-07-10T17:16:00.000
I would like to insert a calculation into Excel using Python. Generally it can be done by inserting a formula string into the relevant cell. However, if I need to calculate a formula multiple times for the whole column, the formula must be updated for each individual cell. For example, if I need to calculate the sum of two cells, then for cell C(k) the computation would be A(k)+B(k). In Excel it is possible to calculate C1=A1+B1 and then automatically expand the calculation by dragging the mouse from C1 downwards. My question is: is it possible to do the same thing with Python, i.e. to define a formula in only one cell and then use Excel's capabilities to extend the calculation to the whole column/row? Thank you in advance, Sasha
1
0
0
0
false
1,116,782
0
12,592
1
0
0
1,116,725
If you are using COM bindings, then you can simply record a macro in Excel and translate it into Python code. If you are using xlwt, you have to resort to normal loops in Python.
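A sketch of the COM route, assuming pywin32 on Windows and a hypothetical workbook path; Range.AutoFill is the programmatic version of dragging the fill handle:

    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    wb = excel.Workbooks.Open(r"C:\data\book.xlsx")  # hypothetical path
    ws = wb.Worksheets(1)

    # Define the formula once, then let Excel extend it down the column,
    # just like dragging from C1 to C10 with the mouse.
    ws.Range("C1").Formula = "=A1+B1"
    ws.Range("C1").AutoFill(ws.Range("C1:C10"))

    wb.Save()
    excel.Quit()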
1
0
0
Calculating formulae in Excel with Python
6
python,excel,formula
0
2009-07-12T19:36:00.000
Hi, I have a multi-threaded program in which all threads will operate on an Oracle DB. Can SQLAlchemy support parallel operation on Oracle? Thanks!
0
1
0.099668
0
false
1,117,592
0
2,195
1
0
0
1,117,538
As long as each concurrent thread has its own session, you should be fine. Trying to use one shared session is where you'll get into trouble.
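A minimal sketch using scoped_session, which hands each thread its own session automatically (the SQLite URL is a stand-in for the Oracle one):

    import threading
    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine("sqlite:///demo.db")
    Session = scoped_session(sessionmaker(bind=engine))

    def worker():
        session = Session()  # thread-local: each thread gets its own session
        try:
            # ... queries and updates using this thread's session ...
            session.commit()
        finally:
            Session.remove()  # release this thread's session

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()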
1
0
0
python sqlalchemy parallel operation
2
python,sqlalchemy
0
2009-07-13T02:44:00.000
Just starting to get to grips with Python and MySQLdb, and I was wondering where the best place is to put a try/except block for the connection to MySQL. At the MySQLdb.connect call? Also, should there be one whenever I query? What exceptions should I be catching in any of these blocks? Thanks for any help. Cheers, Mark
9
1
0.099668
0
false
1,117,841
0
5,063
2
0
0
1,117,828
I think that both the connection and the query can raise errors, so you should have try/except blocks for both of them.
1
0
0
Python MySQLdb exceptions
2
python,mysql,exception
0
2009-07-13T05:31:00.000
Just starting to get to grips with Python and MySQLdb, and I was wondering where the best place is to put a try/except block for the connection to MySQL. At the MySQLdb.connect call? Also, should there be one whenever I query? What exceptions should I be catching in any of these blocks? Thanks for any help. Cheers, Mark
9
16
1.2
0
true
1,118,129
0
5,063
2
0
0
1,117,828
Catch MySQLdb.Error, both while connecting and while executing a query.
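A sketch of that pattern (connection parameters are hypothetical):

    import MySQLdb

    try:
        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="secret", db="test")
    except MySQLdb.Error as e:
        print("Could not connect: %s" % e)
        raise

    try:
        cur = conn.cursor()
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
    except MySQLdb.Error as e:
        print("Query failed: %s" % e)
    finally:
        conn.close()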
1
0
0
Python MySQLdb exceptions
2
python,mysql,exception
0
2009-07-13T05:31:00.000
I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of "penalty" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine. I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor. Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture. Thanks
9
1
0.049958
0
false
1,118,790
1
1,845
2
1
0
1,118,761
There are a few things that you can't do on App Engine that you can do on your own server, like handling uploaded files directly on the filesystem. On App Engine you have to upload the file and store it in the datastore, which can cause a few problems. Other than that, it should be fine for the presentation part. There are a number of other little things that are better on your own dedicated server, but I think eventually a lot of those things will be in App Engine.
1
0
0
Migrating Django Application to Google App Engine?
4
python,django,google-app-engine
0
2009-07-13T10:40:00.000
I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of "penalty" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine. I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor. Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture. Thanks
9
8
1.2
0
true
1,119,377
1
1,845
2
1
0
1,118,761
Most (all?) of Django is available in GAE, so your main task is to avoid basing your designs around a reliance on anything from Django or the Python standard libraries which is not available on GAE. You've identified the glaring difference, which is the database, so I'll assume you're on top of that. Another difference is the tie-in to Google Accounts and hence that if you want, you can do a fair amount of access control through the app.yaml file rather than in code. You don't have to use any of that, though, so if you don't envisage switching to Google Accounts when you switch to GAE, no problem. I think the differences in the standard libraries can mostly be deduced from the fact that GAE has no I/O and no C-accelerated libraries unless explicitly stated, and my experience so far is that things I've expected to be there, have been there. I don't know Django and haven't used it on GAE (apart from templates), so I can't comment on that. Personally I probably wouldn't target LAMP (where P = Django) with the intention of migrating to GAE later. I'd develop for both together, and try to ensure if possible that the differences are kept to the very top (configuration) and the very bottom (data model). The GAE version doesn't necessarily have to be perfect, as long as you know how to make it perfect should you need it. It's not guaranteed that this is faster than writing and then porting, but my guess is it normally will be. The easiest way to spot any differences is to run the code, rather than relying on not missing anything in the GAE docs, so you'll likely save some mistakes that need to be unpicked. The Python SDK is a fairly good approximation to the real App Engine, so all or most of your tests can be run locally most of the time. Of course if you eventually decide not to port then you've done unnecessary work, so you have to think about the probability of that happening, and whether you'd consider the GAE development to be a waste of your time if it's not needed.
1
0
0
Migrating Django Application to Google App Engine?
4
python,django,google-app-engine
0
2009-07-13T10:40:00.000
I am using Python 2.6.1 and MySQL 4.0 on the Windows platform, and I have successfully installed MySQLdb. Do I need to set any path for my Python code and MySQLdb to successfully run my application? Without setting any paths (in my code I am importing MySQLdb), I am getting a "No module named MySQLdb" error and I am not able to move further.
1
0
0
0
false
1,136,692
0
770
1
0
0
1,136,676
How did you install MySQLdb? This sounds like your MySQLdb module is not within your PYTHONPATH, which indicates some inconsistency between how you installed Python itself and how you installed MySQLdb. Or did you perhaps install a MySQLdb binary that was not targeted for your version of Python? Modules are normally put into version-dependent folders.
1
0
0
is it required to give path after installation MySQL db for Python 2.6
3
python,mysql
0
2009-07-16T10:20:00.000
Has anyone used SQLAlchemy in addition to Django's ORM? I'd like to use Django's ORM for object manipulation and SQLAlchemy for complex queries (like those that require left outer joins). Is it possible? Note: I'm aware of django-sqlalchemy, but the project doesn't seem to be production ready.
23
4
0.158649
0
false
1,308,718
1
12,511
3
0
0
1,154,331
Jacob Kaplan-Moss admitted to typing "import sqlalchemy" from time to time. I may write a queryset adapter for sqlalchemy results in the not too distant future.
1
0
0
SQLAlchemy and django, is it production ready?
5
python,database,django,sqlalchemy
0
2009-07-20T15:44:00.000
Has anyone used SQLAlchemy in addition to Django's ORM? I'd like to use Django's ORM for object manipulation and SQLAlchemy for complex queries (like those that require left outer joins). Is it possible? Note: I'm aware of django-sqlalchemy, but the project doesn't seem to be production ready.
23
19
1.2
0
true
1,155,407
1
12,511
3
0
0
1,154,331
What I would do: define the schema in the Django ORM and let it create the DB via syncdb; you get the admin interface for free. Then, in a view that needs a complex join:

    def view1(request):
        import sqlalchemy
        data = sqlalchemy.complex_join_magic(...)
        ...
        payload = {'data': data, ...}
        return render_to_response('template', payload, ...)
1
0
0
SQLAlchemy and django, is it production ready?
5
python,database,django,sqlalchemy
0
2009-07-20T15:44:00.000
Has anyone used SQLAlchemy in addition to Django's ORM? I'd like to use Django's ORM for object manipulation and SQLAlchemy for complex queries (like those that require left outer joins). Is it possible? Note: I'm aware of django-sqlalchemy, but the project doesn't seem to be production ready.
23
7
1
0
false
3,555,602
1
12,511
3
0
0
1,154,331
I've done it before and it's fine. Use the SQLAlchemy feature where it can read in the schema, so you don't need to declare your fields twice. You can grab the connection settings from the Django settings; the only problem is stuff like the different flavours of the Postgres driver (e.g. with psyco and without). It's worth it, as the SQLAlchemy stuff is just so much nicer for things like joins.
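A sketch of that reflection feature (hypothetical connection URL and table name; older SQLAlchemy versions spell it autoload=True, autoload_with=engine instead of autoload_with alone):

    from sqlalchemy import create_engine, MetaData, Table

    engine = create_engine("postgresql://user:secret@localhost/mydb")
    metadata = MetaData()

    # Read the column definitions from the live database instead of
    # declaring the fields a second time.
    users = Table("auth_user", metadata, autoload_with=engine)

    with engine.connect() as conn:
        for row in conn.execute(users.select().limit(5)):
            print(row)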
1
0
0
SQLAlchemy and django, is it production ready?
5
python,database,django,sqlalchemy
0
2009-07-20T15:44:00.000
I'm modeling a database relationship in django, and I'd like to have other opinions. The relationship is kind of a two-to-many relationship. For example, a patient can have two physicians: an attending and a primary. A physician obviously has many patients. The application does need to know which one is which; further, there are cases where an attending physician of one patient can be the primary of another. Lastly, both attending and primary are often the same. At first, I was thinking two foreign keys from the patient table into the physician table. However, I think django disallows this. Additionally, on second thought, this is really a many(two)-to-many relationship. Therefore, how can I model this relationship with django while maintaining the physician type as it pertains to a patient? Perhaps I will need to store the physician type on the many-to-many association table? Thanks, Pete
2
0
0
0
false
1,162,884
1
358
1
0
0
1,162,877
I agree with your conclusion. I would store the physician type in the many-to-many linking table.
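A sketch of that design as Django models (hypothetical names; on_delete is required in modern Django, not in the 1.x of this era):

    from django.db import models

    class Physician(models.Model):
        name = models.CharField(max_length=100)

    class Patient(models.Model):
        name = models.CharField(max_length=100)
        physicians = models.ManyToManyField(Physician, through='PatientPhysician')

    class PatientPhysician(models.Model):
        ROLE_CHOICES = [('A', 'Attending'), ('P', 'Primary')]
        patient = models.ForeignKey(Patient, on_delete=models.CASCADE)
        physician = models.ForeignKey(Physician, on_delete=models.CASCADE)
        # The physician type lives on the association table, so the same
        # physician can be attending for one patient and primary for another.
        role = models.CharField(max_length=1, choices=ROLE_CHOICES)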
1
0
0
How would you model this database relationship?
3
python,database,django,database-design
0
2009-07-22T03:05:00.000
I have to get the data from a user site. If I were working on their site, I would VPN in and then remote into their server using a username and password. I thought getting the data onto my local machine would be better than working on their server, where my work is not secured. So I thought of using IronPython to get the data from the remote server. I still VPN'd into their domain, but when I used the ADO.NET connection string to connect to their database, it did not work. Connection string: Data Source=xx.xx.xx.xx;Initial Catalog=;User ID=;Password=; and the error says: login failed for. Well, one thing to notice is: when I remote into their server, I provide a username and password once. Then when I log on to SQL Server, I don't have to provide a username and password; it's Windows authenticated. So, in the above connection string, I used the same username and password that I use while remoting in. I hope this gives y'all an idea of what I might be missing. Help appreciated!!!
0
0
0
0
false
1,169,704
1
1,035
2
0
0
1,169,668
Try: Data Source=xx.xx.xx.xx;Initial Catalog=;Integrated Security="SSPI". How are you connecting to SQL Server? Do you use SQL Server authentication or Windows authentication? Once you know that, then if you use a DNS name or IP that will route to the server correctly, you have the instance name correct, AND the account has permission to access the server, you can connect. Here's a quick test: from the system you are using to connect to your SQL Server, can you open SQL Server Management Studio and connect to the remote database? If you can, tell me what settings you needed to do that, and I'll give you a connection string that will work.
1
0
0
need help on ADO.net connection string
2
.net,ado.net,ironpython,connection-string
0
2009-07-23T04:52:00.000
I have to get the data from a user site. If I were working on their site, I would VPN in and then remote into their server using a username and password. I thought getting the data onto my local machine would be better than working on their server, where my work is not secured. So I thought of using IronPython to get the data from the remote server. I still VPN'd into their domain, but when I used the ADO.NET connection string to connect to their database, it did not work. Connection string: Data Source=xx.xx.xx.xx;Initial Catalog=;User ID=;Password=; and the error says: login failed for. Well, one thing to notice is: when I remote into their server, I provide a username and password once. Then when I log on to SQL Server, I don't have to provide a username and password; it's Windows authenticated. So, in the above connection string, I used the same username and password that I use while remoting in. I hope this gives y'all an idea of what I might be missing. Help appreciated!!!
0
0
0
0
false
1,169,755
1
1,035
2
0
0
1,169,668
Is that user granted login abilities in SQL? If you're using SQL 2005, go to Security -> Logins, double-click the user, and click Status. Edit: Create a file on your desktop called TEST.UDL. Double-click it. Set up your connection until it works. View the UDL in Notepad; there's your connection string. Though I think you take out the first part, which includes the provider info.
1
0
0
need help on ADO.net connection string
2
.net,ado.net,ironpython,connection-string
0
2009-07-23T04:52:00.000
I'm trying to make a web app that will manage my Mercurial repositories for me. I want it so that when I tell it to load repository X: Connect to a MySQL server and make sure X exists. Check if the user is allowed to access the repository. If above is true, get the location of X from a mysql server. Run a hgweb cgi script (python) containing the path of the repository. Here is the problem, I want to: take the hgweb script, modify it, and run it. But I do not want to: take the hgweb script, modify it, write it to a file and redirect there. I am using Apache to run the httpd process.
0
0
0
0
false
1,185,909
0
940
1
0
0
1,185,867
As far as your question goes: no, you're not likely to get PHP to execute a modified script without writing it somewhere, whether that's a file on disk, a virtual file mapped to RAM, or something similar. It sounds like you might be trying to pound a railroad spike with a twig. If you're to the point where you're filtering access based on user permissions stored in MySQL, have you looked at existing Hg solutions to make sure there isn't something more applicable than hgweb? It's really built for doing exactly one thing well, and this is a fair bit beyond its normal realm. I might suggest looking into Apache's native authentication as a more convenient method for controlling access to repositories, then just serving the repo without modifying the script.
1
0
0
How can I execute CGI files from PHP?
3
php,python,mercurial,cgi
1
2009-07-26T23:24:00.000
I'm considering the idea of creating persistent storage like a DBMS engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module?
5
1
0.028564
0
false
1,188,711
0
1,116
4
0
0
1,188,585
The potential advantages of a custom format over a pickle are: you can selectively get individual objects, rather than having to incarnate the full set of objects you can query subsets of objects by properties, and only load those objects that match your criteria Whether these advantages materialize depends on how you design the storage, of course.
1
0
0
What are the benefits of not using cPickle to create a persistent storage for data?
7
python,database,data-structures,persistence
0
2009-07-27T14:45:00.000
I'm considering the idea of creating persistent storage like a DBMS engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module?
5
10
1.2
0
true
1,188,704
0
1,116
4
0
0
1,188,585
Pickling is a two-faced coin. On one side, you have a way to store your object in a very easy way: just four lines of code and you pickle. You have the object exactly as it is. On the other side, it can become a compatibility nightmare. You cannot unpickle objects if they are not defined in your code exactly as they were defined when pickled. This strongly limits your ability to refactor the code or rearrange stuff in your modules. Also, not everything can be pickled, and if you are not strict about what gets pickled and the client of your code has full freedom to include any object, sooner or later it will pass something unpicklable to your system, and the system will go boom. Be very careful about its use; there's no better definition of quick and dirty.
1
0
0
What are the benefits of not using cPickle to create a persistent storage for data?
7
python,database,data-structures,persistence
0
2009-07-27T14:45:00.000
I'm considering the idea of creating persistent storage like a DBMS engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module?
5
2
0.057081
0
false
1,188,679
0
1,116
4
0
0
1,188,585
Note that not all objects can be directly pickled, only basic types or objects that have defined the pickle protocol. Using your own binary format would allow you to potentially store any kind of object. Just for the record, the Zope Object Database (ZODB) follows that very same approach, storing objects in the pickle format. You may be interested in looking at their implementation.
1
0
0
What are the benefits of not using cPickle to create a persistent storage for data?
7
python,database,data-structures,persistence
0
2009-07-27T14:45:00.000
I'm considering the idea of creating persistent storage like a DBMS engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module?
5
0
0
0
false
1,189,928
0
1,116
4
0
0
1,188,585
Will you ever need to process data from untrusted sources? If so, you should know that the pickle format is actually a virtual machine that is capable of executing arbitrary code on behalf of the process doing the unpickling.
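A benign demonstration of that point (Python 3): __reduce__ lets a pickle instruct the unpickler to call any callable, so loads() runs code of the pickle author's choosing.

    import pickle

    class Evil(object):
        def __reduce__(self):
            # On unpickling, ask the pickle VM to call print(...)
            return (print, ("this ran during unpickling",))

    payload = pickle.dumps(Evil())
    pickle.loads(payload)  # prints the message: loads() executed our call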
1
0
0
What are the benefits of not using cPickle to create a persistent storage for data?
7
python,database,data-structures,persistence
0
2009-07-27T14:45:00.000
I'm developing an app that handles sets of financial series data (input as CSV or open document); one set could be, say, 10s x 1000s of up-to-double-precision numbers (simplifying, but that's what matters). I plan to do operations on that data (e.g. sum, difference, averages etc.), including generation of, say, another column based on computations on the input. This will be between columns (row-level operations) on one set, and also between columns on many (potentially all) sets at the row level. I plan to write it in Python, and it will eventually need an intranet-facing interface to display the results/graphs etc.; for now, CSV output based on some input parameters will suffice. What is the best way to store and manipulate the data? So far I see my choices as being either (1) to write CSV files to disk and trawl through them to do the math, or (2) to put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row-level math that needs to be done.
- Has anyone had experience going down either path, and what are the pitfalls/gotchas that I should be aware of?
- What are the reasons why one should be chosen over another?
- Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?
- Is there any project or framework out there to help with this type of task?
Edit: More info: the rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths, as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
0
0
0
1
false
1,241,784
0
581
3
0
0
1,241,758
Are you likely to need all rows in order, or will you want only specific known rows? If you need to read all the data, there isn't much advantage to having it in a database. Edit: If the data fits in memory, then a simple CSV is fine. Plain-text data formats are always easier to deal with than opaque ones if you can use them.
1
0
0
Store data series in file or database if I want to do row level math operations?
4
python,database,database-design,file-io
0
2009-08-06T21:58:00.000
I'm developing an app that handles sets of financial series data (input as CSV or open document); one set could be, say, 10s x 1000s of up-to-double-precision numbers (simplifying, but that's what matters). I plan to do operations on that data (e.g. sum, difference, averages etc.), including generation of, say, another column based on computations on the input. This will be between columns (row-level operations) on one set, and also between columns on many (potentially all) sets at the row level. I plan to write it in Python, and it will eventually need an intranet-facing interface to display the results/graphs etc.; for now, CSV output based on some input parameters will suffice. What is the best way to store and manipulate the data? So far I see my choices as being either (1) to write CSV files to disk and trawl through them to do the math, or (2) to put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row-level math that needs to be done.
- Has anyone had experience going down either path, and what are the pitfalls/gotchas that I should be aware of?
- What are the reasons why one should be chosen over another?
- Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?
- Is there any project or framework out there to help with this type of task?
Edit: More info: the rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths, as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
0
0
0
1
false
1,241,787
0
581
3
0
0
1,241,758
What matters most is whether all the data will fit simultaneously into memory. From the sizes that you give, it seems that this is easily the case (a few megabytes at worst). If so, I would discourage using a relational database and do all operations directly in Python. Depending on what other processing you need, I would probably rather use binary pickles than CSV.
1
0
0
Store data series in file or database if I want to do row level math operations?
4
python,database,database-design,file-io
0
2009-08-06T21:58:00.000
I'm developing an app that handles sets of financial series data (input as CSV or open document); one set could be, say, 10s x 1000s of up-to-double-precision numbers (simplifying, but that's what matters). I plan to do operations on that data (e.g. sum, difference, averages etc.), including generation of, say, another column based on computations on the input. This will be between columns (row-level operations) on one set, and also between columns on many (potentially all) sets at the row level. I plan to write it in Python, and it will eventually need an intranet-facing interface to display the results/graphs etc.; for now, CSV output based on some input parameters will suffice. What is the best way to store and manipulate the data? So far I see my choices as being either (1) to write CSV files to disk and trawl through them to do the math, or (2) to put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row-level math that needs to be done.
- Has anyone had experience going down either path, and what are the pitfalls/gotchas that I should be aware of?
- What are the reasons why one should be chosen over another?
- Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?
- Is there any project or framework out there to help with this type of task?
Edit: More info: the rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths, as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
0
2
1.2
1
true
1,245,169
0
581
3
0
0
1,241,758
"I plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input." This is the standard use case for a data warehouse star-schema design. Buy Kimball's The Data Warehouse Toolkit. Read (and understand) the star schema before doing anything else. "What is the best way to store the data and manipulate?" A Star Schema. You can implement this as flat files (CSV is fine) or RDBMS. If you use flat files, you write simple loops to do the math. If you use an RDBMS you write simple SQL and simple loops. "My main concern is speed/performance as the number of datasets grows" Nothing is as fast as a flat file. Period. RDBMS is slower. The RDBMS value proposition stems from SQL being a relatively simple way to specify SELECT SUM(), COUNT() FROM fact JOIN dimension WHERE filter GROUP BY dimension attribute. Python isn't as terse as SQL, but it's just as fast and just as flexible. Python competes against SQL. "pitfalls/gotchas that I should be aware of?" DB design. If you don't get the star schema and how to separate facts from dimensions, all approaches are doomed. Once you separate facts from dimensions, all approaches are approximately equal. "What are the reasons why one should be chosen over another?" RDBMS slow and flexible. Flat files fast and (sometimes) less flexible. Python levels the playing field. "Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?" Star Schema: central fact table surrounded by dimension tables. Nothing beats it. "Is there any project or framework out there to help with this type of task?" Not really.
1
0
0
Store data series in file or database if I want to do row level math operations?
4
python,database,database-design,file-io
0
2009-08-06T21:58:00.000
I'm looking for the simplest way of using python and SQLAlchemy to produce some XML for a jQuery based HTTP client. Right now I'm using mod_python's CGI handler but I'm unhappy with the fact that I can't persist stuff like the SQLAlchemy session. The mod_python publisher handler that is apparently capable of persisting stuff does not allow requests with XML content type (as used by jQuery's ajax stuff) so I can't use it. What other options are there?
2
2
1.2
0
true
1,272,579
1
1,222
1
0
0
1,272,325
You could always write your own handler, which is the way mod_python is normally intended to be used. You would have to set some HTTP headers (and you could have a look at the publisher handler's source code for inspiration on that), but otherwise I don't think it's much more complicated than what you've been trying to do. Though as long as you're at it, I would suggest trying mod_wsgi instead of mod_python, which is probably eventually going to supersede mod_python. WSGI is a Python standard for writing web applications.
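For comparison, the WSGI equivalent of a handler is just a callable; mod_wsgi would be pointed at a module like this via WSGIScriptAlias in the Apache config (names hypothetical):

    # A minimal WSGI application returning XML for an ajax client.
    def application(environ, start_response):
        body = b"<result>ok</result>"
        start_response("200 OK", [("Content-Type", "application/xml"),
                                  ("Content-Length", str(len(body)))])
        return [body]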
1
0
0
Alternatives to mod_python's CGI handler
1
python,cgi,mod-python
1
2009-08-13T14:29:00.000
What I really like about Entity Framework is its drag-and-drop way of making up the whole model layer of your application. You select the tables, it joins them, and you're done. If you update the database schema, right-click -> update and you're done again. This seems to me miles ahead of the competing ORMs, like the mess of XML (n)Hibernate requires or the hard-to-update Django models. Without dwelling on the fact that maybe sometimes more control over the mapping process may be good, are there similar one-click (or one-command) solutions for other (mainly open source, like Python or PHP) programming languages or frameworks? Thanks
2
0
0
0
false
1,325,558
1
433
1
1
0
1,283,646
I have heard iBatis is good. A few companies fall back to iBatis when their programmer teams are not capable of understanding Hibernate (a time issue). Personally, I still like LINQ to SQL. Yes, the first time someone needs to delete and re-drag a table it seems like too much work, but it really is not. And the times it doesn't update your class code when you save are really a pain, but you simply select all your tables (Ctrl+A) and drag them over again. Total remakes are very quick and painless. The classes it creates are extremely simple. You can even create multiple-table entities if you like, with SPs for CRUD. Linking SPs to CRUD is similar to EF: you simply set up your SP with the same parameters as your table, then drag it over your table, and poof, it matches the data types. A lot of people go out of their way to take IQueryable away from the repository, but you can limit what you link in LINQ to SQL, so IQueryable is not too bad. Come to think of it, I wonder if there is a way to restrict the relations (and foreign keys).
1
0
0
Entity Framwework-like ORM NOT for .NET
3
php,python,entity-framework,open-source
0
2009-08-16T07:03:00.000
I installed Stackless Python 2.6.2 after reading several sites that said it's fully compatible with vanilla Python. After installing, I found that my Django applications no longer work. I reinstalled Django (1.1) and now I'm kind of lost. The error that I get is:
500: Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
Apache/2.2.11 (Ubuntu) DAV/2 PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2 mod_ruby/1.2.6 Ruby/1.8.7(2008-08-11) mod_ssl/2.2.11 OpenSSL/0.9.8g Server at 127.0.0.1 Port 80
What else could or should I do?
Edit: From the 1st comment I understand that the problem is not in Django but in mod_python and Apache, so I edited my question title.
Edit 2: I think something is wrong with some path setup. I tried going from mod_python to mod_wsgi and managed to finally set it up correctly, only to get the next error:
[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
Alan
0
2
1.2
0
true
1,284,586
1
672
1
0
0
1,283,856
When you install a new version of Python (whether stackless or not) you also need to reinstall all of the third party modules you need -- either from sources, which you say you don't want to do, or from packages built for the new version of Python you've just installed. So, check the repository from which you installed Python 2.6.2 with aptitude: does it also have versions for that specific Python of mod_python, mysqldb, django, and any other third party stuff you may need? There really is no "silver bullet" for package management and I know of no "sumo distribution" of Python bundling all the packages you could ever possibly need (if there were, it would have to be many 10s of GB;-).
1
0
0
Stackless python stopped mod_python/apache from working
1
python,mod-wsgi,mod-python,stackless,python-stackless
0
2009-08-16T09:14:00.000
I want to write a python script that populates a database with some information. One of the columns in my table is a BLOB that I would like to save a file to for each entry. How can I read the file (binary) and insert it into the DB using python? Likewise, how can I retrieve it and write that file back to some arbitrary location on the hard drive?
15
0
0
0
false
1,294,488
0
25,537
1
0
0
1,294,385
You can insert and read BLOBs from a DB like every other column type. From the database API's view there is nothing special about BLOBs.
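A minimal sketch with MySQLdb (hypothetical table "files" with columns id INT and data BLOB, and hypothetical file names):

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="test")
    cur = conn.cursor()

    # Insert: read the file as bytes and pass it like any other parameter.
    with open("photo.jpg", "rb") as f:
        cur.execute("INSERT INTO files (id, data) VALUES (%s, %s)", (1, f.read()))
    conn.commit()

    # Retrieve: fetch the column and write the bytes back out.
    cur.execute("SELECT data FROM files WHERE id = %s", (1,))
    with open("photo_copy.jpg", "wb") as f:
        f.write(cur.fetchone()[0])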
1
0
0
How to insert / retrieve a file stored as a BLOB in a MySQL db using python
2
python,mysql,file-io,blob
0
2009-08-18T14:50:00.000
I am working on integrating with several music players. At the moment my favorite is exaile. In the new version they are migrating the database format from SQLite3 to an internal Pickle format. I wanted to know if there is a way to access pickle format files without having to reverse engineer the format by hand. I know there is the cPickle python module, but I am unaware if it is callable directly from C.
23
3
0.197375
0
false
1,296,188
0
27,544
1
0
0
1,296,162
You can embed a Python interpreter in a C program, but I think that the easiest solution is to write a Python script that converts "pickles" into another format, e.g. an SQLite database.
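For illustration, a sketch of such a converter script; the internal structure of the pickle here (a list of (artist, title) pairs) is a made-up assumption, since the real Exaile format would need inspecting first:

    import pickle
    import sqlite3

    # Load the pickled data (file name and structure assumed).
    with open("music.db", "rb") as f:
        tracks = pickle.load(f)

    conn = sqlite3.connect("converted.sqlite")
    conn.execute("CREATE TABLE tracks (artist TEXT, title TEXT)")
    conn.executemany("INSERT INTO tracks VALUES (?, ?)", tracks)
    conn.commit()
    conn.close()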
1
0
0
How can I read a python pickle database/file from C?
3
python,c
0
2009-08-18T20:02:00.000
I'm running Django through mod_wsgi and Apache (2.2.8) on Ubuntu 8.04. I've been running Django on this setup for about 6 months without any problems. Yesterday, I moved my database (postgres 8.3) to its own server, and my Django site started refusing to load (the browser spinner would just keep spinning). It works for about 10 minutes, then just stops. Apache is still able to serve static files. Just nothing through Django. I've checked the Apache error logs, and I don't see any entries that could be related. I'm not sure if this is a WSGI, Django, Apache, or Postgres issue? Any ideas? Thanks for your help!
1
0
1.2
0
true
2,368,542
1
435
1
0
0
1,300,213
Found it! I'm using eventlet in some other code and I imported one of my modules into a django model. So eventlet was taking over and putting everything to "sleep".
1
0
0
Apache/Django freezing after a few requests
2
python,django,postgresql,apache2,mod-wsgi
0
2009-08-19T14:09:00.000
I have been working on a website using mod_python, python, and SQL Alchemy when I ran into a strange problem: When I query the database for all of the records, it returns the correct result set; however, when I refresh the page, it returns me a result set with that same result set appended to it. I get more result sets "stacked" on top of each other as I refresh the page more. For example: First page load: 10 results Second page load: 20 results (two of each) Third page load: 30 results (three of each) etc... Is this some underlying problem with mod_python? I don't recall running into this when using mod_wsgi.
0
0
1.2
0
true
1,301,029
1
192
1
0
0
1,301,000
Not that I've ever heard of, but it's impossible to tell without some code to look at. Maybe you initialised your result set list as a global, or shared member, and then appended results to it when the application was called without resetting it to empty? A classic way of re-using lists accidentally is to put one in a default argument value to a function. (The same could happen in mod_wsgi of course.)
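To illustrate that last point, here is the classic mutable-default-argument trap; run_query is a hypothetical stand-in for whatever actually fetches the rows:

    # Buggy: the default list is created once, at definition time,
    # and shared between calls -- results pile up on every request.
    def get_results(results=[]):
        results.extend(run_query())
        return results

    # Fixed: use None as a sentinel and build a fresh list per call.
    def get_results(results=None):
        if results is None:
            results = []
        results.extend(run_query())
        return results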
1
0
0
mod_python problem?
2
python,sqlalchemy,mod-python
0
2009-08-19T16:08:00.000
I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
59
111
1.2
0
true
1,346,401
1
10,095
1
0
0
1,303,654
After weeks of testing and reading the Django source code, I've found the answer to my own question: Transactions Django's default autocommit behavior still holds true for my threaded function. However, it states in the Django docs: As soon as you perform an action that needs to write to the database, Django produces the INSERT/UPDATE/DELETE statements and then does the COMMIT. There’s no implicit ROLLBACK. That last sentence is very literal. It DOES NOT issue a ROLLBACK command unless something in Django has set the dirty flag. Since my function was only doing SELECT statements it never set the dirty flag and didn't trigger a COMMIT. This goes against the fact that PostgreSQL thinks the transaction requires a ROLLBACK because Django issued a SET command for the timezone. In reviewing the logs, I threw myself off because I kept seeing these ROLLBACK statements and assumed Django's transaction management was the source. Turns out it's not, and that's OK. Connections The connection management is where things do get tricky. It turns out Django uses signals.request_finished.connect(close_connection) to close the database connection it normally uses. Since nothing normally happens in Django that doesn't involve a request, you take this behavior for granted. In my case, though, there was no request because the job was scheduled. No request means no signal. No signal means the database connection was never closed. Going back to transactions, it turns out that simply issuing a call to connection.close() in the absence of any changes to the transaction management issues the ROLLBACK statement in the PostgreSQL log that I'd been looking for. Solution The solution is to allow the normal Django transaction management to proceed as normal and to simply close the connection one of three ways: Write a decorator that closes the connection and wrap the necessary functions in it. Hook into the existing request signals to have Django close the connection. Close the connection manually at the end of the function. Any of those three will (and do) work. This has driven me crazy for weeks. I hope this helps someone else in the future!
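As a rough sketch of option 1, a decorator along these lines (the names are mine, not Django's) would do it:

    from functools import wraps
    from django.db import connection

    def close_db_connection(func):
        # Close Django's DB connection when the task finishes, since no
        # request_finished signal will fire for a scheduled job.
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            finally:
                connection.close()
        return wrapper

    @close_db_connection
    def my_scheduled_task():
        pass  # ORM queries go here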
1
0
0
Threaded Django task doesn't automatically handle transactions or db connections?
1
python,database,django,multithreading,transactions
0
2009-08-20T02:26:00.000
When using Python and doing a SELECT statement in MySQL to select 09 from a column, the zero gets dropped and only the 9 gets printed. Is there any way to pull the whole number, i.e. including the leading zero?
1
4
1.2
0
true
1,308,060
0
1,789
1
0
0
1,308,038
There's almost certainly something in either your query, your table definition, or an ORM you're using that thinks the column is numeric and is converting the results to integers. You'll have to define the column as a string (everywhere!) if you want to preserve leading zeroes. Edit: ZEROFILL on the server isn't going to cut it. Python treats integer columns as Python integers, and those don't have leading zeroes, period. You'll either have to change the column type to VARCHAR, use something like "%02d" % val in Python, or put a CAST(my_column AS CHAR) in the query.
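A quick sketch of both workarounds (column and table names assumed):

    # Python side: the value comes back as an int, so re-pad it yourself.
    val = 9
    print("%02d" % val)  # -> 09

    # SQL side: have MySQL return a string in the first place.
    # Note that MySQL's CAST wants CHAR rather than VARCHAR:
    #   SELECT CAST(my_column AS CHAR) FROM my_table;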
1
0
0
Python - MYSQL - Select leading zeros
5
python,mysql
0
2009-08-20T18:36:00.000
I'd like to use the Python version of App Engine but rather than write my code specifically for the Google Data Store, I'd like to create my models with a generic Python ORM that could be attached to Big Table, or, if I prefer, a regular database at some later time. Is there any Python ORM such as SQLAlchemy that would allow this?
11
2
0.197375
0
false
11,325,656
1
5,333
1
1
0
1,308,376
Nowadays they do since Google has launched Cloud SQL
1
0
0
Do any Python ORMs (SQLAlchemy?) work with Google App Engine?
2
python,google-app-engine,sqlalchemy,orm
0
2009-08-20T19:39:00.000
I need to insert a Python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but then I would only be able to retrieve it through Python. An alternative is to serialize the tuple to XML and store the XML string. What other solutions do you think would be possible, with an eye toward storing other stuff (e.g. a list, or an object)? Recovering it from other languages is a plus.
0
3
0.197375
0
false
1,313,013
0
2,884
2
0
0
1,313,000
Make another table and do one-to-many. Don't try to cram a programming language feature into a database as-is if you can avoid it. If you absolutely need to be able to store an object down the line, your options are a bit more limited. YAML is probably the best balance of human-readable and program-readable, and it has some syntax for specifying classes you might be able to use.
1
0
0
Inserting python tuple in a MySQL database
3
python,mysql
0
2009-08-21T16:39:00.000
I need to insert a Python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but then I would only be able to retrieve it through Python. An alternative is to serialize the tuple to XML and store the XML string. What other solutions do you think would be possible, with an eye toward storing other stuff (e.g. a list, or an object)? Recovering it from other languages is a plus.
0
2
1.2
0
true
1,313,016
0
2,884
2
0
0
1,313,000
I'd look at serializing it to JSON, using the simplejson package, or the built-in json package in python 2.6. It's simple to use in python, importable by practically every other language, and you don't have to make all of the "what tag should I use? what attributes should this have?" decisions that you might in XML.
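A short sketch of the round trip; note that JSON has no tuple type, so the value comes back as a list:

    import json  # built-in on Python 2.6+; use simplejson on older versions

    point = (1.5, 2.25, 3.75)
    payload = json.dumps(point)            # '[1.5, 2.25, 3.75]' -- store in a TEXT column

    restored = tuple(json.loads(payload))  # back to (1.5, 2.25, 3.75)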
1
0
0
Inserting python tuple in a MySQL database
3
python,mysql
0
2009-08-21T16:39:00.000
To deploy a site with Python/Django/MySQL I had to do these on the server (RedHat Linux): Install MySQLPython Install ModPython Install Django (using python setup.py install) Add some directives in the httpd.conf file (or use .htaccess) But, when I deployed another site with PHP (using CodeIgniter) I had to do nothing. I faced some problems while deploying a Django project on a shared server. Now, my questions are: Can the deployment process of a Django project be made easier? Am I doing too much? Can some of the steps be omitted? What is the best way to deploy a Django site on a shared server?
3
1
0.028564
0
false
1,314,005
1
1,978
1
0
0
1,313,989
You didn't have to do anything when deploying a PHP site because your hosting provider had already installed it. Web hosts which support Django typically install and configure it for you.
1
0
0
How can Django projects be deployed with minimal installation work?
7
python,django
0
2009-08-21T20:11:00.000
Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell. Rather than translate a row from the database into an object: each table is represented by a class a row is retrieved as a dict an object representing a cursor provides access to a table like so: cursor.mytable.get_by_ids(low, high) removing means setting the time_of_removal to the current time So essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row. Type mapping is trivial because each dict (row) being a first class object in python/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types. If you see any potential problems with going down this road, please let me know. Thanks.
3
8
1.2
0
true
1,319,598
0
569
2
0
0
1,319,585
That doesn't do away with the need for an ORM. That is an ORM. In which case, why reinvent the wheel? Is there a compelling reason you're trying to avoid using an established ORM?
1
0
0
Is this a good approach to avoid using SQLAlchemy/SQLObject?
3
python,sqlalchemy,sqlobject
0
2009-08-23T21:15:00.000
Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell. Rather than translate a row from the database into an object: each table is represented by a class a row is retrieved as a dict an object representing a cursor provides access to a table like so: cursor.mytable.get_by_ids(low, high) removing means setting the time_of_removal to the current time So essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row. Type mapping is trivial because each dict (row) being a first class object in python/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types. If you see any potential problems with going down this road, please let me know. Thanks.
3
2
0.132549
0
false
1,319,662
0
569
2
0
0
1,319,585
You will still be using SQLAlchemy. A ResultProxy row is actually dictionary-like once you go for .fetchmany() or similar. Use SQLAlchemy as a tool that makes managing connections easier, as well as executing statements. The documentation is pretty much separated into sections, so you will be reading just the part that you need.
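For example (the table and sample data here are placeholders):

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///:memory:")
    engine.execute("CREATE TABLE mytable (id INTEGER, name TEXT)")
    engine.execute("INSERT INTO mytable VALUES (1, 'alice')")

    result = engine.execute(
        text("SELECT id, name FROM mytable WHERE id BETWEEN :lo AND :hi"),
        lo=1, hi=100)
    for row in result.fetchmany(50):
        print(row["name"])  # each row is dict-like as well as tuple-like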
1
0
0
Is this a good approach to avoid using SQLAlchemy/SQLObject?
3
python,sqlalchemy,sqlobject
0
2009-08-23T21:15:00.000
I am writing a python script that will be doing some processing on text files. As part of that process, i need to import each line of the tab-separated file into a local MS SQL Server (2008) table. I am using pyodbc and I know how to do this. However, I have a question about the best way to execute it. I will be looping through the file, creating a cursor.execute(myInsertSQL) for each line of the file. Does anyone see any problems waiting to commit the statements until all records have been looped (i.e. doing the commit() after the loop and not inside the loop after each individual execute)? The reason I ask is that some files will have upwards of 5000 lines. I didn't know if trying to "save them up" and committing all 5000 at once would cause problems. I am fairly new to python, so I don't know all of these issues yet. Thanks.
1
0
1.2
0
true
1,325,524
0
3,467
1
0
0
1,325,481
If I understand what you are doing, Python is not going to be a problem. Executing a statement inside a transaction does not create cumulative state in Python. It will do so only at the database server itself. When you commit you will need to make sure the commit occurred, since having a large batch commit may conflict with intervening changes in the database. If the commit fails, you will have to re-run the batch again. That's the only problem that I am aware of with large batches and Python/ODBC (and it's not even really a Python problem, since you would have that problem regardless.) Now, if you were creating all the SQL in memory, and then looping through the memory-representation, that might make more sense. Still, 5000 lines of text on a modern machine is really not that big of a deal. If you start needing to process two orders of magnitude more, you might need to rethink your process.
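A hedged sketch of that loop with a single commit at the end, using executemany to cut down round trips; the DSN, table, and column names are placeholders:

    import pyodbc

    conn = pyodbc.connect("DSN=mydsn")
    cursor = conn.cursor()

    rows = []
    with open("data.txt") as f:
        for line in f:
            rows.append(line.rstrip("\n").split("\t"))

    # All ~5000 inserts go into one transaction, committed once.
    cursor.executemany("INSERT INTO mytable (col1, col2) VALUES (?, ?)", rows)
    conn.commit()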
1
0
1
Importing a text file into SQL Server in Python
2
python,database,odbc,commit,bulkinsert
0
2009-08-25T00:30:00.000
I am interested in monitoring some objects. I expect to get about 10000 data points every 15 minutes. (Maybe not at first, but this is the 'general ballpark'). I would also like to be able to get daily, weekly, monthly and yearly statistics. It is not critical to keep the data in the highest resolution (15 minutes) for more than two months. I am considering various ways to store this data, and have been looking at a classic relational database, or at a schemaless database (such as SimpleDB). My question is, what is the best way to go along doing this? I would very much prefer an open-source (and free) solution to a proprietary costly one. Small note: I am writing this application in Python.
17
1
0.039979
0
false
1,335,132
0
13,739
1
0
0
1,334,813
Plain text files? It's not clear what your 10k data points per 15 minutes translates to in terms of bytes, but in any case text files are easier to store/archive/transfer/manipulate and you can inspect them directly, just by looking at them. They are fairly easy to work with from Python, too.
1
0
0
What is the best open source solution for storing time series data?
5
python,database,statistics,time-series,schemaless
0
2009-08-26T13:47:00.000
I'm using Django 1.1 with Mysql 5.* and MyISAM tables. Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out. I'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down. How do I avoid this? Can I set maximum query times?
7
0
0
0
false
1,500,947
1
4,741
3
0
0
1,353,206
You shouldn't write queries like that, at least not to run against your live database. MySQL has a "slow queries" parameter which you can use to identify the queries that are killing you. Most of the time, these slow queries are either buggy or can be sped up by defining a new index or two.
1
0
0
Django: How can you stop long queries from killing your database?
6
python,mysql,django,timeout
0
2009-08-30T06:10:00.000
I'm using Django 1.1 with Mysql 5.* and MyISAM tables. Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out. I'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down. How do I avoid this? Can I set maximum query times?
7
1
1.2
0
true
1,353,862
1
4,741
3
0
0
1,353,206
Unfortunately MySQL doesn't allow you an easy way to avoid this. A common method is basically to write a script that checks all running processes every X seconds (based on what you think is "long") and kills the ones it sees are running too long. You can at least get some basic diagnostics, however, by setting log_slow_queries in MySQL, which will write all queries that take longer than 10 seconds into a log. If that's too long for what you regard as "slow" for your purposes, you can set long_query_time to a value other than 10 to change the threshold.
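A rough sketch of such a watchdog script, to be run from cron; the credentials and the 30-second threshold are placeholders:

    import MySQLdb

    MAX_SECONDS = 30  # your own definition of "too long"

    conn = MySQLdb.connect(host="localhost", user="root", passwd="pw")
    cur = conn.cursor()
    cur.execute("SHOW FULL PROCESSLIST")
    # Columns: Id, User, Host, db, Command, Time, State, Info
    for pid, user, host, db, command, time, state, info in cur.fetchall():
        if command == "Query" and time > MAX_SECONDS:
            cur.execute("KILL %s" % int(pid))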
1
0
0
Django: How can you stop long queries from killing your database?
6
python,mysql,django,timeout
0
2009-08-30T06:10:00.000
I'm using Django 1.1 with Mysql 5.* and MyISAM tables. Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out. I'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down. How do I avoid this? Can I set maximum query times?
7
0
0
0
false
1,353,366
1
4,741
3
0
0
1,353,206
Do you know what the queries are? Maybe you could optimise the SQL or put some indexes on your tables?
1
0
0
Django: How can you stop long queries from killing your database?
6
python,mysql,django,timeout
0
2009-08-30T06:10:00.000
I have a problem reading a txt file to insert into a MySQL db table. Here is the snippet of this code: the file contains in its first line: "aclaración" archivo = open('file.txt',"r") for line in archivo.readlines(): ....body = body + line model = MyModel(body=body) model.save() I get a DjangoUnicodeDecodeError: 'utf8' codec can't decode bytes in position 8: invalid data. You passed in 'aclaraci\xf3n' (type 'str') Unicode error hint The string that could not be encoded/decoded was: araci�n. I tried body.decode('utf-8'), body.decode('latin-1'), body.decode('iso-8859-1') without success. Can you help me please? Any hint is appreciated :)
1
5
1.2
0
true
1,355,303
1
3,312
1
0
0
1,355,285
Judging from the \xf3 code for 'ó', it does look like the data is encoded in ISO-8859-1 (or some close relative). So body.decode('iso-8859-1') should be a valid Unicode string (you don't specify what "without solution" means -- what error message do you get, and where?); if what you need is a utf-8 encoded bytestring instead, body.decode('iso-8859-1').encode('utf-8') should give you one!
1
0
0
Latin letters with acute : DjangoUnicodeDecodeError
1
python,django,utf-8,character-encoding
0
2009-08-31T00:11:00.000
I'm building a fairly large enterprise application made in Python that in its first version will require a network connection. I've been thinking of keeping some user settings stored in the database, instead of in a file in the user's home folder. Some of the advantages I've thought of are: the user can change computers, keeping all their settings; settings can be backed up along with the rest of the system's data (not a big concern). What would be some of the caveats of this approach?
2
5
0.244919
0
false
1,365,175
0
650
4
0
0
1,365,164
One caveat might depend on where the user is using the application from. For example, if they use two computers with different screen resolutions, and 'selected zoom/text size' is one of the things you associate with the user, it might not always be suitable. It depends what kind of settings you intend to allow the user to customize. My workplace still has some users trapped on tiny LCD screens with a max res of 800x600, and we have to account for those when developing.
1
0
0
Is storing user configuration settings on database OK?
4
python,database,settings
0
2009-09-01T23:32:00.000
I'm building a fairly large enterprise application made in Python that in its first version will require a network connection. I've been thinking of keeping some user settings stored in the database, instead of in a file in the user's home folder. Some of the advantages I've thought of are: the user can change computers, keeping all their settings; settings can be backed up along with the rest of the system's data (not a big concern). What would be some of the caveats of this approach?
2
3
0.148885
0
false
1,365,176
0
650
4
0
0
1,365,164
Do you need the database to run any part of the application? If that's the case there are no reasons not to store the config inside the DB. You already mentioned the benefits and there are no downsides.
1
0
0
Is storing user configuration settings on database OK?
4
python,database,settings
0
2009-09-01T23:32:00.000
I'm building a fairly large enterprise application made in Python that in its first version will require a network connection. I've been thinking of keeping some user settings stored in the database, instead of in a file in the user's home folder. Some of the advantages I've thought of are: the user can change computers, keeping all their settings; settings can be backed up along with the rest of the system's data (not a big concern). What would be some of the caveats of this approach?
2
3
0.148885
0
false
1,365,183
0
650
4
0
0
1,365,164
It's perfectly reasonable to keep user settings in the database, as long as the settings pertain to the application independent of user location. One possible advantage of a file in the user's home folder is that users can send settings to one another. You may of course regard this as an advantage or a disadvantage :-)
1
0
0
Is storing user configuration settings on database OK?
4
python,database,settings
0
2009-09-01T23:32:00.000
I'm building a fairly large enterprise application made in Python that in its first version will require a network connection. I've been thinking of keeping some user settings stored in the database, instead of in a file in the user's home folder. Some of the advantages I've thought of are: the user can change computers, keeping all their settings; settings can be backed up along with the rest of the system's data (not a big concern). What would be some of the caveats of this approach?
2
8
1.2
0
true
1,365,178
0
650
4
0
0
1,365,164
This is pretty standard. Go for it. The caveat is that when you take the database down for maintenance, no one can use the app because their profile is inaccessible. You can either solve that by making a 100%-on db solution, or, more easily, through some form of caching of profiles locally (an "offline" mode of operations). That would allow your app to function whether the user or the db are off the network.
1
0
0
Is storing user configuration settings on database OK?
4
python,database,settings
0
2009-09-01T23:32:00.000
I am learning Python and creating a database connection. While trying to add to the DB, I am thinking of creating tuples out of information and then add them to the DB. What I am Doing: I am taking information from the user and store it in variables. Can I add these variables into a tuple? Can you please help me with the syntax? Also if there is an efficient way of doing this, please share... EDIT Let me edit this question a bit...I only need the tuple to enter info into the DB. Once the information is added to the DB, should I delete the tuple? I mean I don't need the tuple anymore.
353
9
1
0
false
1,381,304
0
728,594
1
0
0
1,380,860
" once the info is added to the DB, should I delete the tuple? i mean i dont need the tuple anymore." No. Generally, there's no reason to delete anything. There are some special cases for deleting, but they're very, very rare. Simply define a narrow scope (i.e., a function definition or a method function in a class) and the objects will be garbage collected at the end of the scope. Don't worry about deleting anything. [Note. I worked with a guy who -- in addition to trying to delete objects -- was always writing "reset" methods to clear them out. Like he was going to save them and reuse them. Also a silly conceit. Just ignore the objects you're no longer using. If you define your functions in small-enough blocks of code, you have nothing more to think about.]
1
0
1
Add Variables to Tuple
8
python,tuples
0
2009-09-04T18:36:00.000
The SQLite docs specify that the preferred format for storing datetime values in the DB is Julian Day (using built-in functions). However, all frameworks I have seen in Python (pysqlite, SQLAlchemy) store datetime.datetime values as ISO formatted strings. Why are they doing so? I usually try to adapt the frameworks to store datetimes as julianday, and it's quite painful. I have started to doubt that it is worth the effort. Please share your experience in this field with me. Does sticking with julianday make sense?
8
6
1
0
false
1,386,154
0
1,707
2
0
0
1,386,093
Julian Day is handy for all sorts of date calculations, but it can't store the time part decently (with precise hours, minutes, and seconds). In the past I've used both Julian Day fields (for dates) and seconds-from-the-Epoch (for datetime instances), but only when I had specific needs for computation (of dates and of times, respectively). The simplicity of ISO formatted dates and datetimes, I think, should make them the preferred choice, say about 97% of the time.
1
0
0
Shall I bother with storing DateTime data as julianday in SQLite?
4
python,datetime,sqlite,sqlalchemy,pysqlite
0
2009-09-06T16:43:00.000
The SQLite docs specify that the preferred format for storing datetime values in the DB is Julian Day (using built-in functions). However, all frameworks I have seen in Python (pysqlite, SQLAlchemy) store datetime.datetime values as ISO formatted strings. Why are they doing so? I usually try to adapt the frameworks to store datetimes as julianday, and it's quite painful. I have started to doubt that it is worth the effort. Please share your experience in this field with me. Does sticking with julianday make sense?
8
0
0
0
false
3,089,486
0
1,707
2
0
0
1,386,093
Because 2010-06-22 00:45:56 is far easier for a human to read than 2455369.5318981484. Text dates are great for doing ad-hoc queries in SQLiteSpy or SQLite Manager. The main drawback, of course, is that text dates require 19 bytes instead of 8.
1
0
0
Shall I bother with storing DateTime data as julianday in SQLite?
4
python,datetime,sqlite,sqlalchemy,pysqlite
0
2009-09-06T16:43:00.000
I have an object that is basically a Python implementation of an Oracle sequence. For a variety of reasons, we have to get the nextval of an Oracle sequence, count up manually when determining primary keys, then update the sequence once the records have been inserted. So here's the steps my object does: Construct an object, with a key_generator attribute initially set to None. Get the first value from the database, passing it to an itertools.count. Return keys from that generator using a property next_key. I'm a little bit unsure about where to do step 2. I can think of three possibilities: Skip step 1 and do step 2 in the constructor. I find this evil because I tend to dislike doing this kind of initialization in a constructor. Make next_key get the starting key from the database the first time it is called. I find this evil because properties are typically assumed to be trivial. Make next_key into a get_next_key method. I dislike this because properties just seem more natural here. Which is the lesser of 3 evils? I'm leaning towards #2, because only the first call to this property will result in a database query.
2
2
0.132549
0
false
1,386,258
0
190
2
0
0
1,386,210
I agree that attribute access and everything that looks like it (i.e. properties in the Python context) should be fairly trivial. If a property is going to perform a potentially costly operation, use a method to make this explicit. I recommend a name like "fetch_XYZ" or "retrieve_XYZ", since "get_XYZ" is used in some languages (e.g. Java) as a convention for simple attribute access, is quite generic, and does not sound "costly" either. A good guideline is: If your property could throw an exception that is not due to a programming error, it should be a method. For example, throwing a (hypothetical) DatabaseConnectionError from a property is bad, while throwing an ObjectStateError would be okay. Also, when I understood you correctly, you want to return the next key, whenever the next_key property is accessed. I recommend strongly against having side-effects (apart from caching, cheap lazy initialization, etc.) in your properties. Properties (and attributes for that matter) should be idempotent.
1
0
0
Should properties do nontrivial initialization?
3
python,properties,initialization
0
2009-09-06T17:35:00.000
I have an object that is basically a Python implementation of an Oracle sequence. For a variety of reasons, we have to get the nextval of an Oracle sequence, count up manually when determining primary keys, then update the sequence once the records have been inserted. So here's the steps my object does: Construct an object, with a key_generator attribute initially set to None. Get the first value from the database, passing it to an itertools.count. Return keys from that generator using a property next_key. I'm a little bit unsure about where to do step 2. I can think of three possibilities: Skip step 1 and do step 2 in the constructor. I find this evil because I tend to dislike doing this kind of initialization in a constructor. Make next_key get the starting key from the database the first time it is called. I find this evil because properties are typically assumed to be trivial. Make next_key into a get_next_key method. I dislike this because properties just seem more natural here. Which is the lesser of 3 evils? I'm leaning towards #2, because only the first call to this property will result in a database query.
2
0
0
0
false
1,389,673
0
190
2
0
0
1,386,210
I've decided that the key smell in the solution I'm proposing is that the property I was creating contained the word "next" in it. Thus, instead of making a next_key property, I've decided to turn my DatabaseIntrospector class into a KeyCounter class and implemented the iterator protocol (i.e. made a plain old next method that returns the next key).
1
0
0
Should properties do nontrivial initialization?
3
python,properties,initialization
0
2009-09-06T17:35:00.000
What is the best way to use an embedded database, say sqlite, in Python? It should have a small footprint. I'm only needing a few thousand records per table, and just a handful of tables per database. If it's the one provided by the default Python installation, then great. It must be open-source, available on Windows and Linux. Better if SQL is not written directly, but no full ORM is needed: something that will shield me from the actual database, but not a huge library. Something similar to ADO would be great. It will mostly be used through code, but if there is a GUI front end, that is great too. I need just a few pages to get started with. I don't want to go through pages reading what a table is and how a SELECT statement works. I know all of that. Support for Python 3 is preferred, but 2.x is okay too. The usage is not a web app. It's a small database to hold at most 5 tables. The data in each table is just a few string columns. Think of something just larger than a pickled dictionary. Update: Many thanks for the great suggestions. The use-case I'm talking about is fairly simple, one you'd probably do in a day or two. It's a 100ish-line Python script that gathers data about a relatively large number of files (say 10k), creates metadata files about them, and then one large metadata file about the whole file tree. I just need to avoid re-processing the files already processed, create the metadata for the updated files, and update the main metadata file. In a way, cache the processed data and only update it on file updates. If the cache is corrupt or unavailable, then simply process the whole tree. It might take 20 minutes, but that's okay. Note that all processing is done in-memory. I would like to avoid any external dependencies, so that the script can easily be put on any system with just a Python installation on it. Being on Windows, it is sometimes hard to get all the components installed. So, in my opinion, even a database might be overkill. You probably wouldn't fire up Office Word/Writer to write a small post-it-type note; similarly, I am reluctant to use something like Django for this use-case. Where to start?
7
0
0
0
false
1,407,345
0
5,927
1
0
0
1,407,248
Django is perfect for this but the poster is not clear if he needs to actually make a compiled EXE or a web app. Django is only for web apps. I'm not sure where you really get "heavy" from. Django is grossly smaller in terms of lines of code than any other major web app framework.
1
0
0
python database / sql programming - where to start
9
python,database,sqlite,ado
0
2009-09-10T19:31:00.000
I have started learning Python by writing a small application using Python 3.1 and py-PostgreSQL. Now I want to turn it into a web application. But it seems that most frameworks such as web-py, Django, zope are still based on Python 2.x. Unfortunately, py-PostgreSQL is incompatible with Python 2.x. Do I have to rewrite all my classes and replace py-PostgreSQL with something supported by web-py etc., or is there a framework compatible with Python 3.1? Or maybe py-PostgreSQL is compatible with 2.x but I did not figure it out?
1
0
0
0
false
1,934,744
1
1,695
1
0
0
1,423,000
Even though it's not officially released yet, I am currently 'playing around' with CherryPy 3.2.0rc1 with Python 3.1.1 and have had no problems yet. Haven't used it with py-postgresql, but I don't see why it shouldn't work. Hope this helps, Alan
1
0
0
web framework compatible with python 3.1 and py-postgresql
4
python,web-applications,python-3.x,wsgi
0
2009-09-14T17:56:00.000
I am just starting out with the MySQLdb module for python, and upon running some SELECT and UPDATE queries, the following gets output: Exception _mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL server during query') in bound method Cursor.__del__ of MySQLdb.cursors.Cursor object at 0x8c0188c ignored The exception is apparently getting caught (and "ignored") by MySQLdb itself, so I guess this is not a major issue. Also, the SELECTs generate results and the table gets modified by UPDATE. But, since I am just getting my feet wet with this, I want to ask: does this message suggest I am doing something wrong? Or have you seen these warnings before in harmless situations? Thanks for any insight, lara
0
0
1.2
0
true
1,439,734
0
2,148
1
0
0
1,439,616
Ha! Just realized I was trying to use the cursor after having closed the connection! In any case, it was nice writing! : ) l
1
0
0
have you seen? _mysql_exceptions.OperationalError "Lost connection to MySQL server during query" being ignored
1
python,mysql
0
2009-09-17T15:33:00.000
I have configured pgpool-II for postgres connection pooling and I want to disable psycopg2 connection pooling. How do I do this? Thanks!
0
6
1.2
0
true
1,492,172
0
1,426
1
0
0
1,440,245
psycopg2 doesn't pool connections unless you explicitly use the psycopg2.pool module.
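That is, a plain connect() hands you one real connection with no pooling behind it (the connection string here is a placeholder):

    import psycopg2

    # One physical connection; pooling only happens if you explicitly
    # build a pool via the psycopg2.pool module yourself.
    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchone())
    conn.close()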
1
0
0
How do I disable psycopg2 connection pooling?
2
python,psycopg2
0
2009-09-17T17:34:00.000
I am trying to install pysqlite and am having trouble with that. I found out that the most probable reason is missing sqlite headers and that I have to install them. However, I have no idea what these headers are (where I can find them, what they do and how to install them). Can anybody please help me with that?
12
0
0
0
false
1,462,623
0
17,278
2
0
0
1,462,565
pysqlite needs to be compiled/built before you can use it. This requires C language header files (*.h) which come with the source code of sqlite itself, i.e. sqlite and pysqlite are two different things. Did you install sqlite prior to trying to build pysqlite? (Or maybe you did, but only with the binaries; you need the source package (or at least its headers) for pysqlite purposes.)
1
0
0
What are sqlite development headers and how to install them?
3
python,header,pysqlite
0
2009-09-22T20:57:00.000
I am trying to install pysqlite and am having trouble with that. I found out that the most probable reason is missing sqlite headers and that I have to install them. However, I have no idea what these headers are (where I can find them, what they do and how to install them). Can anybody please help me with that?
12
7
1
0
false
5,671,345
0
17,278
2
0
0
1,462,565
For me this worked (Redhat/CentOS): $ sudo yum install sqlite-devel
1
0
0
What are sqlite development headers and how to install them?
3
python,header,pysqlite
0
2009-09-22T20:57:00.000
I am trying to get started on working with Python on Django. I am by profession a PHP developer and have been told to set up Django and Python on my current Apache and MySQL setup; however, I am having trouble getting the MySQLdb module for Python to work. I must have followed about 6 different sets of instructions. I am running Snow Leopard and have MySQL installed natively; it is not part of MAMP or similar. Please can someone tell me where I need to start and what steps I need to follow. I would be most grateful. Thanks
7
7
1
0
false
6,537,345
1
7,786
1
0
0
1,465,846
On MAC OS X 10.6, install the package as usual. The dynamic import error occurs because of a wrong DYLD path. Export the path and open up a Python terminal.

    $ sudo python setup.py clean
    $ sudo python setup.py build
    $ sudo python setup.py install
    $ export DYLD_LIBRARY_PATH=/usr/local/mysql/lib:$DYLD_LIBRARY_PATH
    $ python
    >>> import MySQLdb

Now import MySQLdb should work fine. You may also want to manually remove the build folder before build and install. The clean command does not do a proper task of cleaning up the build files.
1
0
0
Install mysqldb on snow leopard
8
python,mysql,django,osx-snow-leopard
0
2009-09-23T13:01:00.000
Can Python be used to query a SAP database?
36
4
0.113791
0
false
1,467,921
0
48,208
2
0
0
1,466,917
SAP is NOT a database server. But with the Python SAP RFC module you can query most tables quite easily. It uses an SAP-unsupported function (that all the world is using), and this function has some limitations on field size and datatypes.
1
0
0
Query SAP database from Python?
7
python,abap,sap-basis,pyrfc
0
2009-09-23T15:55:00.000
Can Python be used to query a SAP database?
36
1
0.028564
0
false
59,210,473
0
48,208
2
0
0
1,466,917
Python is one of the most used object-oriented programming languages and is very easy to code and understand. In order to use Python with SAP, we need to install the Python SAP RFC module, known as PyRFC. One of its available methods is RFC_READ_TABLE, which can be called to read data from a table in an SAP database. The PyRFC package also provides various bindings which can be utilized to make calls either way: from ABAP modules to Python modules, or the other way round. One can define equivalent SAP data types which are used in data exchange. Also, we can create a Web Service in Python which can be used for inter-communication. SAP NetWeaver is fully compatible with web services, either stateful or stateless.
1
0
0
Query SAP database from Python?
7
python,abap,sap-basis,pyrfc
0
2009-09-23T15:55:00.000
Is there a python equivalent of phpMyAdmin? Here's why I'm looking for a python version of phpmyadmin: While I agree that phpmyadmin really rocks, I don't want to run php on my server. I'd like to move from apache2-prefork to apache2-mpm-worker. Worker blows the doors off of prefork for performance, but php5 doesn't work with worker. (Technically it does, but it's far more complicated.) The extra memory and performance penalty for having php on this server is large to me.
33
12
1
0
false
1,480,549
0
23,600
1
0
0
1,480,453
You can use phpMyAdmin for a Python project, because phpMyAdmin is meant for MySQL databases. If you are using MySQL, then regardless of whether you are using PHP or Python, you can use phpMyAdmin.
1
0
0
phpMyAdmin equivalent in python?
4
python,phpmyadmin
1
2009-09-26T04:51:00.000
I'm using SQLAlchemy and I can create tables that I have defined in /model/__init__.py but I have defined my classes, tables and their mappings in other files found in the /model directory. For example I have a profile class and a profile table which are defined and mapped in /model/profile.py To create the tables I run: paster setup-app development.ini But my problem is that the tables that I have defined in /model/__init__.py are created properly but the table definitions found in /model/profile.py are not created. How can I execute the table definitions found in the /model/profile.py so that all my tables can be created? Thanks for the help!
3
0
0
0
false
1,483,061
0
1,155
3
0
0
1,482,627
Just import your other tables' modules in your __init__.py, and use the metadata object from model.meta in the other files. Pylons' default setup_app function creates all tables found on the metadata object from model.meta after importing it.
1
0
0
Creating tables with pylons and SQLAlchemy
3
python,sqlalchemy,pylons
0
2009-09-27T02:19:00.000
I'm using SQLAlchemy and I can create tables that I have defined in /model/__init__.py but I have defined my classes, tables and their mappings in other files found in the /model directory. For example I have a profile class and a profile table which are defined and mapped in /model/profile.py To create the tables I run: paster setup-app development.ini But my problem is that the tables that I have defined in /model/__init__.py are created properly but the table definitions found in /model/profile.py are not created. How can I execute the table definitions found in the /model/profile.py so that all my tables can be created? Thanks for the help!
3
5
0.321513
0
false
1,528,312
0
1,155
3
0
0
1,482,627
I ran into the same problem with my first real Pylons project. The solution that worked for me was this: define tables and classes in your profile.py file; in your __init__.py add from profile import * after your def init_model. I then added all of my mapper definitions afterwards. Keeping them all in the init file solved some problems I was having relating tables defined in different files. Also, I've since created projects using the declarative method and didn't need to define the mapping in the init file.
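A minimal sketch of that __init__.py layout; the myapp package name, and the assumption that profile.py registers its tables on meta.metadata, are mine:

    # myapp/model/__init__.py
    from myapp.model import meta

    def init_model(engine):
        meta.engine = engine

    # Import after init_model so that paster setup-app sees profile_table
    # (and friends) registered on meta.metadata and creates them.
    from myapp.model.profile import *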
1
0
0
Creating tables with pylons and SQLAlchemy
3
python,sqlalchemy,pylons
0
2009-09-27T02:19:00.000
I'm using SQLAlchemy and I can create tables that I have defined in /model/__init__.py but I have defined my classes, tables and their mappings in other files found in the /model directory. For example I have a profile class and a profile table which are defined and mapped in /model/profile.py To create the tables I run: paster setup-app development.ini But my problem is that the tables that I have defined in /model/__init__.py are created properly but the table definitions found in /model/profile.py are not created. How can I execute the table definitions found in the /model/profile.py so that all my tables can be created? Thanks for the help!
3
0
0
0
false
1,485,719
0
1,155
3
0
0
1,482,627
If you are using the declarative style, be sure to use Base.metadata for table generation.
1
0
0
Creating tables with pylons and SQLAlchemy
3
python,sqlalchemy,pylons
0
2009-09-27T02:19:00.000
I have been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally cross at having MySQL installed, since a mysql_config is needed for the module. But I don't understand why it has to be needed. MySQLdb is supposed to be a client module for the client who wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?
1
1
0.039979
0
false
1,483,154
0
432
3
0
0
1,483,024
What it needs is the client library and headers that come with the server, since it is just a Python wrapper (which sits in _mysql.c, with the DB-API interface to that wrapper in the MySQLdb package) over the original C MySQL API.
1
0
0
Why MySQLdb for Mac has to have MySQL installed to install?
5
python,mysql
0
2009-09-27T07:31:00.000
I have been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally cross at having MySQL installed, since a mysql_config is needed for the module. But I don't understand why it has to be needed. MySQLdb is supposed to be a client module for the client who wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?
1
1
0.039979
0
false
1,483,030
0
432
3
0
0
1,483,024
I'm not sure about the specifics of MySQLdb, but most likely it needs header information to compile/install. It uses the location of mysql_config to know where the appropriate headers would be. The MySQL Gem for Ruby on Rails requires the same thing, even though it simply connects to the MySQL server.
1
0
0
Why MySQLdb for Mac has to have MySQL installed to install?
5
python,mysql
0
2009-09-27T07:31:00.000
I have been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally cross at having MySQL installed, since a mysql_config is needed for the module. But I don't understand why it has to be needed. MySQLdb is supposed to be a client module for the client who wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?
1
1
0.039979
0
false
1,483,305
0
432
3
0
0
1,483,024
Just to clarify what the other answerers have said: you don't need to install a MySQL server, but you do need to install the MySQL client libraries. However, for whatever reasons, MySQL don't make a separate download available for just the client libraries, as they do for Linux.
1
0
0
Why MySQLdb for Mac has to have MySQL installed to install?
5
python,mysql
0
2009-09-27T07:31:00.000
I just upgraded the default Python 2.5 on Leopard to 2.6 via the installer on www.python.org. Upon doing so, the MySQLdb I had installed was no longer found. So I tried reinstalling it via port install py-mysql, and it succeeded, but MySQLdb was still not importable. So then I tried port install python26 with python_select python26 and it succeeded, but it doesn't appear that it is getting precedence over the python.org install: $ which python /Library/Frameworks/Python.framework/Versions/2.6/bin/python When I would expect it to be something like /opt/local/bin/python My path environment is: /Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/local/mysql/bin/:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/mysql/bin:/Users/bsr/bin Anyway, when I run port install py-mysql, how does it know where to install the Python MySQL library?
0
1
0.099668
0
false
2,302,542
0
1,051
1
1
0
1,499,572
You also need python_select (or is it select_python?) to change the default python used.
1
0
0
With multiple Python installs, how does MacPorts know which one to install MySQLdb for?
2
python,mysql,macos
0
2009-09-30T17:32:00.000
A quick SQLAlchemy question... I have a class "Document" with attributes "Number" and "Date". I need to ensure that there's no duplicated number for the same year, is there a way to have a UniqueConstraint on "Number + year(Date)"? Should I use a unique Index instead? How would I declare the functional part? (SQLAlchemy 0.5.5, PostgreSQL 8.3.4) Thanks in advance!
3
-1
-0.099668
0
false
1,510,137
0
890
1
0
0
1,510,018
I'm pretty sure that unique constraints can only be applied to actual stored columns, and not to runtime-calculated expressions. Hence, you would need to create an extra column which contains the year part of your date, over which you could create a unique constraint together with number. To best use this approach, maybe you should store your date split up into three separate columns containing the day, month and year parts. This could be done using default constraints in the table definition.
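In SQLAlchemy terms, the suggested workaround would look something like this (the table and constraint names are mine, and keeping the year column in sync with the date is left to the application):

    from sqlalchemy import (Table, Column, Integer, Date, MetaData,
                            UniqueConstraint)

    metadata = MetaData()

    documents = Table(
        "documents", metadata,
        Column("id", Integer, primary_key=True),
        Column("number", Integer, nullable=False),
        Column("date", Date, nullable=False),
        Column("year", Integer, nullable=False),  # year part of the date
        UniqueConstraint("number", "year", name="uq_document_number_year"),
    )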
1
0
1
Compound UniqueConstraint with a function
2
python,sqlalchemy,constraints
0
2009-10-02T14:52:00.000
One of the features I like in RoR is the db management: it can hide all the SQL statements, and it is also very easy to change between different dbs in RoR. Is there any similar framework for Python 3000?
1
1
0.039979
0
false
1,510,491
0
614
3
0
0
1,510,084
Python 3 isn't ready for web applications right now. The WSGI 1.0 specification isn't suitable for Py3k and the related standard libraries are 2to3 hacks that don't work consistently faced with bytes vs. unicode. It's a real mess. WEB-SIG are bashing out proposals for a WSGI revision; hopefully it can move forward soon, because although Python 3 isn't mainstream yet it's certainly heading that way, and the brokenness of webdev is rather embarrassing.
1
0
0
Is there any framework like RoR on Python 3000?
5
python,ruby-on-rails,frameworks,python-3.x
0
2009-10-02T15:01:00.000
One of the features I like in RoR is the db management: it can hide all the SQL statements, and it is also very easy to change between different dbs in RoR. Is there any similar framework for Python 3000?
1
0
0
0
false
1,510,218
0
614
3
0
0
1,510,084
Python 3 is not ready for practical use, because there are not yet enough libraries that have been updated to support Python 3. So the answer is: no. But there are LOADS of them on Python 2. Tens, at least. Django, TurboGears, BFG and of course the old man of the game: Zope. To tell which is best for you, you need to expand your requirements a lot.
1
0
0
Is there any framework like RoR on Python 3000?
5
python,ruby-on-rails,frameworks,python-3.x
0
2009-10-02T15:01:00.000
One of the features I like in RoR is the db management: it can hide all the SQL statements, and it is also very easy to change between different dbs in RoR. Is there any similar framework for Python 3000?
1
2
0.07983
0
false
1,512,245
0
614
3
0
0
1,510,084
I believe CherryPy is on the verge of being released for Python 3.X.
1
0
0
Is there any framework like RoR on Python 3000?
5
python,ruby-on-rails,frameworks,python-3.x
0
2009-10-02T15:01:00.000
I want 3 columns to have 9 different values, like a list in Python. Is it possible? If not in SQLite, then on another database engine?
12
12
1
0
false
1,517,795
0
25,012
1
0
0
1,517,771
Generally, you do this by stringifying the list (with repr()), and then saving the string. On reading the string from the database, use eval() to re-create the list. Be careful, though, that you are certain no user-generated data can get into the column, or the eval() is a security risk.
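A short sketch of that round trip:

    import sqlite3

    values = [1, 2, 3, "a", "b", "c", 7.5, 8.5, 9.5]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (items TEXT)")

    # Stringify the list and store it in a plain TEXT column.
    conn.execute("INSERT INTO t (items) VALUES (?)", (repr(values),))

    # eval() re-creates the list -- safe only if no user data reaches this column.
    row = conn.execute("SELECT items FROM t").fetchone()
    restored = eval(row[0])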
1
0
0
Is it possible to save a list of values into a SQLite column?
3
python,sqlite
0
2009-10-05T00:26:00.000
Hi, I want some help in building a phone book application in Python and putting it on Google App Engine. I am running a huge db of 2 million user lists and their contacts in the phone book. I want to upload all that data from my servers directly onto the Google servers and then use a UI to retrieve the phone book contacts of each user based on his name. I am using MS SQL Server 2005 as my DB. Please help in putting together this application. Your inputs are much appreciated.
0
0
0
0
false
1,519,020
1
420
1
0
0
1,518,725
I think you're going to need to be more specific as to what problem you're having. As far as bulk loading goes, there's lots of bulkloader documentation around; or are you asking about model design? If so, we need to know more about how you plan to search for users. Do you need partial string matches? Sorting? Fuzzy matching?
1
0
0
Need help in designing a phone book application on python running on google app engine
2
python,google-app-engine,bulk-load
0
2009-10-05T07:50:00.000
I've been looking really hard at all of the ways one can develop web applications using Python. For reference, we are using RHEL 64bit, Apache, mod_wsgi. History: PHP + MySQL years ago; PHP + Python 2.x + MySQL recently and currently; Python + PostgreSQL, working on it. We use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% Python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done. A collection of useful helpers and utilities is much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1? I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is not a version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without error. Edit: The project that we are starting is not slated to be finished until nearly a year from now, at which point, I'm guessing, Python 3.x will be a lot more mainstream. Furthermore, considering that we are running the app (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there? Thoughts appreciated!
2
1
0.066568
0
false
1,622,505
1
2,259
3
1
0
1,523,706
I can only answer question one: I started using it for some small web stuff but have now moved on to reworking larger apps with it. Why Werkzeug? The modular concept is really helpful. You can hook in modules as you like, make stuff easily context-aware, and you get good request file handling for free which is able to cope with 300MB+ files by not storing them in memory. Disadvantages... Well, sometimes modularity needs some upfront thought (Django, f.ex., gives you everything all at once; stripping stuff out is hard to do there, though) but for me it works fine.
1
0
0
Werkzeug in General, and in Python 3.1
3
python,python-3.x,werkzeug
0
2009-10-06T05:13:00.000
I've been looking really hard at all of the ways one can develop web applications using Python. For reference, we are using RHEL 64bit, Apache, mod_wsgi. History: PHP + MySQL years ago; PHP + Python 2.x + MySQL recently and currently; Python + PostgreSQL, working on it. We use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% Python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done. A collection of useful helpers and utilities is much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1? I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is not a version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without error. Edit: The project that we are starting is not slated to be finished until nearly a year from now, at which point, I'm guessing, Python 3.x will be a lot more mainstream. Furthermore, considering that we are running the app (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there? Thoughts appreciated!
2
1
0.066568
0
false
1,523,934
1
2,259
3
1
0
1,523,706
I haven't used Werkzeug, so I can only answer question 2: no, Werkzeug does not work on Python 3. In fact, very little works on Python 3 as of today. Porting is not difficult, but you can't port until all your third-party libraries have been ported, so progress is slow. One big blocker has been setuptools, which is a very popular package. Setuptools is unmaintained, but there is a maintained fork called Distribute, which was released with Python 3 support just a week or two ago. I hope package support for Python 3 will pick up now, but it will still be a long time, at least months and probably a year or so, before any major project like Werkzeug is ported to Python 3.
1
0
0
Werkzeug in General, and in Python 3.1
3
python,python-3.x,werkzeug
0
2009-10-06T05:13:00.000
I've been looking really hard at all of the ways one can develop web applications using Python. For reference, we are using RHEL 64-bit, Apache, and mod_wsgi. History: PHP + MySQL years ago; PHP + Python 2.x + MySQL recently and currently; Python + PostgreSQL in progress. We use a great library for communicating between PHP and Python (interface in PHP, backend in Python). However, with a larger upcoming project starting, using 100% Python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done; a collection of useful helpers and utilities is much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) on why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1? I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is not such a version, what would it take to upgrade Werkzeug to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without errors. Edit: The project that we are starting is not slated to be finished until nearly a year from now, at which point I'm guessing Python 3.x will be a lot more mainstream. Furthermore, considering that we are running the app (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more or less already there? Thoughts appreciated!
2
3
0.197375
0
false
1,525,943
1
2,259
3
1
0
1,523,706
mod_wsgi for Python 3.x is also not ready. There is no satisfactory definition of WSGI for Python 3.x yet; the WEB-SIG are still bashing out the issues. mod_wsgi targets a guess at what might be in it, but there are very likely to be changes to both the spec and to standard libraries. Any web application you write today in Python 3.1 is likely to break in the future. It's a bit of a shambles. Today, for webapps you can only realistically use Python 2.x.
1
0
0
Werkzeug in General, and in Python 3.1
3
python,python-3.x,werkzeug
0
2009-10-06T05:13:00.000
I have a web service built with the Django framework. My friend's project is a WIN32 program backed by an MS-SQL server. The Win32 program currently has a login system that talks to MS-SQL for authentication. However, we would like to INTEGRATE the two login systems into one. Please answer these two things: I want to scrap MS-SQL and use only the Django authentication system on the Linux server. Can the WIN32 client talk to Django using a Django API (login)? If not, what is the best way of combining the authentication?
1
0
0
0
false
1,529,146
1
203
1
0
0
1,529,128
If the only thing the WIN32 app uses the MS-SQL Server for is authentication/authorization, then you could write a new authentication/authorization provider that uses a set of web services (which you would have to create) to expose the Django provider.
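As a hedged illustration of what such a web-service endpoint might look like, the Django view below validates credentials against Django's own auth system. The view name, URL wiring, and plain-text responses are assumptions made for the sketch, not anything specified in the answer; a real deployment would also need HTTPS and CSRF handling.

```python
# Hypothetical Django view the WIN32 app could POST credentials to.
from django.contrib.auth import authenticate
from django.http import HttpResponse, HttpResponseForbidden

def check_credentials(request):
    # Validate the posted username/password against Django's auth backend.
    user = authenticate(username=request.POST.get("username"),
                        password=request.POST.get("password"))
    if user is not None and user.is_active:
        return HttpResponse("OK")  # credentials valid
    return HttpResponseForbidden("invalid credentials")
```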
1
0
0
Can a WIN32 program authenticate into the Django authentication system, using MySQL?
2
python,windows,django,authentication,frameworks
0
2009-10-07T01:59:00.000
I have a web service built with the Django framework. My friend's project is a WIN32 program backed by an MS-SQL server. The Win32 program currently has a login system that talks to MS-SQL for authentication. However, we would like to INTEGRATE the two login systems into one. Please answer these two things: I want to scrap MS-SQL and use only the Django authentication system on the Linux server. Can the WIN32 client talk to Django using a Django API (login)? If not, what is the best way of combining the authentication?
0
1
1.2
0
true
1,581,622
1
103
1
0
0
1,533,259
The Win32 client can act like a web client to pass the user's credentials to the server. You will want to store the session cookie you get once you are authenticated and use that cookie in all subsequent requests.
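A rough Python sketch of that client-side flow, assuming the requests library and placeholder URLs and field names; note that a real Django login view would usually also require a CSRF token to be fetched and echoed back.

```python
# Log in once, then reuse the stored session cookie on later requests.
import requests

session = requests.Session()  # keeps cookies across requests automatically
resp = session.post("https://example.com/accounts/login/",
                    data={"username": "alice", "password": "secret"})
resp.raise_for_status()

# The session cookie set during login is now sent on every request.
profile = session.get("https://example.com/api/profile/")
print(profile.status_code)
```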
1
0
0
Can a WIN32 program authenticate into the Django authentication system, using MySQL?
1
python,mysql,windows,django
0
2009-10-07T02:00:00.000
I'm coding a small piece of server software for the personal use of several users. Not hundreds, not thousands, but perhaps 3-10 at a time. Since it's a threaded server, SQLite doesn't work. It complains about threads like this: ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140735085562848 and this is thread id 4301299712. Besides, they say SQLite isn't great for concurrency anyhow. Now, since I started working with Python 3 (and would rather continue using it), I can't seem to get the MySQL module to work properly, and others seem equally frustrated. In that case, is there any other DB option for Python 3 that I could consider?
3
1
0.033321
0
false
10,863,434
0
3,552
3
0
0
1,547,365
pymongo works with Python 3 now.
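For reference, a minimal round-trip with the pymongo 3.x-style API on Python 3 might look like this; the host, database, and collection names are placeholders.

```python
# Insert one document and read it back with pymongo (Python 3).
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
db = client["appdb"]
db.users.insert_one({"name": "alice"})
print(db.users.find_one({"name": "alice"}))
```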
1
0
0
A database for Python 3?
6
python,database,python-3.x
0
2009-10-10T08:09:00.000
I'm coding a small piece of server software for the personal use of several users. Not hundreds, not thousands, but perhaps 3-10 at a time. Since it's a threaded server, SQLite doesn't work. It complains about threads like this: ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140735085562848 and this is thread id 4301299712. Besides, they say SQLite isn't great for concurrency anyhow. Now, since I started working with Python 3 (and would rather continue using it), I can't seem to get the MySQL module to work properly, and others seem equally frustrated. In that case, is there any other DB option for Python 3 that I could consider?
3
0
0
0
false
1,547,384
0
3,552
3
0
0
1,547,365
You could create a new SQLite connection object in each thread, with each one using the same database file. With such a small number of users you might not run into concurrency problems unless they are all writing to it very heavily.
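One way to express "a new connection per thread" is with threading.local, sketched below; the database path and table schema are placeholders, not anything from the question.

```python
# Lazily create one SQLite connection per thread via threading.local,
# so no connection is ever shared across thread boundaries.
import sqlite3
import threading

_local = threading.local()

def get_conn():
    # Each thread creates its own connection on first use and keeps it.
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect("app.db")
    return _local.conn

def save_user(name):
    conn = get_conn()
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
```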
1
0
0
A database for Python 3?
6
python,database,python-3.x
0
2009-10-10T08:09:00.000
I'm coding a small piece of server software for the personal use of several users. Not hundreds, not thousands, but perhaps 3-10 at a time. Since it's a threaded server, SQLite doesn't work. It complains about threads like this: ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140735085562848 and this is thread id 4301299712. Besides, they say SQLite isn't great for concurrency anyhow. Now, since I started working with Python 3 (and would rather continue using it), I can't seem to get the MySQL module to work properly, and others seem equally frustrated. In that case, is there any other DB option for Python 3 that I could consider?
3
0
0
0
false
1,550,870
0
3,552
3
0
0
1,547,365
Surely a pragmatic option is to just use one SQLite connection per thread.
1
0
0
A database for Python 3?
6
python,database,python-3.x
0
2009-10-10T08:09:00.000
I am doing some Pylons work in a virtual Python environment. I want to use MySQL with SQLAlchemy, but I can't install the MySQLdb module in my virtual environment. I can't use easy_install because I am using a version that was compiled for Python 2.6 in .exe format, and running the installer from inside the virtual environment did not work either. Any suggestions?
0
0
1.2
0
true
1,563,869
0
442
1
0
0
1,557,972
OK, got it all figured out. After I installed the module in my normal Python 2.6 install, I went into my Python26 folder and, lo and behold, found a file called MySQL-python-wininst, which turned out to be a list of all of the installed module files. Basically it listed two folders, one called MySQLdb and another called MySQL_python-1.2.2-py2.6.egg-info, as well as three other files: _mysql.pyd, _mysql_exceptions.py, and _mysql_exceptions.pyc. So I went into the folder where they were located (Python26/Lib/site-packages), copied them to the virtualenv's site-packages folder (env/Lib/site-packages), and the module was fully functional! Note: all paths are the defaults.
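The same copy step could be scripted with shutil, roughly as below; the paths assume the default installs mentioned above (and a hypothetical virtualenv at C:\env) and will differ per machine.

```python
# Copy the MySQLdb package and its support files from the system
# site-packages into the virtualenv's site-packages (Windows paths).
import os
import shutil

SRC = r"C:\Python26\Lib\site-packages"
DST = r"C:\env\Lib\site-packages"

for folder in ("MySQLdb", "MySQL_python-1.2.2-py2.6.egg-info"):
    shutil.copytree(os.path.join(SRC, folder), os.path.join(DST, folder))

for fname in ("_mysql.pyd", "_mysql_exceptions.py", "_mysql_exceptions.pyc"):
    shutil.copy2(os.path.join(SRC, fname), DST)
```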
1
0
1
Install custom modules in a Python virtual environment
1
python,mysql,pylons,module,virtualenv
0
2009-10-13T02:43:00.000