[{"Question":"I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like \/usr\/lib\/python2.5\/site-packages\/mypackage\/).\nHow do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the __file__ variable in the module which accesses the database:\n\ndbname = os.path.join(os.path.dirname(__file__), \"database.dat\")\n\nIt works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a \"dbconfig.py\" file that gets installed alongside the code that accesses the database.","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":28993,"Q_Id":39104,"Users Score":19,"Answer":"Use pkgutil.get_data. It\u2019s the cousin of pkg_resources.resource_stream, but in the standard library, and should work with flat filesystem installs as well as zipped packages and other importers.","Q_Score":32,"Tags":"python,distutils","A_Id":9918496,"CreationDate":"2008-09-02T09:40:00.000","Title":"Finding a file in a Python module distribution","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like \/usr\/lib\/python2.5\/site-packages\/mypackage\/).\nHow do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the __file__ variable in the module which accesses the database:\n\ndbname = os.path.join(os.path.dirname(__file__), \"database.dat\")\n\nIt works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a \"dbconfig.py\" file that gets installed alongside the code that accesses the database.","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":28993,"Q_Id":39104,"Users Score":3,"Answer":"That's probably the way to do it, without resorting to something more advanced like using setuptools to install the files where they belong.\nNotice there's a problem with that approach, because on OSes with real a security framework (UNIXes, etc.) the user running your script might not have the rights to access the DB in the system directory where it gets installed.","Q_Score":32,"Tags":"python,distutils","A_Id":39295,"CreationDate":"2008-09-02T09:40:00.000","Title":"Finding a file in a Python module distribution","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"All the docs for SQLAlchemy give INSERT and UPDATE examples using the local table instance (e.g. tablename.update()... 
)\nDoing this seems difficult with the declarative syntax, I need to reference Base.metadata.tables[\"tablename\"] to get the table reference.\nAm I supposed to do this another way? Is there a different syntax for INSERT and UPDATE recommended when using the declarative syntax? Should I just switch to the old way?","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":2919,"Q_Id":75829,"Users Score":4,"Answer":"via the __table__ attribute on your declarative class","Q_Score":8,"Tags":"python,sql,sqlalchemy","A_Id":77962,"CreationDate":"2008-09-16T19:08:00.000","Title":"Best way to access table instances when using SQLAlchemy's declarative syntax","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"All the docs for SQLAlchemy give INSERT and UPDATE examples using the local table instance (e.g. tablename.update()... )\nDoing this seems difficult with the declarative syntax, I need to reference Base.metadata.tables[\"tablename\"] to get the table reference.\nAm I supposed to do this another way? Is there a different syntax for INSERT and UPDATE recommended when using the declarative syntax? Should I just switch to the old way?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2919,"Q_Id":75829,"Users Score":0,"Answer":"There may be some confusion between table (the object) and tablename (the name of the table, a string). Using the table class attribute works fine for me.","Q_Score":8,"Tags":"python,sql,sqlalchemy","A_Id":315406,"CreationDate":"2008-09-16T19:08:00.000","Title":"Best way to access table instances when using SQLAlchemy's declarative syntax","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.\nSimply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.\nWhat I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. 
I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.\nThe interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.\nI realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.\nEdit: Expanded description to clear up some misconceptions.","AnswerCount":9,"Available Count":6,"Score":0.022218565,"is_accepted":false,"ViewCount":2773,"Q_Id":140026,"Users Score":1,"Answer":"\"implement a Domain Specific Language\"\n\"nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime\"\nI want a DSL but I don't want Python to be that DSL. Okay. How will you execute this DSL? What runtime is acceptable if not Python?\nWhat if I have a C program that happens to embed the Python interpreter? Is that acceptable?\nAnd -- if Python is not an acceptable runtime -- why does this have a Python tag?","Q_Score":5,"Tags":"python,database,algorithm,dsl","A_Id":141872,"CreationDate":"2008-09-26T14:56:00.000","Title":"Writing a Domain Specific Language for selecting rows from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.\nSimply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.\nWhat I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.\nThe interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. 
You can also concatenate multiple independent result sets together.\nI realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.\nEdit: Expanded description to clear up some misconceptions.","AnswerCount":9,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2773,"Q_Id":140026,"Users Score":0,"Answer":"Why not create a language that when it \"compiles\" it generates SQL or whatever query language your datastore requires ?\nYou would be basically creating an abstraction over your persistence layer.","Q_Score":5,"Tags":"python,database,algorithm,dsl","A_Id":140066,"CreationDate":"2008-09-26T14:56:00.000","Title":"Writing a Domain Specific Language for selecting rows from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.\nSimply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.\nWhat I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.\nThe interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.\nI realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. 
Something much simpler would be far preferable.\nEdit: Expanded description to clear up some misconceptions.","AnswerCount":9,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2773,"Q_Id":140026,"Users Score":0,"Answer":"It really sounds like SQL, but perhaps it's worth to try using SQLite if you want to keep it simple?","Q_Score":5,"Tags":"python,database,algorithm,dsl","A_Id":140304,"CreationDate":"2008-09-26T14:56:00.000","Title":"Writing a Domain Specific Language for selecting rows from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.\nSimply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.\nWhat I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.\nThe interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.\nI realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.\nEdit: Expanded description to clear up some misconceptions.","AnswerCount":9,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2773,"Q_Id":140026,"Users Score":0,"Answer":"You mentioned Python. Why not use Python? 
If someone can \"type in\" an expression in your DSL, they can type in Python.\nYou'll need some rules on structure of the expression, but that's a lot easier than implementing something new.","Q_Score":5,"Tags":"python,database,algorithm,dsl","A_Id":140091,"CreationDate":"2008-09-26T14:56:00.000","Title":"Writing a Domain Specific Language for selecting rows from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.\nSimply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.\nWhat I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.\nThe interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.\nI realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.\nEdit: Expanded description to clear up some misconceptions.","AnswerCount":9,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2773,"Q_Id":140026,"Users Score":0,"Answer":"You said nobody is going to want to install a server that downloads and executes arbitrary code at runtime. However, that is exactly what your DSL will do (eventually) so there probably isn't that much of a difference. Unless you're doing something very specific with the data then I don't think a DSL will buy you that much and it will frustrate the users who are already versed in SQL. Don't underestimate the size of the task you'll be taking on.\nTo answer your question however, you will need to come up with a grammar for your language, something to parse the text and walk the tree, emitting code or calling an API that you've written (which is why my comment that you're still going to have to ship some code). 
\nThere are plenty of educational texts on grammars for mathematical expressions you can refer to on the net, that's fairly straight forward. You may have a parser generator tool like ANTLR or Yacc you can use to help you generate the parser (or use a language like Lisp\/Scheme and marry the two up). Coming up with a reasonable SQL grammar won't be easy. But google 'BNF SQL' and see what you come up with.\nBest of luck.","Q_Score":5,"Tags":"python,database,algorithm,dsl","A_Id":140228,"CreationDate":"2008-09-26T14:56:00.000","Title":"Writing a Domain Specific Language for selecting rows from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.\nSimply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.\nWhat I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.\nThe interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.\nI realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.\nEdit: Expanded description to clear up some misconceptions.","AnswerCount":9,"Available Count":6,"Score":0.022218565,"is_accepted":false,"ViewCount":2773,"Q_Id":140026,"Users Score":1,"Answer":"I think we're going to need a bit more information here. Let me know if any of the following is based on incorrect assumptions.\nFirst of all, as you pointed out yourself, there already exists a DSL for selecting rows from arbitrary tables-- it is called \"SQL\". 
Since you don't want to reinvent SQL, I'm assuming that you only need to query from a single table with a fixed format.\nIf this is the case, you probably don't need to implement a DSL (although that's certainly one way to go); it may be easier, if you are used to Object Orientation, to create a Filter object. \nMore specifically, a \"Filter\" collection that would hold one or more SelectionCriterion objects. You can implement these to inherit from one or more base classes representing types of selections (Range, LessThan, ExactMatch, Like, etc.) Once these base classes are in place, you can create column-specific inherited versions which are appropriate to that column. Finally, depending on the complexity of the queries you want to support, you'll want to implement some kind of connective glue to handle AND and OR and NOT linkages between the various criteria.\nIf you feel like it, you can create a simple GUI to load up the collection; I'd look at the filtering in Excel as a model, if you don't have anything else in mind.\nFinally, it should be trivial to convert the contents of this Collection to the corresponding SQL, and pass that to the database.\nHowever: if what you are after is simplicity, and your users understand SQL, you could simply ask them to type in the contents of a WHERE clause, and programmatically build up the rest of the query. From a security perspective, if your code has control over the columns selected and the FROM clause, and your database permissions are set properly, and you do some sanity checking on the string coming in from the users, this would be a relatively safe option.","Q_Score":5,"Tags":"python,database,algorithm,dsl","A_Id":140275,"CreationDate":"2008-09-26T14:56:00.000","Title":"Writing a Domain Specific Language for selecting rows from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant, some are not being actively developed anymore.\nWhich module do you recommend? Why?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":15582,"Q_Id":144448,"Users Score":0,"Answer":"I uses only psycopg2 and had no problems with that.","Q_Score":28,"Tags":"python,postgresql,module","A_Id":1579851,"CreationDate":"2008-09-27T20:55:00.000","Title":"Python PostgreSQL modules. Which is best?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant, some are not being actively developed anymore.\nWhich module do you recommend? Why?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":15582,"Q_Id":144448,"Users Score":0,"Answer":"Psycopg1 is known for better performance in heavilyy threaded environments (like web applications) than Psycopg2, although not maintained. Both are well written and rock solid, I'd choose one of these two depending on use case.","Q_Score":28,"Tags":"python,postgresql,module","A_Id":145801,"CreationDate":"2008-09-27T20:55:00.000","Title":"Python PostgreSQL modules. 
Which is best?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to develop an app using turbogears and sqlalchemy.\nThere is already an existing app using kinterbasdb directly under mod_wsgi on the same server.\nWhen both apps are used, neither seems to recognize that kinterbasdb is already initialized\nIs there something non-obvious I am missing about using sqlalchemy and kinterbasdb in separate apps? In order to make sure only one instance of kinterbasdb gets initialized and both apps use that instance, does anyone have suggestions?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":270,"Q_Id":155029,"Users Score":2,"Answer":"I thought I posted my solution already...\nModifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file\nand patching sqlalchemy.databases.firebird.py to check if self.dbapi.initialized is True\nbefore calling self.dbapi.init(... was the only way I could manage to get this scenario up and running.\nThe SQLAlchemy 0.4.7 patch:\n\ndiff -Naur SQLAlchemy-0.4.7\/lib\/sqlalchemy\/databases\/firebird.py SQLAlchemy-0.4.7.new\/lib\/sqlalchemy\/databases\/firebird.py\n--- SQLAlchemy-0.4.7\/lib\/sqlalchemy\/databases\/firebird.py 2008-07-26 12:43:52.000000000 -0400\n+++ SQLAlchemy-0.4.7.new\/lib\/sqlalchemy\/databases\/firebird.py 2008-10-01 10:51:22.000000000 -0400\n@@ -291,7 +291,8 @@\n global _initialized_kb\n if not _initialized_kb and self.dbapi is not None:\n _initialized_kb = True\n- self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)\n+ if not self.dbapi.initialized:\n+ self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)\n return ([], opts)\n\n def create_execution_context(self, *args, **kwargs):","Q_Score":1,"Tags":"python,sqlalchemy,kinterbasdb","A_Id":175634,"CreationDate":"2008-09-30T20:47:00.000","Title":"SQLAlchemy and kinterbasdb in separate apps under mod_wsgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table that looks something like this:\n\nword big expensive smart fast\n\ndog 9 -10 -20 4\nprofessor 2 4 40 -7\nferrari 7 50 0 48\nalaska 10 0 1 0\ngnat -3 0 0 0\n\nThe + and - values are associated with the word, so professor is smart and dog is not smart. Alaska is big, as a proportion of the total value associated with its entries, and the opposite is true of gnat.\nIs there a good way to get the absolute value of the number farthest from zero, and some token whether absolute value =\/= value? Relatedly, how might I calculate whether the results for a given value are proportionately large with respect to the other values? I would write something to format the output to the effect of: \"dog: not smart, probably not expensive; professor smart; ferrari: fast, expensive; alaska: big; gnat: probably small.\" (The formatting is not a question, just an illustration, I am stuck on the underlying queries.) 
\nAlso, the rest of the program is python, so if there is any python solution with normal dbapi modules or a more abstract module, any help appreciated.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4588,"Q_Id":177284,"Users Score":0,"Answer":"Can you use the built-in database aggregate functions like MAX(column)?","Q_Score":1,"Tags":"python,mysql,sql,oracle,postgresql","A_Id":177302,"CreationDate":"2008-10-07T05:06:00.000","Title":"SQL Absolute value across columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sometimes in our production environment occurs situation when connection between service (which is python program that uses MySQLdb) and mysql server is flacky, some packages are lost, some black magic happens and .execute() of MySQLdb.Cursor object never ends (or take great amount of time to end). \nThis is very bad because it is waste of service worker threads. Sometimes it leads to exhausting of workers pool and service stops responding at all.\nSo the question is: Is there a way to interrupt MySQLdb.Connection.execute operation after given amount of time?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2995,"Q_Id":196217,"Users Score":2,"Answer":"if the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.","Q_Score":2,"Tags":"python,mysql,timeout","A_Id":196308,"CreationDate":"2008-10-12T22:27:00.000","Title":"MySQLdb execute timeout","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sometimes in our production environment occurs situation when connection between service (which is python program that uses MySQLdb) and mysql server is flacky, some packages are lost, some black magic happens and .execute() of MySQLdb.Cursor object never ends (or take great amount of time to end). \nThis is very bad because it is waste of service worker threads. Sometimes it leads to exhausting of workers pool and service stops responding at all.\nSo the question is: Is there a way to interrupt MySQLdb.Connection.execute operation after given amount of time?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2995,"Q_Id":196217,"Users Score":1,"Answer":"You need to analyse exactly what the problem is. MySQL connections should eventually timeout if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.\nIf the database is \"flaky\", then you definitely need to investigate how. 
It seems unlikely that the database really is the problem, more likely that networking in between is.\nIf you are using (some) stateful firewalls of any kind, it's possible that they're losing some of the state, thus causing otherwise good long-lived connections to go dead.\nYou might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go \"stale\", where the server and client both think it's still alive, but some stateful network element in between has \"forgotten\" about the TCP connection. An application trying to use such a \"stale\" connection will have a long wait before receiving an error (but it should eventually).","Q_Score":2,"Tags":"python,mysql,timeout","A_Id":196891,"CreationDate":"2008-10-12T22:27:00.000","Title":"MySQLdb execute timeout","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob?\n(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)","AnswerCount":14,"Available Count":6,"Score":0.0285636566,"is_accepted":false,"ViewCount":31019,"Q_Id":198692,"Users Score":2,"Answer":"Since Pickle can dump your object graph to a string it should be possible. \nBe aware though that TEXT fields in SQLite uses database encoding so you might need to convert it to a simple string before you un-pickle.","Q_Score":40,"Tags":"python,sqlite,pickle","A_Id":198763,"CreationDate":"2008-10-13T19:11:00.000","Title":"Can I pickle a python dictionary into a sqlite3 text field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob?\n(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)","AnswerCount":14,"Available Count":6,"Score":0.0713073417,"is_accepted":false,"ViewCount":31019,"Q_Id":198692,"Users Score":5,"Answer":"Pickle has both text and binary output formats. If you use the text-based format you can store it in a TEXT field, but it'll have to be a BLOB if you use the (more efficient) binary format.","Q_Score":40,"Tags":"python,sqlite,pickle","A_Id":198767,"CreationDate":"2008-10-13T19:11:00.000","Title":"Can I pickle a python dictionary into a sqlite3 text field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any gotchas I should be aware of? 
Can I store it in a text field, or do I need to use a blob?\n(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)","AnswerCount":14,"Available Count":6,"Score":0.0285636566,"is_accepted":false,"ViewCount":31019,"Q_Id":198692,"Users Score":2,"Answer":"If a dictionary can be pickled, it can be stored in text\/blob field as well.\nJust be aware of the dictionaries that can't be pickled (aka that contain unpickable objects).","Q_Score":40,"Tags":"python,sqlite,pickle","A_Id":198770,"CreationDate":"2008-10-13T19:11:00.000","Title":"Can I pickle a python dictionary into a sqlite3 text field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob?\n(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)","AnswerCount":14,"Available Count":6,"Score":0.0285636566,"is_accepted":false,"ViewCount":31019,"Q_Id":198692,"Users Score":2,"Answer":"Yes, you can store a pickled object in a TEXT or BLOB field in an SQLite3 database, as others have explained.\nJust be aware that some object cannot be pickled. The built-in container types can (dict, set, list, tuple, etc.). But some objects, such as file handles, refer to state that is external to their own data structures, and other extension types have similar problems.\nSince a dictionary can contain arbitrary nested data structures, it might not be pickle-able.","Q_Score":40,"Tags":"python,sqlite,pickle","A_Id":198829,"CreationDate":"2008-10-13T19:11:00.000","Title":"Can I pickle a python dictionary into a sqlite3 text field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob?\n(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)","AnswerCount":14,"Available Count":6,"Score":0.0142847425,"is_accepted":false,"ViewCount":31019,"Q_Id":198692,"Users Score":1,"Answer":"SpoonMeiser is correct, you need to have a strong reason to pickle into a database. \nIt's not difficult to write Python objects that implement persistence with SQLite. Then you can use the SQLite CLI to fiddle with the data as well. Which in my experience is worth the extra bit of work, since many debug and admin functions can be simply performed from the CLI rather than writing specific Python code.\nIn the early stages of a project, I did what you propose and ended up re-writing with a Python class for each business object (note: I didn't say for each table!) 
This way the body of the application can focus on \"what\" needs to be done rather than \"how\" it is done.","Q_Score":40,"Tags":"python,sqlite,pickle","A_Id":199190,"CreationDate":"2008-10-13T19:11:00.000","Title":"Can I pickle a python dictionary into a sqlite3 text field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob?\n(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)","AnswerCount":14,"Available Count":6,"Score":1.2,"is_accepted":true,"ViewCount":31019,"Q_Id":198692,"Users Score":23,"Answer":"If you want to store a pickled object, you'll need to use a blob, since it is binary data. However, you can, say, base64 encode the pickled object to get a string that can be stored in a text field.\nGenerally, though, doing this sort of thing is indicative of bad design, since you're storing opaque data you lose the ability to use SQL to do any useful manipulation on that data. Although without knowing what you're actually doing, I can't really make a moral call on it.","Q_Score":40,"Tags":"python,sqlite,pickle","A_Id":198748,"CreationDate":"2008-10-13T19:11:00.000","Title":"Can I pickle a python dictionary into a sqlite3 text field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a website like reddit with lots of up\/down votes and lots of comments per topic what should I go with?\nLighttpd\/Php or Lighttpd\/CherryPy\/Genshi\/SQLAlchemy?\nand for database what would scale better \/ be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":1670,"Q_Id":204802,"Users Score":2,"Answer":"I would go with nginx + php + xcache + postgresql","Q_Score":7,"Tags":"php,python,lighttpd,cherrypy,high-load","A_Id":244836,"CreationDate":"2008-10-15T13:57:00.000","Title":"What would you recommend for a high traffic ajax intensive website?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For a website like reddit with lots of up\/down votes and lots of comments per topic what should I go with?\nLighttpd\/Php or Lighttpd\/CherryPy\/Genshi\/SQLAlchemy?\nand for database what would scale better \/ be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":1670,"Q_Id":204802,"Users Score":2,"Answer":"Going to need more data. Jeff had a few articles on the same problems and the answer was to wait till you hit a performance issue.\nto start with - who is hosting and what do they have available ? what's your in house talent skill sets ? Are you going to be hiring an outside firm ? what do they recommend ? brand new project w\/ a team willing to learn a new framework ?\n2nd thing is to do some mockups - how is the interface going to work. what data does it need to load and persist ? 
the idea is to keep your traffic between the web and db side down. e.g. no chatty pages with lots of queries. etc.\nOnce you have a better idea of the data requirements and flow - then work on the database design. there are plenty of rules to follow but one of the better ones is to follow normalization rules (yea i'm a db guy why ?)\nNow you have a couple of pages build - run your tests. are you having a problem ? Yes, now look at what is it. Page serving or db pulls ? Measure then pick a course of action.","Q_Score":7,"Tags":"php,python,lighttpd,cherrypy,high-load","A_Id":204854,"CreationDate":"2008-10-15T13:57:00.000","Title":"What would you recommend for a high traffic ajax intensive website?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For a website like reddit with lots of up\/down votes and lots of comments per topic what should I go with?\nLighttpd\/Php or Lighttpd\/CherryPy\/Genshi\/SQLAlchemy?\nand for database what would scale better \/ be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL?","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1670,"Q_Id":204802,"Users Score":8,"Answer":"I can't speak to the MySQL\/PostgreSQL question as I have limited experience with Postgres, but my Masters research project was about high-performance websites with CherryPy, and I don't think you'll be disappointed if you use CherryPy for your site. It can easily scale to thousands of simultaneous users on commodity hardware.\nOf course, the same could be said for PHP, and I don't know of any reasonable benchmarks comparing PHP and CherryPy performance. But if you were wondering whether CherryPy can handle a high-traffic site with a huge number of requests per second, the answer is definitely yes.","Q_Score":7,"Tags":"php,python,lighttpd,cherrypy,high-load","A_Id":204853,"CreationDate":"2008-10-15T13:57:00.000","Title":"What would you recommend for a high traffic ajax intensive website?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For a website like reddit with lots of up\/down votes and lots of comments per topic what should I go with?\nLighttpd\/Php or Lighttpd\/CherryPy\/Genshi\/SQLAlchemy?\nand for database what would scale better \/ be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL?","AnswerCount":5,"Available Count":4,"Score":0.1194272985,"is_accepted":false,"ViewCount":1670,"Q_Id":204802,"Users Score":3,"Answer":"On the DB question, I'd say PostgreSQL scales better and has better data integrity than MySQL. For a small site MySQL might be faster, but from what I've heard it slows significantly as the size of the database grows. (Note: I've never used MySQL for a large database, so you should probably get a second opinion about its scalability.) 
But PostgreSQL definitely scales well, and would be a good choice for a high traffic site.","Q_Score":7,"Tags":"php,python,lighttpd,cherrypy,high-load","A_Id":205425,"CreationDate":"2008-10-15T13:57:00.000","Title":"What would you recommend for a high traffic ajax intensive website?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially \"inline\", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.\nWhat I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if\/else statements, whichever is better).\nI created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try\/except before but figured this is a good time to learn).\nAre there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.","AnswerCount":8,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":31269,"Q_Id":211501,"Users Score":0,"Answer":"Yes, I was nuking out the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.\nThanks for all the other answers. They may come in handy in the future.","Q_Score":13,"Tags":"python,exception,sqlite","A_Id":214623,"CreationDate":"2008-10-17T09:02:00.000","Title":"Using SQLite in a Python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially \"inline\", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.\nWhat I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if\/else statements, whichever is better).\nI created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try\/except before but figured this is a good time to learn).\nAre there any good tutorials for using SQLite operationally or any suggestions on how to code this? 
I've looked through the pysqlite tutorial and others I found but they don't address this.","AnswerCount":8,"Available Count":5,"Score":0.0748596907,"is_accepted":false,"ViewCount":31269,"Q_Id":211501,"Users Score":3,"Answer":"Doing SQL in overall is horrible in any language I've picked up. SQLalchemy has shown to be easiest from them to use because actual query and committing with it is so clean and absent from troubles.\nHere's some basic steps on actually using sqlalchemy in your app, better details can be found from the documentation.\n\nprovide table definitions and create ORM-mappings\nload database\nask it to create tables from the definitions (won't do so if they exist)\ncreate session maker (optional)\ncreate session\n\nAfter creating a session, you can commit and query from the database.","Q_Score":13,"Tags":"python,exception,sqlite","A_Id":211539,"CreationDate":"2008-10-17T09:02:00.000","Title":"Using SQLite in a Python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially \"inline\", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.\nWhat I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if\/else statements, whichever is better).\nI created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try\/except before but figured this is a good time to learn).\nAre there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.","AnswerCount":8,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":31269,"Q_Id":211501,"Users Score":7,"Answer":"SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS to make the commands only take effect if the table has not been created This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.\nThe main thing I would still be worried about is that executing CREATE TABLE IF EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable saying whether it has created the database today, so it runs the CREATE TABLE script once per run. 
This would still allow for you to delete the database and start over during debugging.","Q_Score":13,"Tags":"python,exception,sqlite","A_Id":211573,"CreationDate":"2008-10-17T09:02:00.000","Title":"Using SQLite in a Python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially \"inline\", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.\nWhat I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if\/else statements, whichever is better).\nI created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try\/except before but figured this is a good time to learn).\nAre there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.","AnswerCount":8,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":31269,"Q_Id":211501,"Users Score":29,"Answer":"Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.\nDo the following.\n\nAdd a table to your database for \"Components\" or \"Versions\" or \"Configuration\" or \"Release\" or something administrative like that. \nCREATE TABLE REVISION(\n RELEASE_NUMBER CHAR(20)\n);\nIn your application, connect to your database normally.\nExecute a simple query against the revision table. Here's what can happen.\n\n\nThe query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.\nThe query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.\nThe query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.","Q_Score":13,"Tags":"python,exception,sqlite","A_Id":211660,"CreationDate":"2008-10-17T09:02:00.000","Title":"Using SQLite in a Python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially \"inline\", i.e. 
they walk through using SQLite in a linear fashion rather than how to actually use it in production.\nWhat I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if\/else statements, whichever is better).\nI created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try\/except before but figured this is a good time to learn).\nAre there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.","AnswerCount":8,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":31269,"Q_Id":211501,"Users Score":13,"Answer":"AFAIK an SQLITE database is just a file.\nTo check if the database exists, check for file existence.\nWhen you open a SQLITE database it will automatically create one if the file that backs it up is not in place.\nIf you try and open a file as a sqlite3 database that is NOT a database, you will get this:\n\"sqlite3.DatabaseError: file is encrypted or is not a database\"\nso check to see if the file exists and also make sure to try and catch the exception in case the file is not a sqlite3 database","Q_Score":13,"Tags":"python,exception,sqlite","A_Id":211534,"CreationDate":"2008-10-17T09:02:00.000","Title":"Using SQLite in a Python program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"From what I understand, the parent attribute of a db.Model (typically defined\/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1067,"Q_Id":215570,"Users Score":15,"Answer":"There are several differences:\n\nAll entities with the same ancestor are in the same entity group. Transactions can only affect entities inside a single entity group.\nAll writes to a single entity group are serialized, so throughput is limited.\nThe parent entity is set on creation and is fixed. 
References can be changed at any time.\nWith reference properties, you can only query for direct relationships, but with parent properties you can use the .ancestor() filter to find everything (directly or indirectly) descended from a given ancestor.\nEach entity has only a single parent, but can have multiple reference properties.","Q_Score":10,"Tags":"python,api,google-app-engine","A_Id":216187,"CreationDate":"2008-10-18T21:12:00.000","Title":"What's the difference between a parent and a reference property in Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I need to update data to a mssql 2005 database so I have decided to use adodbapi, which is supposed to come built into the standard installation of python 2.1.1 and greater.\nIt needs pywin32 to work correctly and the open office python 2.3 installation does not have pywin32 built into it. It also seems like this built int python installation does not have adodbapi, as I get an error when I go import adodbapi. \nAny suggestions on how to get both pywin32 and adodbapi installed into this open office 2.4 python installation?\nthanks \n\noh yeah I tried those ways. annoyingly nothing. So i have reverted to jython, that way I can access Open Office for its conversion capabilities along with decent database access.\nThanks for the help.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":833,"Q_Id":239009,"Users Score":1,"Answer":"maybe the best way to install pywin32 is to place it in \n(openofficedir)\\program\\python-core-2.3.4\\lib\\site-packages\nit is easy if you have a python 2.3 installation (with pywin installed) under \nC:\\python2.3 \nmove the C:\\python2.3\\Lib\\site-packages\\ to your\n(openofficedir)\\program\\python-core-2.3.4\\lib\\site-packages","Q_Score":0,"Tags":"python,openoffice.org,pywin32,adodbapi","A_Id":239487,"CreationDate":"2008-10-27T03:32:00.000","Title":"getting pywin32 to work inside open office 2.4 built in python 2.3 interpreter","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded?\nFor example, given a lazy parent->children relation and an instance X of \"parent\", I'd like to know if \"X.children\" is already loaded, without triggering the query.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3701,"Q_Id":258775,"Users Score":5,"Answer":"I think you could look at the child's __dict__ attribute dictionary to check if the data is already there or not.","Q_Score":16,"Tags":"python,sqlalchemy","A_Id":261191,"CreationDate":"2008-11-03T14:28:00.000","Title":"How to find out if a lazy relation isn't loaded yet, with SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make my Python library working with MySQLdb be able to detect deadlocks and try again. 
I believe I've coded a good solution, and now I want to test it.\nAny ideas for the simplest queries I could run using MySQLdb to create a deadlock condition would be?\nsystem info:\n\nMySQL 5.0.19 \nClient 5.1.11 \nWindows XP\nPython 2.4 \/ MySQLdb 1.2.1 p2","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":7080,"Q_Id":269676,"Users Score":1,"Answer":"you can always run LOCK TABLE tablename from another session (mysql CLI for instance). That might do the trick.\nIt will remain locked until you release it or disconnect the session.","Q_Score":10,"Tags":"python,mysql,database,deadlock","A_Id":270449,"CreationDate":"2008-11-06T18:06:00.000","Title":"How can I Cause a Deadlock in MySQL for Testing Purposes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm starting a web project that likely should be fine with SQLite. I have SQLObject on top of it, but thinking long term here -- if this project should require a more robust (e.g. able to handle high traffic), I will need to have a transition plan ready. My questions:\n\nHow easy is it to transition from one DB (SQLite) to another (MySQL or Firebird or PostGre) under SQLObject? \nDoes SQLObject provide any tools to make such a transition easier? Is it simply take the objects I've defined and call createTable? \nWhat about having multiple SQLite databases instead? E.g. one per visitor group? Does SQLObject provide a mechanism for handling this scenario and if so, what is the mechanism to use?\n\nThanks,\nSean","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":876,"Q_Id":275572,"Users Score":2,"Answer":"Your success with createTable() will depend on your existing underlying table schema \/ data types. In other words, how well SQLite maps to the database you choose and how SQLObject decides to use your data types.\nThe safest option may be to create the new database by hand. Then you'll have to deal with data migration, which may be as easy as instantiating two SQLObject database connections over the same table definitions.\nWhy not just start with the more full-featured database?","Q_Score":1,"Tags":"python,mysql,database,sqlite,sqlobject","A_Id":275676,"CreationDate":"2008-11-09T03:46:00.000","Title":"Database change underneath SQLObject","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having a postgres production database in production (which contains a lot of Data). now I need to modify the model of the tg-app to add couple of new tables to the database. \nHow do i do this? I am using sqlAlchemy.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":889,"Q_Id":301566,"Users Score":1,"Answer":"This always works and requires little thinking -- only patience.\n\nMake a backup.\nActually make a backup. Everyone skips step 1 thinking that they have a backup, but they can never find it or work with it. Don't trust any backup that you can't recover from.\nCreate a new database schema.\nDefine your new structure from the ground up in the new schema. Ideally, you'll run a DDL script that builds the new schema. Don't have a script to build the schema? 
Create one and put it under version control.\nWith SA, you can define your tables and it can build your schema for you. This is ideal, since you have your schema under version control in Python.\nMove data.\na. For tables which did not change structure, move data from old schema to new schema using simple INSERT\/SELECT statements.\nb. For tables which did change structure, develop INSERT\/SELECT scripts to move the data from old to new. Often, this can be a single SQL statement per new table. In some cases, it has to be a Python loop with two open connections.\nc. For new tables, load the data.\nStop using the old schema. Start using the new schema. Find every program that used the old schema and fix the configuration. \nDon't have a list of applications? Make one. Seriously -- it's important. \nApplications have hard-coded DB configurations? Fix that, too, while you're at it. Either create a common config file, or use some common environment variable or something to (a) assure consistency and (b) centralize the notion of \"production\".\n\nYou can do this kind of procedure any time you do major surgery. It never touches the old database except to extract the data.","Q_Score":1,"Tags":"python,database,postgresql,data-migration,turbogears","A_Id":301708,"CreationDate":"2008-11-19T11:00:00.000","Title":"How to update turbogears application production database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Django and Python 2.6, and I want to grow my application using a MySQL backend. Problem is that there isn't a win32 package for MySQLdb on Python 2.6.\nNow I'm no hacker, but I thought I might compile it myself using MSVC++9 Express. But I run into a problem that the compiler quickly can't find config_win.h, which I assume is a header file for MySQL so that the MySQLdb package can know what calls it can make into MySQL.\nAm I right? And if so, where do I get the header files for MySQL?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3446,"Q_Id":316484,"Users Score":2,"Answer":"I think that the header files are shipped with MySQL, just make sure you check the appropriate options when installing (I think that sources and headers are under \"developer components\" in the installation dialog).","Q_Score":9,"Tags":"python,mysql,winapi","A_Id":317716,"CreationDate":"2008-11-25T06:14:00.000","Title":"Problem compiling MySQLdb for Python 2.6 on Win32","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the sqlalchemy equivalent column type for 'money' and 'OID' column types in Postgres?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10565,"Q_Id":359409,"Users Score":3,"Answer":"we've never had an \"OID\" type specifically, though we've supported the concept of an implicit \"OID\" column on every table through the 0.4 series, primarily for the benefit of postgres. 
However, since user-table defined OID columns are deprecated in Postgres, and we in fact never really used the OID feature that was present, we've removed this feature from the library.\nIf a particular type is not supplied in SQLA, as an alternative to specifying a custom type, you can always use the NullType which just means SQLA doesn't know anything in particular about that type. If psycopg2 sends\/receives a useful Python type for the column already, there's not really any need for a SQLA type object, save for issuing CREATE TABLE statements.","Q_Score":8,"Tags":"python,postgresql,sqlalchemy","A_Id":405923,"CreationDate":"2008-12-11T13:52:00.000","Title":"What is the sqlalchemy equivalent column type for 'money' and 'OID' in Postgres?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I connect to a MySQL database using a python program?","AnswerCount":25,"Available Count":1,"Score":0.0079998293,"is_accepted":false,"ViewCount":1369727,"Q_Id":372885,"Users Score":1,"Answer":"First step, get the library:\nOpen a terminal and execute pip install mysql-connector-python.\nAfter the installation, go to the second step.\nSecond step, import the library:\nOpen your Python file and write the following code:\nimport mysql.connector\nThird step, connect to the server:\nWrite the following code:\n\nconn = mysql.connector.connect(host=\"your host, e.g. localhost or 127.0.0.1\",\nuser=\"your username, e.g. root\",\npassword=\"your password\")\n\nFourth step, make the cursor:\nMaking a cursor makes it easy for us to run queries.\nTo make the cursor use the following code:\ncursor = conn.cursor()\nExecuting queries:\nFor executing queries you can do the following:\ncursor.execute(query)\nIf the query changes anything in the table you need to add the following code after the execution of the query:\nconn.commit()\nGetting values from a query:\nIf you want to get values from a query then you can do the following:\ncursor.execute('SELECT * FROM table_name')\nfor i in cursor: print(i)\n# or:\nfor i in cursor.fetchall(): print(i)\nThe fetchall() method returns a list of tuples that contain the values you requested, row after row.\nClosing the connection:\nTo close the connection you should use the following code:\nconn.close()\nHandling exceptions:\nTo handle exceptions you can use the following pattern:\ntry:\n    # logic\n    pass\nexcept mysql.connector.errors.Error:\n    # handle the error\n    pass\nTo use a database:\nFor example, if you have an account-creation system where you are storing the data in a database named blabla, you can just add a database parameter to the connect() method, like\nmysql.connector.connect(database=\"database name\")\nDon't remove the other information like host, user, and password.","Q_Score":1242,"Tags":"python,mysql","A_Id":64762149,"CreationDate":"2008-12-16T21:49:00.000","Title":"How do I connect to a MySQL Database in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, looking for a mysql-db-lib that is compatible with py3k\/py3.0\/py3000, any ideas? 
Google turned up nothing.","AnswerCount":9,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43916,"Q_Id":384471,"Users Score":0,"Answer":"You're probably better off using Python 2.x at the moment. It's going to be a while before all Python packages are ported to 3.x, and I expect writing a library or application with 3.x at the moment would be quite frustrating.","Q_Score":36,"Tags":"python,mysql,python-3.x","A_Id":385225,"CreationDate":"2008-12-21T13:37:00.000","Title":"MySQL-db lib for Python 3.x?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product. \nI am a bit stuck on how to get started. Does anyone have any tips for me?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4847,"Q_Id":387606,"Users Score":0,"Answer":"To start with, treat the barcode input as plain old text. \nIt has been quite a while since I worked with barcode scanners, but I doubt they have changed that much, the older ones used to just piggyback on the keyboard input, so from a programming perspective, the net result was a stream of characters in the keyboard buffer, either typed or scanned made no difference. \nIf the device you are targeting differs from that, you will need to write something to deal with that before you get to the database query. \nIf you have one of the devices to play with, plug it in, start notepad, start scanning some barcodes and see what happens.","Q_Score":1,"Tags":"python,sql,user-input","A_Id":387800,"CreationDate":"2008-12-22T22:37:00.000","Title":"Using user input to find information in a Mysql database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product. \nI am a bit stuck on how to get started. Does anyone have any tips for me?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":4847,"Q_Id":387606,"Users Score":1,"Answer":"A barcode is simply a graphical representation of a series of characters (alphanumeric)\nSo if you have a method for users to enter this code (a barcode scanner), then its just an issue of querying the mysql database for the character string.","Q_Score":1,"Tags":"python,sql,user-input","A_Id":387622,"CreationDate":"2008-12-22T22:37:00.000","Title":"Using user input to find information in a Mysql database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product. \nI am a bit stuck on how to get started. 
Does anyone have any tips for me?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4847,"Q_Id":387606,"Users Score":0,"Answer":"That is a very ambiguous question. What you want can be done in many ways depending on what you actually want to do.\nHow are your users going to enter the bar code? Are they going to use a bar code scanner? Are they entering the bar code numbers manually? \nIs this going to run on a desktop\/laptop computer or is it going to run on a handheld device? \nIs the bar code scanner storing the bar codes for later retrieval or is it sending them directly to the computer. Will it send them through a USB cable or wireless?","Q_Score":1,"Tags":"python,sql,user-input","A_Id":387694,"CreationDate":"2008-12-22T22:37:00.000","Title":"Using user input to find information in a Mysql database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those \"write something you need\" to learn to program exercises.\nIn case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions.\nQuery Some WMI data\nUpdate that to the database\nQuery Other WMI data\nUpdate that to the database \nIs it better to open one mysql connection at the beginning and leave it open or close the connection after each update?\nIt seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1201,"Q_Id":387619,"Users Score":7,"Answer":"\"However, opening and closing the connection with each update seems more 'neat'. \" \nIt's also a huge amount of overhead -- and there's no actual benefit.\nCreating and disposing of connections is relatively expensive. More importantly, what's the actual reason? How does it improve, simplify, clarify?\nGenerally, most applications have one connection that they use from when they start to when they stop.","Q_Score":2,"Tags":"python,mysql","A_Id":387932,"CreationDate":"2008-12-22T22:40:00.000","Title":"Mysql Connection, one or many?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those \"write something you need\" to learn to program exercises.\nIn case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions.\nQuery Some WMI data\nUpdate that to the database\nQuery Other WMI data\nUpdate that to the database \nIs it better to open one mysql connection at the beginning and leave it open or close the connection after each update?\nIt seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. 
Functions would be more stand alone, rather than depend on code outside that function.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":1201,"Q_Id":387619,"Users Score":2,"Answer":"I don't think that there is a \"better\" solution. It's too early to think about resources. And since WMI is quite slow (in comparison to a SQL connection), the DB is not an issue.\nJust make it work. And then make it better.\nThe good thing about working with an open connection here is that the \"natural\" solution is to use objects and not just functions. So it will be a learning experience (in case you are learning Python and not MySQL).","Q_Score":2,"Tags":"python,mysql","A_Id":387735,"CreationDate":"2008-12-22T22:40:00.000","Title":"Mysql Connection, one or many?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those \"write something you need\" to learn to program exercises.\nIn case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions.\nQuery Some WMI data\nUpdate that to the database\nQuery Other WMI data\nUpdate that to the database \nIs it better to open one mysql connection at the beginning and leave it open or close the connection after each update?\nIt seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1201,"Q_Id":387619,"Users Score":1,"Answer":"Useful clues in S.Lott's and Igal Serban's answers. I think you should first find out your actual requirements and code accordingly.\nJust to mention a different strategy: some applications keep a pool of database (or whatever) connections and, in case of a transaction, just pull one from that pool. It seems rather obvious you just need one connection for this kind of application. But you can still keep a pool of one connection and apply the following:\n\nWhenever a database transaction is needed, the connection is pulled from the pool and returned at the end.\n(optional) The connection is expired (and replaced by a new one) after a certain amount of time.\n(optional) The connection is expired after a certain amount of usage.\n(optional) The pool can check (by sending an inexpensive query) if the connection is alive before handing it over to the program.\n\nThis is somewhere in between the single-connection and connection-per-transaction strategies.","Q_Score":2,"Tags":"python,mysql","A_Id":389364,"CreationDate":"2008-12-22T22:40:00.000","Title":"Mysql Connection, one or many?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python to read a currency value from excel. 
The returned from the range.Value method is a tuple that I don't know how to parse.\nFor example, the cell appears as $548,982, but in python the value is returned as (1, 1194857614).\nHow can I get the numerical amount from excel or how can I convert this tuple value into the numerical value?\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":586,"Q_Id":390263,"Users Score":0,"Answer":"I tried this with Excel 2007 and VBA. It is giving correct value.\n1) Try pasting this value in a new excel workbook\n2) Press Alt + F11. Gets you to VBA Editor.\n3) Press Ctrl + G. Gets you to immediate window.\n4) In the immediate window, type ?cells(\"a1\").Value \nhere \"a1\" is the cell where you have pasted the value.\nI am doubting that the cell has some value or character due to which it is interpreted this way.\nPost your observations here.","Q_Score":1,"Tags":"python,excel,pywin32","A_Id":390304,"CreationDate":"2008-12-23T22:37:00.000","Title":"Interpreting Excel Currency Values","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently analyzing a wikipedia dump file; I am extracting a bunch of data from it using python and persisting it into a PostgreSQL db. I am always trying to make things go faster for this file is huge (18GB). In order to interface with PostgreSQL, I am using psycopg2, but this module seems to mimic many other such DBAPIs.\nAnyway, I have a question concerning cursor.executemany(command, values); it seems to me like executing an executemany once every 1000 values or so is better than calling cursor.execute(command % value) for each of these 5 million values (please confirm or correct me!).\nBut, you see, I am using an executemany to INSERT 1000 rows into a table which has a UNIQUE integrity constraint; this constraint is not verified in python beforehand, for this would either require me to SELECT all the time (this seems counter productive) or require me to get more than 3 GB of RAM. All this to say that I count on Postgres to warn me when my script tried to INSERT an already existing row via catching the psycopg2.DatabaseError. \nWhen my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one.\nSince psycopg2 is so poorly documented (as are so many great modules...), I cannot find an efficient and effective workaround. I have reduced the number of values INSERTed per executemany from 1000 to 100 in order to reduce the likeliness of a non-UNIQUE INSERT per executemany, but I am pretty certain their is a way to just tell psycopg2 to ignore these execeptions or to tell the cursor to continue the executemany. 
\nBasically, this seems like the kind of problem which has a solution so easy and popular, that all I can do is ask in order to learn about it.\nThanks again!","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":7742,"Q_Id":396455,"Users Score":-1,"Answer":"using a MERGE statement instead of an INSERT one would solve your problem.","Q_Score":6,"Tags":"python,postgresql,database,psycopg","A_Id":675865,"CreationDate":"2008-12-28T17:51:00.000","Title":"Python-PostgreSQL psycopg2 interface --> executemany","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently analyzing a wikipedia dump file; I am extracting a bunch of data from it using python and persisting it into a PostgreSQL db. I am always trying to make things go faster for this file is huge (18GB). In order to interface with PostgreSQL, I am using psycopg2, but this module seems to mimic many other such DBAPIs.\nAnyway, I have a question concerning cursor.executemany(command, values); it seems to me like executing an executemany once every 1000 values or so is better than calling cursor.execute(command % value) for each of these 5 million values (please confirm or correct me!).\nBut, you see, I am using an executemany to INSERT 1000 rows into a table which has a UNIQUE integrity constraint; this constraint is not verified in python beforehand, for this would either require me to SELECT all the time (this seems counter productive) or require me to get more than 3 GB of RAM. All this to say that I count on Postgres to warn me when my script tried to INSERT an already existing row via catching the psycopg2.DatabaseError. \nWhen my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one.\nSince psycopg2 is so poorly documented (as are so many great modules...), I cannot find an efficient and effective workaround. I have reduced the number of values INSERTed per executemany from 1000 to 100 in order to reduce the likeliness of a non-UNIQUE INSERT per executemany, but I am pretty certain their is a way to just tell psycopg2 to ignore these execeptions or to tell the cursor to continue the executemany. \nBasically, this seems like the kind of problem which has a solution so easy and popular, that all I can do is ask in order to learn about it.\nThanks again!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7742,"Q_Id":396455,"Users Score":0,"Answer":"\"When my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one.\"\nThe question doesn't really make a lot of sense.\nDoes EVERY block of 1,000 rows fail due to non-unique rows? \nDoes 1 block of 1,000 rows fail (out 5,000 such blocks)? If so, then the execute many helps for 4,999 out of 5,000 and is far from \"worthless\".\nAre you worried about this non-Unique insert? 
Or do you have actual statistics on the number of times this happens?\nIf you've switched from 1,000 row blocks to 100 row blocks, you can -- obviously -- determine if there's a performance advantage for 1,000 row blocks, 100 row blocks and 1 row blocks.\nPlease actually run the actual program with actual database and different size blocks and post the numbers.","Q_Score":6,"Tags":"python,postgresql,database,psycopg","A_Id":396824,"CreationDate":"2008-12-28T17:51:00.000","Title":"Python-PostgreSQL psycopg2 interface --> executemany","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm wondering, is it possible to make an sql query that does the same function as \n'select products where barcode in table1 = barcode in table2'. I am writing this function in a python program. Once that function is called will the table be joined permanently or just while that function is running?\nthanks.","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":680,"Q_Id":403527,"Users Score":0,"Answer":"Here is an example of inner joining two tables based on a common field in both tables.\nSELECT table1.Products\nFROM table1 \nINNER JOIN table2 on table1.barcode = table2.barcode\nWHERE table1.Products is not null","Q_Score":0,"Tags":"python,sql","A_Id":403848,"CreationDate":"2008-12-31T17:20:00.000","Title":"Making a SQL Query in two tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm wondering, is it possible to make an sql query that does the same function as \n'select products where barcode in table1 = barcode in table2'. I am writing this function in a python program. Once that function is called will the table be joined permanently or just while that function is running?\nthanks.","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":680,"Q_Id":403527,"Users Score":0,"Answer":"Here's a way to talk yourself through table design in these cases, based on Object Role Modeling. (Yes, I realize this is only indirectly related to the question.)\nYou have products and barcodes. Products are uniquely identified by Product Code (e.g. 'A2111'; barcodes are uniquely identified by Value (e.g. 1002155061).\nA Product has a Barcode. Questions: Can a product have no barcode? Can the same product have multiple barcodes? Can multiple products have the same barcode? (If you have any experience with UPC labels, you know the answer to all these is TRUE.)\nSo you can make some assertions: \nA Product (code) has zero or more Barcode (value).\nA Barcode (value) has one or more Product (code). 
-- assumption: barcodes don't have independent existence if they aren't\/haven't been\/won't be related to products.\nWhich leads directly (via your ORM model) to a schema with two tables: \nProduct\nProductCode(PK) Description etc\nProductBarcode\nProductCode(FK) BarcodeValue\n-- with a two-part natural primary key, ProductCode + BarcodeValue \nand you tie them together as described in the other answers.\nSimilar assertions can be used to determine which fields go into various tables in your design.","Q_Score":0,"Tags":"python,sql","A_Id":403904,"CreationDate":"2008-12-31T17:20:00.000","Title":"Making a SQL Query in two tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between these two APIs?\nWhich one is faster and more reliable using the Python DB API?\nUpd:\nI see two psql drivers for Django. The first one is psycopg2.\nWhat is the second one? pygresql?","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":15364,"Q_Id":413228,"Users Score":5,"Answer":"For what it's worth, Django uses psycopg2.","Q_Score":13,"Tags":"python,postgresql","A_Id":413259,"CreationDate":"2009-01-05T14:21:00.000","Title":"PyGreSQL vs psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the difference between these two APIs?\nWhich one is faster and more reliable using the Python DB API?\nUpd:\nI see two psql drivers for Django. The first one is psycopg2.\nWhat is the second one? pygresql?","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":15364,"Q_Id":413228,"Users Score":0,"Answer":"psycopg2 is partly written in C, so you can expect a performance gain, but on the other hand it is a bit harder to install. PyGreSQL is written in Python only, easy to deploy but slower.","Q_Score":13,"Tags":"python,postgresql","A_Id":413508,"CreationDate":"2009-01-05T14:21:00.000","Title":"PyGreSQL vs psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the difference between these two APIs?\nWhich one is faster and more reliable using the Python DB API?\nUpd:\nI see two psql drivers for Django. The first one is psycopg2.\nWhat is the second one? pygresql?","AnswerCount":5,"Available Count":4,"Score":0.1586485043,"is_accepted":false,"ViewCount":15364,"Q_Id":413228,"Users Score":4,"Answer":"\"PyGreSQL is written in Python only, easy to deploy but slower.\"\nPyGreSQL contains a C-coded module, too. I haven't done speed tests, but they're not likely to be much different, as the real work will happen inside the database server.","Q_Score":13,"Tags":"python,postgresql","A_Id":592846,"CreationDate":"2009-01-05T14:21:00.000","Title":"PyGreSQL vs psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the difference between these two APIs?\nWhich one is faster and more reliable using the Python DB API?\nUpd:\nI see two psql drivers for Django. 
The first one is psycopg2.\nWhat is the second one? pygresql?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":15364,"Q_Id":413228,"Users Score":2,"Answer":"Licensing may be an issue for you. PyGreSQL is MIT license. Psycopg2 is GPL license.\n(as long as you are accessing psycopg2 in normal ways from Python, with no internal API, and no direct C calls, this shouldn't cause you any headaches, and you can release your code under whatever license you like - but I am not a lawyer).","Q_Score":13,"Tags":"python,postgresql","A_Id":413537,"CreationDate":"2009-01-05T14:21:00.000","Title":"PyGreSQL vs psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The beauty of ORM lulled me into a soporific sleep. I've got an existing Django app with a lack of database indexes. Is there a way to automatically generate a list of columns that need indexing?\nI was thinking maybe some middleware that logs which columns are involved in WHERE clauses? but is there anything built into MySQL that might help?","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":620,"Q_Id":438559,"Users Score":4,"Answer":"No.\nAdding indexes willy-nilly to all \"slow\" queries will also slow down inserts, updates and deletes.\nIndexes are a balancing act between fast queries and fast changes. There is no general or \"right\" answer. There's certainly nothing that can automate this.\nYou have to measure the improvement across your whole application as you add and change indexes.","Q_Score":5,"Tags":"python,mysql,database,django,django-models","A_Id":438700,"CreationDate":"2009-01-13T10:36:00.000","Title":"Is there a way to automatically generate a list of columns that need indexing?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.\n\nI'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.\nI'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.\nI'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.\n\nThis is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. \nWhat is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.\nI'm NOT looking for a buy vs. 
build debate, as that's a different discussion.\nThanks for any insight","AnswerCount":7,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2512,"Q_Id":439759,"Users Score":0,"Answer":"Just to throw it out there... there are PHP frameworks utilizing MVC.\nCodeigniter does simple and yet powerful things. You can definitely separate the template layer from the logic layer.","Q_Score":2,"Tags":"php,python,postgresql","A_Id":494119,"CreationDate":"2009-01-13T16:47:00.000","Title":"Is a PHP, Python, PostgreSQL design suitable for a business application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.\n\nI'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.\nI'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.\nI'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.\n\nThis is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. \nWhat is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.\nI'm NOT looking for a buy vs. build debate, as that's a different discussion.\nThanks for any insight","AnswerCount":7,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2512,"Q_Id":439759,"Users Score":0,"Answer":"I personally agree with the second and the third points in your post. Speaking about PHP, in my opinion you can use Python also for presentation, there are many solutions (Zope, Plone ...) based on Python.","Q_Score":2,"Tags":"php,python,postgresql","A_Id":439793,"CreationDate":"2009-01-13T16:47:00.000","Title":"Is a PHP, Python, PostgreSQL design suitable for a business application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. 
I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.\n\nI'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.\nI'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.\nI'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.\n\nThis is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. \nWhat is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.\nI'm NOT looking for a buy vs. build debate, as that's a different discussion.\nThanks for any insight","AnswerCount":7,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2512,"Q_Id":439759,"Users Score":0,"Answer":"Just skip PHP and use Python (with Django, as already noticed while I typed). Django already separates the layers as you mentioned.\nI have never used PgSQL myself, but I think it's mostly a matter of taste whether you prefer it over MySQL. It used to support more enterprise features than MySQL but I'm not sure if that's still true with MySQL 5.0 and 5.1. Transactions are supported in MySQL, anyway (you have to use the InnoDB table engine, however).","Q_Score":2,"Tags":"php,python,postgresql","A_Id":439818,"CreationDate":"2009-01-13T16:47:00.000","Title":"Is a PHP, Python, PostgreSQL design suitable for a business application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.\n\nI'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.\nI'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.\nI'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.\n\nThis is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. \nWhat is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.\nI'm NOT looking for a buy vs. 
build debate, as that's a different discussion.\nThanks for any insight","AnswerCount":7,"Available Count":6,"Score":0.0285636566,"is_accepted":false,"ViewCount":2512,"Q_Id":439759,"Users Score":1,"Answer":"I can only repeat what other peoples here already said : if you choose Python for the domain layer, you won't gain anything (quite on the contrary) using PHP for the presentation layer. Others already advised Django, and that might be a pretty good choice, but there's no shortage of good Python web frameworks.","Q_Score":2,"Tags":"php,python,postgresql","A_Id":440496,"CreationDate":"2009-01-13T16:47:00.000","Title":"Is a PHP, Python, PostgreSQL design suitable for a business application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.\n\nI'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.\nI'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.\nI'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.\n\nThis is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. \nWhat is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.\nI'm NOT looking for a buy vs. build debate, as that's a different discussion.\nThanks for any insight","AnswerCount":7,"Available Count":6,"Score":0.0285636566,"is_accepted":false,"ViewCount":2512,"Q_Id":439759,"Users Score":1,"Answer":"I'm going to assume that by \"business application\" you mean a web application hosted in an intranet environment as opposed to some sort of SaaS application on the internet.\nWhile you're in the process of architecting your application you need to consider the existing infrastructure and infrastructure support people of your employer\/customer. Also, if the company is large enough to have things such as \"approved software\/hardware lists,\" you should be aware of those. Keep in mind that some elements of the list may be downright retarded. Don't let past mistakes dictate the architecture of your app, but in cases where they are reasonably sensible I would pick my battles and stick with your enterprise standard. This can be a real pain when you pick a development stack that really works best on Unix\/Linux, and then someone tries to force onto a Windows server admined by someone who's never touched anything but ASP.NET applications.\nUnless there is a particular PHP module that you intend to use that has no Python equivalent, I would drop PHP and use Django. 
If there is a compelling reason to use PHP, then I'd drop Python. I'm having difficulty imagining a scenario where you would want to use both at the same time.\nAs for PG versus MySQL, either works. Look at what you customer already has deployed, and if they have a bunch of one and little of another, pick that. If they have existing Oracle infrastructure you should consider using it. If they are an SQL Server shop...reconsider your stack and remember to pick your battles.","Q_Score":2,"Tags":"php,python,postgresql","A_Id":440118,"CreationDate":"2009-01-13T16:47:00.000","Title":"Is a PHP, Python, PostgreSQL design suitable for a business application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.\n\nI'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.\nI'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.\nI'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.\n\nThis is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. \nWhat is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.\nI'm NOT looking for a buy vs. build debate, as that's a different discussion.\nThanks for any insight","AnswerCount":7,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":2512,"Q_Id":439759,"Users Score":0,"Answer":"Just to address the MySQL vs PgSQL issues - it shouldn't matter. They're both more than capable of the task, and any reasonable framework should isolate you from the differences relatively well. I think it's down to what you use already, what people have most experience in, and if there's a feature in one or the other you think you'd benefit from.\nIf you have no preference, you might want to go with MySQL purely because it's more popular for web work. This translates to more examples, easier to find help, etc. 
I actually prefer the philosophy of PgSQL, but this isn't a good enough reason to blow against the wind.","Q_Score":2,"Tags":"php,python,postgresql","A_Id":440098,"CreationDate":"2009-01-13T16:47:00.000","Title":"Is a PHP, Python, PostgreSQL design suitable for a business application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Example Problem:\nEntities:\n\nUser contains name and a list of friends (User references)\nBlog Post contains title, content, date and Writer (User)\n\nRequirement:\nI want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries.\nSQL Solution:\nSo in sql land it would be something like:\n\nselect * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date\n\nGAE solutions i can think of are:\n\nLoad user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries\nIn a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts.\n\nI don't believe either of these solutions will scale.\nIm sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":2112,"Q_Id":445827,"Users Score":13,"Answer":"If you look at how the SQL solution you provided will be executed, it will go basically like this:\n\nFetch a list of friends for the current user\nFor each user in the list, start an index scan over recent posts\nMerge-join all the scans from step 2, stopping when you've retrieved enough entries\n\nYou can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them.\nYou're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: Fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends.","Q_Score":13,"Tags":"python,google-app-engine,join,google-cloud-datastore","A_Id":446471,"CreationDate":"2009-01-15T06:07:00.000","Title":"GAE - How to live with no joins?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Example Problem:\nEntities:\n\nUser contains name and a list of friends (User references)\nBlog Post contains title, content, date and Writer (User)\n\nRequirement:\nI want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries.\nSQL Solution:\nSo in sql land it would be something like:\n\nselect * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date\n\nGAE solutions i can think of are:\n\nLoad user, loop through the list of friends and load their latest blog posts. 
Finally merge all the blog posts to find the latest 10 blog entries\nIn a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts.\n\nI don't believe either of these solutions will scale.\nIm sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":2112,"Q_Id":445827,"Users Score":1,"Answer":"\"Load user, loop through the list of friends and load their latest blog posts.\"\nThat's all a join is -- nested loops. Some kinds of joins are loops with lookups. Most lookups are just loops; some are hashes.\n\"Finally merge all the blog posts to find the latest 10 blog entries\"\nThat's a ORDER BY with a LIMIT. That's what the database is doing for you.\nI'm not sure what's not scalable about this; it's what a database does anyway.","Q_Score":13,"Tags":"python,google-app-engine,join,google-cloud-datastore","A_Id":446477,"CreationDate":"2009-01-15T06:07:00.000","Title":"GAE - How to live with no joins?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data.\nI am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? \nWhat if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":2618,"Q_Id":451952,"Users Score":13,"Answer":"It should be safe to do a repozo backup of the Data.fs followed by an rsync of the blobstorage directory, as long as the database doesn't get packed while those two operations are happening.\nThis is because, at least when using blobs with FileStorage, modifications to a blob always results in the creation of a new file named based on the object id and transaction id. So if new or updated blobs are written after the Data.fs is backed up, it shouldn't be a problem, as the files that are referenced by the Data.fs should still be around. Deletion of a blob doesn't result in the file being removed until the database is packed, so that should be okay too.\nPerforming a backup in a different order, or with packing during the backup, may result in a backup Data.fs that references blobs that are not included in the backup.","Q_Score":8,"Tags":"python,plone,zope,zodb,blobstorage","A_Id":2664479,"CreationDate":"2009-01-16T20:51:00.000","Title":"What is the correct way to backup ZODB blobs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using plone.app.blob to store large ZODB objects in a blobstorage directory. 
This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data.\nI am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? \nWhat if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2618,"Q_Id":451952,"Users Score":3,"Answer":"Backing up \"blobstorage\" will do it. No need for a special order or anything else, it's very simple.\nAll operations in Plone are fully transactional, so hitting the backup in the middle of a transaction should work just fine. This is why you can do live backups of the ZODB. Without knowing what file system you're on, I'd guess that it should work as intended.","Q_Score":8,"Tags":"python,plone,zope,zodb,blobstorage","A_Id":453942,"CreationDate":"2009-01-16T20:51:00.000","Title":"What is the correct way to backup ZODB blobs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data.\nI am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? \nWhat if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":2618,"Q_Id":451952,"Users Score":1,"Answer":"Your backup strategy for the FileStorage is fine. However, making a backup of any database that stores data in multiple files never is easy as your copy has to happen with no writes to the various files. For the FileStorage a blind stupid copy is fine as it's just a single file. (Using repozo is even better.)\nIn this case (with BlobStorage combined with FileStorage) I have to point to the regular backup advice:\n\ntake the db offline while making a file-system copy\nuse snapshot tools like LVM to freeze the disk at a given point\ndo a transactional export (not feasable in practice)","Q_Score":8,"Tags":"python,plone,zope,zodb,blobstorage","A_Id":676364,"CreationDate":"2009-01-16T20:51:00.000","Title":"What is the correct way to backup ZODB blobs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.","AnswerCount":32,"Available Count":5,"Score":0.0312398314,"is_accepted":false,"ViewCount":804257,"Q_Id":454854,"Users Score":5,"Answer":"Go to your project directory with cd.\nsource\/bin\/activate (activate your env. 
if not previously).\nRun the command easy_install MySQL-python","Q_Score":493,"Tags":"python,django,python-2.x","A_Id":28278997,"CreationDate":"2009-01-18T09:13:00.000","Title":"No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.","AnswerCount":32,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":804257,"Q_Id":454854,"Users Score":6,"Answer":"I personally recommend using pymysql instead of using the genuine MySQL connector, which provides you with a platform independent interface and could be installed through pip.\nAnd you could edit the SQLAlchemy URL schema like this:\nmysql+pymysql:\/\/username:passwd@host\/database","Q_Score":493,"Tags":"python,django,python-2.x","A_Id":58246337,"CreationDate":"2009-01-18T09:13:00.000","Title":"No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.","AnswerCount":32,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":804257,"Q_Id":454854,"Users Score":93,"Answer":"if your python version is 3.5, do a pip install mysqlclient, other things didn't work for me","Q_Score":493,"Tags":"python,django,python-2.x","A_Id":38310817,"CreationDate":"2009-01-18T09:13:00.000","Title":"No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.","AnswerCount":32,"Available Count":5,"Score":0.012499349,"is_accepted":false,"ViewCount":804257,"Q_Id":454854,"Users Score":2,"Answer":"None of the above worked for me on an Ubuntu 18.04 fresh install via docker image.\nThe following solved it for me:\napt-get install holland python3-mysqldb","Q_Score":493,"Tags":"python,django,python-2.x","A_Id":58825148,"CreationDate":"2009-01-18T09:13:00.000","Title":"No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. 
I am using it in Windows Vista.","AnswerCount":32,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":804257,"Q_Id":454854,"Users Score":0,"Answer":"For CentOS 8 and Python3\n$ sudo dnf install python3-mysqlclient -y","Q_Score":493,"Tags":"python,django,python-2.x","A_Id":72496371,"CreationDate":"2009-01-18T09:13:00.000","Title":"No module named MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I think I am being a bonehead, maybe not importing the right package, but when I do...\n\nfrom pysqlite2 import dbapi2 as sqlite\nimport types\nimport re\nimport sys\n...\n def create_asgn(self):\n stmt = \"CREATE TABLE ? (login CHAR(8) PRIMARY KEY NOT NULL, grade INTEGER NOT NULL)\"\n stmt2 = \"insert into asgn values ('?', ?)\"\n self.cursor.execute(stmt, (sys.argv[2],))\n self.cursor.execute(stmt2, [sys.argv[2], sys.argv[3]])\n...\n I get the error pysqlite2.dbapi2.OperationalError: near \"?\": syntax error \nThis makes very little sense to me, as the docs show that pysqlite is qmark parametrized. I am new to python and db-api though, help me out! THANKS","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1629,"Q_Id":474261,"Users Score":7,"Answer":"That's because parameters can only stand in for values, not identifiers, so the table name can't be parametrized.\nAlso, you have quotes around a parametrized argument in the second query. Remove the quotes; escaping is handled by the underlying library automatically for you.","Q_Score":1,"Tags":"python,sqlite,pysqlite,python-db-api","A_Id":474296,"CreationDate":"2009-01-23T19:55:00.000","Title":"Python pysqlite not accepting my qmark parameterization","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. \nI would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager.\nI now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. \nNow, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2801,"Q_Id":475302,"Users Score":1,"Answer":"I was in the exact same situation as you and went with PL\/Python after giving up on PL\/SQL after a while. It was a good decision, looking back. 
Some things that bit me were unicode issues (client encoding, byte sequence) and specific postgres data types (bytea).","Q_Score":3,"Tags":"python,postgresql","A_Id":476089,"CreationDate":"2009-01-24T01:38:00.000","Title":"PostgreSQL procedural languages: to choose?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. \nI would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager.\nI now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. \nNow, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":2801,"Q_Id":475302,"Users Score":2,"Answer":"Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have.","Q_Score":3,"Tags":"python,postgresql","A_Id":475939,"CreationDate":"2009-01-24T01:38:00.000","Title":"PostgreSQL procedural languages: to choose?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python?\nIt could also be multiple solutions, e.g. one per language, as long as they can can access the same database and use the same schema.\nMulti platform support is also needed.\nClarification:\nThe idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages.\nOne other solution is to have a different ORM in each language, that use compatible schemas. 
However I believe that schema migration will be very hard in this setting.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1697,"Q_Id":482612,"Users Score":0,"Answer":"We have an O\/RM that has C++ and C# (actually COM) bindings (in FOST.3) and we're putting together the Python bindings which are new in version 4 together with Linux and Mac support.","Q_Score":7,"Tags":"c#,c++,python,orm","A_Id":496166,"CreationDate":"2009-01-27T08:10:00.000","Title":"ORM (object relational manager) solution with multiple programming language support","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python?\nIt could also be multiple solutions, e.g. one per language, as long as they can can access the same database and use the same schema.\nMulti platform support is also needed.\nClarification:\nThe idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages.\nOne other solution is to have a different ORM in each language, that use compatible schemas. However I believe that schema migration will be very hard in this setting.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1697,"Q_Id":482612,"Users Score":1,"Answer":"With SQLAlchemy, you can use reflection to get the schema, so it should work with any of the supported engines.\nI've used this to migrate data from an old SQLite to Postgres.","Q_Score":7,"Tags":"c#,c++,python,orm","A_Id":482653,"CreationDate":"2009-01-27T08:10:00.000","Title":"ORM (object relational manager) solution with multiple programming language support","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is the situation: I have a parent model say BlogPost. It has many Comments. What I want is the list of BlogPosts ordered by the creation date of its' Comments. I.e. the blog post which has the most newest comment should be on top of the list. Is this possible with SQLAlchemy?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":595,"Q_Id":492223,"Users Score":1,"Answer":"I had the same question as the parent when using the ORM, and GHZ's link contained the answer on how it's possible. In sqlalchemy, assuming BlogPost.comments is a mapped relation to the Comments table, you can't do:\n\nsession.query(BlogPost).order_by(BlogPost.comments.creationDate.desc())\n\n, but you can do:\n\nsession.query(BlogPost).join(Comments).order_by(Comments.creationDate.desc())","Q_Score":3,"Tags":"python,sqlalchemy","A_Id":1227979,"CreationDate":"2009-01-29T16:01:00.000","Title":"How can I order objects according to some attribute of the child in sqlalchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a small project I am doing in Python using web.py. 
It's a name generator, using 4 \"parts\" of a name (firstname, middlename, anothername, surname). Each part of the name is a collection of entites in a MySQL databse (name_part (id, part, type_id), and name_part_type (id, description)). Basic stuff, I guess.\nMy generator picks a random entry of each \"type\", and assembles a comical name. Right now, I am using select * from name_part where type_id=[something] order by rand() limit 1 to select a random entry of each type (so I also have 4 queries that run per pageview, I figured this was better than one fat query returning potentially hundreds of rows; if you have a suggestion for how to pull this off in one query w\/o a sproc I'll listen).\nObviously I want to make this more random. Actually, I want to give it better coverage, not necessarily randomness. I want to make sure it's using as many possibilities as possible. That's what I am asking in this question, what sorts of strategies can I use to give coverage over a large random sample? \nMy idea, is to implement a counter column on each name_part, and increment it each time I use it. I would need some logic to then say like: \"get a name_part that is less than the highest \"counter\" for this \"name_part_type\", unless there are none then pick a random one\". I am not very good at SQL, is this kind of logic even possible? The only way I can think to do this would require up to 3 or 4 queries for each part of the name (so up to 12 queries per pageview). \nCan I get some input on my logic here? Am I overthinking it? This actually sounds ideal for a stored procedure... but can you guys at least help me solve how to do it without a sproc? (I don't know if I can even use a sproc with the built-in database stuff of web.py).\nI hope this isn't terribly dumb but thanks ahead of time. \nedit: Aside from my specific problem I am still curious if there are any alternate strategies I can use that may be better.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1758,"Q_Id":514617,"Users Score":1,"Answer":"I agree with your intuition that using a stored procedure is the right way to go, but then, I almost always try to implement database stuff in the database.\nIn your proc, I would introduce some kind of logic like say, there's only a 30% chance that returning the result will actually increment the counter. Just to increase the variability.","Q_Score":4,"Tags":"python,mysql,random,web.py","A_Id":514643,"CreationDate":"2009-02-05T04:51:00.000","Title":"Random name generator strategy - help me improve it","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a legacy application which is implemented in a number of Excel workbooks. It's not something that I have the authority to re-implement, however another application that I do maintain does need to be able to call functions in the Excel workbook. \nIt's been given a python interface using the Win32Com library. Other processes can call functions in my python package which in turn invokes the functions I need via Win32Com.\nUnfortunately COM does not allow me to specify a particular COM process, so at the moment no matter how powerful my server I can only control one instance of Excel at a time on the computer. 
If I were to try to run more than one instance of excel there would be no way of ensuring that the python layer is bound to a specific Excel instance. \nI'd like to be able to run more than 1 of my excel applications on my Windows server concurrently. Is there a way to do this? For example, could I compartmentalize my environment so that I could run as many Excel _ Python combinations as my application will support?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4391,"Q_Id":516946,"Users Score":0,"Answer":"If you application uses a single excel file which contains macros which you call, I fear the answer is probably no since aside from COM Excel does not allow the same file to be opened with the same name (even if in different directories). You may be able to get around this by dynamically copying the file to another name before opening. \nMy python knowledge isn't huge, but in most languages there is a way of specifying when you create a COM object whether you wish it to be a new object or connect to a preexisting instance by default. Check the python docs for something along these lines.\nCan you list the kind of specific problems you are having and exactly what you are hoping to do?","Q_Score":6,"Tags":"python,windows,excel,com","A_Id":516983,"CreationDate":"2009-02-05T17:32:00.000","Title":"Control 2 separate Excel instances by COM independently... can it be done?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server.\nEDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like:\npsycopg2.connect(connectionString)\nThanks\nFinal Edit:\nIt was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":27905,"Q_Id":519296,"Users Score":3,"Answer":"Make sure your db connection command isn't in any kind of loop. I was getting the same error from my script until I moved my db.database() out of my programs repeating execution loop.","Q_Score":22,"Tags":"python,postgresql,psycopg2","A_Id":15046529,"CreationDate":"2009-02-06T06:15:00.000","Title":"Getting OperationalError: FATAL: sorry, too many clients already using psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. 
I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server.\nEDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like:\npsycopg2.connect(connectionString)\nThanks\nFinal Edit:\nIt was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":27905,"Q_Id":519296,"Users Score":15,"Answer":"This error means what it says: there are too many clients connected to PostgreSQL.\nQuestions you should ask yourself:\n\nAre you the only one connected to this database?\nAre you running a graphical IDE?\nWhat method are you using to connect?\nAre you testing queries at the same time that you are running the code?\n\nAny of these things could be the problem. If you are the admin, you can up the number of clients, but if a program is hanging it open, then that won't help for long.\nThere are many reasons why you could be having too many clients running at the same time.","Q_Score":22,"Tags":"python,postgresql,psycopg2","A_Id":519304,"CreationDate":"2009-02-06T06:15:00.000","Title":"Getting OperationalError: FATAL: sorry, too many clients already using psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server.\nEDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like:\npsycopg2.connect(connectionString)\nThanks\nFinal Edit:\nIt was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":27905,"Q_Id":519296,"Users Score":1,"Answer":"It simply means that many clients are making transactions to PostgreSQL at the same time.\nI was running a Postgis container and Django in different docker containers. 
Hence for my case restarting both db and system container solved the problem.","Q_Score":22,"Tags":"python,postgresql,psycopg2","A_Id":64746356,"CreationDate":"2009-02-06T06:15:00.000","Title":"Getting OperationalError: FATAL: sorry, too many clients already using psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP.\nSo I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. \nThen in the main thread start a CherryPy application that will query that SQLite database and serve the data.\nMy problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application?\nIf I'd do a connection per thread to the database will I also be able to create\/use an in memory database?","AnswerCount":6,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":13542,"Q_Id":524797,"Users Score":0,"Answer":"Depending on the data rate sqlite could be exactly the correct way to do this. The entire database is locked for each write so you aren't going to scale to 1000s of simultaneous writes per second. But if you only have a few it is the safest way of assuring you don't overwrite each other.","Q_Score":9,"Tags":"python,multithreading,sqlite","A_Id":524955,"CreationDate":"2009-02-07T23:18:00.000","Title":"Python, SQLite and threading","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP.\nSo I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. \nThen in the main thread start a CherryPy application that will query that SQLite database and serve the data.\nMy problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application?\nIf I'd do a connection per thread to the database will I also be able to create\/use an in memory database?","AnswerCount":6,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":13542,"Q_Id":524797,"Users Score":0,"Answer":"Depending on the application the DB could be a real overhead. If we are talking about volatile data, maybe you could skip the communication via DB completely and share the data between the data gathering process and the data serving process(es) via IPC. 
This is not an option if the data has to be persisted, of course.","Q_Score":9,"Tags":"python,multithreading,sqlite","A_Id":524937,"CreationDate":"2009-02-07T23:18:00.000","Title":"Python, SQLite and threading","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP.\nSo I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. \nThen in the main thread start a CherryPy application that will query that SQLite database and serve the data.\nMy problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application?\nIf I'd do a connection per thread to the database will I also be able to create\/use an in memory database?","AnswerCount":6,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":13542,"Q_Id":524797,"Users Score":8,"Answer":"Short answer: Don't use Sqlite3 in a threaded application.\nSqlite3 databases scale well for size, but rather terribly for concurrency. You will be plagued with \"Database is locked\" errors.\nIf you do, you will need a connection per thread, and you have to ensure that these connections clean up after themselves. This is traditionally handled using thread-local sessions, and is performed rather well (for example) using SQLAlchemy's ScopedSession. I would use this if I were you, even if you aren't using the SQLAlchemy ORM features.","Q_Score":9,"Tags":"python,multithreading,sqlite","A_Id":524806,"CreationDate":"2009-02-07T23:18:00.000","Title":"Python, SQLite and threading","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP.\nSo I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. \nThen in the main thread start a CherryPy application that will query that SQLite database and serve the data.\nMy problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application?\nIf I'd do a connection per thread to the database will I also be able to create\/use an in memory database?","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":13542,"Q_Id":524797,"Users Score":1,"Answer":"\"...create several threads that will gather data at a specified interval and cache that data locally into a sqlite database.\nThen in the main thread start a CherryPy app that will query that sqlite db and serve the data.\"\nDon't waste a lot of time on threads. The things you're describing are simply OS processes. Just start ordinary processes to do gathering and run Cherry Py.\nYou have no real use for concurrent threads in a single process for this. Gathering data at a specified interval -- when done with simple OS processes -- can be scheduled by the OS very simply. 
Cron, for example, does a great job of this.\nA CherryPy App, also, is an OS process, not a single thread of some larger process.\nJust use processes -- threads won't help you.","Q_Score":9,"Tags":"python,multithreading,sqlite","A_Id":524901,"CreationDate":"2009-02-07T23:18:00.000","Title":"Python, SQLite and threading","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I get a \"database table is locked\" error in my sqlite3 db. My script is single threaded, no other app is using the program (I did have it open once in \"SQLite Database Browser.exe\"). I copied the file, deleted the original (success) and renamed the copy, so I know no process is locking it, yet when I run my script everything in table B cannot be written to and it looks like table A is fine. What's happening?\n-edit-\nI fixed it but unsure how. I noticed the code was not doing the correct things (I copied the wrong field) and after fixing it up and cleaning it, it magically started working again.\n-edit2-\nSomeone else posted, so I might as well update. I think the problem was that I was trying to do a statement with a command\/cursor in use.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5791,"Q_Id":531711,"Users Score":0,"Answer":"I've also seen this error when the db file is on an NFS mounted file system.","Q_Score":3,"Tags":"python,sqlite,locking","A_Id":6345495,"CreationDate":"2009-02-10T09:50:00.000","Title":"python, sqlite error? db is locked? but it isnt?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"All I want to do is serialize and unserialize tuples of strings or ints.\nI looked at pickle.dumps() but the byte overhead is significant. Basically it looks like it takes up about 4x as much space as it needs to. Besides, all I need is basic types and have no need to serialize objects.\nmarshal is a little better in terms of space but the result is full of nasty \\x00 bytes. Ideally I would like the result to be human readable.\nI thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?\nThis is getting stored in a db, not a file. Byte overhead matters because it could make the difference between requiring a TEXT column versus a varchar, and generally data compactness affects all areas of db performance.","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2674,"Q_Id":532934,"Users Score":0,"Answer":"\"the byte overhead is significant\"\nWhy does this matter? It does the job. If you're running low on disk space, I'd be glad to sell you a 1Tb for $500. \nHave you run it? Is performance a problem? Can you demonstrate that the performance of serialization is the problem?\n\"I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?\"\nNothing simpler than repr and eval.\nWhat's wrong with eval?\nIs it the \"someone could insert malicious code into the file where I serialized my lists\" issue?\nWho -- specifically -- is going to find and edit this file to put in malicious code? 
Anything you do to secure this (i.e., encryption) removes \"simple\" from it.","Q_Score":7,"Tags":"python,serialization,pickle","A_Id":532989,"CreationDate":"2009-02-10T16:03:00.000","Title":"Lightweight pickle for basic types in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to fetch data from a mysql database using sqlalchemy and use the data in a different class.. Basically I fetch a row at a time, use the data, fetch another row, use the data and so on.. I am running into some problem doing this.. \nBasically, how do I output data a row at a time from mysql data?.. I have looked into all tutorials but they are not helping much.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":563,"Q_Id":536051,"Users Score":1,"Answer":"Exactly what problems are you running into?\nYou can simply iterate over the ResultProxy object:\n\nfor row in conn_or_sess_or_engine.execute(selectable_obj_or_SQLstring):\n do_something_with(row)","Q_Score":0,"Tags":"python,mysql,sqlalchemy","A_Id":536269,"CreationDate":"2009-02-11T09:19:00.000","Title":"Outputting data a row at a time from mysql using sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.\nSo my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).\nThen I plan to read the data in, and have a set of functions which provide access to and operations on the data.\nMy question is this:\nis there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a \"CollectionOfFruit\" class which contains a list of \"Fruit\" objects, or would I just have a \"CollectionOfFruit\" class which contains a list of tuples? Or would I just have a list of Fruit objects?\nI don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.\nAlternatively, is there a good book I should read that would point me in the right direction on this?","AnswerCount":8,"Available Count":5,"Score":0.024994793,"is_accepted":false,"ViewCount":330,"Q_Id":557199,"Users Score":1,"Answer":"Here are a couple of points for you to consider. If your data is large, reading it all into memory may be wasteful. If you need random access and not just sequential access to your data, then you'll either have to scan at most the entire file each time or read that table into an indexed memory structure like a dictionary. 
A list will still require some kind of scan (straight iteration or binary search if sorted). With that said, if you don't require some of the features of a DB then don't use one but if you just think MySQL is too heavy then +1 on the Sqlite suggestion from earlier. It gives you most of the features you'd want while using a database without the concurrency overhead.","Q_Score":0,"Tags":"python,object","A_Id":558822,"CreationDate":"2009-02-17T14:56:00.000","Title":"Converting a database-driven (non-OO) python script into a non-database driven, OO-script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.\nSo my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).\nThen I plan to read the data in, and have a set of functions which provide access to and operations on the data.\nMy question is this:\nis there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a \"CollectionOfFruit\" class which contains a list of \"Fruit\" objects, or would I just have a \"CollectionOfFruit\" class which contains a list of tuples? Or would I just have a list of Fruit objects?\nI don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.\nAlternatively, is there a good book I should read that would point me in the right direction on this?","AnswerCount":8,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":330,"Q_Id":557199,"Users Score":5,"Answer":"If the data is a natural fit for database tables (\"rectangular data\"), why not convert it to sqlite? It's portable -- just one file to move the db around, and sqlite is available anywhere you have python (2.5 and above anyway).","Q_Score":0,"Tags":"python,object","A_Id":557473,"CreationDate":"2009-02-17T14:56:00.000","Title":"Converting a database-driven (non-OO) python script into a non-database driven, OO-script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. 
For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.\nSo my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).\nThen I plan to read the data in, and have a set of functions which provide access to and operations on the data.\nMy question is this:\nis there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a \"CollectionOfFruit\" class which contains a list of \"Fruit\" objects, or would I just have a \"CollectionOfFruit\" class which contains a list of tuples? Or would I just have a list of Fruit objects?\nI don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.\nAlternatively, is there a good book I should read that would point me in the right direction on this?","AnswerCount":8,"Available Count":5,"Score":0.024994793,"is_accepted":false,"ViewCount":330,"Q_Id":557199,"Users Score":1,"Answer":"you could have a fruit class with id and name instance variables. and a function to read\/write the information from a file, and maybe a class variable to keep track of the number of fruits (objects) created","Q_Score":0,"Tags":"python,object","A_Id":557279,"CreationDate":"2009-02-17T14:56:00.000","Title":"Converting a database-driven (non-OO) python script into a non-database driven, OO-script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.\nSo my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).\nThen I plan to read the data in, and have a set of functions which provide access to and operations on the data.\nMy question is this:\nis there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a \"CollectionOfFruit\" class which contains a list of \"Fruit\" objects, or would I just have a \"CollectionOfFruit\" class which contains a list of tuples? Or would I just have a list of Fruit objects?\nI don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. 
So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.\nAlternatively, is there a good book I should read that would point me in the right direction on this?","AnswerCount":8,"Available Count":5,"Score":0.024994793,"is_accepted":false,"ViewCount":330,"Q_Id":557199,"Users Score":1,"Answer":"There's no \"one size fits all\" answer for this -- it'll depend a lot on the data and how it's used in the application. If the data and usage are simple enough you might want to store your fruit in a dict with id as key and the rest of the data as tuples. Or not. It totally depends. If there's a guiding principle out there then it's to extract the underlying requirements of the app and then write code against those requirements.","Q_Score":0,"Tags":"python,object","A_Id":557241,"CreationDate":"2009-02-17T14:56:00.000","Title":"Converting a database-driven (non-OO) python script into a non-database driven, OO-script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.\nSo my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).\nThen I plan to read the data in, and have a set of functions which provide access to and operations on the data.\nMy question is this:\nis there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a \"CollectionOfFruit\" class which contains a list of \"Fruit\" objects, or would I just have a \"CollectionOfFruit\" class which contains a list of tuples? Or would I just have a list of Fruit objects?\nI don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.\nAlternatively, is there a good book I should read that would point me in the right direction on this?","AnswerCount":8,"Available Count":5,"Score":0.049958375,"is_accepted":false,"ViewCount":330,"Q_Id":557199,"Users Score":2,"Answer":"Generally you want your Objects to absolutely match your \"real world entities\".\nSince you're starting from a database, it's not always the case that the database has any real-world fidelity, either. Some database designs are simply awful.\nIf your database has reasonable models for Fruit, that's where you start. Get that right first.\nA \"collection\" may -- or may not -- be an artificial construct that's part of the solution algorithm, not really a proper part of the problem. 
Usually collections are part of the problem, and you should design those classes, also.\nOther times, however, the collection is an artifact of having used a database, and a simple Python list is all you need.\nStill other times, the collection is actually a proper mapping from some unique key value to an entity, in which case, it's a Python dictionary.\nAnd sometimes, the collection is a proper mapping from some non-unique key value to some collection of entities, in which case it's a Python collections.defaultdict(list).\nStart with the fundamental, real-world-like entities. Those get class definitions.\nCollections may use built-in Python collections or may require their own classes.","Q_Score":0,"Tags":"python,object","A_Id":557291,"CreationDate":"2009-02-17T14:56:00.000","Title":"Converting a database-driven (non-OO) python script into a non-database driven, OO-script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to experiment\/play around with non-relational databases, it'd be best if the solution was:\n\nportable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.\naccessible from python\nworks on both windows and linux\n\nWhat can you recommend for me?\nEssentially, I would like to be able to install this system on a shared linux server where I have little user privileges.","AnswerCount":9,"Available Count":1,"Score":0.0886555158,"is_accepted":false,"ViewCount":3510,"Q_Id":575172,"Users Score":4,"Answer":"If you're used to thinking a relational database has to be huge and heavy like PostgreSQL or MySQL, then you'll be pleasantly surprised by SQLite.\nIt is relational, very small, uses a single file, has Python bindings, requires no extra priviledges, and works on Linux, Windows, and many other platforms.","Q_Score":2,"Tags":"python,non-relational-database,portable-database","A_Id":575197,"CreationDate":"2009-02-22T16:31:00.000","Title":"portable non-relational database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I don't expect to need much more than basic CRUD type functionality. I know that SQLAlchemy is more flexible, but the syntax etc of sqlobject just seem to be a bit easier to get up and going with.","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3351,"Q_Id":592332,"Users Score":9,"Answer":"I think SQLObject is more pythonic\/simpler, so if it works for you, then stick with it.\nSQLAlchemy takes a little more to learn, but can do more advanced things if you need that.","Q_Score":14,"Tags":"python,orm,sqlalchemy,sqlobject","A_Id":592348,"CreationDate":"2009-02-26T20:37:00.000","Title":"Any reasons not to use SQLObject over SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on ajax-game. The abstract: 2+ gamers(browsers) change a variable which is saved to DB through json. 
All gamers are synchronized by javascript-timer+json - periodically reading that variable from DB. \nIn general, all changes are stored in DB as history, but I want the recent change duplicated in memory. \nSo the problem is: i want one variable to be stored in memory instead of DB.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":127,"Q_Id":602030,"Users Score":0,"Answer":"You'd either have to use a cache, or fetch the most recent change on each request (since you can't persist objects between requests in-memory).\nFrom what you describe, it sounds as if it's being hit fairly frequently, so the cache is probably the way to go.","Q_Score":0,"Tags":"python,django","A_Id":603637,"CreationDate":"2009-03-02T11:46:00.000","Title":"Store last created model's row in memory","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a Python (2.5) GUI Application that does the following:\n\nImports from Access to an Sqlite database \nSaves ui form settings to an Sqlite database\n\nCurrently I am using pywin32 to read Access, and pysqlite2\/dbapi2 to read\/write Sqlite.\nHowever, certain Qt objects don't automatically cast to Python or Sqlite equivalents when updating the Sqlite database. For example, a QDate, QDateTime, QString and others raise an error. Currently I am maintaining conversion functions.\nI investigated using QSql, which appears to overcome the casting problem. In addition, it is able to connect to both Access and Sqlite. These two benefits would appear to allow me to refactor my code to use less modules and not maintain my own conversion functions.\nWhat I am looking for is a list of important side-effects, performance gains\/losses, functionality gains\/losses that any of the SO community has experienced as a result from the switch to QSql.\nOne functionality loss I have experienced thus far is the inability to use Access functions using the QODBC driver (e.g., 'SELECT LCASE(fieldname) from tablename' fails, as does 'SELECT FORMAT(fieldname, \"General Number\") from tablename')","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":608098,"Users Score":0,"Answer":"When dealing with databases and PyQt UIs, I'll use something similar to model-view-controller model to help organize and simplify the code. \nView module\n\nuses\/holds any QObjects that are necessary\nfor the UI \ncontain simple functions\/methods\nfor updating your QTGui Object, as\nwell as extracting input from GUI\nobjects\n\nController module\n\nwill perform all DB interactions\nthe more complex code lives here\n\nBy using a MVC, you will not need to rely on the QT Library as much, and you will run into less problems linking QT with Python.\nSo I guess my suggestion is to continue using pysqlite (since that's what you are used to), but refactor your design a little so the only thing dealing with the QT libraries is the UI. From the description of your GUI, it should be fairly straightforward.","Q_Score":1,"Tags":"python,qt,sqlite,pyqt4,pywin32","A_Id":608262,"CreationDate":"2009-03-03T20:45:00.000","Title":"What will I lose or gain from switching database APIs? 
(from pywin32 and pysqlite to QSql)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using pysqlite how can a user-defined-type be used as a value in a comparison, e. g: \u201c... WHERE columnName > userType\u201d?\nFor example, I've defined a bool type with the requisite registration, converter, etc. Pysqlite\/Sqlite responds as expected for INSERT and SELECT operations (bool 'True' stored as an integer 1 and returned as True).\nBut it fails when the bool is used in either \u201cSELECT * from tasks WHERE display = True\u201d or \u201c... WHERE display = 'True.' \u201c In the first case Sqlite reports an error that there is not a column named True. And in the second case no records are returned. The select works if a 1 is used in place of True. I seem to have the same problem when using pysqlite's own date and timestamp adaptors.\nI can work around this behavior for this and other user-types but that's not as fun. I'd like to know if using a user-defined type in a query is or is not possible so that I don't keep banging my head on this particular wall.\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":671,"Q_Id":609516,"Users Score":0,"Answer":"You probably have to cast it to the correct type. Try \"SELECT * FROM tasks WHERE (display = CAST ('True' AS bool))\".","Q_Score":1,"Tags":"python,sqlite,pysqlite","A_Id":610761,"CreationDate":"2009-03-04T07:08:00.000","Title":"pysqlite user types in select statement","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I often need to execute custom sql queries in django, and manually converting query results into objects every time is kinda painful. I wonder how fellow Slackers deal with this. Maybe someone had written some kind of a library to help dealing with custom SQL in Django?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3263,"Q_Id":619384,"Users Score":3,"Answer":"Since the issue is \"manually converting query results into objects,\" the simplest solution is often to see if your custom SQL can fit into an ORM .extra() call rather than being a pure-SQL query. 
Often it can, and then you let the ORM do all the work of building up objects as usual.","Q_Score":2,"Tags":"python,django,orm","A_Id":620117,"CreationDate":"2009-03-06T16:11:00.000","Title":"Tools to ease executing raw SQL with Django ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I have created a table with an auto-incrementing primary key, is there a way to obtain what the primary key would be (that is, do something like reserve the primary key) without actually committing?\nI would like to place two operations inside a transaction however one of the operations will depend on what primary key was assigned in the previous operation.","AnswerCount":2,"Available Count":1,"Score":-0.2913126125,"is_accepted":false,"ViewCount":19996,"Q_Id":620610,"Users Score":-3,"Answer":"You can use multiple transactions and manage it within scope.","Q_Score":53,"Tags":"python,sql,sqlalchemy","A_Id":620784,"CreationDate":"2009-03-06T22:07:00.000","Title":"SQLAlchemy Obtain Primary Key With Autoincrement Before Commit","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Our Python CMS stores some date values in a generic \"attribute\" table's varchar column. Some of these dates are later moved into a table with an actual date column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an \"Invalid string date\" error.\nHow can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":902,"Q_Id":639949,"Users Score":1,"Answer":"The format of a date string that Oracle recognizes as a date is a configurable property of the database and as such it's considered bad form to rely on implicit conversions of strings to dates.\nTypically Oracle dates format to 'DD-MON-YYYY' but you can't always rely on it being set that way.\nPersonally I would have the CMS write to this \"attribute\" table in a standard format like 'YYYY-MM-DD', and then whichever job moves that to a DATE column can explicitly cast the value with to_date( value, 'YYYY-MM-DD' ) and you won't have any problems.","Q_Score":1,"Tags":"python,oracle,validation","A_Id":640115,"CreationDate":"2009-03-12T18:41:00.000","Title":"Validating Oracle dates in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Our Python CMS stores some date values in a generic \"attribute\" table's varchar column. Some of these dates are later moved into a table with an actual date column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an \"Invalid string date\" error.\nHow can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":902,"Q_Id":639949,"Users Score":-1,"Answer":"Validate as early as possible. 
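A minimal sketch of validating at entry time, assuming the CMS standardises on a single string format such as 'YYYY-MM-DD' (the format choice is an assumption):

```python
from datetime import datetime

CMS_DATE_FORMAT = "%Y-%m-%d"  # assumed canonical format for the varchar column

def validate_cms_date(value):
    """Raise ValueError at entry time instead of failing later in Oracle.

    The migration job can then cast explicitly with
    to_date(value, 'YYYY-MM-DD') and never hit an invalid string date.
    """
    return datetime.strptime(value, CMS_DATE_FORMAT).date()

# validate_cms_date("2009-03-12")  ->  datetime.date(2009, 3, 12)
# validate_cms_date("2009-02-30")  ->  raises ValueError
```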
Why don't you store dates as dates in your Python CMS? \nIt is difficult to know what date a string like '03-04-2008' is. Is it 3 april 2008 or 4 march 2008? An American will say 4 march 2008 but a Dutch person will say 3 april 2008.","Q_Score":1,"Tags":"python,oracle,validation","A_Id":640153,"CreationDate":"2009-03-12T18:41:00.000","Title":"Validating Oracle dates in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am finding it difficult to use MySQL with Python in my windows system.\nI am currently using Python 2.6. I have tried to compile MySQL-python-1.2.3b1 (which is supposed to work for Python 2.6 ?) source code using the provided setup scripts. The setup script runs and it doesn't report any error but it doesn't generate _mysql module. \nI have also tried setting up MySQL for Python 2.5 with out success. The problem with using 2.5 is that Python 2.5 is compiled with visual studio 2003 (I installed it using the provided binaries). I have visual studio 2005 on my windows system. Hence setuptools fails to generate _mysql module.\nAny help ?","AnswerCount":16,"Available Count":1,"Score":0.012499349,"is_accepted":false,"ViewCount":110355,"Q_Id":645943,"Users Score":1,"Answer":"Because I am running python in a (pylons\/pyramid) virtualenv, I could not run the binary installers (helpfully) linked to previously. \nI had problems following the steps with Willie's answer, but I determined that the problem is (probably) that I am running windows 7 x64 install, which puts the registry key for mysql in a slightly different location, specifically in my case (note: I am running version 5.5) in: \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\MySQL AB\\MySQL Server 5.5\".\nHOWEVER, \"HKEY_LOCAL_MACHINE\\\" cannot be included in the path or it will fail.\nAlso, I had to do a restart between steps 3 and 4.\nAfter working through all of this, IMO it would have been smarter to run the entire python dev environment from cygwin.","Q_Score":102,"Tags":"python,mysql,windows","A_Id":5294670,"CreationDate":"2009-03-14T13:53:00.000","Title":"Integrating MySQL with Python in Windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"OMG!\nWhat an apparent problem... my django based scripts have locked my sqlite db...\nDoes anyone know how to fix?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4579,"Q_Id":652750,"Users Score":6,"Answer":"Your database is locked because you have a transaction running somewhere. \nStop all your Django apps. If necessary, reboot.\nIt's also remotely possible that you crashed a SQLite client in the middle of a transaction and the file lock was left in place.","Q_Score":1,"Tags":"python,django","A_Id":652758,"CreationDate":"2009-03-17T01:33:00.000","Title":"How to unlock an sqlite3 db?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've added new models and pushed to our staging server, run syncdb to create their tables, and it locks up. 
It gets as far as 'Create table photos_photousertag' and postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise i get nothing on either said. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else ran into this?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":985,"Q_Id":674030,"Users Score":0,"Answer":"Strange here too, but simply restarting the PostgreSQL service (or server) solved it. I'd tried manually pasting the table creation code in psql too, but that wasn't solving it either (well, no way it could if it was a lock thing) - so I just used the restart:\n\nsystemctl restart postgresql.service \n\nthat's on my Suse box.\nAm not sure whether reloading the service\/server might lift existing table locks too?","Q_Score":3,"Tags":"python,django,django-syncdb","A_Id":21254637,"CreationDate":"2009-03-23T16:16:00.000","Title":"Django syncdb locking up on table creation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've added new models and pushed to our staging server, run syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag' and postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise i get nothing on either said. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else ran into this?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":985,"Q_Id":674030,"Users Score":1,"Answer":"I just experienced this as well, and it turned out to just be a plain old lock on that particular table, unrelated to Django. Once that cleared the sync went through just fine.\nTry querying the table that the sync is getting stuck on and make sure that's working correctly first.","Q_Score":3,"Tags":"python,django,django-syncdb","A_Id":10438955,"CreationDate":"2009-03-23T16:16:00.000","Title":"Django syncdb locking up on table creation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've added new models and pushed to our staging server, run syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag' and postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise i get nothing on either said. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else ran into this?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":985,"Q_Id":674030,"Users Score":1,"Answer":"We use postgres, and while we've not run into this particular issue, there are some steps you may find helpful in debugging:\na. What version of postgres and psycopg2 are you using? For that matter, what version of django?\nb. Try running the syncdb command with the \"--verbosity=2\" option to show all output.\nc. Find the SQL that django is generating by running the \"manage.py sql \" command. Run the CREATE TABLE statements for your new models in the postgres shell and see what develops.\nd. 
Turn the error logging, statement logging, and server status logging on postgres way up to see if you can catch any particular messages.\nIn the past, we've usually found that either option b or option c points out the problem.","Q_Score":3,"Tags":"python,django,django-syncdb","A_Id":674105,"CreationDate":"2009-03-23T16:16:00.000","Title":"Django syncdb locking up on table creation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Let's say I have two or more processes dealing with an SQLite database - a \"player\" process and many \"editor\" processes. \nThe \"player\" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.\nAn \"editor\" process is any editor for that database: it changes the database constantly.\nNow I want the player to reflect the editing changes quickly.\nI know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.\nI could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.\nI am thinking about using a log table and triggers, but I wonder if there is a simpler method.","AnswerCount":8,"Available Count":5,"Score":0.049958375,"is_accepted":false,"ViewCount":15631,"Q_Id":677028,"Users Score":2,"Answer":"Just open a socket between the two processes and have the editor tell all the players about the update.","Q_Score":15,"Tags":"python,sqlite,notifications","A_Id":677042,"CreationDate":"2009-03-24T11:34:00.000","Title":"How do I notify a process of an SQLite database change done in a different process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have two or more processes dealing with an SQLite database - a \"player\" process and many \"editor\" processes. \nThe \"player\" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.\nAn \"editor\" process is any editor for that database: it changes the database constantly.\nNow I want the player to reflect the editing changes quickly.\nI know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.\nI could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.\nI am thinking about using a log table and triggers, but I wonder if there is a simpler method.","AnswerCount":8,"Available Count":5,"Score":0.049958375,"is_accepted":false,"ViewCount":15631,"Q_Id":677028,"Users Score":2,"Answer":"I think in that case, I would make a process to manage the database read\/writes.\nEach editor that want to make some modifications to the database makes a call to this proccess, be it through IPC or network, or whatever method.\nThis process can then notify the player of a change in the database. 
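A bare-bones sketch of that notification half over a plain TCP socket; the port number and the one-line "changed" token are made up for illustration:

```python
import socket

NOTIFY_ADDR = ("127.0.0.1", 8765)   # assumed port; players connect here

# In the process that owns the SQLite file:
def notify_players(subscribers):
    """Send a tiny 'something changed' token to every connected player.

    `subscribers` is a list of sockets collected by an accept() loop
    (not shown here)."""
    for conn in list(subscribers):
        try:
            conn.sendall("changed\n")
        except socket.error:
            subscribers.remove(conn)    # that player went away

# In a player process:
def wait_for_change():
    """Block until the DB-owning process reports a change, then re-query."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(NOTIFY_ADDR)
    line = s.makefile().readline()      # blocks until a token (or EOF) arrives
    s.close()
    return line.strip() == "changed"
```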
The player, when he wants to retrieve some data should make a request of the data it wants to the process managing the database. (Or the db process tells it what it needs, when it notifies of a change, so no request from the player needed)\nDoing this will have the advantage of having only one process accessing the SQLite DB, so no locking or concurrency issues on the database.","Q_Score":15,"Tags":"python,sqlite,notifications","A_Id":677215,"CreationDate":"2009-03-24T11:34:00.000","Title":"How do I notify a process of an SQLite database change done in a different process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have two or more processes dealing with an SQLite database - a \"player\" process and many \"editor\" processes. \nThe \"player\" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.\nAn \"editor\" process is any editor for that database: it changes the database constantly.\nNow I want the player to reflect the editing changes quickly.\nI know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.\nI could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.\nI am thinking about using a log table and triggers, but I wonder if there is a simpler method.","AnswerCount":8,"Available Count":5,"Score":0.049958375,"is_accepted":false,"ViewCount":15631,"Q_Id":677028,"Users Score":2,"Answer":"If it's on the same machine, the simplest way would be to have named pipe, \"player\" with blocking read() and \"editors\" putting a token in pipe whenever they modify DB.","Q_Score":15,"Tags":"python,sqlite,notifications","A_Id":677087,"CreationDate":"2009-03-24T11:34:00.000","Title":"How do I notify a process of an SQLite database change done in a different process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have two or more processes dealing with an SQLite database - a \"player\" process and many \"editor\" processes. 
\nThe \"player\" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.\nAn \"editor\" process is any editor for that database: it changes the database constantly.\nNow I want the player to reflect the editing changes quickly.\nI know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.\nI could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.\nI am thinking about using a log table and triggers, but I wonder if there is a simpler method.","AnswerCount":8,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":15631,"Q_Id":677028,"Users Score":4,"Answer":"A relational database is not your best first choice for this.\nWhy?\nYou want all of your editors to pass changes to your player. \nYour player is -- effectively -- a server for all those editors. Your player needs multiple open connections. It must listen to all those connections for changes. It must display those changes.\nIf the changes are really large, you can move to a hybrid solution where the editors persist the changes and notify the player. \nEither way, the editors must notify they player that they have a change. It's much, much simpler than the player trying to discover changes in a database.\n\nA better design is a server which accepts messages from the editors, persists them, and notifies the player. This server is neither editor nor player, but merely a broker that assures that all the messages are handled. It accepts connections from editors and players. It manages the database.\nThere are two implementations. Server IS the player. Server is separate from the player. The design of server doesn't change -- only the protocol. When server is the player, then server calls the player objects directly. When server is separate from the player, then the server writes to the player's socket.\nWhen the player is part of the server, player objects are invoked directly when a message is received from an editor. When the player is separate, a small reader collects the messages from a socket and calls the player objects.\nThe player connects to the server and then waits for a stream of information. This can either be input from the editors or references to data that the server persisted in the database.\nIf your message traffic is small enough so that network latency is not a problem, editor sends all the data to the server\/player. If message traffic is too large, then the editor writes to a database and sends a message with just a database FK to the server\/player. \n\nPlease clarify \"If the editor crashes while notifying, the player is permanently messed up\" in your question.\nThis sounds like a poor design for the player service. It can't be \"permanently messed up\" unless it's not getting state from the various editors. 
If it's getting state from the editors (but attempting to mirror that state, for example) then you should consider a design where the player simply gets state from the editor and cannot get \"permanently messed up\".","Q_Score":15,"Tags":"python,sqlite,notifications","A_Id":677085,"CreationDate":"2009-03-24T11:34:00.000","Title":"How do I notify a process of an SQLite database change done in a different process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have two or more processes dealing with an SQLite database - a \"player\" process and many \"editor\" processes. \nThe \"player\" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.\nAn \"editor\" process is any editor for that database: it changes the database constantly.\nNow I want the player to reflect the editing changes quickly.\nI know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.\nI could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.\nI am thinking about using a log table and triggers, but I wonder if there is a simpler method.","AnswerCount":8,"Available Count":5,"Score":0.024994793,"is_accepted":false,"ViewCount":15631,"Q_Id":677028,"Users Score":1,"Answer":"How many editor processes (why processes?), and how often do you expect updates? This doesn't sound like a good design, especially not considering sqlite really isn't too happy about multiple concurrent accesses to the database.\nIf multiple processes makes sense and you want persistence, it would probably be smarter to have the editors notify your player via sockets, pipes, shared memory or the like and then have the player (aka server process) do the persisting.","Q_Score":15,"Tags":"python,sqlite,notifications","A_Id":677169,"CreationDate":"2009-03-24T11:34:00.000","Title":"How do I notify a process of an SQLite database change done in a different process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"does anybody know what is the equivalent to SQL \"INSERT OR REPLACE\" clause in SQLAlchemy and its SQL expression language?\nMany thanks -- honzas","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20634,"Q_Id":708762,"Users Score":5,"Answer":"I don't think (correct me if I'm wrong) INSERT OR REPLACE is in any of the SQL standards; it's an SQLite-specific thing. There is MERGE, but that isn't supported by all dialects either. So it's not available in SQLAlchemy's general dialect.\nThe cleanest solution is to use Session, as suggested by M. Utku. You could also use SAVEPOINTs to save, try: an insert, except IntegrityError: then rollback and do an update instead. 
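A rough sketch of that insert-first pattern with a Session; Widget is a hypothetical mapped class with a unique name column:

```python
from sqlalchemy.exc import IntegrityError
from myapp.models import Widget   # hypothetical declarative class; 'name' is unique

def insert_or_replace(session, name, value):
    """Try the INSERT; if the unique key already exists, roll back the
    failed statement and UPDATE the existing row instead."""
    try:
        session.add(Widget(name=name, value=value))
        session.commit()
    except IntegrityError:
        session.rollback()
        session.query(Widget).filter_by(name=name).update({"value": value})
        session.commit()
```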
A third solution is to write your INSERT with an OUTER JOIN and a WHERE clause that filters on the rows with nulls.","Q_Score":13,"Tags":"python,sqlalchemy","A_Id":709452,"CreationDate":"2009-04-02T08:05:00.000","Title":"SQLAlchemy - INSERT OR REPLACE equivalent","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":0,"Answer":"Disagreeing with the noble colleagues, I often use DBD::CSV from Perl. There are good reasons to do it. Foremost is data update made simple using a spreadsheet. As a bonus, since I am using SQL queries, the application can be easily upgraded to a real database engine. Bear in mind these were extremely small database in a single user application. \nSo rephrasing the question: Is there a python module equivalent to Perl's DBD:CSV","Q_Score":4,"Tags":"python,csv","A_Id":1396578,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":0,"Answer":"What about postgresql? I've found that quite nice to work with, and python supports it well.\nBut I really would look for another provider unless it's really not an option.","Q_Score":4,"Tags":"python,csv","A_Id":713531,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. 
I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0166651236,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":1,"Answer":"\"Anyways, now, the question: is it possible to update values SQL-style in a CSV database?\"\nTechnically, it's possible. However, it can be hard.\nIf both PHP and Python are writing the file, you'll need to use OS-level locking to assure that they don't overwrite each other. Each part of your system will have to lock the file, rewrite it from scratch with all the updates, and unlock the file.\nThis means that PHP and Python must load the entire file into memory before rewriting it.\nThere are a couple of ways to handle the OS locking.\n\nUse the same file and actually use some OS lock module. Both processes have the file open at all times.\nWrite to a temp file and do a rename. This means each program must open and read the file for each transaction. Very safe and reliable. A little slow.\n\nOr.\nYou can rearchitect it so that only Python writes the file. The front-end reads the file when it changes, and drops off little transaction files to create a work queue for Python. In this case, you don't have multiple writers -- you have one reader and one writer -- and life is much, much simpler.","Q_Score":4,"Tags":"python,csv","A_Id":713396,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":0,"Answer":"I agree. Tell them that 5 random strangers agree that you being forced into a corner to use CSV is absurd and unacceptable.","Q_Score":4,"Tags":"python,csv","A_Id":712567,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? 
Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0166651236,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":1,"Answer":"You can probably used sqlite3 for more real database. It's hard to imagine hosting that won't allow you to install it as a python module.\nDon't even think of using CSV, your data will be corrupted and lost faster than you say \"s#&t\"","Q_Score":4,"Tags":"python,csv","A_Id":712568,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0166651236,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":1,"Answer":"I couldn't imagine this ever being a good idea. The current mess I've inherited writes vital billing information to CSV and updates it after projects are complete. It runs horribly and thousands of dollars are missed a month. For the current restrictions that you have, I'd consider finding better hosting.","Q_Score":4,"Tags":"python,csv","A_Id":712522,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0166651236,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":1,"Answer":"Keep calling on the help desk.\nWhile you can use a CSV as a database, it's generally a bad idea. You would have to implement you own locking, searching, updating, and be very careful with how you write it out to make sure that it isn't erased in case of a power outage or other abnormal shutdown. 
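For the "be very careful with how you write it out" part, a common precaution is to rewrite into a temporary file in the same directory and rename it over the original, so a crash mid-write never leaves a half-written file; a minimal sketch (the "id" key column is an assumption):

```python
import csv, os, tempfile

def update_row(path, key, new_row, key_field="id"):
    """Rewrite the CSV with one row replaced, then rename into place.

    Writing to a temp file in the same directory and renaming it over the
    original means a crash mid-write leaves the old file intact."""
    src = open(path, "rb")
    reader = csv.DictReader(src)
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    tmp = os.fdopen(fd, "wb")
    writer = csv.DictWriter(tmp, fieldnames=reader.fieldnames)
    writer.writerow(dict(zip(reader.fieldnames, reader.fieldnames)))  # header row
    for row in reader:
        writer.writerow(new_row if row[key_field] == key else row)
    tmp.close()
    src.close()
    os.rename(tmp_path, path)  # atomic on POSIX filesystems
```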
There will be no transactions, no query language unless you write your own, etc.","Q_Score":4,"Tags":"python,csv","A_Id":712515,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":0,"Answer":"I'd keep calling help desk. You don't want to use CSV for data if it's relational at all. It's going to be nightmare.","Q_Score":4,"Tags":"python,csv","A_Id":712512,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database. \nWhile I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.\nAnyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?","AnswerCount":12,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":2433,"Q_Id":712510,"Users Score":0,"Answer":"If I understand you correctly: you need to access the same database from both python and php, and you're screwed because you can only use mysql from php, and only sqlite from python?\nCould you further explain this? Maybe you could use xml-rpc or plain http requests with xml\/json\/... to get the php program to communicate with the python program (or the other way around?), so that only one of them directly accesses the db.\nIf this is not the case, I'm not really sure what the problem.","Q_Score":4,"Tags":"python,csv","A_Id":712974,"CreationDate":"2009-04-03T04:02:00.000","Title":"Using CSV as a mutable database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule.\nThere are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. 
That way teams are still playing each other the proper number of times but not in the same order as before.\nThere is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week.\nNow, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well.\nAny help would be greatly appreciated.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":2359,"Q_Id":719886,"Users Score":2,"Answer":"Have you considered keeping your same \"schedule\", and just shuffling the teams? Generating a schedule where everyone plays each other the proper number of times is possible, but if you already have such a schedule then it's much easier to just shuffle the teams.\nYou could keep your current table, but replace each team in it with an id (0-23, or A-X, or whatever), then randomly generate into another table where you assign each team to each id (0 = TeamJoe, 1 = TeamBob, etc). Then when it's time to shuffle again next year, just regenerate that mapping table.\nNot sure if this answers the question the way you want, but is probably what I would go with (and is actually how I do it on my fantasy football website).","Q_Score":0,"Tags":"python,postgresql","A_Id":719913,"CreationDate":"2009-04-05T23:34:00.000","Title":"Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule.\nThere are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before.\nThere is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week.\nNow, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well.\nAny help would be greatly appreciated.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2359,"Q_Id":719886,"Users Score":1,"Answer":"I'm not sure I fully understand the problem, but here is how I would do it:\n1. create a complete list of matches that need to happen\n2. 
iterate over the weeks, selecting which match needs to happen in this week.\nYou can use Python lists to represent the matches that still need to happen, and, for each week, the matches that are happening in this week.\nIn step 2, selecting a match to happen would work this way:\na. use random.choice to select a random match to happen.\nb. determine which team has a home round for this match, using random.choice([1,2]) (if it could have been a home round for either team)\nc. temporarily remove all matches that get blocked by this selection. a match is blocked if one of its teams has already two matches in the week, or if both teams already have a home match in this week, or if both teams already have a road match in this week.\nd. when there are no available matches anymore for a week, proceed to the next week, readding all the matches that got blocked for the previous week.","Q_Score":0,"Tags":"python,postgresql","A_Id":719909,"CreationDate":"2009-04-05T23:34:00.000","Title":"Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with a 20 gig XML file that I would like to import into a SQL database (preferably MySQL, since that is what I am familiar with). This seems like it would be a common task, but after Googling around a bit I haven't been able to figure out how to do it. What is the best way to do this? \nI know this ability is built into MySQL 6.0, but that is not an option right now because it is an alpha development release.\nAlso, if I have to do any scripting I would prefer to use Python because that's what I am most familiar with. \nThanks.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12197,"Q_Id":723757,"Users Score":0,"Answer":"It may be a common task, but maybe 20GB isn't as common with MySQL as it is with SQL Server.\nI've done this using SQL Server Integration Services and a bit of custom code. Whether you need either of those depends on what you need to do with 20GB of XML in a database. Is it going to be a single column of a single row of a table? One row per child element?\nSQL Server has an XML datatype if you simply want to store the XML as XML. This type allows you to do queries using XQuery, allows you to create XML indexes over the XML, and allows the XML column to be \"strongly-typed\" by referring it to a set of XML schemas, which you store in the database.","Q_Score":5,"Tags":"python,sql,xml","A_Id":723931,"CreationDate":"2009-04-07T00:39:00.000","Title":"Import XML into SQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've written a web-app in python using SQLite and it runs fine on my server at home (with apache and python 2.5.2). I'm now trying to upload it to my web host and there servers use python 2.2.3 without SQLite.\nAnyone know of a way to use SQLite in python 2.2.3 e.g. a module that I can upload and import? 
I've tried butchering the module from newer versions of python, but they don't seem to be compatible.\nThanks,\nMike","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1012,"Q_Id":737511,"Users Score":2,"Answer":"There is no out-of-the-box solution; you either have to backport the SQLlite module from Python 2.5 to Python 2.2 or ask your web hoster to upgrade to the latest Python version. \nPython 2.2 is really ancient! At least for security reasons, they should upgrade (no more security fixes for 2.2 since May 30, 2003!).\nNote that you can install several versions of Python in parallel. Just make sure you use \"\/usr\/bin\/python25\" instead of \"\/usr\/bin\/python\" in your scripts. To make sure all the old stuff is still working, after installing Python 2.5, you just have to fix the two symbolic links \"\/usr\/bin\/python\" and \"\/usr\/lib\/python\" which should now point to 2.5. Bend them back to 2.2 and you're good.","Q_Score":1,"Tags":"python,sql,linux,sqlite,hosting","A_Id":737617,"CreationDate":"2009-04-10T12:40:00.000","Title":"SQLite in Python 2.2.3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've written a web-app in python using SQLite and it runs fine on my server at home (with apache and python 2.5.2). I'm now trying to upload it to my web host and there servers use python 2.2.3 without SQLite.\nAnyone know of a way to use SQLite in python 2.2.3 e.g. a module that I can upload and import? I've tried butchering the module from newer versions of python, but they don't seem to be compatible.\nThanks,\nMike","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1012,"Q_Id":737511,"Users Score":0,"Answer":"In case anyone comes across this question, the reason why neither pysqlite nor APSW are available for Python 2.2 is because Python 2.3 added the simplified GIL API. Prior to Python 2.3 it required a lot of code to keep track of the GIL. (The GIL is the lock used by Python to ensure correct behaviour while multi-threading.)\nDoing a backport to 2.2 would require ripping out all the threading code. Trying to make it also be thread safe under 2.2 would be a nightmare. There was a reason they introduced the simplified GIL API!\nI am still astonished at just how popular older Python versions are. APSW for Python 2.3 is still regularly downloaded.","Q_Score":1,"Tags":"python,sql,linux,sqlite,hosting","A_Id":4066757,"CreationDate":"2009-04-10T12:40:00.000","Title":"SQLite in Python 2.2.3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.\nWhen a group becomes empty, I want the group to be deleted, (along with other things associated with the group. 
Fortunately, SQLAlchemy's cascade works fine with these more simple situations).\nThe problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.\nAdding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.\nWhat is the best way to delete a group when all of its members leave and have this deletion cascade to related entities.\nI understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger however, I'm afraid that I would miss places in the code (and I'm lazy).","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3255,"Q_Id":740630,"Users Score":0,"Answer":"Could you post a sample of your table and mapper set up? It might be easier to spot what is going on.\nWithout seeing the code it is hard to tell, but perhaps there is something wrong with the direction of the relationship?","Q_Score":9,"Tags":"python,sqlalchemy","A_Id":776246,"CreationDate":"2009-04-11T19:07:00.000","Title":"SQLAlchemy many-to-many orphan deletion","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.\nWhen a group becomes empty, I want the group to be deleted, (along with other things associated with the group. Fortunately, SQLAlchemy's cascade works fine with these more simple situations).\nThe problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.\nAdding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.\nWhat is the best way to delete a group when all of its members leave and have this deletion cascade to related entities.\nI understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger however, I'm afraid that I would miss places in the code (and I'm lazy).","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3255,"Q_Id":740630,"Users Score":3,"Answer":"The way I've generally handled this is to have a function on your user or group called leave_group. When you want a user to leave a group, you call that function, and you can add any side effects you want into there. In the long term, this makes it easier to add more and more side effects. 
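A small sketch of such a helper; the users relation, the session handling, and the commit placement are assumptions:

```python
def leave_group(session, user, group):
    """The single place in the code where 'user leaves group' happens,
    so the empty-group cleanup (and any future side effects) can't be missed."""
    group.users.remove(user)        # 'users' is the assumed many-to-many relation
    if not group.users:
        session.delete(group)       # let the configured cascades clean up the rest
    session.commit()
```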
(For example when you want to check that someone is allowed to leave a group).","Q_Score":9,"Tags":"python,sqlalchemy","A_Id":763256,"CreationDate":"2009-04-11T19:07:00.000","Title":"SQLAlchemy many-to-many orphan deletion","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.\nWhen a group becomes empty, I want the group to be deleted, (along with other things associated with the group. Fortunately, SQLAlchemy's cascade works fine with these more simple situations).\nThe problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.\nAdding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.\nWhat is the best way to delete a group when all of its members leave and have this deletion cascade to related entities.\nI understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger however, I'm afraid that I would miss places in the code (and I'm lazy).","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":3255,"Q_Id":740630,"Users Score":3,"Answer":"I think you want cascade='save, update, merge, expunge, refresh, delete-orphan'. This will prevent the \"delete\" cascade (which you get from \"all\") but maintain the \"delete-orphan\", which is what you're looking for, I think (delete when there are no more parents).","Q_Score":9,"Tags":"python,sqlalchemy","A_Id":770287,"CreationDate":"2009-04-11T19:07:00.000","Title":"SQLAlchemy many-to-many orphan deletion","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":4305,"Q_Id":765964,"Users Score":1,"Answer":"You will have to build the whole access logic to S3 in your applications","Q_Score":10,"Tags":"python,django,amazon-web-services,amazon-s3","A_Id":766030,"CreationDate":"2009-04-19T19:51:00.000","Title":"Amazon S3 permissions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? 
It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":4305,"Q_Id":765964,"Users Score":8,"Answer":"Have the user hit your server\nHave the server set up a query-string authentication with a short expiration (minutes, hours?)\nHave your server redirect to #2","Q_Score":10,"Tags":"python,django,amazon-web-services,amazon-s3","A_Id":768090,"CreationDate":"2009-04-19T19:51:00.000","Title":"Amazon S3 permissions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":4305,"Q_Id":765964,"Users Score":14,"Answer":"There are various ways to control access to the S3 objects:\n\nUse the query string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done.\nUse the S3 ACLS - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for.\nYou proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box.\nYou can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and #3 could be minimal, but depends your particular situation.","Q_Score":10,"Tags":"python,django,amazon-web-services,amazon-s3","A_Id":768050,"CreationDate":"2009-04-19T19:51:00.000","Title":"Amazon S3 permissions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses. \n\nI have the open and close times for every business for every day of the week\nLet's assume a business can open\/close only on 00, 15, 30, 45 minute marks of each hour\nI'm assuming the same schedule each week.\nI am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.\nMind you, some my open at 11pm one day and close 1am the next day. \nHolidays don't matter - I will handle these separately\n\nWhat's the most efficient way to store these open\/close times such that with a single time\/day-of-week tuple I can speedily figure out which businesses are open?\nI am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. 
But frankly, I'm open to any suggestions and alternatives.","AnswerCount":7,"Available Count":5,"Score":0.1137907297,"is_accepted":false,"ViewCount":1489,"Q_Id":775161,"Users Score":4,"Answer":"You say you're using SOLR, don't care about storage, and want the lookups to be fast. Then instead of storing open\/close tuples, index an entry for every open block of time at the level of granularity you need (15 mins). For the encoding itself, you could use just cumulative hours:minutes.\nFor example, a store open from 4-5 pm on Monday, would have indexed values added for [40:00, 40:15, 40:30, 40:45]. A query at 4:24 pm on Monday would be normalized to 40:15, and therefore match that store document.\nThis may seem inefficient at first glance, but it's a relatively small constant penalty for indexing speed and space. And makes the searches as fast as possible.","Q_Score":6,"Tags":"python,mysql,performance,solr","A_Id":775354,"CreationDate":"2009-04-21T23:48:00.000","Title":"Efficiently determining if a business is open or not based on store hours","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses. \n\nI have the open and close times for every business for every day of the week\nLet's assume a business can open\/close only on 00, 15, 30, 45 minute marks of each hour\nI'm assuming the same schedule each week.\nI am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.\nMind you, some my open at 11pm one day and close 1am the next day. \nHolidays don't matter - I will handle these separately\n\nWhat's the most efficient way to store these open\/close times such that with a single time\/day-of-week tuple I can speedily figure out which businesses are open?\nI am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives.","AnswerCount":7,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":1489,"Q_Id":775161,"Users Score":8,"Answer":"If you are willing to just look at single week at a time, you can canonicalize all opening\/closing times to be set numbers of minutes since the start of the week, say Sunday 0 hrs. For each store, you create a number of tuples of the form [startTime, endTime, storeId]. (For hours that spanned Sunday midnight, you'd have to create two tuples, one going to the end of the week, one starting at the beginning of the week). This set of tuples would be indexed (say, with a tree you would pre-process) on both startTime and endTime. The tuples shouldn't be that large: there are only ~10k minutes in a week, which can fit in 2 bytes. This structure would be graceful inside a MySQL table with appropriate indexes, and would be very resilient to constant insertions & deletions of records as information changed. Your query would simply be \"select storeId where startTime <= time and endtime >= time\", where time was the canonicalized minutes since midnight on sunday.\nIf information doesn't change very often, and you want to have lookups be very fast, you could solve every possible query up front and cache the results. For instance, there are only 672 quarter-hour periods in a week. 
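A tiny sketch of that precomputation, assuming each business's hours arrive as (open_minute, close_minute) pairs counted from the start of the week and already split at the week boundary:

```python
SLOT = 15
SLOTS_PER_WEEK = 7 * 24 * 60 // SLOT    # 672 quarter-hour periods

def build_open_table(businesses):
    """businesses: {business_id: [(open_minute, close_minute), ...]}
    Returns a list of 672 sets: slot index -> ids of businesses open then."""
    table = [set() for _ in range(SLOTS_PER_WEEK)]
    for biz_id, sessions in businesses.items():
        for open_m, close_m in sessions:
            for slot in range(open_m // SLOT, close_m // SLOT):
                table[slot].add(biz_id)
    return table

# lookup for "who is open now":
#   table[(day_of_week * 24 * 60 + hour * 60 + minute) // SLOT]
```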
With a list of businesses, each of which had a list of opening & closing times like Brandon Rhodes's solution, you could simply, iterate through every 15-minute period in a week, figure out who's open, then store the answer in a lookup table or in-memory list.","Q_Score":6,"Tags":"python,mysql,performance,solr","A_Id":775247,"CreationDate":"2009-04-21T23:48:00.000","Title":"Efficiently determining if a business is open or not based on store hours","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses. \n\nI have the open and close times for every business for every day of the week\nLet's assume a business can open\/close only on 00, 15, 30, 45 minute marks of each hour\nI'm assuming the same schedule each week.\nI am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.\nMind you, some my open at 11pm one day and close 1am the next day. \nHolidays don't matter - I will handle these separately\n\nWhat's the most efficient way to store these open\/close times such that with a single time\/day-of-week tuple I can speedily figure out which businesses are open?\nI am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives.","AnswerCount":7,"Available Count":5,"Score":0.0855049882,"is_accepted":false,"ViewCount":1489,"Q_Id":775161,"Users Score":3,"Answer":"Sorry I don't have an easy answer, but I can tell you that as the manager of a development team at a company in the late 90's we were tasked with solving this very problem and it was HARD.\nIt's not the weekly hours that's tough, that can be done with a relatively small bitmask (168 bits = 1 per hour of the week), the trick is the businesses which are closed every alternating Tuesday.\nStarting with a bitmask then moving on to an exceptions field is the best solution I've ever seen.","Q_Score":6,"Tags":"python,mysql,performance,solr","A_Id":775175,"CreationDate":"2009-04-21T23:48:00.000","Title":"Efficiently determining if a business is open or not based on store hours","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses. \n\nI have the open and close times for every business for every day of the week\nLet's assume a business can open\/close only on 00, 15, 30, 45 minute marks of each hour\nI'm assuming the same schedule each week.\nI am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.\nMind you, some my open at 11pm one day and close 1am the next day. \nHolidays don't matter - I will handle these separately\n\nWhat's the most efficient way to store these open\/close times such that with a single time\/day-of-week tuple I can speedily figure out which businesses are open?\nI am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. 
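The bitmask idea from the answer above, sketched in plain Python with one bit per 15-minute slot (672 bits) rather than the hourly 168 bits it mentions, since the question allows quarter-hour marks; the alternating-Tuesday exceptions it warns about would still need a separate field.

```python
# Bitmask of weekly opening hours at 15-minute granularity (672 slots).
SLOTS_PER_DAY = 24 * 4
SLOTS_PER_WEEK = 7 * SLOTS_PER_DAY  # 672

def slot(day, hour, minute):
    """day: 0=Monday ... 6=Sunday (Python's weekday() convention)."""
    return day * SLOTS_PER_DAY + hour * 4 + minute // 15

def set_open(mask, start_slot, end_slot):
    """Mark [start_slot, end_slot) as open in the bitmask."""
    for s in range(start_slot, end_slot):
        mask |= 1 << s
    return mask

def is_open(mask, day, hour, minute):
    return bool(mask & (1 << slot(day, hour, minute)))

# Example: open Tuesday (day 1) 09:00-17:00
mask = set_open(0, slot(1, 9, 0), slot(1, 17, 0))
print(is_open(mask, 1, 16, 24))   # True  (Tuesday 4:24pm)
print(is_open(mask, 1, 17, 30))   # False
```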
But frankly, I'm open to any suggestions and alternatives.","AnswerCount":7,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":1489,"Q_Id":775161,"Users Score":0,"Answer":"Have you looked at how many unique open\/close time combinations there are? If there are not that many, make a reference table of the unique combinations and store the index of the appropriate entry against each business. Then you only have to search the reference table and then find the business with those indices.","Q_Score":6,"Tags":"python,mysql,performance,solr","A_Id":775459,"CreationDate":"2009-04-21T23:48:00.000","Title":"Efficiently determining if a business is open or not based on store hours","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses. \n\nI have the open and close times for every business for every day of the week\nLet's assume a business can open\/close only on 00, 15, 30, 45 minute marks of each hour\nI'm assuming the same schedule each week.\nI am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.\nMind you, some my open at 11pm one day and close 1am the next day. \nHolidays don't matter - I will handle these separately\n\nWhat's the most efficient way to store these open\/close times such that with a single time\/day-of-week tuple I can speedily figure out which businesses are open?\nI am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives.","AnswerCount":7,"Available Count":5,"Score":0.0285636566,"is_accepted":false,"ViewCount":1489,"Q_Id":775161,"Users Score":1,"Answer":"In your Solr index, instead of indexing each business as one document with hours, index every \"retail session\" for every business during the course of a week. \nFor example if Joe's coffee is open Mon-Sat 6am-9pm and closed on Sunday, you would index six distinct documents, each with two indexed fields, \"open\" and \"close\". If your units are 15 minute intervals, then the values can range from 0 to 7*24*4. Assuming you have a unique ID for each business, store this in each document so you can map the sessions to businesses.\nThen you can simply do a range search in Solr:\nopen:[* TO N] AND close:[N+1 TO *]\nwhere N is computed to the Nth 15 minute interval that the current time falls into. For examples if it's 10:10AM on Wednesday, your query would be:\nopen:[* TO 112] AND close:[113 TO *]\naka \"find a session that starts at or before 10:00am Wed and ends at or after 10:15am Wed\"\nIf you want to include other criteria in your search, such as location or products, you will need to index this with each session document as well. 
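A sketch of building the Solr range query described in the session-indexing answer above from the current time. The slot numbering below uses Python's weekday() convention (Monday = 0), which is an assumption on my part; the answer leaves the choice of week start open.

```python
# Build the Solr range query "open:[* TO N] AND close:[N+1 TO *]"
# for the current 15-minute interval.  Week starts on Monday here.
from datetime import datetime

def quarter_hour_slot(dt):
    return (dt.weekday() * 24 + dt.hour) * 4 + dt.minute // 15

def open_now_query(dt=None):
    n = quarter_hour_slot(dt or datetime.now())
    return "open:[* TO {0}] AND close:[{1} TO *]".format(n, n + 1)

# Pass the string to whatever Solr client you use, e.g.
#   results = solr.search(open_now_query())
print(open_now_query(datetime(2009, 4, 22, 10, 10)))  # a Wednesday, 10:10am
```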
This is a bit redundant, but if your index is not huge, it shouldn't be a problem.","Q_Score":6,"Tags":"python,mysql,performance,solr","A_Id":777443,"CreationDate":"2009-04-21T23:48:00.000","Title":"Efficiently determining if a business is open or not based on store hours","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a legacy database with an integer set as a primary key. It was initially managed manually, but since we are wanting to move to django, the admin tool seemed to be the right place to start. I created the model and am trying to set the primary key to be an autofield. It doesn't seem to be remembering the old id in updates, and it doesn't create new id's on insert. What am I doing wrong?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":454,"Q_Id":777778,"Users Score":2,"Answer":"The DB is responsible for managing the value of the ID. If you want to use AutoField, you have to change the column in the DB to use that. Django is not responsible for managing the generated ID","Q_Score":1,"Tags":"python,django,oracle,autofield","A_Id":778346,"CreationDate":"2009-04-22T15:18:00.000","Title":"How do I set up a model to use an AutoField with a legacy database in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Python communicating with EXCEL... i need to find a way so that I can find\/search a row for given column datas. Now, i m scanning entire rows one by one... It would be useful, If there is some functions like FIND\/SEARCH\/REPLACE .... I dont see these features in pyExcelerator or xlrd modules.. I dont want to use win32com modules! it makes my tool windows based!\nFIND\/SEARCH Excel rows through Python.... Any idea, anybody?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":7840,"Q_Id":778093,"Users Score":0,"Answer":"With pyExcelerator you can do a simple optimization by finding the maximum row and column indices first (and storing them), so that you iterate over (row, i) for i in range(maxcol+1) instead of iterating over all the dictionary keys. That may be the best you get, unless you want to go through and build up a dictionary mapping value to set of keys.\nIncidentally, if you're using pyExcelerator to write spreadsheets, be aware that it has some bugs. I've encountered one involving writing integers between 230 and 232 (or thereabouts). The original author is apparently hard to contact these days, so xlwt is a fork that fixes the (known) bugs. For writing spreadsheets, it's a drop-in replacement for pyExcelerator; you could do import xlwt as pyExcelerator and change nothing else. It doesn't read spreadsheets, though.","Q_Score":2,"Tags":"python,excel,search,pyexcelerator,xlrd","A_Id":779599,"CreationDate":"2009-04-22T16:23:00.000","Title":"pyExcelerator or xlrd - How to FIND\/SEARCH a row for the given few column data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python communicating with EXCEL... 
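Since xlrd offers no built-in FIND/SEARCH (as the answers to the Excel question point out), the loop is yours to write; a small helper using xlrd's documented API (open_workbook, nrows, cell_value) is sketched below. The file name and column indices are placeholders.

```python
# Return the row indices whose given columns match the wanted values.
import xlrd

def find_rows(path, wanted, sheet_index=0):
    """wanted: dict mapping column index -> value, e.g. {0: "Alice", 3: 42}."""
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(sheet_index)
    hits = []
    for rowx in range(sheet.nrows):
        if all(sheet.cell_value(rowx, colx) == value
               for colx, value in wanted.items()):
            hits.append(rowx)
    return hits

# Example: rows where column A is "widget" and column C is 10
# print(find_rows("inventory.xls", {0: "widget", 2: 10}))
```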
i need to find a way so that I can find\/search a row for given column datas. Now, i m scanning entire rows one by one... It would be useful, If there is some functions like FIND\/SEARCH\/REPLACE .... I dont see these features in pyExcelerator or xlrd modules.. I dont want to use win32com modules! it makes my tool windows based!\nFIND\/SEARCH Excel rows through Python.... Any idea, anybody?","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":7840,"Q_Id":778093,"Users Score":2,"Answer":"You can't. Those tools don't offer search capabilities. You must iterate over the data in a loop and search yourself. Sorry.","Q_Score":2,"Tags":"python,excel,search,pyexcelerator,xlrd","A_Id":779030,"CreationDate":"2009-04-22T16:23:00.000","Title":"pyExcelerator or xlrd - How to FIND\/SEARCH a row for the given few column data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python communicating with EXCEL... i need to find a way so that I can find\/search a row for given column datas. Now, i m scanning entire rows one by one... It would be useful, If there is some functions like FIND\/SEARCH\/REPLACE .... I dont see these features in pyExcelerator or xlrd modules.. I dont want to use win32com modules! it makes my tool windows based!\nFIND\/SEARCH Excel rows through Python.... Any idea, anybody?","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":7840,"Q_Id":778093,"Users Score":2,"Answer":"\"Now, i m scanning entire rows one by one\"\nWhat's wrong with that? \"search\" -- in a spreadsheet context -- is really complicated. Search values? Search formulas? Search down rows then across columns? Search specific columns only? Search specific rows only?\nA spreadsheet isn't simple text -- simple text processing design patterns don't apply.\nSpreadsheet search is hard and you're doing it correctly. There's nothing better because it's hard.","Q_Score":2,"Tags":"python,excel,search,pyexcelerator,xlrd","A_Id":778282,"CreationDate":"2009-04-22T16:23:00.000","Title":"pyExcelerator or xlrd - How to FIND\/SEARCH a row for the given few column data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I have tables like: Pages, Groups, GroupPage, Users, UserGroup. With pickled sets I can implement the same thing with only 3 tables: Pages, Groups, Users.\nset seems a natural choice for implementing ACL, as group and permission related operations can be expressed very naturally with sets. If I store the allow\/deny lists as pickled sets, it can eliminate few intermediate tables for many-to-many relationship and allow permission editing without many database operations.\nIf human readability is important, I can always use json instead of cPickle for serialization and use set when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?\nWe're using SQLAlchemy as ORM, so it's likely to be implemented with PickleType column. 
I'm not planning to store the whole pickled \"resource\" recordset, only the set object made out of \"resource\" primary key values.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2877,"Q_Id":790613,"Users Score":2,"Answer":"Me, I'd stick with keeping persistent info in the relational DB in a form that's independent from a specific programming language used to access it -- much as I love Python (and that's a lot), some day I may want to access that info from some other language, and if I went for Python-specific formats... boy would I ever regret it...","Q_Score":1,"Tags":"python,set,acl,pickle","A_Id":791425,"CreationDate":"2009-04-26T10:37:00.000","Title":"Using Python set type to implement ACL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Currently I have tables like: Pages, Groups, GroupPage, Users, UserGroup. With pickled sets I can implement the same thing with only 3 tables: Pages, Groups, Users.\nset seems a natural choice for implementing ACL, as group and permission related operations can be expressed very naturally with sets. If I store the allow\/deny lists as pickled sets, it can eliminate few intermediate tables for many-to-many relationship and allow permission editing without many database operations.\nIf human readability is important, I can always use json instead of cPickle for serialization and use set when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?\nWe're using SQLAlchemy as ORM, so it's likely to be implemented with PickleType column. I'm not planning to store the whole pickled \"resource\" recordset, only the set object made out of \"resource\" primary key values.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2877,"Q_Id":790613,"Users Score":2,"Answer":"You need to consider what it is that a DBMS provides you with, and which of those features you'll need to reimplement.\nThe issue of concurrency is a big one. There are a few race conditions to be considered (such as multiple writes taking place in different threads and processes and overwriting the new data), performance issues (write policy? What if your process crashes and you lose your data?), memory issues (how big are your permission sets? Will it all fit in RAM?).\nIf you have enough memory and you don't have to worry about concurrency, then your solution might be a good one. 
Otherwise I'd stick with a databases -- it takes care of those problems for you, and lots of work has gone into them to make sure that they always take your data from one consistent state to another.","Q_Score":1,"Tags":"python,set,acl,pickle","A_Id":790662,"CreationDate":"2009-04-26T10:37:00.000","Title":"Using Python set type to implement ACL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have downloaded & installed the latest Python InformixDB package, but when I try to import it from the shell, I am getting the following error in the form of a Windows dialog box!\n\"A procedure entry point sqli_describe_input_stmt could not be located in the dynamic link isqlit09a.dll\"\nAny ideas what's happening?\nPlatform: Windows Vista (Biz Edition), Python 2.5.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":435,"Q_Id":801515,"Users Score":0,"Answer":"Does other way to connect to database work?\nCan you use (configure in control panel) ODBC? If ODBC works then you can use Python win32 extensions (ActiveState distribution comes with it) and there is ODBC support. You can also use Jython which can work with ODBC via JDBC-ODBC bridge or with Informix JDBC driver.","Q_Score":1,"Tags":"python,informix","A_Id":823474,"CreationDate":"2009-04-29T09:01:00.000","Title":"Why Python informixdb package is throwing an error!","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded & installed the latest Python InformixDB package, but when I try to import it from the shell, I am getting the following error in the form of a Windows dialog box!\n\"A procedure entry point sqli_describe_input_stmt could not be located in the dynamic link isqlit09a.dll\"\nAny ideas what's happening?\nPlatform: Windows Vista (Biz Edition), Python 2.5.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":435,"Q_Id":801515,"Users Score":1,"Answer":"Which version of IBM Informix Connect (I-Connect) or IBM Informix ClientSDK (CSDK) are you using? The 'describe input' function is a more recent addition, but it is likely that you have it.\nHave you been able to connect to any Informix DBMS from the command shell? If not, then the suspicion must be that you don't have the correct environment. You would probably need to specify $INFORMIXDIR (or %INFORMIXDIR% - I'm going to omit '$' and '%' sigils from here on); you would need to set INFORMIXSERVER to connect successfully; you would need to have the correct directory (probably INFORMIXDIR\/bin on Windows; on Unix, it would be INFORMIXDIR\/lib and INFORMIXDIR\/lib\/esql or INFORMIXDIR\/lib\/odbc) on your PATH.","Q_Score":1,"Tags":"python,informix","A_Id":803958,"CreationDate":"2009-04-29T09:01:00.000","Title":"Why Python informixdb package is throwing an error!","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing software using the Google App Engine. 
\nI have some considerations about the optimal design regarding the following issue: I need to create and save snapshots of some entities at regular intervals.\nIn the conventional relational db world, I would create db jobs which would insert new summary records.\nFor example, a job would insert a record for every active user that would contain his current score to the \"userrank\" table, say, every hour.\nI'd like to know what's the best method to achieve this in Google App Engine. I know that there is the Cron service, but does it allow us to execute jobs which will insert\/update thousands of records?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1473,"Q_Id":814896,"Users Score":3,"Answer":"I think you'll find that snapshotting every user's state every hour isn't something that will scale well no matter what your framework. A more ordinary environment will disguise this by letting you have longer running tasks, but you'll still reach the point where it's not practical to take a snapshot of every user's data, every hour.\nMy suggestion would be this: Add a 'last snapshot' field, and subclass the put() function of your model (assuming you're using Python; the same is possible in Java, but I don't know the syntax), such that whenever you update a record, it checks if it's been more than an hour since the last snapshot, and if so, creates and writes a snapshot record.\nIn order to prevent concurrent updates creating two identical snapshots, you'll want to give the snapshots a key name derived from the time at which the snapshot was taken. That way, if two concurrent updates try to write a snapshot, one will harmlessly overwrite the other.\nTo get the snapshot for a given hour, simply query for the oldest snapshot newer than the requested period. As an added bonus, since inactive records aren't snapshotted, you're saving a lot of space, too.","Q_Score":1,"Tags":"python,database,google-app-engine,cron","A_Id":815113,"CreationDate":"2009-05-02T13:54:00.000","Title":"Google App Engine - design considerations about cron tasks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"So, I've been tossing this idea around in my head for a while now. At its core, it's mostly a project for me to learn programming. The idea is that, I have a large set of data, my music collection. There are quite a few datasets that my music has. Format, artist, title, album, genre, length, year of release, filename, directory, just to name a few. Ideally, I'd like to create a database that has all of this data stored in it, and in the future, create a web interface on top of it that I can manage my music collection with. So, my questions are as follows:\n\nDoes this sound like a good project to begin building databases from scratch with?\nWhat language would you recommend I start with? I know tidbits of PHP, but I would imagine it would be awful to index data in a filesystem with. 
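A rough sketch of the "snapshot on write" idea from the App Engine answer above, against the old google.appengine.ext.db API. Model and property names are illustrative, and it assumes each UserRank entity is created with key_name set to the user's id.

```python
# Snapshot-on-put sketch for the old google.appengine.ext.db API.
from datetime import datetime, timedelta
from google.appengine.ext import db

SNAPSHOT_EVERY = timedelta(hours=1)

class RankSnapshot(db.Model):
    user_id = db.StringProperty()
    score = db.IntegerProperty()
    taken_at = db.DateTimeProperty()

class UserRank(db.Model):
    # Assumes each UserRank is created with key_name set to the user's id,
    # e.g. UserRank(key_name=user_id, score=0).put()
    score = db.IntegerProperty(default=0)
    last_snapshot = db.DateTimeProperty()

    def put(self):
        now = datetime.utcnow()
        if self.last_snapshot is None or now - self.last_snapshot > SNAPSHOT_EVERY:
            user_id = self.key().name()
            # Key name derived from the hour: concurrent updates overwrite
            # the same snapshot instead of creating duplicates.
            RankSnapshot(key_name="%s-%s" % (user_id, now.strftime("%Y%m%d%H")),
                         user_id=user_id,
                         score=self.score,
                         taken_at=now).put()
            self.last_snapshot = now
        return super(UserRank, self).put()
```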
Python was the other language I was thinking of, considering it's the language most people consider as a beginner language.\nIf you were going to implement this kind of system (the web interface) in your home (if you had PCs connected to a couple of stereos in your home and this was the software connected), what kind of features would you want to see?\n\nMy idea for building up the indexing script would be as follows:\n\nGet it to populate the database with only the filenames\nFrom the extension of the filename, determine format\nGet file size\nUsing the filenames in the database as a reference, pull ID3 or other applicable metadata (artist, track name, album, etc)\nCheck if all files still exist on disk, and if not, flag the file as unavailable\n\nAnother script would go in later and check if the files are back, if they are not, the will remove the row from the database.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":2085,"Q_Id":818752,"Users Score":1,"Answer":"Working on something you care about is the best way to learn programming, so I think this is a great idea.\nI also recommend Python as a place to start. Have fun!","Q_Score":2,"Tags":"php,python,mysql","A_Id":818763,"CreationDate":"2009-05-04T04:10:00.000","Title":"Web-Based Music Library (programming concept)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"in my google app application, whenever a user purchases a number of contracts, these events are executed (simplified for clarity):\n\nuser.cash is decreased\nuser.contracts is increased by the number\ncontracts.current_price is updated.\nmarket.no_of_transactions is increased by 1.\n\nin a rdms, these would be placed within the same transaction. I conceive that google datastore does not allow entities of more than one model to be in the same transaction.\nwhat is the correct approach to this issue? how can I ensure that if a write fails, all preceding writes are rolled back? \nedit: I have obviously missed entity groups. Now I'd appreciate some further information regarding how they are used. Another point to clarify is google says \"Only use entity groups when they are needed for transactions. For other relationships between entities, use ReferenceProperty properties and Key values, which can be used in queries\". does it mean I have to define both a reference property (since I need queriying them) and a parent-child relationship (for transactions)? \nedit 2: and finally, how do I define two parents for an entity if the entity is being created to establish an n-to-n relationship between 2 parents?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":384,"Q_Id":836992,"Users Score":0,"Answer":"After a through research, I have found that a distributed transaction layer that provides a solution to the single entity group restriction has been developed in userland with the help of some google people. 
But so far, it is not released and is only available in java.","Q_Score":3,"Tags":"python,google-app-engine,transactions,google-cloud-datastore","A_Id":838960,"CreationDate":"2009-05-07T20:55:00.000","Title":"datastore transaction restrictions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Has anybody got recent experience with deploying a Django application with an SQL Server database back end? Our workplace is heavily invested in SQL Server and will not support Django if there isn't a sufficiently developed back end for it.\nI'm aware of mssql.django-pyodbc and django-mssql as unofficially supported back ends. Both projects seem to have only one person contributing which is a bit of a worry though the contributions seem to be somewhat regular.\nAre there any other back ends for SQL Server that are well supported? Are the two I mentioned here 'good enough' for production? What are your experiences?","AnswerCount":7,"Available Count":2,"Score":0.1137907297,"is_accepted":false,"ViewCount":48333,"Q_Id":842831,"Users Score":4,"Answer":"We are using django-mssql in production at our company. We too had an existing system using mssql. For me personally it was the best design decision I have ever made because my productivity increased dramatically now that I can use django . \nI submitted a patch but when I started using django-mssql and did a week or two of testing.Since then (October 2008) we run our system on django and it runs solid. I also tried pyodbc but I did not like to much. \nWe are running a repair system where all transactions run through this system 40 heavy users. If you have more questions let me know.","Q_Score":52,"Tags":"python,sql-server,django,pyodbc","A_Id":843500,"CreationDate":"2009-05-09T06:45:00.000","Title":"Using Sql Server with Django in production","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Has anybody got recent experience with deploying a Django application with an SQL Server database back end? Our workplace is heavily invested in SQL Server and will not support Django if there isn't a sufficiently developed back end for it.\nI'm aware of mssql.django-pyodbc and django-mssql as unofficially supported back ends. Both projects seem to have only one person contributing which is a bit of a worry though the contributions seem to be somewhat regular.\nAre there any other back ends for SQL Server that are well supported? Are the two I mentioned here 'good enough' for production? What are your experiences?","AnswerCount":7,"Available Count":2,"Score":0.0285636566,"is_accepted":false,"ViewCount":48333,"Q_Id":842831,"Users Score":1,"Answer":"Haven't used it in production yet, but my initial experiences with django-mssql have been pretty solid. All you need are the Python Win32 extensions and to get the sqlserver_ado module onto your Python path. From there, you just use sql_server.pyodbc as your DATABASE_ENGINE. 
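For reference, the DATABASE_ENGINE setting the answer above refers to would look roughly like this in the single-database settings style of that Django era; server, database and credentials are placeholders.

```python
# settings.py sketch for the sql_server.pyodbc backend mentioned above.
DATABASE_ENGINE = 'sql_server.pyodbc'
DATABASE_NAME = 'mydb'
DATABASE_USER = 'webapp'
DATABASE_PASSWORD = 'secret'
DATABASE_HOST = r'DBSERVER\INSTANCE'   # SQL Server host (and instance, if any)
DATABASE_PORT = ''                     # default port
```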
So far I haven't noticed anything missing, but I haven't fully banged on it yet either.","Q_Score":52,"Tags":"python,sql-server,django,pyodbc","A_Id":843476,"CreationDate":"2009-05-09T06:45:00.000","Title":"Using Sql Server with Django in production","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to launch a server side process off a mysql row insert. I'd appreciate some feedback\/suggestions. So far I can think of three options:\n1st (least attractive): My preliminary understanding is that I can write a kind of \"custom trigger\" in C that could fire off a row insert. In addition to having to renew my C skills this would requite a (custom?) recompile of MySQl ... yuck!\n2nd (slightly more attractive): I could schedule a cron task server side of a program that I write that would query the table for new rows periodically. This has the benefit of being DB and language independent. The problem with this is that I suffer the delay of the cron's schedule.\n3rd (the option I'm leading with): I could write a multi threaded program that would query the table for changes on a single thread, spawning new threads to process the newly inserted rows as needed. This has all the benefits of option 2 with less delay.\nI'll also mention that I'm leaning towards python for this task, as easy access to the system (linux) commands, as well as some in house perl scripts, is going to be very very useful.\nI'd appreciate any feedback\/suggestion\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":227,"Q_Id":856173,"Users Score":4,"Answer":"Write an insert trigger which duplicates inserted rows to a secondary table. Periodically poll the secondary table for rows with an external application\/cronjob; if any rows are in the table, delete them and do your processing (or set a 'processing started' flag and only delete from the secondary table upon successful processing).\nThis will work very nicely for low to medium insert volumes. If you have a ton of data coming at your table, some kind of custom trigger in C is probably your only choice.","Q_Score":3,"Tags":"python,mysql,linux,perl","A_Id":856208,"CreationDate":"2009-05-13T05:15:00.000","Title":"launch a process off a mysql row insert","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to launch a server side process off a mysql row insert. I'd appreciate some feedback\/suggestions. So far I can think of three options:\n1st (least attractive): My preliminary understanding is that I can write a kind of \"custom trigger\" in C that could fire off a row insert. In addition to having to renew my C skills this would requite a (custom?) recompile of MySQl ... yuck!\n2nd (slightly more attractive): I could schedule a cron task server side of a program that I write that would query the table for new rows periodically. This has the benefit of being DB and language independent. 
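A sketch of the trigger-plus-polling pattern from the answer above: a plain MySQL trigger copies new row ids into a secondary table, and a small external script polls it, processes the rows and deletes them. Table names, credentials and the processing step are placeholders.

```python
# The trigger itself is plain MySQL DDL, e.g.:
#
#   CREATE TRIGGER copy_new_rows AFTER INSERT ON main_table
#   FOR EACH ROW INSERT INTO pending_rows (main_id) VALUES (NEW.id);
#
# The external poller below uses MySQLdb.
import time
import MySQLdb

def process(main_id):
    print("processing row %s" % main_id)   # your real work goes here

def poll_forever(interval=5):
    conn = MySQLdb.connect(host="localhost", user="worker",
                           passwd="secret", db="mydb")
    while True:
        cur = conn.cursor()
        cur.execute("SELECT id, main_id FROM pending_rows")
        for row_id, main_id in cur.fetchall():
            process(main_id)
            cur.execute("DELETE FROM pending_rows WHERE id = %s", (row_id,))
        conn.commit()
        cur.close()
        time.sleep(interval)

if __name__ == "__main__":
    poll_forever()
```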
The problem with this is that I suffer the delay of the cron's schedule.\n3rd (the option I'm leading with): I could write a multi threaded program that would query the table for changes on a single thread, spawning new threads to process the newly inserted rows as needed. This has all the benefits of option 2 with less delay.\nI'll also mention that I'm leaning towards python for this task, as easy access to the system (linux) commands, as well as some in house perl scripts, is going to be very very useful.\nI'd appreciate any feedback\/suggestion\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":227,"Q_Id":856173,"Users Score":0,"Answer":"I had this issue about 2 years ago in .NET and I went with the 3rd approach. However, looking back at it, I'm wondering if looking into Triggers with PhpMyAdmin & MySQL isn't the approach to look into.","Q_Score":3,"Tags":"python,mysql,linux,perl","A_Id":856210,"CreationDate":"2009-05-13T05:15:00.000","Title":"launch a process off a mysql row insert","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a code in python in which I established a connection with database. I have queries in a loop. While queries being executed in the loop , If i unplug the network cable it should stop with an exception. But this not happens, When i again plug yhe network cabe after 2 minutes it starts again from where it ended. I am using linux and psycopg2. It is not showing exception","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":442,"Q_Id":867175,"Users Score":1,"Answer":"If you want to implement timeouts that work no matter how the client library is connecting to the server, it's best to attempt the DB operations in a separate thread, or, better, a separate process, which a \"monitor\" thread\/process can kill if needed; see the multiprocessing module in Python 2.6 standard library (there's a backported version for 2.5 if you need that). A process is better because when it's killed the operating system will take care of deallocating and cleaning up resources, while killing a thread is always a pretty unsafe and messy business.","Q_Score":0,"Tags":"python,tcp,database-connection","A_Id":867433,"CreationDate":"2009-05-15T06:03:00.000","Title":"db connection in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a code in python in which I established a connection with database. I have queries in a loop. While queries being executed in the loop , If i unplug the network cable it should stop with an exception. But this not happens, When i again plug yhe network cabe after 2 minutes it starts again from where it ended. I am using linux and psycopg2. It is not showing exception","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":442,"Q_Id":867175,"Users Score":2,"Answer":"Your database connection will almost certainly be based on a TCP socket. TCP sockets will hang around for a long time retrying before failing and (in python) raising an exception. 
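The separate-process-with-timeout approach suggested above for the hanging connection can be sketched with the standard multiprocessing module; the DSN, the query and the choice of psycopg2 are placeholders, and any DB-API module would fit.

```python
# Run the query in a child process so a monitor can kill it if the network
# goes away and the TCP connection just hangs.
import multiprocessing

def run_query(dsn, sql):
    import psycopg2                      # example DB-API module
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.execute(sql)
    print(cur.fetchall())
    conn.close()

def run_with_timeout(dsn, sql, timeout=30):
    p = multiprocessing.Process(target=run_query, args=(dsn, sql))
    p.start()
    p.join(timeout)
    if p.is_alive():
        # Connection is hanging (cable pulled, TCP retrying...): kill it.
        p.terminate()
        p.join()
        raise RuntimeError("query did not finish within %s seconds" % timeout)

if __name__ == "__main__":
    run_with_timeout("dbname=test user=me", "SELECT 1", timeout=10)
```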
Not to mention and retries\/automatic reconnection attempts in the database layer.","Q_Score":0,"Tags":"python,tcp,database-connection","A_Id":867202,"CreationDate":"2009-05-15T06:03:00.000","Title":"db connection in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the design phase of a file upload service that allows users to upload very large zip files to our server as well as updates our database with the data. Since the files are large (About 300mb) we want to allow the user to limit the amount of bandwidth they want to use for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated in our MSSQL database to ensure that they have permission to upload the file and make changes to our database. \nMy question is, what is the best technology to do this? We would like to minimize the amount of development required, but the only thing that I can think of now that would allow us to do this would be to create a client and server app from scratch in something like python, java or c#. Is there an existing technology available that will allow us to do this?","AnswerCount":6,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":3014,"Q_Id":878143,"Users Score":4,"Answer":"What's wrong with FTP? The protocol supports reusability and there are lots and lots of clients.","Q_Score":6,"Tags":"c#,java,python","A_Id":878154,"CreationDate":"2009-05-18T14:56:00.000","Title":"Resumable File Upload","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the design phase of a file upload service that allows users to upload very large zip files to our server as well as updates our database with the data. Since the files are large (About 300mb) we want to allow the user to limit the amount of bandwidth they want to use for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated in our MSSQL database to ensure that they have permission to upload the file and make changes to our database. \nMy question is, what is the best technology to do this? We would like to minimize the amount of development required, but the only thing that I can think of now that would allow us to do this would be to create a client and server app from scratch in something like python, java or c#. 
Is there an existing technology available that will allow us to do this?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3014,"Q_Id":878143,"Users Score":0,"Answer":"On client side, flash; On server side, whatever (it wouldn't make any difference).\nNo existing technologies (except for using FTP or something).","Q_Score":6,"Tags":"c#,java,python","A_Id":878160,"CreationDate":"2009-05-18T14:56:00.000","Title":"Resumable File Upload","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the design phase of a file upload service that allows users to upload very large zip files to our server as well as updates our database with the data. Since the files are large (About 300mb) we want to allow the user to limit the amount of bandwidth they want to use for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated in our MSSQL database to ensure that they have permission to upload the file and make changes to our database. \nMy question is, what is the best technology to do this? We would like to minimize the amount of development required, but the only thing that I can think of now that would allow us to do this would be to create a client and server app from scratch in something like python, java or c#. Is there an existing technology available that will allow us to do this?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3014,"Q_Id":878143,"Users Score":0,"Answer":"I'm surprised no one has mentioned torrent files. They can also be packaged into a script that then triggers something to execute.","Q_Score":6,"Tags":"c#,java,python","A_Id":30990243,"CreationDate":"2009-05-18T14:56:00.000","Title":"Resumable File Upload","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.\nI'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?\nThe only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this?","AnswerCount":4,"Available Count":4,"Score":0.1488850336,"is_accepted":false,"ViewCount":967,"Q_Id":881517,"Users Score":3,"Answer":"For a user login and basic permission tokens in a simple web application I will definitely store that in a cookie-based session. 
It's true that a few SELECTs per request is not a big deal at all, but then again if you can get some\/all of your web requests to execute from cached data with no DB hits at all, that just adds that much more scalability to an app which is planning on receiving a lot of load. \nThe issue of the user token being changed on the database is handled in two ways. One is, ignore it - for a lot of use cases its not that big a deal for the user to log out and log back in again to get at new permissions that have been granted elsewhere (witness unix as an example). The other is that all mutations of the user row are filtered through a method that also resets the state within the cookie-based session, but this is only effective if the user him\/herself is the one initiating the changes through the browser interface.\nIf OTOH neither of the above use cases apply to you, then you probably need to stick with a little bit of database access built into every request.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":890202,"CreationDate":"2009-05-19T08:11:00.000","Title":"SQLAlchemy - Database hits on every request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.\nI'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?\nThe only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":967,"Q_Id":881517,"Users Score":1,"Answer":"It's a Database, so often it's fairly common to \"hit\" the Database to pull the required data. You can reduce single queries if you build up Joins or Stored Procedures.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":881535,"CreationDate":"2009-05-19T08:11:00.000","Title":"SQLAlchemy - Database hits on every request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.\nI'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. 
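A sketch of the session-caching idea from the answer above: load the permission tokens once at login, keep only those small values in the session, and fall back to the database when the cache is cold or the user row has been removed. To keep the sketch self-contained, the real SQLAlchemy queries are replaced by a fake in-memory lookup; `session` is whatever dict-like session object your framework provides.

```python
# Cache only the small, per-request bits (user id, permission names) in the
# cookie-based session; reload from the database when the cache is cold.

def login(session, load_user, username, password):
    user = load_user(username)
    if user is None or user["password"] != password:
        return False
    session["user_id"] = user["id"]
    session["perms"] = sorted(user["perms"])   # small, cookie-friendly
    return True

def current_perms(session, load_user_by_id):
    if "perms" in session:
        return set(session["perms"])           # common path: no DB hit
    user = load_user_by_id(session.get("user_id"))
    if user is None:                           # row removed in the interim
        session.clear()
        return set()
    session["perms"] = sorted(user["perms"])
    return set(session["perms"])

if __name__ == "__main__":
    fake_db = {"alice": {"id": 1, "password": "pw", "perms": {"read", "write"}}}
    session = {}
    login(session, fake_db.get, "alice", "pw")
    print(current_perms(session, lambda uid: fake_db["alice"]))
```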
Or is this considered a normal thing to do?\nThe only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this?","AnswerCount":4,"Available Count":4,"Score":0.1488850336,"is_accepted":false,"ViewCount":967,"Q_Id":881517,"Users Score":3,"Answer":"\"hitting the database for something like this on every request isn't efficient.\"\nFalse. And, you've assumed that there's no caching, which is also false.\nMost ORM layers are perfectly capable of caching rows, saving some DB queries.\nMost RDBMS's have extensive caching, resulting in remarkably fast responses to common queries.\nAll ORM layers will use consistent SQL, further aiding the database in optimizing the repetitive operations. (Specifically, the SQL statement is cached, saving parsing and planning time.)\n\" Or is this considered a normal thing to do?\"\nTrue.\nUntil you can prove that your queries are the slowest part of your application, don't worry. Build something that actually works. Then optimize the part that you can prove is the bottleneck.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":882021,"CreationDate":"2009-05-19T08:11:00.000","Title":"SQLAlchemy - Database hits on every request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.\nI'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?\nThe only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this?","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":967,"Q_Id":881517,"Users Score":2,"Answer":"You are basically talking about caching data as a performance optimization. As always, premature optimization is a bad idea. It's hard to know where the bottlenecks are beforehand, even more so if the application domain is new to you. Optimization adds complexity and if you optimize the wrong things, you not only have wasted the effort, but have made the necessary optimizations harder.\nRequesting user data usually is usually a pretty trivial query. You can build yourself a simple benchmark to see what kind of overhead it will introduce. If it isn't a significant percentage of your time-budget, just leave it be.\nIf you still want to cache the data on the application server then you have to come up with a cache invalidation scheme.\nPossible schemes are to check for changes from the database. 
If you don't have a lot of data to cache, this really isn't significantly more efficient than just reloading it.\nAnother option is to just time out cached data. This is a good option if instant visibility of changes isn't important.\nAnother option is to actively invalidate caches on changes. This depends on whether you only modify the database through your application and if you have a single application server or a clustered solution.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":882171,"CreationDate":"2009-05-19T08:11:00.000","Title":"SQLAlchemy - Database hits on every request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"MySQL has a RENAME TABLE statemnt that will allow you to change the name of a table. \nThe manual mentions \n\nThe rename operation is done atomically, which means that no other session can \n access any of the tables while the rename is running\n\nThe manual does not (to my knowedge) state how this renaming is accomplished. Is an entire copy of the table created, given a new name, and then the old table deleted? Or does MySQL do some magic behind the scenes to quickly rename the table?\nIn other words, does the size of the table have an effect on how long the RENAME table statement will take to run. Are there other things that might cause the renaming of a block to significantly block?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2472,"Q_Id":885771,"Users Score":5,"Answer":"I believe MySQL only needs to alter metadata and references to the table's old name in stored procedures -- the number of records in the table should be irrelevant.","Q_Score":3,"Tags":"php,python,mysql,ruby,migration","A_Id":885783,"CreationDate":"2009-05-20T01:22:00.000","Title":"How does MySQL's RENAME TABLE statment work\/perform?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"At start you have a string 'DDMMYYYY HHMMSS' and I want at the end to insert the string in a date field in sqlite3 database. The program is made in python. How can I do that ?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":359,"Q_Id":889974,"Users Score":1,"Answer":"Even though the \".schema\" indicates that the field is a date or timestamp field... the field is actually a string. You can format the string anyway you want. If memory serves... their is no validation at all.","Q_Score":0,"Tags":"python,sqlite,date","A_Id":890025,"CreationDate":"2009-05-20T20:08:00.000","Title":"Is it possible to format a date with sqlite3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I posted this in the mailing list, but the reply I got wasn't too clear, so maybe I'll have better luck here.\nI currently have a grid with data in it.\nI would like to know if there is a way to give each generated row an\nID, or at least, associate each row with an object.\nIt may make it more clear if I clarify what i'm doing. 
It is described\nbelow.\nI pull data from an SQL table and display them in the grid.\nI am allowing for the user to add\/delete rows and edit cells.\nSay the user is viewing a grid that has 3 rows(which is, in turn, a\nmysql table with 3 rows).\nIf he is on the last row and presses the down arrow key, a new row is\ncreated and he can enter data into it and it will be inserted in the\ndatabase when he presses enter.\nHowever, I need a way to find out which rows will use \"insert\" query\nand which will use \"update\" query.\nSo ideally, when the user creates a new row by pressing the down\narrow, I would give that row an ID and store it in a list(or, if rows\nalready have IDs, just store it in a list) and when the user finishes\nentering data in the cells and presses enter, I would check if that\nrow's ID is in the in the list. If it is, i would insert all of that\nrow's cells values into the table, if not, i would update mysql with\nthe values.\nHope I made this clear.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1026,"Q_Id":901704,"Users Score":3,"Answer":"What I did when I encountered such a case was to create a column for IDs and set its width to 0.","Q_Score":2,"Tags":"python,wxpython,wxwidgets","A_Id":901806,"CreationDate":"2009-05-23T15:13:00.000","Title":"Give Wxwidget Grid rows an ID","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In diagnosing SQL query problems, it would sometimes be useful to be able to see the query string after parameters are interpolated into it, using MySQLdb's safe interpolation.\nIs there a way to get that information from either a MySQL exception object or from the connection object itself?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":817,"Q_Id":904042,"Users Score":2,"Answer":"Use mysql's own ability to log the queries and watch for them.","Q_Score":2,"Tags":"python,mysql","A_Id":904077,"CreationDate":"2009-05-24T15:56:00.000","Title":"python-mysql : How to get interpolated query string?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have lots of data and some charts repesenting one logical item. Charts and data is stored in various files. As a result, most users can easily access and re-use the information in their applications.\nHowever, this not exactly a good way of storing data. Amongst other reasons, charts belong to some data, the charts and data have some meta-information that is not reflected in the file system, there are a lot of files, etc.\nIdeally, we want\n\none big \"file\" that can store all\ninformation (text, data and charts)\nthe \"file\" is human readable,\nportable and accessible by\nnon-technical users\nallows typical office applications\nlike MS Word or MS Excel to extract\ntext, data and charts easily.\nlight-weight, easy solution. Quick\nand dirty is sufficient. 
Not many\nusers.\n\nI am happy to use some scripting language like Python to generate the \"file\", third-party tools (ideally free as in beer), and everything that you find on a typical Windows-centric office computer.\nSome ideas that we currently ponder:\n\nusing VB or pywin32 to script MS Word or Excel \ncreating html and publish it on a RESTful web server\n\nCould you expand on the ideas above? Do you have any other ideas? What should we consider?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":325,"Q_Id":915726,"Users Score":2,"Answer":"I can only agree with Reef on the general concepts he presented:\n\nYou will almost certainly prefer the data in a database than in a single large file\nYou should not worry that the data is not directly manipulated by users because as Reef mentioned, it can only go wrong. And you would be suprised at how ugly it can get\n\nConcerning the usage of MS Office integration tools I disagree with Reef. You can quite easily create an ActiveX Server (in Python if you like) that is accessible from the MS Office suite. As long as you have a solid infrastructure that allows some sort of file share, you could use that shared area to keep your code. I guess the mess Reef was talking about mostly is about keeping users' versions of your extract\/import code in sync. If you do not use some sort of shared repository (a simple shared folder) or if your infrastructure fails you often so that the shared folder becomes unavailable you will be in great pain. Note what is also somewhat painful if you do not have the appropriate tools but deal with many users: The ActiveX Server is best registered on each machine.\nSo.. I just said MS Office integration is very doable. But whether it is the best thing to do is a different matter. I strongly believe you will serve your users better if you build a web-site that handles their data for them. This sort of tool however almost certainly becomes an \"ongoing project\". Often, even as an \"ongoing project\", the time saved by your users could still make it worth it. But sometimes, strategically, you want to give your users a poorer experience to control project costs. In that case the ActiveX Server I mentioned could be what you want.","Q_Score":2,"Tags":"python,web-services,scripting,reporting,ms-office","A_Id":921061,"CreationDate":"2009-05-27T13:34:00.000","Title":"Reporting charts and data for MS-Office users","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have lots of data and some charts repesenting one logical item. Charts and data is stored in various files. As a result, most users can easily access and re-use the information in their applications.\nHowever, this not exactly a good way of storing data. Amongst other reasons, charts belong to some data, the charts and data have some meta-information that is not reflected in the file system, there are a lot of files, etc.\nIdeally, we want\n\none big \"file\" that can store all\ninformation (text, data and charts)\nthe \"file\" is human readable,\nportable and accessible by\nnon-technical users\nallows typical office applications\nlike MS Word or MS Excel to extract\ntext, data and charts easily.\nlight-weight, easy solution. Quick\nand dirty is sufficient. 
Not many\nusers.\n\nI am happy to use some scripting language like Python to generate the \"file\", third-party tools (ideally free as in beer), and everything that you find on a typical Windows-centric office computer.\nSome ideas that we currently ponder:\n\nusing VB or pywin32 to script MS Word or Excel \ncreating html and publish it on a RESTful web server\n\nCould you expand on the ideas above? Do you have any other ideas? What should we consider?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":325,"Q_Id":915726,"Users Score":1,"Answer":"Instead of using one big file, You should use a database. Yes, You can store various types of files like gifs in the database if You like to.\nThe file would not be human readable or accessible by non-technical users, but this is good.\nThe database would have a website that Your non-technical users would use to insert, update and get data from. They would be able to display it on the page or export it to csv (or even xls - it's not that hard, I've seen some csv->xls converters). You could look into some open standard document formats, I think it should be quite easy to output data with in it. Do not try to output in \"doc\" format (but You could try \"docx\"). You should be able to easily teach the users how to export their data to a CSV and upload it to the site, or they could use the web interface to insert the data if they like to.\nIf You will allow Your users to mess with the raw data, they will break it (i have tried that, You have no idea how those guys could do that). The only way to prevent it is to make a web form that only allows them to perform certain actions that You exactly know how that they should suppose to perform.\nThe database + web page solution is the good one. Using VB or pywin32 to script MSOffice will get You in so much trouble I cannot even imagine.\nYou could use gnuplot or some other graphics library to draw (pretty straightforward to implement, it does all the hard work for You).\nI am afraid that the \"quick\" and dirty solution is tempting, but I only can say one thing: it will not be quick. In a few weeks You will find that hacking around with MSOffice scripting is messy, buggy and unreliable and the non-technical guys will hate it and say that in other companies they used to have a simple web panel that did that. Then You will find that You will not be able to ask about the scripting because everyone uses the web interfaces nowadays, as they are quite easy to implement and maintain.\nThis is not a small project, it's a medium sized one, You need to remember this while writing it. It will take some time to do it and test it and You will have to add new features as the non-technical guys will start using it. I knew some passionate php teenagers who would be able to write this panel in a week, but as I understand You have some better resources so I hope You will come with a really reliable, modular, extensible solution with good usability and happy users.\nGood luck!","Q_Score":2,"Tags":"python,web-services,scripting,reporting,ms-office","A_Id":920669,"CreationDate":"2009-05-27T13:34:00.000","Title":"Reporting charts and data for MS-Office users","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A table has been ETLed to another table. My task is to verify the data between two tables programmatically. 
\nOne of the difficulties I am facing right now is:\nhow to use the expression that I can get from, let's say, a derived column task and verify with the source and destination.\nOr, in other words, how can I use the expression to work in the code.\nAny ideas... highly appreciated\nSagar","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1102,"Q_Id":921268,"Users Score":2,"Answer":"Set up a column which holds a CHECKSUM() of each row. Do a left outer join between the two tables. If you have any nulls for the right side, you have problems.","Q_Score":0,"Tags":".net,ssis,ironpython","A_Id":1233648,"CreationDate":"2009-05-28T14:55:00.000","Title":"How to compare data of two tables transformed in SSIS package","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script which makes a db connection and performs some select operations. According to the fetched data, I am calling different functions which also perform db operations. How can I pass the db connection to the functions which are being called, as I do not want to make a new connection?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":515,"Q_Id":934221,"Users Score":2,"Answer":"Why pass the connection itself? Maybe build a class that handles all the DB operations and just pass this class' instance around, calling its methods to perform selects, inserts and all that DB-specific code?","Q_Score":0,"Tags":"python","A_Id":934709,"CreationDate":"2009-06-01T10:08:00.000","Title":"python db connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It seems cx_Oracle doesn't.\nAny other suggestion for handling xml with Oracle and Python is appreciated.\nThanks.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1256,"Q_Id":936381,"Users Score":1,"Answer":"I managed to do this with cx_Oracle.\nI used the sys.xmltype.createxml() function in the statement that inserts the rows in a table with XMLTYPE fields; then I used prepare() and setinputsizes() to specify that the bind variables I used for XMLTYPE fields were of cx_Oracle.CLOB type.","Q_Score":5,"Tags":"python,xml,oracle,xmltype","A_Id":946854,"CreationDate":"2009-06-01T19:36:00.000","Title":"Is there an Oracle wrapper for Python that supports xmltype columns?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find a way to cause SQLAlchemy to generate a query of the following form:\n\nselect * from t where (a,b) in ((a1,b1),(a2,b2));\n\nIs this possible?\nIf not, any suggestions on a way to emulate it?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6490,"Q_Id":948212,"Users Score":3,"Answer":"Well, thanks to Hao Lian above, I came up with a functional if painful solution.\nAssume that we have a declarative-style mapped class, Clazz, and a list of tuples of compound primary key values, values\n(Edited to use a better (IMO) sql generation style):\n\nfrom sqlalchemy.sql.expression import text,bindparam\n...\n def 
__gParams(self, f, vs, ts, bs):\n for j,v in enumerate(vs):\n key = f % (j+97)\n bs.append(bindparam(key, value=v, type_=ts[j]))\n yield ':%s' % key\n\n def __gRows(self, ts, values, bs):\n for i,vs in enumerate(values):\n f = '%%c%d' % i\n yield '(%s)' % ', '.join(self.__gParams(f, vs, ts, bs))\n\n def __gKeys(self, k, ts):\n for c in k: \n ts.append(c.type)\n yield str(c)\n\n def __makeSql(self,Clazz, values):\n t = []\n b = []\n return text(\n '(%s) in (%s)' % (\n ', '.join(self.__gKeys(Clazz.__table__.primary_key,t)),\n ', '.join(self.__gRows(t,values,b))),\n bindparams=b)\n\nThis solution works for compound or simple primary keys. It's probably marginally slower than the col.in_(keys) for simple primary keys though.\nI'm still interested in suggestions of better ways to do this, but this way is working for now and performs noticeably better than the or_(and_(conditions)) way, or the for key in keys: do_stuff(q.get(key)) way.","Q_Score":10,"Tags":"python,sql,sqlalchemy","A_Id":951640,"CreationDate":"2009-06-04T01:40:00.000","Title":"Sqlalchemy complex in_ clause with tuple in list of tuples","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a webapp (call it myapp.com) that allows users to upload files. The webapp will be deployed on Amazon EC2 instance. I would like to serve these files back out to the webapp consumers via an s3 bucket based domain (i.e. uploads.myapp.com). \nWhen the user uploads the files, I can easily drop them in into a folder called \"site_uploads\" on the local ec2 instance. However, since my ec2 instance has finite storage, with a lot of uploads, the ec2 file system will fill up quickly. \nIt would be great if the ec2 instance could mount and s3 bucket as the \"site_upload\" directory. So that uploads to the EC2 \"site_upload\" directory automatically end up on uploads.myapp.com (and my webapp can use template tags to make sure the links for this uploaded content is based on that s3 backed domain). This also gives me scalable file serving, as request for files hits s3 and not my ec2 instance. Also, it makes it easy for my webapp to perform scaling\/resizing of the images that appear locally in \"site_upload\" but are actually on s3.\nI'm looking at s3fs, but judging from the comments, it doesn't look like a fully baked solution. I'm looking for a non-commercial solution.\nFYI, The webapp is written in django, not that that changes the particulars too much.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8589,"Q_Id":956904,"Users Score":0,"Answer":"I'd suggest using a separately-mounted EBS volume. I tried doing the same thing for some movie files. 
Access to S3 was slow, and S3 has some limitations like not being able to rename files, no real directory structure, etc.\nYou can set up EBS volumes in a RAID5 configuration and add space as you need it.","Q_Score":4,"Tags":"python,django,amazon-s3,amazon-ec2","A_Id":6308720,"CreationDate":"2009-06-05T16:39:00.000","Title":"mounting an s3 bucket in ec2 and using transparently as a mnt point","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm looking for resources to help migrate my design skills from traditional RDBMS data store over to AppEngine DataStore (ie: 'Soft Schema' style). I've seen several presentations and all touch on the the overarching themes and some specific techniques. \nI'm wondering if there's a place we could pool knowledge from experience (\"from the trenches\") on real-world approaches to rethinking how data is structured, especially porting existing applications. We're heavily Hibernate based and have probably travelled a bit down the wrong path with our data model already, generating some gnarly queries which our DB is struggling with.\nPlease respond if:\n\nYou have ported a non-trivial application over to AppEngine\nYou've created a common type of application from scratch in AppEngine\nYou've done neither 1 or 2, but are considering it and want to share your own findings so far.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1022,"Q_Id":976639,"Users Score":1,"Answer":"The non relational database design essentially involves denormalization wherever possible.\nExample: Since the BigTable doesnt provide enough aggregation features, the sum(cash) option that would be in the RDBMS world is not available. Instead it would have to be stored on the model and the model save method must be overridden to compute the denormalized field sum.\nEssential basic design that comes to mind is that each template has its own model where all the required fields to be populated are present denormalized in the corresponding model; and you have an entire signals-update-bots complexity going on in the models.","Q_Score":6,"Tags":"java,python,google-app-engine,data-modeling","A_Id":979391,"CreationDate":"2009-06-10T16:13:00.000","Title":"Thinking in AppEngine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm looking for resources to help migrate my design skills from traditional RDBMS data store over to AppEngine DataStore (ie: 'Soft Schema' style). I've seen several presentations and all touch on the the overarching themes and some specific techniques. \nI'm wondering if there's a place we could pool knowledge from experience (\"from the trenches\") on real-world approaches to rethinking how data is structured, especially porting existing applications. 
We're heavily Hibernate based and have probably travelled a bit down the wrong path with our data model already, generating some gnarly queries which our DB is struggling with.\nPlease respond if:\n\nYou have ported a non-trivial application over to AppEngine\nYou've created a common type of application from scratch in AppEngine\nYou've done neither 1 or 2, but are considering it and want to share your own findings so far.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1022,"Q_Id":976639,"Users Score":1,"Answer":"The timeouts are tight and performance was ok but not great, so I found myself using extra space to save time; for example I had a many-to-many relationship between trading cards and players, so I duplicated the information of who owns what: Card objects have a list of Players and Player objects have a list of Cards.\nNormally storing all your information twice would have been silly (and prone to get out of sync) but it worked really well.\nIn Python they recently released a remote API so you can get an interactive shell to the datastore so you can play with your datastore without any timeouts or limits (for example, you can delete large swaths of data, or refactor your models); this is fantastically useful since otherwise as Julien mentioned it was very difficult to do any bulk operations.","Q_Score":6,"Tags":"java,python,google-app-engine,data-modeling","A_Id":978757,"CreationDate":"2009-06-10T16:13:00.000","Title":"Thinking in AppEngine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a sqlite3 db which i insert\/select from in python. The app works great but i want to tweak it so no one can read from the DB without a password. How can i do this in python? note i have no idea where to start.","AnswerCount":5,"Available Count":1,"Score":0.1194272985,"is_accepted":false,"ViewCount":29194,"Q_Id":986403,"Users Score":3,"Answer":"SQLite databases are pretty human-readable, and there isn't any built-in encryption.\nAre you concerned about someone accessing and reading the database files directly, or accessing them through your program? \nI'm assuming the former, because the latter isn't really database related--it's your application's security you're asking about. \nA few options come to mind:\n\nProtect the db with filesystem permissions rather than encryption. You haven't mentioned what your environment is, so I can't say if this is workable for you or not, but it's probably the simplest and most reliable way, as you can't attempt to decrypt what you can't read.\nEncrypt in Python before writing, and decrypt in Python after reading. Fairly simple, but you lose most of the power of SQL's set-based matching operations.\nSwitch to another database; user authentication and permissions are standard features of most multi-user databases. 
When you find yourself up against the limitations of a tool, it may be easier to look around at other tools rather than hacking new features into the current tool.","Q_Score":10,"Tags":"python,sqlite,encryption","A_Id":987942,"CreationDate":"2009-06-12T12:35:00.000","Title":"Encrypted file or db in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A hashtable in memcached will be discarded either when it's expired or when there's not enough memory and it's chosen to die based on the Least Recently Used algorithm.\nCan we put a Priority to hint or influence the LRU algorithm? I want to use memcached to store Web Sessions so I can use the cheap round-robin.\nI need to give Sessions Top Priority and nothing can kill them (not even if it's the Least Recently Used) except their own Max_Expiry.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":799,"Q_Id":1000540,"Users Score":1,"Answer":"Not that I know of.\nmemcached is designed to be very fast and very straightforward; no fancy weights and priorities keep it simple.\nYou should not rely on memcache for persistent session storage. You should keep your sessions in the DB, but you can cache them in memcache. This way you can enjoy both worlds.","Q_Score":0,"Tags":"python,database,session,caching,memcached","A_Id":1130289,"CreationDate":"2009-06-16T09:56:00.000","Title":"Is there an option to configure a priority in memcached? (Similar to Expiry)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need your advice to choose a Python Web Framework for developing a large project:\nDatabase (Postgresql) will have at least 500 tables, most of them with a composite primary\nkey, lots of constraints, indexes & queries. About 1,500 views for starting. The project belongs to the financial area. Always new requirements are coming.\nWill an ORM be helpful?","AnswerCount":7,"Available Count":4,"Score":0.0285636566,"is_accepted":false,"ViewCount":2101,"Q_Id":1003131,"Users Score":1,"Answer":"I would absolutely recommend Repoze.bfg with SQLAlchemy for what you describe. I've done projects now in Django, TurboGears 1, TurboGears 2, Pylons, and dabbled in pure Zope3. BFG is far and away the framework most designed to accommodate a project growing in ways you don't anticipate at the beginning, but is far more lightweight and pared down than Grok or Zope 3. Also, the docs are the best technical docs of all of them, not the easiest, but the ones that answer the hard questions you're going to encounter the best. I'm currently doing a similar thing where we are overhauling a bunch of legacy databases into a new web deliverable app and we're using BFG, some Pylons, Zope 3 adapters, Genshi for templating, SQLAlchemy, and Dojo for the front end. We couldn't be happier with BFG, and it's working out great. BFG's classes as views that are actually zope multi-adapters are absolutely perfect for being able to override only very specific bits for certain domain resources.
And the complete lack of magic globals anywhere makes testing and packaging the easiest we've had with any framework.\nymmv!","Q_Score":4,"Tags":"python,frameworks,web-frameworks","A_Id":2246687,"CreationDate":"2009-06-16T18:21:00.000","Title":"python web framework large project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need your advices to choose a Python Web Framework for developing a large project:\nDatabase (Postgresql)will have at least 500 tables, most of them with a composite primary\nkey, lots of constraints, indexes & queries. About 1,500 views for starting. The project belongs to the financial area. Alwasy new requirements are coming.\nWill a ORM be helpful?","AnswerCount":7,"Available Count":4,"Score":0.057080742,"is_accepted":false,"ViewCount":2101,"Q_Id":1003131,"Users Score":2,"Answer":"Depending on what you want to do, you actually have a few possible frameworks :\n[Django] Big, strong (to the limit of what a python framework can be), and the older in the race. Used by a few 'big' sites around the world ([Django sites]). Still is a bit of an overkill for almost everything and with a deprecated coding approach.\n[Turbogears] is a recent framework based on Pylons. Don't know much about it, but got many good feedbacks from friends who tried it.\n[Pylons] ( which Turbogears2 is based on ). Often saw at the \"PHP of Python\" , it allow very quick developements from scratch. Even if it can seem inappropriate for big projects, it's often the faster and easier way to go.\nThe last option is [Zope] ( with or without Plone ), but Plone is way to slow, and Zope learning curve is way too long ( not even speaking in replacing the ZODB with an SQL connector ) so if you don't know the framework yet, just forget about it.\nAnd yes, An ORM seem mandatory for a project of this size. For Django, you'll have to handle migration to their database models (don't know how hard it is to plug SQLAlchemy in Django). For turbogears and Pylons, the most suitable solution is [SQLAlchemy], which is actually the most complete ( and rising ) ORM for python. For zope ... well, nevermind\nLast but not least, I'm not sure you're starting on a good basis for your project. 500 tables on any python framework would scare me to death. A boring but rigid language such as java (hibernate+spring+tapestry or so) seem really more appropriate.","Q_Score":4,"Tags":"python,frameworks,web-frameworks","A_Id":1003329,"CreationDate":"2009-06-16T18:21:00.000","Title":"python web framework large project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need your advices to choose a Python Web Framework for developing a large project:\nDatabase (Postgresql)will have at least 500 tables, most of them with a composite primary\nkey, lots of constraints, indexes & queries. About 1,500 views for starting. The project belongs to the financial area. Alwasy new requirements are coming.\nWill a ORM be helpful?","AnswerCount":7,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":2101,"Q_Id":1003131,"Users Score":8,"Answer":"Yes. An ORM is essential for mapping SQL stuff to objects. 
\nYou have three choices.\n\nUse someone else's ORM\nRoll your own.\nTry to execute low-level SQL queries and pick out the fields they want from the result set. This is -- actually -- a kind of ORM with the mappings scattered throughout the applications. It may be fast to execute and appear easy to develop, but it is a maintenance nightmare.\n\nIf you're designing the tables first, any ORM will be painful. For example, \"composite primary key\" is generally a bad idea, and with an ORM it's almost always a bad idea. You'll need to have a surrogate primary key. Then you can have all the composite keys with indexes you want. They just won't be \"primary\".\nIf you design the objects first, then work out tables that will implement the objects, the ORM will be pleasant, simple and will run quickly, also.","Q_Score":4,"Tags":"python,frameworks,web-frameworks","A_Id":1003173,"CreationDate":"2009-06-16T18:21:00.000","Title":"python web framework large project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need your advices to choose a Python Web Framework for developing a large project:\nDatabase (Postgresql)will have at least 500 tables, most of them with a composite primary\nkey, lots of constraints, indexes & queries. About 1,500 views for starting. The project belongs to the financial area. Alwasy new requirements are coming.\nWill a ORM be helpful?","AnswerCount":7,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":2101,"Q_Id":1003131,"Users Score":12,"Answer":"Django has been used by many large organizations (Washington Post, etc.) and can connect with Postgresql easily enough. I use it fairly often and have had no trouble.","Q_Score":4,"Tags":"python,frameworks,web-frameworks","A_Id":1003161,"CreationDate":"2009-06-16T18:21:00.000","Title":"python web framework large project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I configure Django with SQLAlchemy?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30261,"Q_Id":1011476,"Users Score":0,"Answer":"There are many benefits of using SQLAlchemy instead of Django ORM, but consider developing a built-in-Django choice of SQLAlchemy\n(to have something called a production ready)\nBy the way, Django ORM is going better - in Django 1.11 they added UNION support (a SQL basic operator), so maybe some day there will be no need to change ORM.","Q_Score":29,"Tags":"python,django,sqlalchemy,configure","A_Id":45878579,"CreationDate":"2009-06-18T08:31:00.000","Title":"Configuring Django to use SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is mainly just a \"check my understanding\" type of question. Here's my understanding of CLOBs and BLOBs as they work in Oracle:\n\nCLOBs are for text like XML, JSON, etc. You should not assume what encoding the database will store it as (at least in an application) as it will be converted to whatever encoding the database was configured to use.\nBLOBs are for binary data. 
You can be reasonably assured that they will be stored how you send them and that you will get them back with exactly the same data as they were sent as.\n\nSo in other words, say I have some binary data (in this case a pickled python object). I need to be assured that when I send it, it will be stored exactly how I sent it and that when I get it back it will be exactly the same. A BLOB is what I want, correct?\nIs it really feasible to use a CLOB for this? Or will character encoding cause enough problems that it's not worth it?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":43567,"Q_Id":1018073,"Users Score":56,"Answer":"CLOB is encoding and collation sensitive, BLOB is not.\nWhen you write into a CLOB using, say, CL8WIN1251, you write a 0xC0 (which is Cyrillic letter \u0410).\nWhen you read data back using AL16UTF16, you get back 0x0410, which is a UTF16 represenation of this letter.\nIf you were reading from a BLOB, you would get same 0xC0 back.","Q_Score":39,"Tags":"python,oracle","A_Id":1018096,"CreationDate":"2009-06-19T13:51:00.000","Title":"Help me understand the difference between CLOBs and BLOBs in Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is mainly just a \"check my understanding\" type of question. Here's my understanding of CLOBs and BLOBs as they work in Oracle:\n\nCLOBs are for text like XML, JSON, etc. You should not assume what encoding the database will store it as (at least in an application) as it will be converted to whatever encoding the database was configured to use.\nBLOBs are for binary data. You can be reasonably assured that they will be stored how you send them and that you will get them back with exactly the same data as they were sent as.\n\nSo in other words, say I have some binary data (in this case a pickled python object). I need to be assured that when I send it, it will be stored exactly how I sent it and that when I get it back it will be exactly the same. A BLOB is what I want, correct?\nIs it really feasible to use a CLOB for this? Or will character encoding cause enough problems that it's not worth it?","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":43567,"Q_Id":1018073,"Users Score":10,"Answer":"Your understanding is correct. Since you mention Python, think of the Python 3 distinction between strings and bytes: CLOBs and BLOBs are quite analogous, with the extra issue that the encoding of CLOBs is not under your app's control.","Q_Score":39,"Tags":"python,oracle","A_Id":1018102,"CreationDate":"2009-06-19T13:51:00.000","Title":"Help me understand the difference between CLOBs and BLOBs in Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been building business database applications such as finance, inventory and other business requirement applications. I am planning to shift to Python. What would be the tools to start with best. I would need to do master, transaction forms, processing (back end), reports and that sort of thing. The database would be postgress or mysql. As I am new to Python I understand that I need besides Python the ORM and also a framework. 
My application is not a web site related but it could also be need to be done over the web if needed. \nHow to choose the initial setup of tool combinations?","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":9831,"Q_Id":1020775,"Users Score":0,"Answer":"just FYI, for PyQT, the book has a chapter 15 with Databases, It looks good. and the book has something with data and view etc. I have read it and I think it's well worth your time:)","Q_Score":9,"Tags":"python,frame","A_Id":1021195,"CreationDate":"2009-06-20T02:22:00.000","Title":"Python database application framework and tools","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"which versions of sqlite may best suite for python 2.6.2?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":444,"Q_Id":1025493,"Users Score":0,"Answer":"I'm using 3.4.0 out of inertia (it's what came with the Python 2.* versions I'm using) but there's no real reason (save powerful inertia;-) to avoid upgrading to 3.4.2, which fixes a couple of bugs that could lead to DB corruption and introduces no incompatibilities that I know of. (If you stick with 3.4.0 I'm told the key thing is to avoid VACUUM as it might mangle your data).\nPython 3.1 comes with SQLite 3.6.11 (which is supposed to work with Python 2.* just as well) and I might one day update to that (or probably to the latest, currently 3.6.15, to pick up a slew of minor bug fixes and enhancements) just to make sure I'm using identical releases on either Python 2 or Python 3 -- I've never observed a compatibility problem, but I doubt there has been thorough testing to support reading and writing the same DB from 3.4.0 and 3.6.11 (or any two releases so far apart from each other!-).","Q_Score":2,"Tags":"python,sqlite,python-2.6","A_Id":1028006,"CreationDate":"2009-06-22T04:49:00.000","Title":"sqlite version for python26","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the above mentioned Python lib to connect to a MySQL server. So far I've worked locally and all worked fine, until i realized I'll have to use my program in a network where all access goes through a proxy.\nDoes anyone now how I can set the connections managed by that lib to use a proxy?\nAlternatively: do you know of another Python lib for MySQL that can handle this?\nI also have no idea if the if the proxy server will allow access to the standard MySQL port or how I can trick it to allow it. Help on this is also welcomed.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2939,"Q_Id":1027751,"Users Score":1,"Answer":"there are a lot of different possibilities here. 
the only way you're going to get a definitive answer is to talk to the person that runs the proxy.\nif this is a web app and the web server and the database serve are both on the other side of a proxy, then you won't need to connect to the mysql server at all since the web app will do it for you.","Q_Score":2,"Tags":"python,mysql,proxy","A_Id":1027817,"CreationDate":"2009-06-22T15:11:00.000","Title":"MySQLdb through proxy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am newbie in Google App Engine. While I was going through the tutorial, I found several things that we do in php-mysql is not available in GAE. For example in dataStore auto increment feature is not available. Also I am confused about session management in GAE. Over all I am confused and can not visualize the whole thing.\nPlease advise me a simple user management system with user registration, user login, user logout, session (create,manage,destroy) with data Store. Also please advise me where I can get simple but effective examples.\nThanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":7295,"Q_Id":1030293,"Users Score":1,"Answer":"You don't write user management and registration and all that, because you use Google's own authentication services. This is all included in the App Engine documentation.","Q_Score":17,"Tags":"php,python,google-app-engine","A_Id":1030362,"CreationDate":"2009-06-23T01:58:00.000","Title":"Simple User management example for Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm making a Django web-app which allows a user to build up a set of changes over a series of GETs\/POSTs before committing them to the database (or reverting) with a final POST. I have to keep the updates isolated from any concurrent database users until they are confirmed (this is a configuration front-end), ruling out committing after each POST.\nMy preferred solution is to use a per-session transaction. This keeps all the problems of remembering what's changed (and how it affects subsequent queries), together with implementing commit\/rollback, in the database where it belongs. Deadlock and long-held locks are not an issue, as due to external constraints there can only be one user configuring the system at any one time, and they are well-behaved.\nHowever, I cannot find documentation on setting up Django's ORM to use this sort of transaction model. I have thrown together a minimal monkey-patch (ew!) to solve the problem, but dislike such a fragile solution. Has anyone else done this before? Have I missed some documentation somewhere?\n(My version of Django is 1.0.2 Final, and I am using an Oracle database.)","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":2065,"Q_Id":1033934,"Users Score":2,"Answer":"I came up with something similar to the Memento pattern, but different enough that I think it bears posting. When a user starts an editing session, I duplicate the target object to a temporary object in the database. All subsequent editing operations affect the duplicate. 
Instead of saving the object state in a memento at each change, I store operation objects. When I apply an operation to an object, it returns the inverse operation, which I store.\nSaving operations is much cheaper for me than mementos, since the operations can be described with a few small data items, while the object being edited is much bigger. Also I apply the operations as I go and save the undos, so that the temporary in the db always corresponds to the version in the user's browser. I never have to replay a collection of changes; the temporary is always only one operation away from the next version.\nTo implement \"undo,\" I pop the last undo object off the stack (as it were--by retrieving the latest operation for the temporary object from the db), apply it to the temporary and return the transformed temporary. I could also push the resultant operation onto a redo stack if I cared to implement redo.\nTo implement \"save changes,\" i.e. commit, I de-activate and time-stamp the original object and activate the temporary in its place.\nTo implement \"cancel,\" i.e. rollback, I do nothing! I could delete the temporary, of course, because there's no way for the user to retrieve it once the editing session is over, but I like to keep the canceled edit sessions so I can run stats on them before clearing them out with a cron job.","Q_Score":8,"Tags":"python,django,transactions","A_Id":1121915,"CreationDate":"2009-06-23T17:16:00.000","Title":"Per-session transactions in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a system sitting on a \"Master Server\", that is periodically transferring quite a few chunks of information from a MySQL DB to another server on the web.\nBoth servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.\nCurrently I'm looking into:\n\nXMLRPC\nRestFul Services\na simple POST to a processing script\nsocket transfers\n\nThe app on my master is a TurboGears app, so I would prefer \"pythonic\" aka less ugly solutions. Copying a dumped table to another server via FTP \/ SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.\nCan anyone describe shortly how you would do this the \"best-practise\" way?\nThis doesn't necessarily have to involve Databases. Dumping the table on Server1 and transferring the raw data in a structured way so server2 can process it without parsing too much is just as good. One requirement though: As soon as the data arrives on server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could just write my own server sitting on a socket on the second machine, accepting the file with my own code and processing it and so forth, but this is just a very very small piece of a very big system, so I don't want to spend half a day implementing this.\nThanks,\nTom","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":810,"Q_Id":1043528,"Users Score":2,"Answer":"Server 1: Convert rows to JSON, call the RESTful api of the second server with JSON data\nServer 2: listens on a URI e.g. 
POST \/data , get json data convert back to dictionary or ORM objects, insert into db\nsqlalchemy\/sqlobject and simplejson is what you need.","Q_Score":2,"Tags":"python,web-services,database-design","A_Id":1043653,"CreationDate":"2009-06-25T11:59:00.000","Title":"Best Practise for transferring a MySQL table to another server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a system sitting on a \"Master Server\", that is periodically transferring quite a few chunks of information from a MySQL DB to another server in the web.\nBoth servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.\nCurrently I'm looking into:\n\nXMLRPC\nRestFul Services\na simple POST to a processing script\nsocket transfers\n\nThe app on my master is a TurboGears app, so I would prefer \"pythonic\" aka less ugly solutions. Copying a dumped table to another server via FTP \/ SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.\nCan anyone describe shortly how you would do this the \"best-practise\" way?\nThis doesn't necessarily have to involve Databases. Dumping the table on Server1 and transferring the raw data in a structured way so server2 can process it without parsing too much is just as good. One requirement though: As soon as the data arrives on server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could just write my whole own server sitting on a socket on the second machine and accepting the file with own code and processing it and so forth, but this is just a very very small piece of a very big system, so I dont want to spend half a day implementing this.\nThanks,\nTom","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":810,"Q_Id":1043528,"Users Score":0,"Answer":"Assuming your situation allows this security-wise, you forgot one transport mechanism: simply opening a mysql connection from one server to another.\nMe, I would start by thinking about one script that ran regularly on the write server and opens a read only db connection to the read server (A bit of added security) and a full connection to it's own data base server. \nHow you then proceed depends on the data (is it just inserts to deal with? do you have to mirror deletes? how many inserts vs updates? etc) but basically you could write a script that pulled data from the read server and processed it immediately into the write server.\nAlso, would mysql server replication work or would it be to over-blown as a solution?","Q_Score":2,"Tags":"python,web-services,database-design","A_Id":1043595,"CreationDate":"2009-06-25T11:59:00.000","Title":"Best Practise for transferring a MySQL table to another server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to access a postgreSQL database that's running on a remote machine, from Python in OS\/X. Do I have to install postgres on the mac as well? Or will psycopg2 work on its own. 
\nAny hints for a good installation guide for psycopg2 for os\/x?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4829,"Q_Id":1052957,"Users Score":3,"Answer":"macports tells me that the psycopg2 package has a dependency on the postgres client and libraries (but not the db server). If you successfully installed psycopg, then you should be good to go.\nIf you haven't installed yet, consider using macports or fink to deal with dependency resolution for you. In most cases, this will make things easier (occasionally build problems erupt).","Q_Score":2,"Tags":"python,macos,postgresql","A_Id":1052990,"CreationDate":"2009-06-27T14:52:00.000","Title":"psycopg2 on OSX: do I have to install PostgreSQL too?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing. However, I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them.\nThe setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions.\nBest practices for SQL seem to suggest that creating a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance:\n 1) I have never written a stored procedure.\n 2) I heard that pyodbc does not return results from stored procedures as of yet.\n 3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth.\nSo, what's the best way to go about this?","AnswerCount":2,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":27883,"Q_Id":1063770,"Users Score":-10,"Answer":"I don't think pyodbc has any specific support for transactions. You need to send the SQL command to start\/commit\/rollback transactions.","Q_Score":15,"Tags":"python,transactions,pyodbc","A_Id":1063879,"CreationDate":"2009-06-30T13:45:00.000","Title":"In Python, Using pyodbc, How Do You Perform Transactions?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am not very familiar with databases, and so I do not know how to partition a table using SQLAlchemy.\nYour help would be greatly appreciated.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":7505,"Q_Id":1085304,"Users Score":3,"Answer":"Automatic partitioning is a very database engine specific concept and SQLAlchemy doesn't provide any generic tools to manage partitioning. Mostly because it wouldn't provide anything really useful while being another API to learn. If you want to do database level partitioning then do the CREATE TABLE statements using custom Oracle DDL statements (see Oracle documentation how to create partitioned tables and migrate data to them). You can use a partitioned table in SQLAlchemy just like you would use a normal table, you just need the table declaration so that SQLAlchemy knows what to query. 
You can reflect the definition from the database, or just duplicate the table declaration in SQLAlchemy code.\nVery large datasets are usually time-based, with older data becoming read-only or read-mostly and queries usually only look at data from a time interval. If that describes your data, you should probably partition your data using the date field.\nThere's also application level partitioning, or sharding, where you use your application to split data across different database instances. This isn't all that popular in the Oracle world due to the exorbitant pricing models. If you do want to use sharding, then look at SQLAlchemy documentation and examples for that, for how SQLAlchemy can support you in that, but be aware that application level sharding will affect how you need to build your application code.","Q_Score":2,"Tags":"python,sqlalchemy","A_Id":1087081,"CreationDate":"2009-07-06T03:33:00.000","Title":"how to make table partitions?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write some unittests for an application that uses MySQL. However, I do not want to connect to a real mysql database, but rather to a temporary one that doesn't require any SQL server at all.\nAny library (I could not find anything on google)? Any design pattern? Note that DIP doesn't work since I will still have to test the injected class.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3416,"Q_Id":1088077,"Users Score":12,"Answer":"There isn't a good way to do that. You want to run your queries against a real MySQL server, otherwise you don't know if they will work or not.\nHowever, that doesn't mean you have to run them against a production server. We have scripts that create a Unit Test database, and then tear it down once the unit tests have run. That way we don't have to maintain a static test database, but we still get to test against the real server.","Q_Score":8,"Tags":"python,mysql,unit-testing","A_Id":1088090,"CreationDate":"2009-07-06T17:00:00.000","Title":"testing python applications that use mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":0.0199973338,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":1,"Answer":"You won't be able to do comparisons correctly. \"... where x > 500\" is not same as \".. where x > '500'\" because \"500\" > \"100000\"\nPerformance wise string it would be a hit especially if you use indexes as integer indexes are much faster than string indexes.\n\nOn the other hand it really depends upon your situation. 
If you intend to store something like phone numbers or student enrollment numbers, then it makes perfect sense to use strings.","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090708,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":0.0,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":0,"Answer":"Better use independent ID and add string ID if necessary: if there's a business indicator you need to include, why make it system ID?\nMain drawbacks:\n\nInteger operations and indexing always show better performance on large scales of data (more than 1k rows in a table, not to speak of connected tables)\nYou'll have to make additional checks to restrict numeric-only values in a column: these can be regex whether on client or database side. Anyway, you'll have to guarantee somehow that there's actually integer.\nAnd you will create additional context layer for developers to know, and anyway someone will always mess this up :)","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090924,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":0.0599281035,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":3,"Answer":"I've just spent the last year dealing with a database that has almost all IDs as strings, some with digits only, and others mixed. These are the problems:\n\nGrossly restricted ID space. A 4 char (digit-only) ID has capacity for 10,000 unique values. A 4 byte numeric has capacity for over 4 billion.\nUnpredictable ID space coverage. Once IDs start including non-digits it becomes hard to predict where you can create new IDs without collisions.\nConversion and display problems in certain circumstances, when scripting or on export for instance. If the ID gets interpreted as a number and there is a leading zero, the ID gets altered.\nSorting problems. You can't rely on the natural order being helpful.\n\nOf course, if you run out of IDs, or don't know how to create new IDs, your app is dead. I suggest that if you can't control the format of your incoming IDs then you need to create your own (numeric) IDs and relate the user provided ID to that. 
You can then ensure that your own ID is reliable and unique (and numeric) but provide a user-viewable ID that can have whatever format your users want, and doesn't even have to be unique across the whole app. This is more work, but if you'd been through what I have you'd know which way to go.\nAnil G","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090390,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":1.2,"is_accepted":true,"ViewCount":10684,"Q_Id":1090022,"Users Score":37,"Answer":"Unless you really need the features of an integer (that is, the ability to do arithmetic), then it is probably better for you to store the product IDs as strings. You will never need to do anything like add two product IDs together, or compute the average of a group of product IDs, so there is no need for an actual numeric type.\nIt is unlikely that storing product IDs as strings will cause a measurable difference in performance. While there will be a slight increase in storage size, the size of a product ID string is likely to be much smaller than the data in the rest of your database row anyway.\nStoring product IDs as strings today will save you much pain in the future if the data provider decides to start using alphabetic or symbol characters. There is no real downside.","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090065,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":0.0599281035,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":3,"Answer":"It really depends on what kind of id you are talking about. If it's a code like a phone number it would actually be better to use a varchar for the id and then have your own id to be a serial for the db and use for primary key. 
In a case where the integer have no numerical value, varchars are generally prefered.","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090057,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":0.0,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":0,"Answer":"Integers are more efficient from a storage and performance perspective. However, if there is a remote chance that alpha characters may be introduced, then you should use a string. In my opinion, the efficiency and performance benefits are likely to be negligible, whereas the time it takes to modify your code may not be.","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090035,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":0.0199973338,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":1,"Answer":"The space an integer would take up would me much less than a string. For example 2^32-1 = 4,294,967,295. This would take 10 bytes to store, where as the integer would take 4 bytes to store. For a single entry this is not very much space, but when you start in the millions... As many other posts suggest there are several other issues to consider, but this is one drawback of the string representation.","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090132,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.\nAre there performance or other disadvantages to saving the values as strings?","AnswerCount":10,"Available Count":8,"Score":1.0,"is_accepted":false,"ViewCount":10684,"Q_Id":1090022,"Users Score":18,"Answer":"Do NOT consider performance. 
Consider meaning.\nID \"numbers\" are not numeric except that they are written with an alphabet of all digits.\nIf I have part number 12 and part number 14, what is the difference between the two? Is part number 2 or -2 meaningful? No.\nPart numbers (and anything that doesn't have units of measure) are not \"numeric\". They're just strings of digits.\nZip codes in the US, for example. Phone numbers. Social security numbers. These are not numbers. In my town the difference between zip code 12345 and 12309 isn't the distance from my house to downtown. \nDo not conflate numbers -- with units -- where sums and differences mean something with strings of digits without sums or differences.\nPart ID numbers are -- properly -- strings. Not integers. They'll never be integers because they don't have sums, differences or averages.","Q_Score":22,"Tags":"python,mysql,database,database-design","A_Id":1090100,"CreationDate":"2009-07-07T01:58:00.000","Title":"Drawbacks of storing an integer as a string in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing an application which will use multiple sqlite3 databases, prepopuldated with data from an external application. Each database will have the exact same tables, but with different data.\nI want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":163,"Q_Id":1093589,"Users Score":1,"Answer":"I am using two databases for a read-only application. The second database is a cache in case the primary database is down. I use two objects to hold the connection, metadata and compatible Table instances. The top of the view function assigns db = primary or db = secondary and the rest is just queries against db.tableA.join(db.tableB). I am not using the ORM.\nThe schemata are not strictly identical. The primary database needs a schema. prefix (Table(...schema='schema')) and the cache database does not. To get around this, I create my table objects in a function that takes the schema name as an argument. By calling the function once for each database, I wind up with compatible prefixed and non-prefixed Table objects.\nAt least in Pylons, the SQLAlchemy meta.Session is a ScopedSession. The application's BaseController in appname\/lib\/base.py calls Session.remove() after each request. It's probably better to have a single Session that talks to both databases, but if you don't you may need to modify your BaseController to call .remove() on each Session.","Q_Score":1,"Tags":"python,sqlite,turbogears,turbogears2","A_Id":1422838,"CreationDate":"2009-07-07T17:14:00.000","Title":"Switching databases in TG2 during runtime","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing an application which will use multiple sqlite3 databases, prepopuldated with data from an external application. Each database will have the exact same tables, but with different data.\nI want to be able to switch between these databases according to user input. 
What is the most elegant way to do that in TurboGears 2?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":163,"Q_Id":1093589,"Users Score":1,"Answer":"If ALL databases have the same schema then you should be able to create several Sessions using the same model to the different DBs.","Q_Score":1,"Tags":"python,sqlite,turbogears,turbogears2","A_Id":1387164,"CreationDate":"2009-07-07T17:14:00.000","Title":"Switching databases in TG2 during runtime","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My specific situation\nProperty management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photo in the system. \nFor photos, there will be thumbnails of each.\nMy question\nMy #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. \nShould I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?\nThanks in advance!","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":7730,"Q_Id":1105429,"Users Score":1,"Answer":"a DB might be faster than a filesystem on some operations, but loading a well-identified chunk of data 100s of KB is not one of them.\nalso, a good frontend webserver (like nginx) is way faster than any webapp layer you'd have to write to read the blob from the DB. in some tests nginx is roughly on par with memcached for raw data serving of medium-sized files (like big HTMLs or medium-sized images).\ngo FS. no contest.","Q_Score":11,"Tags":"python,postgresql,storage,photos,photo-management","A_Id":1105534,"CreationDate":"2009-07-09T17:39:00.000","Title":"storing uploaded photos and documents - filesystem vs database blob","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My specific situation\nProperty management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photo in the system. \nFor photos, there will be thumbnails of each.\nMy question\nMy #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. \nShould I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?\nThanks in advance!","AnswerCount":6,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":7730,"Q_Id":1105429,"Users Score":9,"Answer":"File system. No contest.\nThe data has to go through a lot more layers when you store it in the db.\nEdit on caching:\nIf you want to cache the file while the user uploads it to ensure the operation finishes as soon as possible, dumping it straight to disk (i.e. file system) is about as quick as it gets. As long as the files aren't too big and you don't have too many concurrent users, you can 'cache' the file in memory, return to the user, then save to disk. 
To be honest, I wouldn't bother.\nIf you are making the files available on the web after they have been uploaded and want to cache to improve the performance, file system is still the best option. You'll get caching for free (may have to adjust a setting or two) from your web server. You wont get this if the files are in the database.\nAfter all that it sounds like you should never store files in the database. Not the case, you just need a good reason to do so.","Q_Score":11,"Tags":"python,postgresql,storage,photos,photo-management","A_Id":1105444,"CreationDate":"2009-07-09T17:39:00.000","Title":"storing uploaded photos and documents - filesystem vs database blob","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My specific situation\nProperty management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photo in the system. \nFor photos, there will be thumbnails of each.\nMy question\nMy #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. \nShould I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?\nThanks in advance!","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":7730,"Q_Id":1105429,"Users Score":3,"Answer":"Definitely store your images on the filesystem. One concern that folks don't consider enough when considering these types of things is bloat; cramming images as binary blobs into your database is a really quick way to bloat your DB way up. With a large database comes higher hardware requirements, more difficult replication and backup requirements, etc. Sticking your images on a filesystem means you can back them up \/ replicate them with many existing tools easily and simply. Storage space is far easier to increase on filesystem than in database, as well.","Q_Score":11,"Tags":"python,postgresql,storage,photos,photo-management","A_Id":1105479,"CreationDate":"2009-07-09T17:39:00.000","Title":"storing uploaded photos and documents - filesystem vs database blob","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My specific situation\nProperty management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photo in the system. \nFor photos, there will be thumbnails of each.\nMy question\nMy #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. \nShould I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?\nThanks in advance!","AnswerCount":6,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":7730,"Q_Id":1105429,"Users Score":10,"Answer":"While there are exceptions to everything, the general case is that storing images in the file system is your best bet. 
You can easily provide caching services to the images, you don't need to worry about additional code to handle image processing, and you can easily do maintenance on the images if needed through standard image editing methods.\nIt sounds like your business model fits nicely into this scenario.","Q_Score":11,"Tags":"python,postgresql,storage,photos,photo-management","A_Id":1105453,"CreationDate":"2009-07-09T17:39:00.000","Title":"storing uploaded photos and documents - filesystem vs database blob","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We've got a healthy debate going on in the office this week. We're creating a Db to store proxy information, for the most part we have the schema worked out except for how we should store IPs. One camp wants to use 4 smallints, one for each octet and the other wants to use a 1 big int,INET_ATON.\nThese tables are going to be huge so performance is key. I am in middle here as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with this type of volume storing IPs. \nWe'll be using perl and python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic etc.\nI am sure there are some here in the community that have done something simular to what we are doing and I am interested in hearing about their experiences and which route is best, 1 big int, or 4 small ints for IP addresses.\nEDIT - One of our concerns is space, this database is going to be huge like in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue.\nEDIT 2 Some of the conversation has turned over to the volume of data we are going to store...that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools.","AnswerCount":7,"Available Count":3,"Score":0.057080742,"is_accepted":false,"ViewCount":14213,"Q_Id":1108918,"Users Score":2,"Answer":"for both ipv4 and ipv6 compatibility, use VARBINARY(16) , ipv4's will always be BINARY(4) and ipv6 will always be BINARY(16), so VARBINARY(16) seems like the most efficient way to support both. and to convert them from the normal readable format to binary, use INET6_ATON('127.0.0.1'), and to reverse that, use INET6_NTOA(binary)","Q_Score":25,"Tags":"python,mysql,perl,ip-address","A_Id":59109834,"CreationDate":"2009-07-10T10:58:00.000","Title":"How to store an IP in mySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We've got a healthy debate going on in the office this week. We're creating a Db to store proxy information, for the most part we have the schema worked out except for how we should store IPs. One camp wants to use 4 smallints, one for each octet and the other wants to use a 1 big int,INET_ATON.\nThese tables are going to be huge so performance is key. I am in middle here as I normally use MS SQL and 4 small ints in my world. 
I don't have enough experience with this type of volume storing IPs. \nWe'll be using perl and python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic etc.\nI am sure there are some here in the community that have done something simular to what we are doing and I am interested in hearing about their experiences and which route is best, 1 big int, or 4 small ints for IP addresses.\nEDIT - One of our concerns is space, this database is going to be huge like in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue.\nEDIT 2 Some of the conversation has turned over to the volume of data we are going to store...that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools.","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":14213,"Q_Id":1108918,"Users Score":0,"Answer":"Old thread, but for the benefit of readers, consider using ip2long. It translates ip into an integer. \nBasically, you will be converting with ip2long when storing into DB then converting back with long2ip when retrieving from DB. The field type in DB will INT, so you will save space and gain better performance compared to storing ip as a string.","Q_Score":25,"Tags":"python,mysql,perl,ip-address","A_Id":56818264,"CreationDate":"2009-07-10T10:58:00.000","Title":"How to store an IP in mySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We've got a healthy debate going on in the office this week. We're creating a Db to store proxy information, for the most part we have the schema worked out except for how we should store IPs. One camp wants to use 4 smallints, one for each octet and the other wants to use a 1 big int,INET_ATON.\nThese tables are going to be huge so performance is key. I am in middle here as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with this type of volume storing IPs. \nWe'll be using perl and python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic etc.\nI am sure there are some here in the community that have done something simular to what we are doing and I am interested in hearing about their experiences and which route is best, 1 big int, or 4 small ints for IP addresses.\nEDIT - One of our concerns is space, this database is going to be huge like in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue.\nEDIT 2 Some of the conversation has turned over to the volume of data we are going to store...that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large fortune 50 company. Our log files contain usage data from our users. 
This data in turn will be used within a security context to drive some metrics and to drive several security tools.","AnswerCount":7,"Available Count":3,"Score":0.0855049882,"is_accepted":false,"ViewCount":14213,"Q_Id":1108918,"Users Score":3,"Answer":"Having separate fields doesn't sound particularly sensible to me - much like splitting a zip code or a phone number into sections.\nIt might be useful if you wanted specific info on the sections, but I see no real reason not to use a 32-bit int.","Q_Score":25,"Tags":"python,mysql,perl,ip-address","A_Id":1109278,"CreationDate":"2009-07-10T10:58:00.000","Title":"How to store an IP in mySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I made an ICAPServer (similar to an HTTP server) for which performance is very important.\nThe DB module is SQLAlchemy.\nI then ran a test of SQLAlchemy's performance and found that it takes about 30ms for SQLAlchemy to write <50kb of data to the DB (Oracle); I don't know if that result is normal or if I did something wrong.\nEither way, it seems the bottleneck comes from the DB part.\nHow can I improve the performance of SQLAlchemy? Or is it up to the DBA to improve Oracle?\nBTW, ICAPServer and Oracle are on the same PC, and I used SQLAlchemy in the basic way.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":4462,"Q_Id":1110805,"Users Score":1,"Answer":"I had some issues with SQLAlchemy's performance as well - I think you should first figure out how you are using it ... they recommend that for big data sets it is better to use the SQL expression language. Either way, try to optimize the SQLAlchemy code and have the Oracle database optimized as well, so you can better figure out what's wrong.\nAlso, do some tests on the database.","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":1110990,"CreationDate":"2009-07-10T17:16:00.000","Title":"python sqlalchemy performance?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I made an ICAPServer (similar to an HTTP server) for which performance is very important.\nThe DB module is SQLAlchemy.\nI then ran a test of SQLAlchemy's performance and found that it takes about 30ms for SQLAlchemy to write <50kb of data to the DB (Oracle); I don't know if that result is normal or if I did something wrong.\nEither way, it seems the bottleneck comes from the DB part.\nHow can I improve the performance of SQLAlchemy? Or is it up to the DBA to improve Oracle?\nBTW, ICAPServer and Oracle are on the same PC, and I used SQLAlchemy in the basic way.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4462,"Q_Id":1110805,"Users Score":1,"Answer":"You can only push SQLAlchemy so far as a programmer. 
I would agree with you that the rest of the performance is up to your DBA, including creating proper indexes on tables, etc.","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":1110888,"CreationDate":"2009-07-10T17:16:00.000","Title":"python sqlalchemy performance?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to insert a calculation in Excel using Python.\nGenerally it can be done by inserting a formula string into the relevant cell.\nHowever, if i need to calculate a formula multiple times for the whole column\nthe formula must be updated for each individual cell. For example, if i need to \ncalculate the sum of two cells, then for cell C(k) the computation would be A(k)+B(k).\nIn excel it is possible to calculate C1=A1+B1 and then automatically expand the \ncalculation by dragging the mouse from C1 downwards.\nMy question is: Is it possible to the same thing with Python, i.e. to define a formula in only one cell and then to use Excel capabilities to extend the calculation for the whole column\/row?\nThank you in advance,\nSasha","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12592,"Q_Id":1116725,"Users Score":0,"Answer":"If you are using COM bindings, then you can simply record a macro in Excel, then translate it into Python code.\nIf you are using xlwt, you have to resort to normal loops in python..","Q_Score":1,"Tags":"python,excel,formula","A_Id":1116782,"CreationDate":"2009-07-12T19:36:00.000","Title":"Calculating formulae in Excel with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"HI\uff0ci got a multi-threading program which all threads will operate on oracle\nDB. So, can sqlalchemy support parallel operation on oracle?\ntks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2195,"Q_Id":1117538,"Users Score":1,"Answer":"As long as each concurrent thread has it's own session you should be fine. Trying to use one shared session is where you'll get into trouble.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":1117592,"CreationDate":"2009-07-13T02:44:00.000","Title":"python sqlalchemy parallel operation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Just starting to get to grips with python and MySQLdb and was wondering\n\nWhere is the best play to put a try\/catch block for the connection to MySQL. At the MySQLdb.connect point? 
Also should there be one when ever i query?\nWhat exceptions should i be catching on any of these blocks?\n\nthanks for any help\nCheers\nMark","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":5063,"Q_Id":1117828,"Users Score":1,"Answer":"I think that both the connection and the query can raise errors, so you should have try\/except blocks for both of them.","Q_Score":9,"Tags":"python,mysql,exception","A_Id":1117841,"CreationDate":"2009-07-13T05:31:00.000","Title":"Python MySQLdb exceptions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Just starting to get to grips with python and MySQLdb and was wondering\n\nWhere is the best play to put a try\/catch block for the connection to MySQL. At the MySQLdb.connect point? Also should there be one when ever i query?\nWhat exceptions should i be catching on any of these blocks?\n\nthanks for any help\nCheers\nMark","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":5063,"Q_Id":1117828,"Users Score":16,"Answer":"Catch MySQLdb.Error both while connecting and while executing a query.","Q_Score":9,"Tags":"python,mysql,exception","A_Id":1118129,"CreationDate":"2009-07-13T05:31:00.000","Title":"Python MySQLdb exceptions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of \"penalty\" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine.\nI have a basic understanding of Google's data store, so please assume I will choose a column based database for my \"stand-alone\" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor.\nAlso, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture.\nThanks","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1845,"Q_Id":1118761,"Users Score":1,"Answer":"There are a few things that you can't do on App Engine that you can do on your own server, like uploading files to the filesystem. On App Engine you have to store uploads in the datastore, which can cause a few problems.\nOther than that it should be fine from the presentation side. There are a number of other little things that are better on your own dedicated server, but I think eventually a lot of those will be available in App Engine.","Q_Score":9,"Tags":"python,django,google-app-engine","A_Id":1118790,"CreationDate":"2009-07-13T10:40:00.000","Title":"Migrating Django Application to Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm developing a web application and considering Django, Google App Engine, and several other options. 
I wondered what kind of \"penalty\" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine.\nI have a basic understanding of Google's data store, so please assume I will choose a column based database for my \"stand-alone\" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor.\nAlso, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture.\nThanks","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1845,"Q_Id":1118761,"Users Score":8,"Answer":"Most (all?) of Django is available in GAE, so your main task is to avoid basing your designs around a reliance on anything from Django or the Python standard libraries which is not available on GAE.\nYou've identified the glaring difference, which is the database, so I'll assume you're on top of that. Another difference is the tie-in to Google Accounts and hence that if you want, you can do a fair amount of access control through the app.yaml file rather than in code. You don't have to use any of that, though, so if you don't envisage switching to Google Accounts when you switch to GAE, no problem.\nI think the differences in the standard libraries can mostly be deduced from the fact that GAE has no I\/O and no C-accelerated libraries unless explicitly stated, and my experience so far is that things I've expected to be there, have been there. I don't know Django and haven't used it on GAE (apart from templates), so I can't comment on that.\nPersonally I probably wouldn't target LAMP (where P = Django) with the intention of migrating to GAE later. I'd develop for both together, and try to ensure if possible that the differences are kept to the very top (configuration) and the very bottom (data model). The GAE version doesn't necessarily have to be perfect, as long as you know how to make it perfect should you need it.\nIt's not guaranteed that this is faster than writing and then porting, but my guess is it normally will be. The easiest way to spot any differences is to run the code, rather than relying on not missing anything in the GAE docs, so you'll likely save some mistakes that need to be unpicked. The Python SDK is a fairly good approximation to the real App Engine, so all or most of your tests can be run locally most of the time.\nOf course if you eventually decide not to port then you've done unnecessary work, so you have to think about the probability of that happening, and whether you'd consider the GAE development to be a waste of your time if it's not needed.","Q_Score":9,"Tags":"python,django,google-app-engine","A_Id":1119377,"CreationDate":"2009-07-13T10:40:00.000","Title":"Migrating Django Application to Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using Python 2.6.1, MySQL4.0 in Windows platform and I have successfully installed MySQLdb.\nDo we need to set any path for my python code and MySQLdb to successful run my application? 
Without setting any paths (in my code I am importing MySQLdb), I get a No module named MySQLdb error and I am not able to move further.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":770,"Q_Id":1136676,"Users Score":0,"Answer":"How did you install MySQLdb? This sounds like your MySQLdb module is not within your PYTHONPATH, which indicates some inconsistency between how you installed Python itself and how you installed MySQLdb.\nOr did you perhaps install a MySQLdb binary that was not targeted at your version of Python? Modules are normally put into version-dependent folders.","Q_Score":1,"Tags":"python,mysql","A_Id":1136692,"CreationDate":"2009-07-16T10:20:00.000","Title":"is it required to give path after installation MySQL db for Python 2.6","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Has anyone used SQLAlchemy in addition to Django's ORM?\nI'd like to use Django's ORM for object manipulation and SQLalchemy for complex queries (like those that require left outer joins). \nIs it possible?\nNote: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready.","AnswerCount":5,"Available Count":3,"Score":0.1586485043,"is_accepted":false,"ViewCount":12511,"Q_Id":1154331,"Users Score":4,"Answer":"Jacob Kaplan-Moss admitted to typing \"import sqlalchemy\" from time to time. I may write a queryset adapter for sqlalchemy results in the not too distant future.","Q_Score":23,"Tags":"python,database,django,sqlalchemy","A_Id":1308718,"CreationDate":"2009-07-20T15:44:00.000","Title":"SQLAlchemy and django, is it production ready?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Has anyone used SQLAlchemy in addition to Django's ORM?\nI'd like to use Django's ORM for object manipulation and SQLalchemy for complex queries (like those that require left outer joins). \nIs it possible?\nNote: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":12511,"Q_Id":1154331,"Users Score":19,"Answer":"What I would do:\n\nDefine the schema in the Django ORM and let it write the db via syncdb. You get the admin interface.\nWhen view1 needs a complex join:\n\n def view1(request):\n     import sqlalchemy\n     data = sqlalchemy.complex_join_magic(...)\n     ...\n     payload = {'data': data, ...}\n     return render_to_response('template', payload, ...)","Q_Score":23,"Tags":"python,database,django,sqlalchemy","A_Id":1155407,"CreationDate":"2009-07-20T15:44:00.000","Title":"SQLAlchemy and django, is it production ready?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Has anyone used SQLAlchemy in addition to Django's ORM?\nI'd like to use Django's ORM for object manipulation and SQLalchemy for complex queries (like those that require left outer joins). 
\nIs it possible?\nNote: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready.","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":12511,"Q_Id":1154331,"Users Score":7,"Answer":"I've done it before and it's fine. Use the SQLAlchemy feature where it can read in the schema so you don't need to declare your fields twice.\nYou can grab the connection settings from the settings, the only problem is stuff like the different flavours of postgres driver (e.g. with psyco and without).\nIt's worth it as the SQLAlchemy stuff is just so much nicer for stuff like joins.","Q_Score":23,"Tags":"python,database,django,sqlalchemy","A_Id":3555602,"CreationDate":"2009-07-20T15:44:00.000","Title":"SQLAlchemy and django, is it production ready?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm modeling a database relationship in django, and I'd like to have other opinions. The relationship is kind of a two-to-many relationship. For example, a patient can have two physicians: an attending and a primary. A physician obviously has many patients.\nThe application does need to know which one is which; further, there are cases where an attending physician of one patient can be the primary of another. Lastly, both attending and primary are often the same. \nAt first, I was thinking two foreign keys from the patient table into the physician table. However, I think django disallows this. Additionally, on second thought, this is really a many(two)-to-many relationship.\nTherefore, how can I model this relationship with django while maintaining the physician type as it pertains to a patient? Perhaps I will need to store the physician type on the many-to-many association table?\nThanks,\nPete","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":358,"Q_Id":1162877,"Users Score":0,"Answer":"I agree with your conclusion. I would store the physician type in the many-to-many linking table.","Q_Score":2,"Tags":"python,database,django,database-design","A_Id":1162884,"CreationDate":"2009-07-22T03:05:00.000","Title":"How would you model this database relationship?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to get the data from a User site. If I would work on their site, I would VPN and then remote into their server using username and password.\nI thought getting data into my local machine than getting into their server where my work is not secured.\nSo, I thought of using Ironpython to get data from the remote server. \nSo, I still VPN'd to their domain, but when I was using the ADO.net connection string to connect to their database, it does not work.\nconnection string:\nData Source=xx.xx.xx.xx;Initial Catalog=;User ID=;Password=;\nand the error says:\nlogin failed for \nWell, one thing to notice is: when i remote into their server, I provide username and password once. Then when i log on to SQL Server, I dont have to provide username and password. It s windows authenticated. So, in the above connection string, I used the same username and password that I use while remoting in. I hope this gives an idea to ya'll what i might be missing. 
\nHelp appreciated!!!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1035,"Q_Id":1169668,"Users Score":0,"Answer":"Data Source=xx.xx.xx.xx;Initial Catalog=;Integrated Security=\"SSPI\"\nHow are you connecting to SQL. Do you use sql server authentication or windows authentication? Once you know that, then if you use a DNS name or IP that will go to the server correctly, you have the instance name correct AND you have permissions on the account to access the server you can connect.\nHeres a quick test. From the system you are using to connect to your SQL Server with, can you open the SQL Server management studio and connect to the remote database. If you can, tell me what settings you needed to do that, and I'll give you a connection string that will work.","Q_Score":0,"Tags":".net,ado.net,ironpython,connection-string","A_Id":1169704,"CreationDate":"2009-07-23T04:52:00.000","Title":"need help on ADO.net connection string","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to get the data from a User site. If I would work on their site, I would VPN and then remote into their server using username and password.\nI thought getting data into my local machine than getting into their server where my work is not secured.\nSo, I thought of using Ironpython to get data from the remote server. \nSo, I still VPN'd to their domain, but when I was using the ADO.net connection string to connect to their database, it does not work.\nconnection string:\nData Source=xx.xx.xx.xx;Initial Catalog=;User ID=;Password=;\nand the error says:\nlogin failed for \nWell, one thing to notice is: when i remote into their server, I provide username and password once. Then when i log on to SQL Server, I dont have to provide username and password. It s windows authenticated. So, in the above connection string, I used the same username and password that I use while remoting in. I hope this gives an idea to ya'll what i might be missing. \nHelp appreciated!!!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1035,"Q_Id":1169668,"Users Score":0,"Answer":"Is that user granted login abilities in SQL?\nIf using SQL 2005, you go to Security->Logins\nDouble click the user, and click Status.\n------Edit ----\nCreate a file on your desktop called TEST.UDL. Double click it.\nsetup your connection until it works.\nView the UDL in notepad, there's your connection string. 
Though I think you take out the first part which includes provider info.","Q_Score":0,"Tags":".net,ado.net,ironpython,connection-string","A_Id":1169755,"CreationDate":"2009-07-23T04:52:00.000","Title":"need help on ADO.net connection string","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to make a web app that will manage my Mercurial repositories for me.\nI want it so that when I tell it to load repository X:\n\nConnect to a MySQL server and make sure X exists.\nCheck if the user is allowed to access the repository.\nIf above is true, get the location of X from a mysql server.\nRun a hgweb cgi script (python) containing the path of the repository.\n\nHere is the problem, I want to: take the hgweb script, modify it, and run it.\nBut I do not want to: take the hgweb script, modify it, write it to a file and redirect there.\nI am using Apache to run the httpd process.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":940,"Q_Id":1185867,"Users Score":0,"Answer":"As far as you question, no, you're not likely to get php to execute a modified script without writing it somewhere, whether that's a file on the disk, a virtual file mapped to ram, or something similar.\nIt sounds like you might be trying to pound a railroad spike with a twig. If you're to the point where you're filtering access based on user permissions stored in MySQL, have you looked at existing HG solutions to make sure there isn't something more applicable than hgweb? It's really built for doing exactly one thing well, and this is a fair bit beyond it's normal realm.\nI might suggest looking into apache's native authentication as a more convenient method for controlling access to repositories, then just serve the repo without modifying the script.","Q_Score":0,"Tags":"php,python,mercurial,cgi","A_Id":1185909,"CreationDate":"2009-07-26T23:24:00.000","Title":"How can I execute CGI files from PHP?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering the idea of creating a persistent storage like a dbms engine, what would be the benefits to create a custom binary format over directly cPickling the object and\/or using the shelve module?","AnswerCount":7,"Available Count":4,"Score":0.0285636566,"is_accepted":false,"ViewCount":1116,"Q_Id":1188585,"Users Score":1,"Answer":"The potential advantages of a custom format over a pickle are:\n\nyou can selectively get individual objects, rather than having to incarnate the full set of objects\nyou can query subsets of objects by properties, and only load those objects that match your criteria\n\nWhether these advantages materialize depends on how you design the storage, of course.","Q_Score":5,"Tags":"python,database,data-structures,persistence","A_Id":1188711,"CreationDate":"2009-07-27T14:45:00.000","Title":"What are the benefits of not using cPickle to create a persistent storage for data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering the idea of creating a persistent storage like 
a dbms engine, what would be the benefits to create a custom binary format over directly cPickling the object and\/or using the shelve module?","AnswerCount":7,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1116,"Q_Id":1188585,"Users Score":10,"Answer":"Pickling is a two-face coin.\nOn one side, you have a way to store your object in a very easy way. Just four lines of code and you pickle. You have the object exactly as it is.\nOn the other side, it can become a compatibility nightmare. You cannot unpickle objects if they are not defined in your code, exactly as they were defined when pickled. This strongly limits your ability to refactor the code, or rearrange stuff in your modules.\nAlso, not everything can be pickled, and if you are not strict on what gets pickled and the client of your code has full freedom of including any object, sooner or later it will pass something unpicklable to your system, and the system will go boom.\nBe very careful about its use. there's no better definition of quick and dirty.","Q_Score":5,"Tags":"python,database,data-structures,persistence","A_Id":1188704,"CreationDate":"2009-07-27T14:45:00.000","Title":"What are the benefits of not using cPickle to create a persistent storage for data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering the idea of creating a persistent storage like a dbms engine, what would be the benefits to create a custom binary format over directly cPickling the object and\/or using the shelve module?","AnswerCount":7,"Available Count":4,"Score":0.057080742,"is_accepted":false,"ViewCount":1116,"Q_Id":1188585,"Users Score":2,"Answer":"Note that not all objects may be directly pickled - only basic types, or objects that have defined the pickle protocol.\nUsing your own binary format would allow you to potentially store any kind of object.\nJust for note, Zope Object DB (ZODB) is following that very same approach, storing objects with the Pickle format. You may be interested in getting their implementations.","Q_Score":5,"Tags":"python,database,data-structures,persistence","A_Id":1188679,"CreationDate":"2009-07-27T14:45:00.000","Title":"What are the benefits of not using cPickle to create a persistent storage for data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering the idea of creating a persistent storage like a dbms engine, what would be the benefits to create a custom binary format over directly cPickling the object and\/or using the shelve module?","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1116,"Q_Id":1188585,"Users Score":0,"Answer":"Will you ever need to process data from untrusted sources? 
If so, you should know that the pickle format is actually a virtual machine that is capable of executing arbitrary code on behalf of the process doing the unpickling.","Q_Score":5,"Tags":"python,database,data-structures,persistence","A_Id":1189928,"CreationDate":"2009-07-27T14:45:00.000","Title":"What are the benefits of not using cPickle to create a persistent storage for data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app that handle sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (Simplifying, but thats what matters).\nI plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need a intranet facing interface to display the results\/graphs etc. for now, csv output based on some input parameters will suffice.\nWhat is the best way to store the data and manipulate? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed\/performance as the number of datasets grows as there will be inter-dataset row level math that needs to be done.\n-Has anyone had experience going down either path and what are the pitfalls\/gotchas that I should be aware of?\n-What are the reasons why one should be chosen over another? \n-Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\n-Is there any project or framework out there to help with this type of task? \n-Edit-\nMore info:\nThe rows will all read all in order, BUT I may need to do some resampling\/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table\/memory somewhere to hold the interpolated\/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample\/interploate to a common higher length) or just regenerate it each time its needed.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":581,"Q_Id":1241758,"Users Score":0,"Answer":"Are you likely to need all rows in order or will you want only specific known rows?\nIf you need to read all the data there isn't much advantage to having it in a database.\nedit: If the code fits in memory then a simple CSV is fine. 
Plain text data formats are always easier to deal with than opaque ones if you can use them.","Q_Score":0,"Tags":"python,database,database-design,file-io","A_Id":1241784,"CreationDate":"2009-08-06T21:58:00.000","Title":"Store data series in file or database if I want to do row level math operations?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app that handle sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (Simplifying, but thats what matters).\nI plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need a intranet facing interface to display the results\/graphs etc. for now, csv output based on some input parameters will suffice.\nWhat is the best way to store the data and manipulate? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed\/performance as the number of datasets grows as there will be inter-dataset row level math that needs to be done.\n-Has anyone had experience going down either path and what are the pitfalls\/gotchas that I should be aware of?\n-What are the reasons why one should be chosen over another? \n-Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\n-Is there any project or framework out there to help with this type of task? \n-Edit-\nMore info:\nThe rows will all read all in order, BUT I may need to do some resampling\/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table\/memory somewhere to hold the interpolated\/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample\/interploate to a common higher length) or just regenerate it each time its needed.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":581,"Q_Id":1241758,"Users Score":0,"Answer":"What matters most if all data will fit simultaneously into memory. From the size that you give, it seems that this is easily the case (a few megabytes at worst).\nIf so, I would discourage using a relational database, and do all operations directly in Python. 
Depending on what other processing you need, I would probably rather use binary pickles, than CSV.","Q_Score":0,"Tags":"python,database,database-design,file-io","A_Id":1241787,"CreationDate":"2009-08-06T21:58:00.000","Title":"Store data series in file or database if I want to do row level math operations?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app that handle sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (Simplifying, but thats what matters).\nI plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need a intranet facing interface to display the results\/graphs etc. for now, csv output based on some input parameters will suffice.\nWhat is the best way to store the data and manipulate? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed\/performance as the number of datasets grows as there will be inter-dataset row level math that needs to be done.\n-Has anyone had experience going down either path and what are the pitfalls\/gotchas that I should be aware of?\n-What are the reasons why one should be chosen over another? \n-Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\n-Is there any project or framework out there to help with this type of task? \n-Edit-\nMore info:\nThe rows will all read all in order, BUT I may need to do some resampling\/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table\/memory somewhere to hold the interpolated\/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample\/interploate to a common higher length) or just regenerate it each time its needed.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":581,"Q_Id":1241758,"Users Score":2,"Answer":"\"I plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input.\"\nThis is the standard use case for a data warehouse star-schema design. Buy Kimball's The Data Warehouse Toolkit. Read (and understand) the star schema before doing anything else.\n\"What is the best way to store the data and manipulate?\" \nA Star Schema.\nYou can implement this as flat files (CSV is fine) or RDBMS. If you use flat files, you write simple loops to do the math. If you use an RDBMS you write simple SQL and simple loops. \n\"My main concern is speed\/performance as the number of datasets grows\" \nNothing is as fast as a flat file. Period. RDBMS is slower. 
\nThe RDBMS value proposition stems from SQL being a relatively simple way to specify SELECT SUM(), COUNT() FROM fact JOIN dimension WHERE filter GROUP BY dimension attribute. Python isn't as terse as SQL, but it's just as fast and just as flexible. Python competes against SQL.\n\"pitfalls\/gotchas that I should be aware of?\"\nDB design. If you don't get the star schema and how to separate facts from dimensions, all approaches are doomed. Once you separate facts from dimensions, all approaches are approximately equal.\n\"What are the reasons why one should be chosen over another?\"\nRDBMS slow and flexible. Flat files fast and (sometimes) less flexible. Python levels the playing field.\n\"Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\"\nStar Schema: central fact table surrounded by dimension tables. Nothing beats it.\n\"Is there any project or framework out there to help with this type of task?\"\nNot really.","Q_Score":0,"Tags":"python,database,database-design,file-io","A_Id":1245169,"CreationDate":"2009-08-06T21:58:00.000","Title":"Store data series in file or database if I want to do row level math operations?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for the simplest way of using python and SQLAlchemy to produce some XML for a jQuery based HTTP client. Right now I'm using mod_python's CGI handler but I'm unhappy with the fact that I can't persist stuff like the SQLAlchemy session.\nThe mod_python publisher handler that is apparently capable of persisting stuff does not allow requests with XML content type (as used by jQuery's ajax stuff) so I can't use it.\nWhat other options are there?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1222,"Q_Id":1272325,"Users Score":2,"Answer":"You could always write your own handler, which is the way mod_python is normally intended to be used. You would have to set some HTTP headers (and you could have a look at the publisher handler's source code for inspiration on that), but otherwise I don't think it's much more complicated than what you've been trying to do.\nThough as long as you're at it, I would suggest trying mod_wsgi instead of mod_python, which is probably eventually going to supersede mod_python. WSGI is a Python standard for writing web applications.","Q_Score":2,"Tags":"python,cgi,mod-python","A_Id":1272579,"CreationDate":"2009-08-13T14:29:00.000","Title":"Alternatives to mod_python's CGI handler","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What I really like about Entity framework is its drag and drop way of making up the whole model layer of your application. You select the tables, it joins them and you're done. 
If you update the database scheda, right click -> update and you're done again.\nThis seems to me miles ahead the competiting ORMs, like the mess of XML (n)Hibernate requires or the hard-to-update Django Models.\nWithout concentrating on the fact that maybe sometimes more control over the mapping process may be good, are there similar one-click (or one-command) solutions for other (mainly open source like python or php) programming languages or frameworks?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":433,"Q_Id":1283646,"Users Score":0,"Answer":"I have heard iBattis is good. A few companies fall back to iBattis when their programmer teams are not capable of understanding Hibernate (time issue).\nPersonally, I still like Linq2Sql. Yes, the first time someone needs to delete and redrag over a table seems like too much work, but it really is not. And the time that it doesn't update your class code when you save is really a pain, but you simply control-a your tables and drag them over again. Total remakes are very quick and painless. The classes it creates are extremely simple. You can even create multiple table entities if you like with SPs for CRUD.\nLinking SPs to CRUD is similar to EF: You simply setup your SP with the same parameters as your table, then drag it over your table, and poof, it matches the data types.\nA lot of people go out of their way to take IQueryable away from the repository, but you can limit what you link in linq2Sql, so IQueryable is not too bad.\nCome to think of it, I wonder if there is a way to restrict the relations (and foreign keys).","Q_Score":2,"Tags":"php,python,entity-framework,open-source","A_Id":1325558,"CreationDate":"2009-08-16T07:03:00.000","Title":"Entity Framwework-like ORM NOT for .NET","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I installed stackless pyton 2.6.2 after reading several sites that said its fully compatible with vanilla python. After installing i found that my django applications do not work any more.\nI did reinstall django (1.1) again and now im kind of lost. The error that i get is 500:\nInternal Server Error\nThe server encountered an internal error or misconfiguration and was unable to complete your request.\nPlease contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.\nMore information about this error may be available in the server error log.\nApache\/2.2.11 (Ubuntu) DAV\/2 PHP\/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_python\/3.3.1 Python\/2.6.2 mod_ruby\/1.2.6 Ruby\/1.8.7(2008-08-11) mod_ssl\/2.2.11 OpenSSL\/0.9.8g Server at 127.0.0.1 Port 80\nWhat else, could or should i do?\nEdit: From 1st comment i understand that the problem is not in django but mod_python & apache? so i edited my question title.\nEdit2: I think something is wrong with some paths setup. 
I tried going from mod_python to mod_wsgi, managed to finally set it up correctly only to get next error:\n[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\n[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb\nAlan","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":672,"Q_Id":1283856,"Users Score":2,"Answer":"When you install a new version of Python (whether stackless or not) you also need to reinstall all of the third party modules you need -- either from sources, which you say you don't want to do, or from packages built for the new version of Python you've just installed. \nSo, check the repository from which you installed Python 2.6.2 with aptitude: does it also have versions for that specific Python of mod_python, mysqldb, django, and any other third party stuff you may need? There really is no \"silver bullet\" for package management and I know of no \"sumo distribution\" of Python bundling all the packages you could ever possibly need (if there were, it would have to be many 10s of GB;-).","Q_Score":0,"Tags":"python,mod-wsgi,mod-python,stackless,python-stackless","A_Id":1284586,"CreationDate":"2009-08-16T09:14:00.000","Title":"Stackless python stopped mod_python\/apache from working","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to write a python script that populates a database with some information. One of the columns in my table is a BLOB that I would like to save a file to for each entry.\nHow can I read the file (binary) and insert it into the DB using python? Likewise, how can I retrieve it and write that file back to some arbitrary location on the hard drive?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25537,"Q_Id":1294385,"Users Score":0,"Answer":"You can insert and read BLOBs from a DB like every other column type. From the database API's view there is nothing special about BLOBs.","Q_Score":15,"Tags":"python,mysql,file-io,blob","A_Id":1294488,"CreationDate":"2009-08-18T14:50:00.000","Title":"How to insert \/ retrieve a file stored as a BLOB in a MySQL db using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on integrating with several music players. At the moment my favorite is exaile.\nIn the new version they are migrating the database format from SQLite3 to an internal Pickle format. I wanted to know if there is a way to access pickle format files without having to reverse engineer the format by hand.\nI know there is the cPickle python module, but I am unaware if it is callable directly from C.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":27544,"Q_Id":1296162,"Users Score":3,"Answer":"You can embed a Python interpreter in a C program, but I think that the easiest solution is to write a Python script that converts \"pickles\" in another format, e.g. 
an SQLite database.","Q_Score":23,"Tags":"python,c","A_Id":1296188,"CreationDate":"2009-08-18T20:02:00.000","Title":"How can I read a python pickle database\/file from C?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running Django through mod_wsgi and Apache (2.2.8) on Ubuntu 8.04.\nI've been running Django on this setup for about 6 months without any problems. Yesterday, I moved my database (postgres 8.3) to its own server, and my Django site started refusing to load (the browser spinner would just keep spinning).\nIt works for about 10 mintues, then just stops. Apache is still able to serve static files. Just nothing through Django.\nI've checked the apache error logs, and I don't see any entries that could be related. I'm not sure if this is a WSGI, Django, Apache, or Postgres issue?\nAny ideas?\nThanks for your help!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":435,"Q_Id":1300213,"Users Score":0,"Answer":"Found it! I'm using eventlet in some other code and I imported one of my modules into a django model. So eventlet was taking over and putting everything to \"sleep\".","Q_Score":1,"Tags":"python,django,postgresql,apache2,mod-wsgi","A_Id":2368542,"CreationDate":"2009-08-19T14:09:00.000","Title":"Apache\/Django freezing after a few requests","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have been working on a website using mod_python, python, and SQL Alchemy when I ran into a strange problem: When I query the database for all of the records, it returns the correct result set; however, when I refresh the page, it returns me a result set with that same result set appended to it. I get more result sets \"stacked\" on top of eachother as I refresh the page more.\nFor example:\nFirst page load: 10 results\nSecond page load: 20 results (two of each)\nThird page load: 30 results (three of each)\netc...\nIs this some underlying problem with mod_python? I don't recall running into this when using mod_wsgi.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":192,"Q_Id":1301000,"Users Score":0,"Answer":"Not that I've ever heard of, but it's impossible to tell without some code to look at.\nMaybe you initialised your result set list as a global, or shared member, and then appended results to it when the application was called without resetting it to empty? A classic way of re-using lists accidentally is to put one in a default argument value to a function. \n(The same could happen in mod_wsgi of course.)","Q_Score":0,"Tags":"python,sqlalchemy,mod-python","A_Id":1301029,"CreationDate":"2009-08-19T16:08:00.000","Title":"mod_python problem?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql \"Idle In Transaction\").\nI looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). 
I tried using the various transaction decorators on my functions, no luck.\nI switched to manual transaction management and did the rollback manually, that worked, but still left the processes as \"Idle\".\nSo then I called connection.close(), and all is well.\nBut I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10095,"Q_Id":1303654,"Users Score":111,"Answer":"After weeks of testing and reading the Django source code, I've found the answer to my own question:\nTransactions\nDjango's default autocommit behavior still holds true for my threaded function. However, it states in the Django docs:\n\nAs soon as you perform an action that needs to write to the database, Django produces the INSERT\/UPDATE\/DELETE statements and then does the COMMIT. There\u2019s no implicit ROLLBACK.\n\nThat last sentence is very literal. It DOES NOT issue a ROLLBACK command unless something in Django has set the dirty flag. Since my function was only doing SELECT statements it never set the dirty flag and didn't trigger a COMMIT.\nThis goes against the fact that PostgreSQL thinks the transaction requires a ROLLBACK because Django issued a SET command for the timezone. In reviewing the logs, I threw myself off because I kept seeing these ROLLBACK statements and assumed Django's transaction management was the source. Turns out it's not, and that's OK.\nConnections\nThe connection management is where things do get tricky. It turns out Django uses signals.request_finished.connect(close_connection) to close the database connection it normally uses. Since nothing normally happens in Django that doesn't involve a request, you take this behavior for granted.\nIn my case, though, there was no request because the job was scheduled. No request means no signal. No signal means the database connection was never closed.\nGoing back to transactions, it turns out that simply issuing a call to connection.close() in the absence of any changes to the transaction management issues the ROLLBACK statement in the PostgreSQL log that I'd been looking for.\nSolution\nThe solution is to allow the normal Django transaction management to proceed as normal and to simply close the connection one of three ways:\n\nWrite a decorator that closes the connection and wrap the necessary functions in it.\nHook into the existing request signals to have Django close the connection.\nClose the connection manually at the end of the function.\n\nAny of those three will (and do) work.\nThis has driven me crazy for weeks. I hope this helps someone else in the future!","Q_Score":59,"Tags":"python,database,django,multithreading,transactions","A_Id":1346401,"CreationDate":"2009-08-20T02:26:00.000","Title":"Threaded Django task doesn't automatically handle transactions or db connections?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed.\nIs there any way to pull all of the number i.e. 
including the leading zero?","AnswerCount":5,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1789,"Q_Id":1308038,"Users Score":4,"Answer":"There's almost certainly something in either your query, your table definition, or an ORM you're using that thinks the column is numeric and is converting the results to integers. You'll have to define the column as a string (everywhere!) if you want to preserve leading zeroes.\nEdit: ZEROFILL on the server isn't going to cut it. Python treats integer columns as Python integers, and those don't have leading zeroes, period. You'll either have to change the column type to VARCHAR, use something like \"%02d\" % val in Python, or put a CAST(my_column AS VARCHAR) in the query.","Q_Score":1,"Tags":"python,mysql","A_Id":1308060,"CreationDate":"2009-08-20T18:36:00.000","Title":"Python - MYSQL - Select leading zeros","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to use the Python version of App Engine but rather than write my code specifically for the Google Data Store, I'd like to create my models with a generic Python ORM that could be attached to Big Table, or, if I prefer, a regular database at some later time. Is there any Python ORM such as SQLAlchemy that would allow this?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":5333,"Q_Id":1308376,"Users Score":2,"Answer":"Nowadays they do since Google has launched Cloud SQL","Q_Score":11,"Tags":"python,google-app-engine,sqlalchemy,orm","A_Id":11325656,"CreationDate":"2009-08-20T19:39:00.000","Title":"Do any Python ORMs (SQLAlchemy?) work with Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I need to insert a python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but that would grant me the chance only to retrieve it through python. Alternative is to serialize the tuple to XML and store the XML string.\nWhat solutions do you think would be also possible, with an eye toward storing other stuff (e.g. a list, or an object). Recovering it from other languages is a plus.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2884,"Q_Id":1313000,"Users Score":3,"Answer":"Make another table and do one-to-many. Don't try to cram a programming language feature into a database as-is if you can avoid it.\nIf you absolutely need to be able to store an object down the line, your options are a bit more limited. YAML is probably the best balance of human-readable and program-readable, and it has some syntax for specifying classes you might be able to use.","Q_Score":0,"Tags":"python,mysql","A_Id":1313013,"CreationDate":"2009-08-21T16:39:00.000","Title":"Inserting python tuple in a MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to insert a python tuple (of floats) into a MySQL database. 
In principle I could pickle it and insert it as a string, but that would grant me the chance only to retrieve it through python. Alternative is to serialize the tuple to XML and store the XML string.\nWhat solutions do you think would be also possible, with an eye toward storing other stuff (e.g. a list, or an object). Recovering it from other languages is a plus.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2884,"Q_Id":1313000,"Users Score":2,"Answer":"I'd look at serializing it to JSON, using the simplejson package, or the built-in json package in python 2.6.\nIt's simple to use in python, importable by practically every other language, and you don't have to make all of the \"what tag should I use? what attributes should this have?\" decisions that you might in XML.","Q_Score":0,"Tags":"python,mysql","A_Id":1313016,"CreationDate":"2009-08-21T16:39:00.000","Title":"Inserting python tuple in a MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"To deploy a site with Python\/Django\/MySQL I had to do these on the server (RedHat Linux):\n\nInstall MySQLPython\nInstall ModPython\nInstall Django (using python setup.py install)\nAdd some directives on httpd.conf file (or use .htaccess)\n\nBut, when I deployed another site with PHP (using CodeIgniter) I had to do nothing. I faced some problems while deploying a Django project on a shared server. Now, my questions are:\n\nCan the deployment process of Django project be made easier?\nAm I doing too much?\nCan some of the steps be omitted?\nWhat is the best way to deploy django site on a shared server?","AnswerCount":7,"Available Count":1,"Score":0.0285636566,"is_accepted":false,"ViewCount":1978,"Q_Id":1313989,"Users Score":1,"Answer":"You didn't have to do anything when deploying a PHP site because your hosting provider had already installed it. Web hosts which support Django typically install and configure it for you.","Q_Score":3,"Tags":"python,django","A_Id":1314005,"CreationDate":"2009-08-21T20:11:00.000","Title":"How can Django projects be deployed with minimal installation work?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject\/SQLAlchemy). 
I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell.\nRather than translate a row from the database into an object:\n\neach table is represented by a class\na row is retrieved as a dict\nan object representing a cursor provides access to a table like so:\ncursor.mytable.get_by_ids(low, high)\nremoving means setting the time_of_removal to the current time\n\nSo essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row.\nType mapping is trivial because each dict (row) being a first class object in python\/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types.\nIf you see any potential problems with going down this road, please let me know. Thanks.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":569,"Q_Id":1319585,"Users Score":8,"Answer":"That doesn't do away with the need for an ORM. That is an ORM. In which case, why reinvent the wheel?\nIs there a compelling reason you're trying to avoid using an established ORM?","Q_Score":3,"Tags":"python,sqlalchemy,sqlobject","A_Id":1319598,"CreationDate":"2009-08-23T21:15:00.000","Title":"Is this a good approach to avoid using SQLAlchemy\/SQLObject?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject\/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell.\nRather than translate a row from the database into an object:\n\neach table is represented by a class\na row is retrieved as a dict\nan object representing a cursor provides access to a table like so:\ncursor.mytable.get_by_ids(low, high)\nremoving means setting the time_of_removal to the current time\n\nSo essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row.\nType mapping is trivial because each dict (row) being a first class object in python\/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types.\nIf you see any potential problems with going down this road, please let me know. Thanks.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":569,"Q_Id":1319585,"Users Score":2,"Answer":"You will still be using SQLAlchemy. ResultProxy is actually a dictionary once you go for .fetchmany() or similar.\nUse SQLAlchemy as a tool that makes managing connections easier, as well as executing statements. 
Documentation is pretty much separated in sections, so you will be reading just the part that you need.","Q_Score":3,"Tags":"python,sqlalchemy,sqlobject","A_Id":1319662,"CreationDate":"2009-08-23T21:15:00.000","Title":"Is this a good approach to avoid using SQLAlchemy\/SQLObject?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a python script that will be doing some processing on text files. As part of that process, i need to import each line of the tab-separated file into a local MS SQL Server (2008) table. I am using pyodbc and I know how to do this. However, I have a question about the best way to execute it.\nI will be looping through the file, creating a cursor.execute(myInsertSQL) for each line of the file. Does anyone see any problems waiting to commit the statements until all records have been looped (i.e. doing the commit() after the loop and not inside the loop after each individual execute)? The reason I ask is that some files will have upwards of 5000 lines. I didn't know if trying to \"save them up\" and committing all 5000 at once would cause problems.\nI am fairly new to python, so I don't know all of these issues yet. \nThanks.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3467,"Q_Id":1325481,"Users Score":0,"Answer":"If I understand what you are doing, Python is not going to be a problem. Executing a statement inside a transaction does not create cumulative state in Python. It will do so only at the database server itself.\nWhen you commit you will need to make sure the commit occurred, since having a large batch commit may conflict with intervening changes in the database. If the commit fails, you will have to re-run the batch again.\nThat's the only problem that I am aware of with large batches and Python\/ODBC (and it's not even really a Python problem, since you would have that problem regardless.)\nNow, if you were creating all the SQL in memory, and then looping through the memory-representation, that might make more sense. Still, 5000 lines of text on a modern machine is really not that big of a deal. If you start needing to process two orders of magnitude more, you might need to rethink your process.","Q_Score":1,"Tags":"python,database,odbc,commit,bulkinsert","A_Id":1325524,"CreationDate":"2009-08-25T00:30:00.000","Title":"Importing a text file into SQL Server in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in monitoring some objects. I expect to get about 10000 data points every 15 minutes. (Maybe not at first, but this is the 'general ballpark'). I would also like to be able to get daily, weekly, monthly and yearly statistics. It is not critical to keep the data in the highest resolution (15 minutes) for more than two months.\nI am considering various ways to store this data, and have been looking at a classic relational database, or at a schemaless database (such as SimpleDB). \nMy question is, what is the best way to go along doing this? I would very much prefer an open-source (and free) solution to a proprietary costly one. 
\nSmall note: I am writing this application in Python.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":13739,"Q_Id":1334813,"Users Score":1,"Answer":"plain text files? It's not clear what your 10k data points per 15 minutes translates to in terms of bytes, but in any way text files are easier to store\/archive\/transfer\/manipulate and you can inspect the directly, just by looking at. fairly easy to work with Python, too.","Q_Score":17,"Tags":"python,database,statistics,time-series,schemaless","A_Id":1335132,"CreationDate":"2009-08-26T13:47:00.000","Title":"What is the best open source solution for storing time series data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Django 1.1 with Mysql 5.* and MyISAM tables.\nSome of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the \"Preparing\" phase locking all other queries out.\nI'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down.\nHow do I avoid this? Can I set maximum query times?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4741,"Q_Id":1353206,"Users Score":0,"Answer":"You shouldn't write queries like that, at least not to run against your live database. Mysql has a \"slow queries\" pararameter which you can use to identify the queries that are killing you. Most of the time, these slow queries are either buggy or can be speeded up by defining a new index or two.","Q_Score":7,"Tags":"python,mysql,django,timeout","A_Id":1500947,"CreationDate":"2009-08-30T06:10:00.000","Title":"Django: How can you stop long queries from killing your database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Django 1.1 with Mysql 5.* and MyISAM tables.\nSome of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the \"Preparing\" phase locking all other queries out.\nI'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down.\nHow do I avoid this? Can I set maximum query times?","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":4741,"Q_Id":1353206,"Users Score":1,"Answer":"Unfortunately MySQL doesn't allow you an easy way to avoid this. A common method is basically to write a script that checks all running processes every X seconds (based on what you think is \"long\") and kill ones it sees are running too long. You can at least get some basic diagnostics, however, by setting log_slow_queries in MySQL which will write all queries that take longer than 10 seconds into a log. 
If that's too long for what you regard as \"slow\" for your purposes, you can set long_query_time to a value other than 10 to change the threshold.","Q_Score":7,"Tags":"python,mysql,django,timeout","A_Id":1353862,"CreationDate":"2009-08-30T06:10:00.000","Title":"Django: How can you stop long queries from killing your database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Django 1.1 with Mysql 5.* and MyISAM tables.\nSome of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the \"Preparing\" phase locking all other queries out.\nI'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down.\nHow do I avoid this? Can I set maximum query times?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4741,"Q_Id":1353206,"Users Score":0,"Answer":"Do you know what the queries are? Maybe you could optimise the SQL or put some indexes on your tables?","Q_Score":7,"Tags":"python,mysql,django,timeout","A_Id":1353366,"CreationDate":"2009-08-30T06:10:00.000","Title":"Django: How can you stop long queries from killing your database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a problem reading a txt file to insert in the mysql db table, te sniped of this code:\nfile contains the in first line: \"aclaraci\u00f3n\"\n\narchivo = open('file.txt',\"r\") \n for line in archivo.readlines():\n ....body = body + line\n model = MyModel(body=body)\n model.save()\n\ni get a DjangoUnicodeDecodeError:\n'utf8' codec can't decode bytes in position 8: invalid data. You passed in 'aclaraci\\xf3n' (type 'str')\nUnicode error hint\nThe string that could not be encoded\/decoded was: araci\ufffdn.\nI tried to body.decode('utf-8'), body.decode('latin-1'), body.decode('iso-8859-1') without solution.\nCan you help me please? Any hint is apreciated :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3312,"Q_Id":1355285,"Users Score":5,"Answer":"Judging from the \\xf3 code for '\u00f3', it does look like the data is encoded in ISO-8859-1 (or some close relative). 
So body.decode('iso-8859-1') should be a valid Unicode string (you don't specify what \"without solution\" means -- what error message do you get, and where?); if what you need is a utf-8 encoded bytestring instead, body.decode('iso-8859-1').encode('utf-8') should give you one!","Q_Score":1,"Tags":"python,django,utf-8,character-encoding","A_Id":1355303,"CreationDate":"2009-08-31T00:11:00.000","Title":"Latin letters with acute : DjangoUnicodeDecodeError","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a fairly large enterprise application made in python that on its first version will require network connection.\nI've been thinking in keeping some user settings stored on the database, instead of a file in the users home folder.\nSome of the advantages I've thought of are:\n\nthe user can change computers keeping all its settings\nsettings can be backed up along with the rest of the systems data (not a big concern)\n\nWhat would be some of the caveats of this approach?","AnswerCount":4,"Available Count":4,"Score":0.2449186624,"is_accepted":false,"ViewCount":650,"Q_Id":1365164,"Users Score":5,"Answer":"One caveat might depend on where the user is using the application from. For example, if they use two computers with different screen resolutions, and 'selected zoom\/text size' is one of the things you associate with the user, it might not always be suitable. It depends what kind of settings you intend to allow the user to customize. My workplace still has some users trapped on tiny LCD screens with a max res of 800x600, and we have to account for those when developing.","Q_Score":2,"Tags":"python,database,settings","A_Id":1365175,"CreationDate":"2009-09-01T23:32:00.000","Title":"Is storing user configuration settings on database OK?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a fairly large enterprise application made in python that on its first version will require network connection.\nI've been thinking in keeping some user settings stored on the database, instead of a file in the users home folder.\nSome of the advantages I've thought of are:\n\nthe user can change computers keeping all its settings\nsettings can be backed up along with the rest of the systems data (not a big concern)\n\nWhat would be some of the caveats of this approach?","AnswerCount":4,"Available Count":4,"Score":0.1488850336,"is_accepted":false,"ViewCount":650,"Q_Id":1365164,"Users Score":3,"Answer":"Do you need the database to run any part of the application? If that's the case there are no reasons not to store the config inside the DB. 
You already mentioned the benefits and there are no downsides.","Q_Score":2,"Tags":"python,database,settings","A_Id":1365176,"CreationDate":"2009-09-01T23:32:00.000","Title":"Is storing user configuration settings on database OK?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a fairly large enterprise application made in python that on its first version will require network connection.\nI've been thinking in keeping some user settings stored on the database, instead of a file in the users home folder.\nSome of the advantages I've thought of are:\n\nthe user can change computers keeping all its settings\nsettings can be backed up along with the rest of the systems data (not a big concern)\n\nWhat would be some of the caveats of this approach?","AnswerCount":4,"Available Count":4,"Score":0.1488850336,"is_accepted":false,"ViewCount":650,"Q_Id":1365164,"Users Score":3,"Answer":"It's perfectly reasonable to keep user settings in the database, as long as the settings pertain to the application independent of user location. One possible advantage of a file in the user's home folder is that users can send settings to one another. You may of course regard this as an advantage or a disadvantage :-)","Q_Score":2,"Tags":"python,database,settings","A_Id":1365183,"CreationDate":"2009-09-01T23:32:00.000","Title":"Is storing user configuration settings on database OK?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a fairly large enterprise application made in python that on its first version will require network connection.\nI've been thinking in keeping some user settings stored on the database, instead of a file in the users home folder.\nSome of the advantages I've thought of are:\n\nthe user can change computers keeping all its settings\nsettings can be backed up along with the rest of the systems data (not a big concern)\n\nWhat would be some of the caveats of this approach?","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":650,"Q_Id":1365164,"Users Score":8,"Answer":"This is pretty standard. Go for it.\nThe caveat is that when you take the database down for maintenance, no one can use the app because their profile is inaccessible. You can either solve that by making a 100%-on db solution, or, more easily, through some form of caching of profiles locally (an \"offline\" mode of operations). That would allow your app to function whether the user or the db are off the network.","Q_Score":2,"Tags":"python,database,settings","A_Id":1365178,"CreationDate":"2009-09-01T23:32:00.000","Title":"Is storing user configuration settings on database OK?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am learning Python and creating a database connection.\nWhile trying to add to the DB, I am thinking of creating tuples out of information and then add them to the DB. \nWhat I am Doing:\nI am taking information from the user and store it in variables. \nCan I add these variables into a tuple? 
Can you please help me with the syntax?\nAlso if there is an efficient way of doing this, please share...\nEDIT\nLet me edit this question a bit...I only need the tuple to enter info into the DB. Once the information is added to the DB, should I delete the tuple? I mean I don't need the tuple anymore.","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":728594,"Q_Id":1380860,"Users Score":9,"Answer":"\" once the info is added to the DB, should I delete the tuple? i mean i dont need the tuple anymore.\"\nNo.\nGenerally, there's no reason to delete anything. There are some special cases for deleting, but they're very, very rare.\nSimply define a narrow scope (i.e., a function definition or a method function in a class) and the objects will be garbage collected at the end of the scope.\nDon't worry about deleting anything.\n[Note. I worked with a guy who -- in addition to trying to delete objects -- was always writing \"reset\" methods to clear them out. Like he was going to save them and reuse them. Also a silly conceit. Just ignore the objects you're no longer using. If you define your functions in small-enough blocks of code, you have nothing more to think about.]","Q_Score":353,"Tags":"python,tuples","A_Id":1381304,"CreationDate":"2009-09-04T18:36:00.000","Title":"Add Variables to Tuple","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"SQLite docs specifies that the preferred format for storing datetime values in the DB is to use Julian Day (using built-in functions).\nHowever, all frameworks I saw in python (pysqlite, SQLAlchemy) store the datetime.datetime values as ISO formatted strings. Why are they doing so?\nI'm usually trying to adapt the frameworks to storing datetime as julianday, and it's quite painful. I started to doubt that is worth the efforts.\nPlease share your experience in this field with me. Does sticking with julianday make sense?","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1707,"Q_Id":1386093,"Users Score":6,"Answer":"Julian Day is handy for all sorts of date calculations, but it can's store the time part decently (with precise hours, minutes, and seconds). In the past I've used both Julian Day fields (for dates), and seconds-from-the-Epoch (for datetime instances), but only when I had specific needs for computation (of dates and respectively of times). The simplicity of ISO formatted dates and datetimes, I think, should make them the preferred choice, say about 97% of the time.","Q_Score":8,"Tags":"python,datetime,sqlite,sqlalchemy,pysqlite","A_Id":1386154,"CreationDate":"2009-09-06T16:43:00.000","Title":"Shall I bother with storing DateTime data as julianday in SQLite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"SQLite docs specifies that the preferred format for storing datetime values in the DB is to use Julian Day (using built-in functions).\nHowever, all frameworks I saw in python (pysqlite, SQLAlchemy) store the datetime.datetime values as ISO formatted strings. Why are they doing so?\nI'm usually trying to adapt the frameworks to storing datetime as julianday, and it's quite painful. 
I started to doubt that is worth the efforts.\nPlease share your experience in this field with me. Does sticking with julianday make sense?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1707,"Q_Id":1386093,"Users Score":0,"Answer":"Because 2010-06-22 00:45:56 is far easier for a human to read than 2455369.5318981484. Text dates are great for doing ad-hoc queries in SQLiteSpy or SQLite Manager.\nThe main drawback, of course, is that text dates require 19 bytes instead of 8.","Q_Score":8,"Tags":"python,datetime,sqlite,sqlalchemy,pysqlite","A_Id":3089486,"CreationDate":"2009-09-06T16:43:00.000","Title":"Shall I bother with storing DateTime data as julianday in SQLite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an object that is basically a Python implementation of an Oracle sequence. For a variety of reasons, we have to get the nextval of an Oracle sequence, count up manually when determining primary keys, then update the sequence once the records have been inserted.\nSo here's the steps my object does:\n\nConstruct an object, with a key_generator attribute initially set to None.\nGet the first value from the database, passing it to an itertools.count.\nReturn keys from that generator using a property next_key.\n\nI'm a little bit unsure about where to do step 2. I can think of three possibilities:\n\nSkip step 1 and do step 2 in the constructor. I find this evil because I tend to dislike doing this kind of initialization in a constructor.\nMake next_key get the starting key from the database the first time it is called. I find this evil because properties are typically assumed to be trivial.\nMake next_key into a get_next_key method. I dislike this because properties just seem more natural here.\n\nWhich is the lesser of 3 evils? I'm leaning towards #2, because only the first call to this property will result in a database query.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":190,"Q_Id":1386210,"Users Score":2,"Answer":"I agree that attribute access and everything that looks like it (i.e. properties in the Python context) should be fairly trivial. If a property is going to perform a potentially costly operation, use a method to make this explicit. I recommend a name like \"fetch_XYZ\" or \"retrieve_XYZ\", since \"get_XYZ\" is used in some languages (e.g. Java) as a convention for simple attribute access, is quite generic, and does not sound \"costly\" either.\nA good guideline is: If your property could throw an exception that is not due to a programming error, it should be a method. For example, throwing a (hypothetical) DatabaseConnectionError from a property is bad, while throwing an ObjectStateError would be okay.\nAlso, when I understood you correctly, you want to return the next key, whenever the next_key property is accessed. I recommend strongly against having side-effects (apart from caching, cheap lazy initialization, etc.) in your properties. 
Properties (and attributes for that matter) should be idempotent.","Q_Score":2,"Tags":"python,properties,initialization","A_Id":1386258,"CreationDate":"2009-09-06T17:35:00.000","Title":"Should properties do nontrivial initialization?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an object that is basically a Python implementation of an Oracle sequence. For a variety of reasons, we have to get the nextval of an Oracle sequence, count up manually when determining primary keys, then update the sequence once the records have been inserted.\nSo here's the steps my object does:\n\nConstruct an object, with a key_generator attribute initially set to None.\nGet the first value from the database, passing it to an itertools.count.\nReturn keys from that generator using a property next_key.\n\nI'm a little bit unsure about where to do step 2. I can think of three possibilities:\n\nSkip step 1 and do step 2 in the constructor. I find this evil because I tend to dislike doing this kind of initialization in a constructor.\nMake next_key get the starting key from the database the first time it is called. I find this evil because properties are typically assumed to be trivial.\nMake next_key into a get_next_key method. I dislike this because properties just seem more natural here.\n\nWhich is the lesser of 3 evils? I'm leaning towards #2, because only the first call to this property will result in a database query.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":190,"Q_Id":1386210,"Users Score":0,"Answer":"I've decided that the key smell in the solution I'm proposing is that the property I was creating contained the word \"next\" in it. Thus, instead of making a next_key property, I've decided to turn my DatabaseIntrospector class into a KeyCounter class and implemented the iterator protocol (ie making a plain old next method that returns the next key).","Q_Score":2,"Tags":"python,properties,initialization","A_Id":1389673,"CreationDate":"2009-09-06T17:35:00.000","Title":"Should properties do nontrivial initialization?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the best way to use an embedded database, say sqlite in Python:\n\nShould be small footprint. I'm only needing few thousands records per table. And just a handful of tables per database.\nIf it's one provided by Python default installation, then great. Must be open-source, available on Windows and Linus. \nBetter if SQL is not written directly, but no ORM is fully needed. Something that will shield me from the actual database, but not that huge of a library. Something similar to ADO will be great.\nMostly will be used through code, but if there is a GUI front end, then that is great\nNeed just a few pages to get started with. I don't want to go through pages reading what a table is and how a Select statement works. I know all of that.\nSupport for Python 3 is preferred, but 2.x is okay too.\n\nThe usage is not a web app. It's a small database to hold at most 5 tables. The data in each table is just a few string columns. 
Think something just larger than a pickled dictionary\nUpdate: Many thanks for the great suggestions.\nThe use-case I'm talking about is fairly simple. One you'd probably do in a day or two.\nIt's a 100ish line Python script that gathers data about a relatively large number of files (say 10k), and creates metadata files about them, and then one large metadata file about the whole files tree. I just need to avoid re-processing the files already processed, and create the metadata for the updated files, and update the main metadata file. In a way, cache the processed data, and only update it on file updates.\nIf the cache is corrupt \/ unavailable, then simply process the whole tree. It might take 20 minutes, but that's okay.\nNote that all processing is done in-memory.\nI would like to avoid any external dependencies, so that the script can easily be put on any system with just a Python installation on it. Being Windows, it is sometimes hard to get all the components installed.\nSo, In my opinion, even a database might be an overkill. \nYou probably wouldn't fire up an Office Word\/Writer to write a small post it type note, similarly I am reluctant on using something like Django for this use-case.\nWhere to start?","AnswerCount":9,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5927,"Q_Id":1407248,"Users Score":0,"Answer":"Django is perfect for this but the poster is not clear if he needs to actually make a compiled EXE or a web app. Django is only for web apps.\nI'm not sure where you really get \"heavy\" from. Django is grossly smaller in terms of lines of code than any other major web app framework.","Q_Score":7,"Tags":"python,database,sqlite,ado","A_Id":1407345,"CreationDate":"2009-09-10T19:31:00.000","Title":"python database \/ sql programming - where to start","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have started learning Python by writing a small application using Python 3.1 and py-PostgreSQL. Now I want to turn it into a web application.\nBut it seems that most frameworks such as web-py, Django, zope are still based on Python 2.x. Unfortunately, py-PostgreSQL is incompatible with Python 2.x.\nDo I have to rewrite all my classes and replace py-PostgreSQL with something supported by web-py etc., or is there a framework compatible with Python 3.1?\nOr maybe py-PostgreSQL is compatible with 2.x but I did not figure it out?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1695,"Q_Id":1423000,"Users Score":0,"Answer":"Even though it's not officially released yet, I am currently 'playing around' with CherryPy 3.2.0rc1 with Python 3.1.1 and have had no problems yet. 
Haven't used it with py-postgresql, but I don't see why it shouldn't work.\nHope this helps,\nAlan","Q_Score":1,"Tags":"python,web-applications,python-3.x,wsgi","A_Id":1934744,"CreationDate":"2009-09-14T17:56:00.000","Title":"web framework compatible with python 3.1 and py-postgresql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am just starting out with the MySQLdb module for python, and upon running some SELECT and UPDATE queries, the following gets output:\n\nException\n _mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL\n server during query') in bound method Cursor.del of\n MySQLdb.cursors.Cursor object at 0x8c0188c ignored\n\nThe exception is apparently getting caught (and \"ignored\") by MySQLdb itself, so I guess this is not a major issue. Also, the SELECTs generate results and the table gets modified by UPDATE.\nBut, since I am just getting my feet wet with this, I want to ask: does this message suggest I am doing something wrong? Or have you seen these warnings before in harmless situations?\nThanks for any insight,\nlara","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2148,"Q_Id":1439616,"Users Score":0,"Answer":"Ha! Just realized I was trying to use the cursor after having closed the connection! In any case, it was nice writing! : )\nl","Q_Score":0,"Tags":"python,mysql","A_Id":1439734,"CreationDate":"2009-09-17T15:33:00.000","Title":"have you seen? _mysql_exceptions.OperationalError \"Lost connection to MySQL server during query\" being ignored","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have configured pgpool-II for postgres connection pooling and I want to disable psycopg2 connection pooling. How do I do this?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1426,"Q_Id":1440245,"Users Score":6,"Answer":"psycopg2 doesn't pool connections unless you explicitely use the psycopg.pool module.","Q_Score":0,"Tags":"python,psycopg2","A_Id":1492172,"CreationDate":"2009-09-17T17:34:00.000","Title":"How do I disable psycopg2 connection pooling?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install pysqlite and have troubles with that. I found out that the most probable reason of that is missing sqlite headers and I have to install them.\nHowever, I have no ideas what these headers are (where I can find them, what they are doing and how to install them).\nCan anybody, pleas, help me with that?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":17278,"Q_Id":1462565,"Users Score":0,"Answer":"pysqlite needs to compiled\/build before you can use it. This requires C language header files (*.H) which come with the source code of sqllite itself.\ni.e. sqllite and pysqlite are two different things. Did you install sqlite prior to trying and build pysqllte ? 
(or maybe you did, but did you do so just with the binaries; you need the source package (or at least its headers) for pysqlite purposes.","Q_Score":12,"Tags":"python,header,pysqlite","A_Id":1462623,"CreationDate":"2009-09-22T20:57:00.000","Title":"What are sqlite development headers and how to install them?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install pysqlite and have troubles with that. I found out that the most probable reason of that is missing sqlite headers and I have to install them.\nHowever, I have no ideas what these headers are (where I can find them, what they are doing and how to install them).\nCan anybody, pleas, help me with that?","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":17278,"Q_Id":1462565,"Users Score":7,"Answer":"For me this worked (Redhat\/CentOS):\n$ sudo yum install sqlite-devel","Q_Score":12,"Tags":"python,header,pysqlite","A_Id":5671345,"CreationDate":"2009-09-22T20:57:00.000","Title":"What are sqlite development headers and how to install them?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to get started on working with Python on Django I am by profession a PHP developer and have been told to set up django and python on my current apache and mysql setup however I am having trouble getting the Mysqldb module for python to work, I must of followed about 6 different set of instructions, I am running snow leopard and have mysql installed natively it is not part of MAMP or similar. Please can some tell me where I need to start and what steps I need to follew I would be most grateful.\nThanks","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":7786,"Q_Id":1465846,"Users Score":7,"Answer":"On MAC OS X 10.6, Install the package as usual. The dynamic import error occurs because of wrong DYLD path. Export the path and open up a python terminal.\n$ sudo python setup.py clean\n$ sudo python setup.py build\n$ sudo python setup.py install\n$ export DYLD_LIBRARY_PATH=\/usr\/local\/mysql\/lib:$DYLD_LIBRARY_PATH\n$python\n\n\nimport MySQLdb\n\n\nNow import MySQLdb should work fine.\nYou may also want to manually remove the build folder, before build and install. The clean command does not do a proper task of cleaning up the build files.","Q_Score":7,"Tags":"python,mysql,django,osx-snow-leopard","A_Id":6537345,"CreationDate":"2009-09-23T13:01:00.000","Title":"Install mysqldb on snow leopard","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can Python be used to query a SAP database?","AnswerCount":7,"Available Count":2,"Score":0.1137907297,"is_accepted":false,"ViewCount":48208,"Q_Id":1466917,"Users Score":4,"Answer":"Sap is NOT a database server.\nBut with the Python SAP RFC module you can query most table quite easily. It is using some sap unsupported function ( that all the world is using). 
And this function has some limitation on field size and datatypes.","Q_Score":36,"Tags":"python,abap,sap-basis,pyrfc","A_Id":1467921,"CreationDate":"2009-09-23T15:55:00.000","Title":"Query SAP database from Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can Python be used to query a SAP database?","AnswerCount":7,"Available Count":2,"Score":0.0285636566,"is_accepted":false,"ViewCount":48208,"Q_Id":1466917,"Users Score":1,"Answer":"Python is one of the most used object-oriented programming languages which is very easy to code and understand.\nIn order to use Python with SAP, we need to install Python SAP RFC module which is known as PyRFC. One of its available methods is RFC_READ_TABLE which can be called to read data from a table in SAP database.\nAlso, the PyRFC package provides various bindings which can be utilized to make calls either way. We can use to make calls either from ABAP modules to Python modules or the other way round. One can define equivalent SAP data types which are used in data exchange.\nAlso, we can create Web Service in Python which can be used for inter-communication. SAP NetWeaver is fully compatible with web services either state full or stateless.","Q_Score":36,"Tags":"python,abap,sap-basis,pyrfc","A_Id":59210473,"CreationDate":"2009-09-23T15:55:00.000","Title":"Query SAP database from Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a python equivalent of phpMyAdmin?\nHere's why I'm looking for a python version of phpmyadmin: While I agree that phpmyadmin really rocks, I don't want to run php on my server. I'd like to move from apache2-prefork to apache2-mpm-worker. Worker blows the doors off of prefork for performance, but php5 doesn't work with worker. (Technically it does, but it's far more complicated.) The extra memory and performance penalty for having php on this server is large to me.","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":23600,"Q_Id":1480453,"Users Score":12,"Answer":"You can use phpMyAdmin for python project, because phpMyAdmin is meant for MySQL databases. If you are using MySQL, then regardless of whether you are using PHP or python, you can use phpMyAdmin.","Q_Score":33,"Tags":"python,phpmyadmin","A_Id":1480549,"CreationDate":"2009-09-26T04:51:00.000","Title":"phpMyAdmin equivalent in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using SQLAlchemy and I can create tables that I have defined in \/model\/__init__.py but I have defined my classes, tables and their mappings in other files found in the \/model directory. \nFor example I have a profile class and a profile table which are defined and mapped in \/model\/profile.py\nTo create the tables I run: \npaster setup-app development.ini\nBut my problem is that the tables that I have defined in \/model\/__init__.py are created properly but the table definitions found in \/model\/profile.py are not created. 
How can I execute the table definitions found in the \/model\/profile.py so that all my tables can be created?\nThanks for the help!","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1155,"Q_Id":1482627,"Users Score":0,"Answer":"Just import your other table's modules in your init.py, and use metadata object from models.meta in other files. Pylons default setup_app function creates all tables found in metadata object from model.meta after importing it.","Q_Score":3,"Tags":"python,sqlalchemy,pylons","A_Id":1483061,"CreationDate":"2009-09-27T02:19:00.000","Title":"Creating tables with pylons and SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using SQLAlchemy and I can create tables that I have defined in \/model\/__init__.py but I have defined my classes, tables and their mappings in other files found in the \/model directory. \nFor example I have a profile class and a profile table which are defined and mapped in \/model\/profile.py\nTo create the tables I run: \npaster setup-app development.ini\nBut my problem is that the tables that I have defined in \/model\/__init__.py are created properly but the table definitions found in \/model\/profile.py are not created. How can I execute the table definitions found in the \/model\/profile.py so that all my tables can be created?\nThanks for the help!","AnswerCount":3,"Available Count":3,"Score":0.3215127375,"is_accepted":false,"ViewCount":1155,"Q_Id":1482627,"Users Score":5,"Answer":"I ran into the same problem with my first real Pylons project. The solution that worked for me was this:\n\nDefine tables and classes in your profile.py file\nIn your __init__.py add from profile import * after your def init_model\nI then added all of my mapper definitions afterwards. Keeping them all in the init file solved some problems I was having relating between tables defined in different files.\n\nAlso, I've since created projects using the declarative method and didn't need to define the mapping in the init file.","Q_Score":3,"Tags":"python,sqlalchemy,pylons","A_Id":1528312,"CreationDate":"2009-09-27T02:19:00.000","Title":"Creating tables with pylons and SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using SQLAlchemy and I can create tables that I have defined in \/model\/__init__.py but I have defined my classes, tables and their mappings in other files found in the \/model directory. \nFor example I have a profile class and a profile table which are defined and mapped in \/model\/profile.py\nTo create the tables I run: \npaster setup-app development.ini\nBut my problem is that the tables that I have defined in \/model\/__init__.py are created properly but the table definitions found in \/model\/profile.py are not created. 
How can I execute the table definitions found in the \/model\/profile.py so that all my tables can be created?\nThanks for the help!","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1155,"Q_Id":1482627,"Users Score":0,"Answer":"If you are using the declarative style, be sure to use Base.metadata for table generation.","Q_Score":3,"Tags":"python,sqlalchemy,pylons","A_Id":1485719,"CreationDate":"2009-09-27T02:19:00.000","Title":"Creating tables with pylons and SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally converge on having MySQL installed, since the module needs mysql_config. But I don't understand why it is needed. \nMySQLdb is supposed to be a client module for a client that wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":432,"Q_Id":1483024,"Users Score":1,"Answer":"What it needs is the client library and headers that come with the server, since it is just a Python wrapper (which sits in _mysql.c, plus a DB-API interface to that wrapper in the MySQLdb package) over the original C MySQL API.","Q_Score":1,"Tags":"python,mysql","A_Id":1483154,"CreationDate":"2009-09-27T07:31:00.000","Title":"Why MySQLdb for Mac has to have MySQL installed to install?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally converge on having MySQL installed, since the module needs mysql_config. But I don't understand why it is needed. \nMySQLdb is supposed to be a client module for a client that wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":432,"Q_Id":1483024,"Users Score":1,"Answer":"I'm not sure about the specifics of MySQLdb, but most likely it needs header information to compile\/install. It uses the location of mysql_config to know where the appropriate headers would be. The MySQL Gem for Ruby on Rails requires the same thing, even though it simply connects to the MySQL server.","Q_Score":1,"Tags":"python,mysql","A_Id":1483030,"CreationDate":"2009-09-27T07:31:00.000","Title":"Why MySQLdb for Mac has to have MySQL installed to install?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally converge on having MySQL installed, since the module needs mysql_config. But I don't understand why it is needed. \nMySQLdb is supposed to be a client module for a client that wants to connect to the server. 
But now I have to first install a server on the client in order to connect to another server?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":432,"Q_Id":1483024,"Users Score":1,"Answer":"Just to clarify what the other answerers have said: you don't need to install a MySQL server, but you do need to install the MySQL client libraries. However, for whatever reasons, MySQL don't make a separate download available for just the client libraries, as they do for Linux.","Q_Score":1,"Tags":"python,mysql","A_Id":1483305,"CreationDate":"2009-09-27T07:31:00.000","Title":"Why MySQLdb for Mac has to have MySQL installed to install?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just upgraded the default Python 2.5 on Leopard to 2.6 via the installer on www.python.org. Upon doing so, the MySQLdb I had installed was no longer found. So I tried reinstalling it via port install py-mysql, and it succeeded, but MySQLdb was still not importable. So then I tried to python install python26 with python_select python26 and it succeeded, but it doesn't appear that it is getting precedence over the python.org install:\n$ which python\n\/Library\/Frameworks\/Python.framework\/Versions\/2.6\/bin\/python\nWhen I would expect it to be something like \/opt\/local\/bin\/python\nMy path environment is: \/Library\/Frameworks\/Python.framework\/Versions\/2.6\/bin:\/usr\/local\/mysql\/bin\/:\/opt\/local\/bin:\/opt\/local\/sbin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin:\/usr\/local\/bin:\/usr\/X11\/bin:\/usr\/local\/mysql\/bin:\/Users\/bsr\/bin\nAnyway, when I try port install py-mysql but how does it know where to install the Python MySQL library?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1051,"Q_Id":1499572,"Users Score":1,"Answer":"You also need python_select (or is it select_python?) to change the default python used.","Q_Score":0,"Tags":"python,mysql,macos","A_Id":2302542,"CreationDate":"2009-09-30T17:32:00.000","Title":"With multiple Python installs, how does MacPorts know which one to install MySQLdb for?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"A quick SQLAlchemy question... \nI have a class \"Document\" with attributes \"Number\" and \"Date\". I need to ensure that there's no duplicated number for the same year, is\nthere a way to have a UniqueConstraint on \"Number + year(Date)\"? Should I use a unique Index instead? How would I declare the functional part?\n(SQLAlchemy 0.5.5, PostgreSQL 8.3.4)\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":890,"Q_Id":1510018,"Users Score":-1,"Answer":"I'm pretty sure that unique constraints can only be applied on columns that already have data in them, and not on runtime-calculated expressions. Hence, you would need to create an extra column which contains the year part of your date, over which you could create a unique constraint together with number. To best use this approach, maybe you should store your date split up in three separate columns containing the day, month and year part. 
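A minimal sketch of that extra-column approach (only the year column is shown; the table and column names are made up):

from sqlalchemy import Table, Column, Integer, Date, MetaData, UniqueConstraint

metadata = MetaData()
documents = Table('documents', metadata,
    Column('id', Integer, primary_key=True),
    Column('number', Integer, nullable=False),
    Column('date', Date, nullable=False),
    # The year is stored redundantly so the constraint can reference it.
    Column('year', Integer, nullable=False),
    UniqueConstraint('number', 'year', name='uq_document_number_year'))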
This could be done using default constraints in the table definition.","Q_Score":3,"Tags":"python,sqlalchemy,constraints","A_Id":1510137,"CreationDate":"2009-10-02T14:52:00.000","Title":"Compound UniqueConstraint with a function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"One of the feature I like in RoR is the db management, it can hide all the sql statement, also, it is very easy to change different db in RoR, is there any similar framework in Python 3000?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":614,"Q_Id":1510084,"Users Score":1,"Answer":"Python 3 isn't ready for web applications right now. The WSGI 1.0 specification isn't suitable for Py3k and the related standard libraries are 2to3 hacks that don't work consistently faced with bytes vs. unicode. It's a real mess.\nWEB-SIG are bashing out proposals for a WSGI revision; hopefully it can move forward soon, because although Python 3 isn't mainstream yet it's certainly heading that way, and the brokenness of webdev is rather embarrassing.","Q_Score":1,"Tags":"python,ruby-on-rails,frameworks,python-3.x","A_Id":1510491,"CreationDate":"2009-10-02T15:01:00.000","Title":"Is there any framework like RoR on Python 3000?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"One of the feature I like in RoR is the db management, it can hide all the sql statement, also, it is very easy to change different db in RoR, is there any similar framework in Python 3000?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":614,"Q_Id":1510084,"Users Score":0,"Answer":"Python 3 is not ready for practical use, because there is not yet enough libraries that have been updated to support Python 3. So the answer is: No.\nBut there are LOADS of them on Python 2. Tens, at least.\nDjango, Turbogears, BFG and of course the old man of the game: Zope. 
To tell which is best for you, you need to expand your requirements a lot.","Q_Score":1,"Tags":"python,ruby-on-rails,frameworks,python-3.x","A_Id":1510218,"CreationDate":"2009-10-02T15:01:00.000","Title":"Is there any framework like RoR on Python 3000?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"One of the feature I like in RoR is the db management, it can hide all the sql statement, also, it is very easy to change different db in RoR, is there any similar framework in Python 3000?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":614,"Q_Id":1510084,"Users Score":2,"Answer":"I believe CherryPy is on the verge of being released for Python 3.X.","Q_Score":1,"Tags":"python,ruby-on-rails,frameworks,python-3.x","A_Id":1512245,"CreationDate":"2009-10-02T15:01:00.000","Title":"Is there any framework like RoR on Python 3000?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want 3 columns to have 9 different values, like a list in Python.\nIs it possible? If not in SQLite, then on another database engine?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":25012,"Q_Id":1517771,"Users Score":12,"Answer":"Generally, you do this by stringifying the list (with repr()), and then saving the string. On reading the string from the database, use eval() to re-create the list. Be careful, though that you are certain no user-generated data can get into the column, or the eval() is a security risk.","Q_Score":12,"Tags":"python,sqlite","A_Id":1517795,"CreationDate":"2009-10-05T00:26:00.000","Title":"Is it possible to save a list of values into a SQLite column?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I want some help in building a Phone book application on python and put it on google app engine. I am running a huge db of 2 million user lists and their contacts in phonebook. I want to upload all that data from my servers directly onto the google servers and then use a UI to retrieve the phone book contacts of each user based on his name.\nI am using MS SQL sever 2005 as my DB.\nPlease help in putting together this application.\nYour inputs are much appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":420,"Q_Id":1518725,"Users Score":0,"Answer":"I think you're going to need to be more specific as to what problem you're having. As far as bulk loading goes, there's lots of bulkloader documentation around; or are you asking about model design? If so, we need to know more about how you plan to search for users. Do you need partial string matches? Sorting? 
Fuzzy matching?","Q_Score":0,"Tags":"python,google-app-engine,bulk-load","A_Id":1519020,"CreationDate":"2009-10-05T07:50:00.000","Title":"Need help in designing a phone book application on python running on google app engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been looking really hard at all of the way**(s)** one can develop web applications using Python. For reference, we are using RHEL 64bit, apache, mod_wsgi.\nHistory:\n\nPHP + MySQL years ago\nPHP + Python 2.x + MySQL recently and current\nPython + PostgreSQL working on it\n\nWe use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% python may be very advantagous.\nWe typically prefer not to have a monolithic framework dictating how things are done. A collection of useful helpers and utilities are much preferred (be it PHP or Python).\nQuestion 1:\nIn reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for.\nQuestion 2:\nIs there a version of Werkzeug that supports Python 3.1.1. I've succefully installed mod_wsgi on Apache 2.2 with Python 3.1.1.\nIf there is not a version, what would it take to upgrade it to work on Python 3.1?\nNote: I've run 2to3 on the Werkzeug source code, and it does python-compile without \nEdit:\nThe project that we are starting is not slated to be finished until nearly a year from now. At which point, I'm guessing Python 3.X will be a lot more mainstream. Furthermore, considering that we are running the App (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there?\nThoughts appreciated!","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":2259,"Q_Id":1523706,"Users Score":1,"Answer":"I can only answer question one:\nI started using it for some small webstuff but now moved on to rework larger apps with it. Why Werkzeug? The modular concept is really helpful. You can hook in modules as you like, make stuff easily context aware and you get good request file handling for free which is able to cope with 300mb+ files by not storing it in memory.\nDisadvantages... Well sometimes modularity needs some upfront thought (django f.ex. gives you everything all at once, stripping stuff out is hard to do there though) but for me it works fine.","Q_Score":2,"Tags":"python,python-3.x,werkzeug","A_Id":1622505,"CreationDate":"2009-10-06T05:13:00.000","Title":"Werkzeug in General, and in Python 3.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been looking really hard at all of the way**(s)** one can develop web applications using Python. 
For reference, we are using RHEL 64bit, apache, mod_wsgi.\nHistory:\n\nPHP + MySQL years ago\nPHP + Python 2.x + MySQL recently and current\nPython + PostgreSQL working on it\n\nWe use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% python may be very advantagous.\nWe typically prefer not to have a monolithic framework dictating how things are done. A collection of useful helpers and utilities are much preferred (be it PHP or Python).\nQuestion 1:\nIn reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for.\nQuestion 2:\nIs there a version of Werkzeug that supports Python 3.1.1. I've succefully installed mod_wsgi on Apache 2.2 with Python 3.1.1.\nIf there is not a version, what would it take to upgrade it to work on Python 3.1?\nNote: I've run 2to3 on the Werkzeug source code, and it does python-compile without \nEdit:\nThe project that we are starting is not slated to be finished until nearly a year from now. At which point, I'm guessing Python 3.X will be a lot more mainstream. Furthermore, considering that we are running the App (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there?\nThoughts appreciated!","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":2259,"Q_Id":1523706,"Users Score":1,"Answer":"I haven't used Werkzeug, so I can only answer question 2:\nNo, Werkzeug does not work on Python 3. In fact, very little works on Python 3 as of today. Porting is not difficult, but you can't port until all your third-party libraries have been ported, so progress is slow.\nOne big stopper has been setuptools, which is a very popular package to use. Setuptools is unmaintained, but there is a maintained fork called Distribute. Distribute was released with Python 3 support just a week or two ago. I hope package support for Python 3 will pick up now. But it will still be a long time, at least months probably a year or so, before any major project like Werkzeug will be ported to Python 3.","Q_Score":2,"Tags":"python,python-3.x,werkzeug","A_Id":1523934,"CreationDate":"2009-10-06T05:13:00.000","Title":"Werkzeug in General, and in Python 3.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been looking really hard at all of the way**(s)** one can develop web applications using Python. For reference, we are using RHEL 64bit, apache, mod_wsgi.\nHistory:\n\nPHP + MySQL years ago\nPHP + Python 2.x + MySQL recently and current\nPython + PostgreSQL working on it\n\nWe use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% python may be very advantagous.\nWe typically prefer not to have a monolithic framework dictating how things are done. 
A collection of useful helpers and utilities are much preferred (be it PHP or Python).\nQuestion 1:\nIn reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for.\nQuestion 2:\nIs there a version of Werkzeug that supports Python 3.1.1. I've succefully installed mod_wsgi on Apache 2.2 with Python 3.1.1.\nIf there is not a version, what would it take to upgrade it to work on Python 3.1?\nNote: I've run 2to3 on the Werkzeug source code, and it does python-compile without \nEdit:\nThe project that we are starting is not slated to be finished until nearly a year from now. At which point, I'm guessing Python 3.X will be a lot more mainstream. Furthermore, considering that we are running the App (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there?\nThoughts appreciated!","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":2259,"Q_Id":1523706,"Users Score":3,"Answer":"mod_wsgi for Python 3.x is also not ready. There is no satisfactory definition of WSGI for Python 3.x yet; the WEB-SIG are still bashing out the issues. mod_wsgi targets a guess at what might be in it, but there are very likely to be changes to both the spec and to standard libraries. Any web application you write today in Python 3.1 is likely to break in the future.\nIt's a bit of a shambles. Today, for webapps you can only realistically use Python 2.x.","Q_Score":2,"Tags":"python,python-3.x,werkzeug","A_Id":1525943,"CreationDate":"2009-10-06T05:13:00.000","Title":"Werkzeug in General, and in Python 3.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a web service with Django Framework.\nMy friend's project is a WIN32 program and also a MS-sql server.\nThe Win32 program currently has a login system that talks to a MS-sql for authentication.\nHowever, we would like to INTEGRATE this login system as one.\nPlease answer the 2 things:\n\nI want scrap the MS-SQL to use only the Django authentication system on the linux server. 
Can the WIN32 client talk to Django using a Django API (login)?\nIf not, what is the best way of combining the authentication?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":203,"Q_Id":1529128,"Users Score":0,"Answer":"If the only thing the WIN32 app uses the MS-SQL Server for is Authentication\/Authorization then you could write a new Authentication\/Authorization provider that uses a set of Web Services (that you would have to create) that expose the Django provider.","Q_Score":1,"Tags":"python,windows,django,authentication,frameworks","A_Id":1529146,"CreationDate":"2009-10-07T01:59:00.000","Title":"Can a WIN32 program authenticate into Django authentication system, using MYSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web service with Django Framework.\nMy friend's project is a WIN32 program and also a MS-sql server.\nThe Win32 program currently has a login system that talks to a MS-sql for authentication.\nHowever, we would like to INTEGRATE this login system as one.\nPlease answer the 2 things:\n\nI want scrap the MS-SQL to use only the Django authentication system on the linux server. Can the WIN32 client talk to Django using a Django API (login)?\nIf not, what is the best way of combining the authentication?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":103,"Q_Id":1533259,"Users Score":1,"Answer":"The Win32 client can act like a web client to pass the user's credentials to the server. You will want to store the session cookie you get once you are authenticated and use that cookie in all following requests","Q_Score":0,"Tags":"python,mysql,windows,django","A_Id":1581622,"CreationDate":"2009-10-07T02:00:00.000","Title":"Can a WIN32 program authenticate into Django authentication system, using MYSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm coding a small piece of server software for the personal use of several users. Not hundreds, not thousands, but perhaps 3-10 at a time.\nSince it's a threaded server, SQLite doesn't work. 
It complains about threads like this:\n\nProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 140735085562848 and this is thread id 4301299712\n\nBesides, they say SQLite isn't great for concurrency anyhow.\nNow since I started working with Python 3 (and would rather continue using it) I can't seem to get the MySQL module to work properly and others seem equally frustrated.\nIn that case, is there any other DB option for Python 3 that I could consider?","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":3552,"Q_Id":1547365,"Users Score":1,"Answer":"pymongo works with Python 3 now.","Q_Score":3,"Tags":"python,database,python-3.x","A_Id":10863434,"CreationDate":"2009-10-10T08:09:00.000","Title":"A database for python 3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm coding a small piece of server software for the personal use of several users. Not hundreds, not thousands, but perhaps 3-10 at a time.\nSince it's a threaded server, SQLite doesn't work. It complains about threads like this:\n\nProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 140735085562848 and this is thread id 4301299712\n\nBesides, they say SQLite isn't great for concurrency anyhow.\nNow since I started working with Python 3 (and would rather continue using it) I can't seem to get the MySQL module to work properly and others seem equally frustrated.\nIn that case, is there any other DB option for Python 3 that I could consider?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3552,"Q_Id":1547365,"Users Score":0,"Answer":"You could create a new sqlite object in each thread, each using the same database file. For such a small number of users you might not come across the problems with concurrency, unless they are all writing to it very heavily.","Q_Score":3,"Tags":"python,database,python-3.x","A_Id":1547384,"CreationDate":"2009-10-10T08:09:00.000","Title":"A database for python 3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm coding a small piece of server software for the personal use of several users. Not hundreds, not thousands, but perhaps 3-10 at a time.\nSince it's a threaded server, SQLite doesn't work. 
It complains about threads like this:\n\nProgrammingError: SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 140735085562848 and this is thread id 4301299712\n\nBesides, they say SQLite isn't great for concurrency anyhow.\nNow since I started working with Python 3 (and would rather continue using it) I can't seem to get the MySQL module to work properly and others seem equally frustrated.\nIn that case, is there any other DB option for Python 3 that I could consider?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3552,"Q_Id":1547365,"Users Score":0,"Answer":"Surely a pragmatic option is to just use one SQLite connection per thread.","Q_Score":3,"Tags":"python,database,python-3.x","A_Id":1550870,"CreationDate":"2009-10-10T08:09:00.000","Title":"A database for python 3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing some pylons work in a virtual python enviorment, I want to use MySQL with SQLalchemy but I can't install the MySQLdb module on my virtual enviorment, I can't use easyinstall because I am using a version that was compiled for python 2.6 in a .exe format, I tried running the install from inside the virtual enviorment but that did not work, any sugestions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":442,"Q_Id":1557972,"Users Score":0,"Answer":"Ok Got it all figured out, After I installed the module on my normal python 2.6 install I went into my Python26 folder and low and behold I happened to find a file called MySQL-python-wininst which happened to be a list of all of the installed module files. Basicly it was two folders called MySQLdb and another called MySQL_python-1.2.2-py2.6.egg-info as well as three other files: _mysql.pyd, _mysql_exceptions.py, _mysql_exceptions.pyc. So I went into the folder where they were located (Python26\/Lib\/site-packages) and copied them to virtualenv's site-packages folder (env\/Lib\/site-packages) and the module was fully functional!\nNote: All paths are the defaults","Q_Score":0,"Tags":"python,mysql,pylons,module,virtualenv","A_Id":1563869,"CreationDate":"2009-10-13T02:43:00.000","Title":"Install custom modules in a python virtual enviroment","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've produced a few Django sites but up until now I have been mapping individual views and URLs in urls.py.\nNow I've tried to create a small custom CMS but I'm having trouble with the URLs. I have a database table (SQLite3) which contains code for the pages like a column for header, one for right menu, one for content.... so on, so on. I also have a column for the URL. 
How do I get Django to call the information in the database table from the URL stored in the column rather than having to code a view and the URL for every page (which obviously defeats the purpose of a CMS)?\nIf someone can just point me at the right part of the docs or a site which explains this it would help a lot.\nThanks all.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3215,"Q_Id":1563088,"Users Score":1,"Answer":"Your question is a little bit twisted, but I think what you're asking for is something similar to how django.contrib.flatpages handles this. Basically it uses middleware to catch the 404 error and then looks to see if any of the flatpages have a URL field that matches.\nWe did this on one site where all of the URLs were made \"search engine friendly\". We overrode the save() method, munged the title into this_is_the_title.html (or whatever) and then stored that in a separate table that had a URL => object class\/id mapping.ng (this means it is listed before flatpages in the middleware list).","Q_Score":1,"Tags":"python,database,django,url,content-management-system","A_Id":1563359,"CreationDate":"2009-10-13T21:43:00.000","Title":"URLs stored in database for Django site","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names\/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?\nEdit: I need to generate a file with sql statements for execution later","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":50856,"Q_Id":1563967,"Users Score":1,"Answer":"Quoting parameters manually in general is a bad idea. What if there is a mistake in escaping rules? What if escape doesn't match used version of DB? What if you just forget to escape some parameter or erroneously assumed it can't contain data requiring escaping? That all may cause SQL injection vulnerability. Also, DB can have some restrictions on SQL statement length while you need to pass large data chunk for LOB column. That's why Python DB API and most databases (Python DB API module will transparently escape parameters, if database doesn't support this, as early MySQLdb did) allow passing parameters separated from statement:\n\n.execute(operation[,parameters])","Q_Score":17,"Tags":"python,sql,postgresql,psycopg2","A_Id":1564226,"CreationDate":"2009-10-14T02:31:00.000","Title":"Generate SQL statements with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names\/values? 
in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?\nEdit: I need to generate a file with sql statements for execution later","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":50856,"Q_Id":1563967,"Users Score":2,"Answer":"For robustness, I recommend using prepared statements to send user-entered values, no matter what language you use. :-)","Q_Score":17,"Tags":"python,sql,postgresql,psycopg2","A_Id":1563981,"CreationDate":"2009-10-14T02:31:00.000","Title":"Generate SQL statements with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names\/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?\nEdit: I need to generate a file with sql statements for execution later","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":50856,"Q_Id":1563967,"Users Score":13,"Answer":"SQLAlchemy provides a robust expression language for generating SQL from Python.\nLike every other well-designed abstraction layer, however, the queries it generates insert data through bind variables rather than through attempting to mix the query language and the data being inserted into a single string. This approach avoids massive security vulnerabilities and is otherwise The Right Thing.","Q_Score":17,"Tags":"python,sql,postgresql,psycopg2","A_Id":1564224,"CreationDate":"2009-10-14T02:31:00.000","Title":"Generate SQL statements with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When using the sqlite3 module in python, all elements of cursor.description except the column names are set to None, so this tuple cannot be used to find the column types for a query result (unlike other DB-API compliant modules). Is the only way to get the types of the columns to use pragma table_info(table_name).fetchall() to get a description of the table, store it in memory, and then match the column names from cursor.description to that overall table description?","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":3955,"Q_Id":1583350,"Users Score":5,"Answer":"No, it's not the only way. Alternatively, you can also fetch one row, iterate over it, and inspect the individual column Python objects and types. 
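For example (a small sketch; the database and table names are placeholders, and it assumes the table has at least one row):

import sqlite3

conn = sqlite3.connect('example.db')
cur = conn.cursor()
cur.execute("SELECT * FROM some_table LIMIT 1")
row = cur.fetchone()
# Pair each column name from cursor.description with the Python type
# of the value that came back for it.
for name, value in zip([d[0] for d in cur.description], row):
    print("%s: %r" % (name, type(value)))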
Unless the value is None (in which case the SQL field is NULL), this should give you a fairly precise indication what the database column type was.\nsqlite3 only uses sqlite3_column_decltype and sqlite3_column_type in one place, each, and neither are accessible to the Python application - so their is no \"direct\" way that you may have been looking for.","Q_Score":5,"Tags":"python,sqlite,python-db-api","A_Id":1583379,"CreationDate":"2009-10-17T22:11:00.000","Title":"sqlite3 and cursor.description","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.\nI firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.\nJust a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints(where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures\/Packages where it makes sense. And those are different for every RDBMS.\nBy using only a limited set of features, commonly available to many RDBMS one doesn't exploit the possibilities that those expensive and advanced database engines have to offers.\nSo getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?\nI am especially interested hearing how you separate\/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?\nLetting this aside, what is you opinion walking this path? Is it worth it in your experience? Why? Why not?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":369,"Q_Id":1586008,"Users Score":0,"Answer":"It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can.","Q_Score":2,"Tags":"php,python,ruby-on-rails,database","A_Id":1586035,"CreationDate":"2009-10-18T20:56:00.000","Title":"PHP, Python, Ruby application with multiple RDBMS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. 
I understand the need for them, but their use spreads to places they normally don't belong to.\nI firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.\nJust a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints(where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures\/Packages where it makes sense. And those are different for every RDBMS.\nBy using only a limited set of features, commonly available to many RDBMS one doesn't exploit the possibilities that those expensive and advanced database engines have to offers.\nSo getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?\nI am especially interested hearing how you separate\/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?\nLetting this aside, what is you opinion walking this path? Is it worth it in your experience? Why? Why not?","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":369,"Q_Id":1586008,"Users Score":2,"Answer":"If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO Principles. Figure out what kind of API your persistence layer will need to provide. \nYou'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific.\nAs for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are. \nA good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for mySQL. 
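A bare-bones sketch of what a pair of such isomorphic adapter classes might look like (the method name, the %s paramstyle and the pagination SQL are only illustrative):

class PersistenceAdapter(object):
    # The interface the model code programs against; one subclass per backend.
    def __init__(self, connection):
        self.connection = connection
    def fetch_page(self, sql, params, limit, offset):
        raise NotImplementedError

class GenericSQLAdapter(PersistenceAdapter):
    # Sticks to lowest-common-denominator SQL that most DB-API drivers accept.
    def fetch_page(self, sql, params, limit, offset):
        cur = self.connection.cursor()
        cur.execute(sql + " LIMIT %s OFFSET %s", params + (limit, offset))
        return cur.fetchall()

class MySQLAdapter(GenericSQLAdapter):
    # Same interface, but free to use MySQL-specific syntax, hints and so on.
    def fetch_page(self, sql, params, limit, offset):
        cur = self.connection.cursor()
        cur.execute(sql + " LIMIT %s, %s", params + (offset, limit))
        return cur.fetchall()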
If you decide you then want to provide an optimized adapter from Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends.","Q_Score":2,"Tags":"php,python,ruby-on-rails,database","A_Id":1586105,"CreationDate":"2009-10-18T20:56:00.000","Title":"PHP, Python, Ruby application with multiple RDBMS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.\nI firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.\nJust a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints(where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures\/Packages where it makes sense. And those are different for every RDBMS.\nBy using only a limited set of features, commonly available to many RDBMS one doesn't exploit the possibilities that those expensive and advanced database engines have to offers.\nSo getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?\nI am especially interested hearing how you separate\/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?\nLetting this aside, what is you opinion walking this path? Is it worth it in your experience? Why? Why not?","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":369,"Q_Id":1586008,"Users Score":2,"Answer":"You cannot eat a cake and have it, choose on of the following options.\n\nUse your database abstraction layer whenever you can and in the rare cases when you have a need for a hand-made query (eg. performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that you database has to offer. In this case deploying the application on a different RDBMS should be trivial.\nUse the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. 
Of course a decent layered design encapsulating all the differences in a single module or class will help in this endeavor.\n\nIn other words you should consider how probable is it that your application will be deployed to multiple RDBMSes and make an informed choice.","Q_Score":2,"Tags":"php,python,ruby-on-rails,database","A_Id":1587887,"CreationDate":"2009-10-18T20:56:00.000","Title":"PHP, Python, Ruby application with multiple RDBMS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM. \nAnyway, what use cases suit non relational datastores best?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":587,"Q_Id":1588708,"Users Score":2,"Answer":"Consider the situation where you have many entity types but few instances of each entity. In this case you will have many tables each with a few records so a relational approach is not suitable.","Q_Score":3,"Tags":"python,google-app-engine,couchdb","A_Id":1588748,"CreationDate":"2009-10-19T13:36:00.000","Title":"What are the use cases for non relational datastores?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM. \nAnyway, what use cases suit non relational datastores best?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":587,"Q_Id":1588708,"Users Score":0,"Answer":"In some cases that are simply nice. ZODB is a Python-only object database, that is so well-integrated with Python that you can simply forget that it's there. You don't have to bother about it, most of the time.","Q_Score":3,"Tags":"python,google-app-engine,couchdb","A_Id":1589186,"CreationDate":"2009-10-19T13:36:00.000","Title":"What are the use cases for non relational datastores?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. 
Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.0855049882,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":3,"Answer":"If you are I\/O bound, the best way I have found to optimize is to read or write the entire file into\/out of memory at once, then operate out of RAM from there on.\nWith extensive testing I found that my runtime eded up bound not by the amount of data I read from\/wrote to disk, but by the number of I\/O operations I used to do it. That is what you need to optimize.\nI don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I\/O for each byte, that's what you need to do.\nOf course the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1594704,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. 
Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.0285636566,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":1,"Answer":"Use buffered writes for step 4.\nWrite a simple function that simply appends the output onto a string, checks the string length, and only writes when you have enough which should be some multiple of 4k bytes. I would say start with 32k buffers and time it.\nYou would have one buffer per file, so that most \"writes\" won't actually hit the disk.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1595358,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":2504,"Q_Id":1594604,"Users Score":3,"Answer":"Python already does IO buffering and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode.\nIf the OS isn't doing the right thing, you can try raising the buffer size (third parameter to open()). For some guidance on appropriate values given a 100MB\/s 10ms latency IO system a 1MB IO size will result in approximately 50% latency overhead, while a 10MB IO size will result in 9% overhead. If its still IO bound, you probably just need more bandwidth. Use your OS specific tools to check what kind of bandwidth you are getting to\/from the disks.\nAlso useful is to check if step 4 is taking a lot of time executing or waiting on IO. 
If it's executing you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1595626,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.057080742,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":2,"Answer":"Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1597062,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. 
The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.0285636566,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":1,"Answer":"Isn't it possible to collect a few thousand rows in ram, then go directly to the database server and execute them? \nThis would remove the save to and load from the disk that step 4 entails.\nIf the database server is transactional, this is also a safe way to do it - just have the database begin before your first row and commit after the last.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1597281,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Two libraries for Mysql.\nI've always used _mysql because it's simpler. \nCan anyone tell me the difference, and why I should use which one in certain occasions?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":4941,"Q_Id":1620575,"Users Score":5,"Answer":"_mysql is the one-to-one mapping of the rough mysql API. On top of it, the DB-API is built, handling things using cursors and so on. \nIf you are used to the low-level mysql API provided by libmysqlclient, then the _mysql module is what you need, but as another answer says, there's no real need to go so low-level. You can work with the DB-API and behave just fine, with the added benefit that the DB-API is backend-independent.","Q_Score":11,"Tags":"python,mysql","A_Id":1620642,"CreationDate":"2009-10-25T10:37:00.000","Title":"Python: advantages and disvantages of _mysql vs MySQLdb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On my website I store user pictures in a simple manner such as:\n\"image\/user_1.jpg\".\nI don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com\/images\/user_2.jpg, www.mydomain.com\/images\/user_3.jpg, so on...)\nSo far I have three solutions in mind:\n\nI tried using .htaccess to password protect the \"images\" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method.\nI can start converting my user_id's to an md5 hash with some salt. The images would be named as: \/image\/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. 
It makes the file system way complicated.\nor I can user PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as loggedin users with access to that image.\n\nI'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this? \nBasically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site.\nThanks a lot,\nHaluk","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":4621,"Q_Id":1623311,"Users Score":6,"Answer":"Any method you choose to determine the source of a request is only as reliable as the HTTP_REFERER information that is sent by the user's browser, which is not very. Requiring authentication is the only good way to protect content.","Q_Score":2,"Tags":"php,python,linux,perl","A_Id":1623338,"CreationDate":"2009-10-26T06:06:00.000","Title":"Restrict access to images on my website except through my own htmls","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"On my website I store user pictures in a simple manner such as:\n\"image\/user_1.jpg\".\nI don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com\/images\/user_2.jpg, www.mydomain.com\/images\/user_3.jpg, so on...)\nSo far I have three solutions in mind:\n\nI tried using .htaccess to password protect the \"images\" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method.\nI can start converting my user_id's to an md5 hash with some salt. The images would be named as: \/image\/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way complicated.\nor I can user PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as loggedin users with access to that image.\n\nI'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this? \nBasically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site.\nThanks a lot,\nHaluk","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":4621,"Q_Id":1623311,"Users Score":2,"Answer":"You are right considering option #3. Use service script that would validate user and readfile() an image. Be sure to set correct Content-Type HTTP header via header() function prior to serving an image. 
For better isolation images should be put above web root directory, or protected by well written .htaccess rules - there is definitely a way of protecting files and\/or directories this way.","Q_Score":2,"Tags":"php,python,linux,perl","A_Id":1623325,"CreationDate":"2009-10-26T06:06:00.000","Title":"Restrict access to images on my website except through my own htmls","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there an easy way to reset a django database (i.e. drop all data\/tables, create new tables and create indexes) without loading fixture data afterwords? What I want to have is just an empty database because all data is loaded from another source (a kind of a post-processed backup).\nI know that this could be achieved by piping the output of the manage sql... commands to manage dbshell, but this relies on manage dbshelland is kind of hacky...\nAre there any other ways to do this?\nEdit:\nmanage reset will do it, but is there a command like reset that doesn't need the application names as parameters?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1667,"Q_Id":1645310,"Users Score":2,"Answer":"As far as I know, the fixtures (in initial_data file) are automatically loaded after manage.py syndcb and not after reset. So, if you do a manage.py reset yourapp it should not load the fixtures. Hmm?","Q_Score":1,"Tags":"python,database,django,fixtures","A_Id":1645519,"CreationDate":"2009-10-29T17:26:00.000","Title":"Django db reset without loading fixtures","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.\nGiven the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting inbetween each, are there any techniques for avoiding the increase in file size?\nNote that no temporary tables are being created\/deleted during the process: just inserts into existing tables.\nAnd to forstall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.\nIf it could help, I could preprocess in sqlite.\nEdit:\nJust to add some further information (some already listed in my comments):\n\nThe data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc\nAll processing is happening in Python: all the mdb file is doing is storing the data\nAll of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)\n\nGiven the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and\/or removing then reinstating indexes. Thanks for the suggestions.\nAny further suggestions still welcome.","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":3989,"Q_Id":1650856,"Users Score":1,"Answer":"Is your script executing a single INSERT statement per row of data? 
If so, pre-processing the data into a text file of many rows that could then be inserted with a single INSERT statement might improve the efficiency and cut down on the accumulating temporary crud that's causing it to bloat. \nYou might also make sure the INSERT is being executed without transactions. Whether or not that happens implicitly depends on the Jet version and the data interface library you're using to accomplish the task. By explicitly making sure it's off, you could improve the situation.\nAnother possibility is to drop the indexes before the insert, compact, run the insert, compact, re-instate the indexes, and run a final compact.","Q_Score":1,"Tags":"python,ms-access","A_Id":1652783,"CreationDate":"2009-10-30T16:21:00.000","Title":"MS-Access Database getting very large during inserts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.\nGiven the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting inbetween each, are there any techniques for avoiding the increase in file size?\nNote that no temporary tables are being created\/deleted during the process: just inserts into existing tables.\nAnd to forstall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.\nIf it could help, I could preprocess in sqlite.\nEdit:\nJust to add some further information (some already listed in my comments):\n\nThe data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc\nAll processing is happening in Python: all the mdb file is doing is storing the data\nAll of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)\n\nGiven the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and\/or removing then reinstating indexes. Thanks for the suggestions.\nAny further suggestions still welcome.","AnswerCount":6,"Available Count":4,"Score":-0.0333209931,"is_accepted":false,"ViewCount":3989,"Q_Id":1650856,"Users Score":-1,"Answer":"File --> Options --> Current Database -> Check below options\n* Use the Cache format that is compatible with Microsoft Access 2010 and later\n * Clear Cache on Close\nThen, you file will be saved by compacting to the original size.","Q_Score":1,"Tags":"python,ms-access","A_Id":31059064,"CreationDate":"2009-10-30T16:21:00.000","Title":"MS-Access Database getting very large during inserts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.\nGiven the 2gb size limit on mdb files, this is a bit of a concern. 
Apart from breaking the inserts into chunks and compacting inbetween each, are there any techniques for avoiding the increase in file size?\nNote that no temporary tables are being created\/deleted during the process: just inserts into existing tables.\nAnd to forstall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.\nIf it could help, I could preprocess in sqlite.\nEdit:\nJust to add some further information (some already listed in my comments):\n\nThe data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc\nAll processing is happening in Python: all the mdb file is doing is storing the data\nAll of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)\n\nGiven the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and\/or removing then reinstating indexes. Thanks for the suggestions.\nAny further suggestions still welcome.","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":3989,"Q_Id":1650856,"Users Score":3,"Answer":"A common trick, if feasible with regard to the schema and semantics of the application, is to have several MDB files with Linked tables.\nAlso, the way the insertions take place matters with regards to the way the file size balloons... For example: batched, vs. one\/few records at a time, sorted (relative to particular index(es)), number of indexes (as you mentioned readily dropping some during the insert phase)...\nTentatively a pre-processing approach with say storing of new rows to a separate linked table, heap fashion (no indexes), then sorting\/indexing this data is a minimal fashion, and \"bulk loading\" it to its real destination. Similar pre-processing in SQLite (has hinted in question) would serve the serve purpose. Keeping it \"ALL MDB\" is maybe easier (fewer languages\/processes to learn, fewer inter-op issues [hopefuly ;-)]...)\nEDIT: on why inserting records in a sorted\/bulk fashion may slow down the MDB file's growth (question from Tony Toews)\nOne of the reasons for MDB files' propensity to grow more quickly than the rate at which text\/data added to them (and their counterpart ability to be easily compacted back down) is that as information is added, some of the nodes that constitute the indexes have to be re-arranged (for overflowing \/ rebalancing etc.). Such management of the nodes seems to be implemented in a fashion which favors speed over disk space and harmony, and this approach typically serves simple applications \/ small data rather well. I do not know the specific logic in use for such management but I suspect that in several cases, node operations cause a particular node (or much of it) to be copied anew, and the old location simply being marked as free\/unused but not deleted\/compacted\/reused. I do have \"clinical\" (if only a bit outdated) evidence that by performing inserts in bulk we essentially limit the number of opportunities for such duplication to occur and hence we slow the growth.\nEDIT again: After reading and discussing things from Tony Toews and Albert Kallal it appears that a possibly more significant source of bloat, in particular in Jet Engine 4.0, is the way locking is implemented. It is therefore important to set the database in single user mode to avoid this. 
(Read Tony's and Albert's response for more details.","Q_Score":1,"Tags":"python,ms-access","A_Id":1650897,"CreationDate":"2009-10-30T16:21:00.000","Title":"MS-Access Database getting very large during inserts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.\nGiven the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting inbetween each, are there any techniques for avoiding the increase in file size?\nNote that no temporary tables are being created\/deleted during the process: just inserts into existing tables.\nAnd to forstall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.\nIf it could help, I could preprocess in sqlite.\nEdit:\nJust to add some further information (some already listed in my comments):\n\nThe data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc\nAll processing is happening in Python: all the mdb file is doing is storing the data\nAll of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)\n\nGiven the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and\/or removing then reinstating indexes. Thanks for the suggestions.\nAny further suggestions still welcome.","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":3989,"Q_Id":1650856,"Users Score":3,"Answer":"One thing to watch out for is records which are present in the append queries but aren't inserted into the data due to duplicate key values, null required fields, etc. Access will allocate the space taken by the records which aren't inserted.\nAbout the only significant thing I'm aware of is to ensure you have exclusive access to the database file. Which might be impossible if doing this during the day. I noticed a change in behavior from Jet 3.51 (used in Access 97) to Jet 4.0 (used in Access 2000) when the Access MDBs started getting a lot larger when doing record appends. I think that if the MDB is being used by multiple folks then records are inserted once per 4k page rather than as many as can be stuffed into a page. Likely because this made index insert\/update operations faster. \nNow compacting does indeed put as many records in the same 4k page as possible but that isn't of help to you.","Q_Score":1,"Tags":"python,ms-access","A_Id":1651412,"CreationDate":"2009-10-30T16:21:00.000","Title":"MS-Access Database getting very large during inserts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am after a Python module for Google App Engine that abstracts away limitations of the GQL.\nSpecifically I want to store big files (> 1MB) and retrieve all records for a model (> 1000). 
I have my own code that handles this at present but would prefer to build on existing work, if available.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":1658829,"Users Score":1,"Answer":"I'm not aware of any libraries that do that. You may want to reconsider what you're doing, at least in terms of retrieving more than 1000 results - those operations are not available because they're expensive, and needing to evade them is usually (though not always) a sign that you need to rearchitect your app to do less work at read time.","Q_Score":0,"Tags":"python,google-app-engine,gql","A_Id":1660404,"CreationDate":"2009-11-01T23:54:00.000","Title":"module to abstract limitations of GQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.\nThe main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)\nSo I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.\nIn hindsight it's not all together surprising, but it seems that this can't be good.\n...even if only a dozen or so of the queries take 1ms+\nSo I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead.","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":1620,"Q_Id":1689031,"Users Score":1,"Answer":"There is always overhead in database calls, in your case the overhead is not that bad because the application and database are on the same machine so there is no network latency but there is still a significant cost.\nWhen you make a request to the database it has to prepare to service that request by doing a number of things including: \n\nAllocating resources (memory buffers, temp tables etc) to the database server connection\/thread that will handle the request, \nDe-serializing the sql and parameters (this is necessary even on one machine as this is an inter-process request unless you are using an embeded database)\nChecking whether the query exists in the query cache if not optimise it and put it in the cache.\n\n\nNote also that if your queries are not parametrised (that is the values are not separated from the SQL) this may result in cache misses for statements that should be the same meaning that each request results in the query being analysed and optimized each time.\n\nProcess the query.\nPrepare and return the results to the client.\n\nThis is just an overview of the kinds of things the most database management systems do to process an SQL request. You incur this overhead 500 times even if the the query itself runs relatively quickly. 
Bottom line database interactions even to local database are not as cheap as you might expect.","Q_Score":3,"Tags":"python,mysql,django,overhead","A_Id":1689143,"CreationDate":"2009-11-06T17:18:00.000","Title":"Overhead of a Round-trip to MySql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.\nThe main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)\nSo I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.\nIn hindsight it's not all together surprising, but it seems that this can't be good.\n...even if only a dozen or so of the queries take 1ms+\nSo I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead.","AnswerCount":4,"Available Count":4,"Score":0.1488850336,"is_accepted":false,"ViewCount":1620,"Q_Id":1689031,"Users Score":3,"Answer":"The overhead of each queries is only part of the picture. The actual round trip time between your Django and Mysql servers is probably very small since most of your queries are coming back in less than a one millisecond. The bigger problem is that the number of queries issued to your database can quickly overwhelm it. 500 queries for a page is way to much, even 50 seems like a lot to me. If ten users view complicated pages you're now up to 5000 queries.\nThe round trip time to the database server is more of a factor when the caller is accessing the database from a Wide Area Network, where roundtrips can easily be between 20ms and 100ms.\nI would definitely look into using some kind of caching.","Q_Score":3,"Tags":"python,mysql,django,overhead","A_Id":1689146,"CreationDate":"2009-11-06T17:18:00.000","Title":"Overhead of a Round-trip to MySql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.\nThe main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)\nSo I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.\nIn hindsight it's not all together surprising, but it seems that this can't be good.\n...even if only a dozen or so of the queries take 1ms+\nSo I'm wondering, how much overhead is there on a round trip to mysql? 
django and mysql are running on the same server so there shouldn't be any networking related overhead.","AnswerCount":4,"Available Count":4,"Score":0.1973753202,"is_accepted":false,"ViewCount":1620,"Q_Id":1689031,"Users Score":4,"Answer":"Just because you are using an ORM doesn't mean that you shouldn't do performance tuning. \nI had - like you - a home page of one of my applications that had low performance. I saw that I was doing hundreds of queries to display that page. I went looking at my code and realized that with some careful use of select_related() my queries would bring more of the data I needed - I went from hundreds of queries to tens.\nYou can also run a SQL profiler and see if there aren't indices that would help your most common queries - you know, standard database stuff.\nCaching is also your friend, I would think. If a lot of a page is not changing, do you need to query the database every single time?\nIf all else fails, remember: the ORM is great, and yes - you should try to use it because it is the Django philosophy; but you are not married to it.\nIf you really have a usecase where studying and tuning the ORM navigation didn't help, if you are sure that you could do it much better with a standard query: use raw sql for that case.","Q_Score":3,"Tags":"python,mysql,django,overhead","A_Id":1689452,"CreationDate":"2009-11-06T17:18:00.000","Title":"Overhead of a Round-trip to MySql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.\nThe main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)\nSo I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.\nIn hindsight it's not all together surprising, but it seems that this can't be good.\n...even if only a dozen or so of the queries take 1ms+\nSo I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead.","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1620,"Q_Id":1689031,"Users Score":2,"Answer":"There are some ways to reduce the query volume.\n\nUse .filter() and .all() to get a bunch of things; pick and choose in the view function (or template via {%if%}). Python can process a batch of rows faster than MySQL. \n\"But I could send too much to the template\". True, but you'll execute fewer SQL requests. Measure to see which is better.\nThis is what you used to do when you wrote SQL. It's not wrong -- it doesn't break the ORM -- but it optimizes the underlying DB work and puts the processing into the view function and the template.\nAvoid query navigation in the template. When you do {{foo.bar.baz.quux}}, SQL is used to get the bar associated with foo, then the baz associated with the bar, then the quux associated with baz. 
You may be able to reduce this query business with some careful .filter() and Python processing to assemble a useful tuple in the view function.\nAgain, this was something you used to do when you hand-crafted SQL. In this case, you gather larger batches of ORM-managed objects in the view function and do your filtering in Python instead of via a lot of individual ORM requests. \nThis doesn't break the ORM. It changes the usage profile from lots of little queries to a few bigger queries.","Q_Score":3,"Tags":"python,mysql,django,overhead","A_Id":1689330,"CreationDate":"2009-11-06T17:18:00.000","Title":"Overhead of a Round-trip to MySql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})\nAs this is a client-side app, I don't want to use a database server, I just want the info stored into files.\nI want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game.\nI am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key\/value pairs.\nIs my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":666,"Q_Id":1697153,"Users Score":2,"Answer":"BerkeleyDB is good, also look at the *DBM incarnations (e.g. GDBM). The big question though is: for what do you need to search? Do you need to search by that URL, by a range of URLs or the dates you list? \nIt is also quite possible to keep groups of records as simple files in the local filesystem, grouped by dates or search terms, &c.\nAnswering the \"search\" question is the biggest start.\nAs for the key\/value thingy, what you need to ensure is that the KEY itself is well defined as for your lookups. If for example you need to lookup by dates sometimes and others by title, you will need to maintain a \"record\" row, and then possibly 2 or more \"index\" rows making reference to the original record. You can model nearly anything in a key\/value store.","Q_Score":5,"Tags":"c++,python,database,persistence","A_Id":1697185,"CreationDate":"2009-11-08T17:01:00.000","Title":"Which database should I use to store records, and how should I use it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})\nAs this is a client-side app, I don't want to use a database server, I just want the info stored into files.\nI want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game.\nI am seeing two possibilities: sqlite and BerkeleyDB. 
As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key\/value pairs.\nIs my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":666,"Q_Id":1697153,"Users Score":0,"Answer":"Ok, so you say just storing the data..? You really only need a DB for retrieval, lookup, summarising, etc. So, for storing, just use simple text files and append lines. Compress the data if you need to, use delims between fields - just about any language will be able to read such files. If you do want to retrieve, then focus on your retrieval needs, by date, by key, which keys, etc. If you want simple client side, then you need simple client db. SQLite is far easier than BDB, but look at things like Sybase Advantage (very fast and free for local clients but not open-source) or VistaDB or firebird... but all will require local config\/setup\/maintenance. If you go local XML for a 'sizable' number of records will give you some unnecessarily bloated file-sizes..!","Q_Score":5,"Tags":"c++,python,database,persistence","A_Id":1698109,"CreationDate":"2009-11-08T17:01:00.000","Title":"Which database should I use to store records, and how should I use it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})\nAs this is a client-side app, I don't want to use a database server, I just want the info stored into files.\nI want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game.\nI am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key\/value pairs.\nIs my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution?","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":666,"Q_Id":1697153,"Users Score":2,"Answer":"Personally I would use sqlite anyway. It has always just worked for me (and for others I work with). 
When your app grows and you suddenly do want to do something a little more sophisticated, you won't have to rewrite.\nOn the other hand, I've seen various comments on the Python dev list about Berkely DB that suggest it's less than wonderful; you only get dict-style access (what if you want to select certain date ranges or titles instead of URLs); and it's not even in Python 3's standard set of libraries.","Q_Score":5,"Tags":"c++,python,database,persistence","A_Id":1697239,"CreationDate":"2009-11-08T17:01:00.000","Title":"Which database should I use to store records, and how should I use it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there database testing tools for python (like sqlunit)? I want to test the DAL that is built using sqlalchemy","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":601,"Q_Id":1719279,"Users Score":4,"Answer":"Follow the design pattern that Django uses.\n\nCreate a disposable copy of the database. Use SQLite3 in-memory, for example.\nCreate the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.\nLoad the test data fixture into the database. \nRun your unit test case in a database with a known, defined state.\nDispose of the database.\n\nIf you use SQLite3 in-memory, this procedure can be reasonably fast.","Q_Score":4,"Tags":"python,database,testing,sqlalchemy","A_Id":1719347,"CreationDate":"2009-11-12T01:27:00.000","Title":"Are there database testing tools for python (like sqlunit)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"But, they were unable to be found!?\nHow do I install both of them?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":730,"Q_Id":1720867,"Users Score":2,"Answer":"Have you installed python-mysqldb? If not install it using apt-get install python-mysqldb. And how are you importing mysql.Is it import MySQLdb? Python is case sensitive.","Q_Score":1,"Tags":"python,linux,unix,installation","A_Id":1720904,"CreationDate":"2009-11-12T09:01:00.000","Title":"I just installed a Ubuntu Hardy server. In Python, I tried to import _mysql and MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an application in Python with Postgresql 8.3 which runs on several machines on a local network.\nAll machines\n1) fetch huge amount of data from the database server ( lets say database gets 100 different queries from a machine with in 2 seconds time) and there are about 10 or 11 machines doing that.\n2) After processing data machines have to update certain tables (about 3 or 4 update\/insert queries per machine per 1.5 seconds).\nWhat I have noticed is that database goes down some times by giving server aborted process abnormally or freezes the server machine (requiring a hard reset). \nBy the way all machines maintain a constant connection to the database at all times i.e. 
once a connection is made using Psycopg2 (in Python) it remains active until processing finishes (which could last hours).\nWhat's the best \/ optimal way for handling large number of connections in the application, should they be destroyed after each query ?\nSecondly should I increase max_connections ?\nWould greatly appreciate any advice on this matter.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1607,"Q_Id":1728350,"Users Score":1,"Answer":"This sounds a bit like your DB server might have some problems, especially if your database server literally crashes. I'd start by trying to figure out from logs what is the root cause of the problems. It could be something like running out of memory, but it could also happen because of faulty hardware.\nIf you're opening all the connections at start and keep them open, max_connections isn't the culprit. The way you're handling the DB connections should be just fine and your server shouldn't do that no matter how it's configured.","Q_Score":2,"Tags":"python,linux,performance,postgresql,out-of-memory","A_Id":1729623,"CreationDate":"2009-11-13T10:18:00.000","Title":"Optimal \/ best pratice to maintain continuos connection between Python and Postgresql using Psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i recently switched to mac. first and foremost i installed xampp.\nthen for django-python-mysql connectivity, i \"somehow\" ended up installing a seperate MySQL.\nnow the seperate mysql installation is active all the time and the Xampp one doesnt switch on unless i kill the other one.\nwhat i wanted to know is it possible to make xampp work with the seperate mysql installation? because that way i wouldnt have to tinker around with the mysqlDB adapter for python?\nany help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":1734918,"Users Score":1,"Answer":"You could change the listening port of one of the installations and they shouldn't conflict anymore with each other.\nUpdate: You need to find the mysql configuration file my.cnf of the server which should get a new port (the one from xampp should be somewhere in the xampp folder). Find the line port=3306 in the [mysqld] section. You could change it to something like 3307.\nYou will also need to specify the new port when connecting to the server from your applications.","Q_Score":0,"Tags":"python,mysql,django,macos,xampp","A_Id":1734939,"CreationDate":"2009-11-14T17:22:00.000","Title":"2 mysql instances in MAC","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I load data from an Excel sheet into my Django application? I'm using database PosgreSQL as the database.\nI want to do this programmatically. A client wants to load two different lists onto the website weekly and they don't want to do it in the admin section, they just want the lists loaded from an Excel sheet. 
Please help because I'm kind of new here.","AnswerCount":9,"Available Count":1,"Score":-0.022218565,"is_accepted":false,"ViewCount":7027,"Q_Id":1747501,"Users Score":-1,"Answer":"Just started using XLRD and it looks very easy and simple to use.\nBeware that it does not support Excel 2007 yet, so keep in mind to save your excel at 2003 format.","Q_Score":3,"Tags":"python,django,excel,postgresql","A_Id":11293612,"CreationDate":"2009-11-17T09:07:00.000","Title":"Getting data from an Excel sheet","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Greetings, everybody.\nI'm trying to import the following libraries in python: cx_Oracle and kinterbasdb.\nBut, when I try, I get a very similar message error.\n*for cx_Oracle:\nTraceback (most recent call last):\n File \"\", line 1, in \nImportError: DLL load failed: N\u00e3o foi poss\u00edvel encontrar o procedimento especificado.\n(translation: It was not possible to find the specified procedure)\n*for kinterbasdb:\nTraceback (most recent call last):\n File \"C:\\\", line 1, in \n File \"c:\\Python26\\Lib\\site-packages\\kinterbasdb__init__.py\", line 119, in \n import _kinterbasdb as _k\nImportError: DLL load failed: N\u00e3o foi poss\u00edvel encontrar o m\u00f3dulo especificado.\n(translation: It was not possible to find the specified procedure)\nI'm using python 2.6.4 in windows XP. cx_Oracle's version is 5.0.2. kinterbasdb's version is 3.3.0.\nEdit: I've solved it for cx_Oracle, it was a wrong version problem. But I believe I'm using the correct version, and I downloaded it from the Firebird site ( kinterbasdb-3.3.0.win32-setup-py2.6.exe ). Still need assistance with this, please.\nCan anyone lend me a hand here?\nMany Thanks\nDante","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":767,"Q_Id":1799475,"Users Score":-1,"Answer":"oracle is a complete pain. i don't know the details for windows, but for unix you need ORACLE_HOME and LD_LIBRARY_PATH to both be defined before cx_oracle will work. in windows this would be your environment variables, i guess. so check those.\nalso, check that they are defined in the environment in which the program runs (again, i don't know windows specific details, but in unix it's possible for everything to work when you run it from your account by hand, but still not work when run as a batch job because the environment is different).","Q_Score":2,"Tags":"python,cx-oracle,kinterbasdb","A_Id":1803407,"CreationDate":"2009-11-25T19:43:00.000","Title":"importing cx_Oracle and kinterbasdb returns error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a financial app and it seems my floats in sqlite are floating around. Sometimes a 4.0 will be a 4.000009, and a 6.0 will be a 6.00006, things like that. How can I make these more exact and not affect my financial calculations?\nValues are coming from Python if that matters. 
Not sure which area the messed up numbers are coming from.","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":3672,"Q_Id":1801307,"Users Score":1,"Answer":"Most people would probably use Decimal for this, however if this doesn't map onto a database type you may take a performance hit.\nIf performance is important you might want to consider using Integers to represent an appropriate currency unit - often cents or tenths of cents is ok.\nThere should be business rules about how amounts are to be rounded in various situations and you should have tests covering each scenario.","Q_Score":4,"Tags":"python,sqlite,floating-point","A_Id":1801521,"CreationDate":"2009-11-26T02:55:00.000","Title":"How to deal with rounding errors of floating types for financial calculations in Python SQLite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded mysqlDb, and while installing it I am getting errors like:\n\nC:\\Documents and Settings\\naresh\\Desktop\\MySQL-python-1.2.3c1>setup.py build\nTraceback (most recent call last):\nFile \"C:\\Documents and Settings\\naresh\\Desktop\\MySQL-python-1.2.3c1 \n\\setup.py\",line15, in \nmetadata, options = get_config()\nFile \"C:\\Documents and Settings\\naresh\\Desktop\\MySQL-python-1.2.3c1\n\\setup_windows.py\", line 7, in get_config\nserverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key'])\nWindowsError: [Error 2] The system cannot find the file specified\n\nWhat can I do to address this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6706,"Q_Id":1803233,"Users Score":0,"Answer":"You need to fire up regedit and make \nHKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Python\\PythonCore\\2.7\\InstallPath\nand HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Python\\PythonCore\\2.7\\InstallPath\\InstallGroup\nto look like HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.7\\InstallPath\\InstallGroup.","Q_Score":7,"Tags":"python,mysql","A_Id":6616901,"CreationDate":"2009-11-26T11:46:00.000","Title":"How to install mysql connector","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.\nDo you know if it's possible to keep the NHibernate stuff and make it work with Python ? I am under the impression that NHibernate is glue code between C# and the DB, so can not be exported to other languages.\nAlternative question: can somebody recommend a good python compatible replacement of NHibernate ? The backend DB is Oracle something.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1382,"Q_Id":1809201,"Users Score":0,"Answer":"Check out Django. 
They have a nice ORM and I believe it has tools to attempt a reverse-engineer from the DB schema.","Q_Score":0,"Tags":"python,nhibernate,orm","A_Id":1809219,"CreationDate":"2009-11-27T14:50:00.000","Title":"NHibernate and python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.\nDo you know if it's possible to keep the NHibernate stuff and make it work with Python ? I am under the impression that NHibernate is glue code between C# and the DB, so can not be exported to other languages.\nAlternative question: can somebody recommend a good python compatible replacement of NHibernate ? The backend DB is Oracle something.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":1382,"Q_Id":1809201,"Users Score":2,"Answer":"What about running your project under Mono on Linux? Mono seems to support NHibernate, which means you may be able to get away with out rewriting large chunks of your application.\nAlso, if you really wanted to get Python in on the action, you could use IronPython along with Mono.","Q_Score":0,"Tags":"python,nhibernate,orm","A_Id":1809238,"CreationDate":"2009-11-27T14:50:00.000","Title":"NHibernate and python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.\nDo you know if it's possible to keep the NHibernate stuff and make it work with Python ? I am under the impression that NHibernate is glue code between C# and the DB, so can not be exported to other languages.\nAlternative question: can somebody recommend a good python compatible replacement of NHibernate ? The backend DB is Oracle something.","AnswerCount":4,"Available Count":3,"Score":0.2449186624,"is_accepted":false,"ViewCount":1382,"Q_Id":1809201,"Users Score":5,"Answer":"NHibernate is not specific to C#, but it is specific to .NET.\nIronPython is a .NET language from which you could use NHibernate.\n.NET and NHibernate can run on Linux through Mono. 
I'm not sure how good Mono's support is for WPF.\nI'm not sure if IronPython runs on Linux, but that would seem to be the closest thing to what you are looking for.\nThere is a Java version of NHibernate (said tongue in cheek) called Hibernate and there are integration points between Java and Python where Linux is very much supported.\nI know the Python community has its own ORMs, but as far as I'm aware, those options are not as mature and feature rich as Hibernate\/NHibernate.\nI would imagine that almost all of the options available to you would support Oracle.","Q_Score":0,"Tags":"python,nhibernate,orm","A_Id":1809266,"CreationDate":"2009-11-27T14:50:00.000","Title":"NHibernate and python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently created a script that parses several web proxy logs into a tidy sqlite3 db file that is working great for me... with one snag. the file size. I have been pressed to use this format (a sqlite3 db) and python handles it natively like a champ, so my question is this... what is the best form of string compression that I can use for db entries when file size is the sole concern. zlib? base-n? Klingon?\nAny advice would help me loads, again just string compression for characters that are compliant for URLs.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2957,"Q_Id":1829256,"Users Score":0,"Answer":"what sort of parsing do you do before you put it in the database? I get the impression that it is fairly simple with a single table holding each entry - if not then my apologies. \nCompression is all about removing duplication, and in a log file most of the duplication is between entries rather than within each entry so compressing each entry individually is not going to be a huge win.\nThis is off the top of my head so feel free to shoot it down in flames, but I would consider breaking the table into a set of smaller tables holding the individual parts of the entry. A log entry would then mostly consist of a timestamp (as DATE type rather than a string) plus a set of indexes into other tables (e.g. requesting IP, request type, requested URL, browser type etc.)\nThis would have a trade-off of course, since it would make the database a lot more complex to maintain, but on the other hand it would enable meaningful queries such as \"show me all the unique IPs that requested page X in the last week\".","Q_Score":0,"Tags":"python,sqlite,compression","A_Id":1829601,"CreationDate":"2009-12-01T22:02:00.000","Title":"Python 3: Best string compression method to minimize the size of a sqlite3 db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently created a script that parses several web proxy logs into a tidy sqlite3 db file that is working great for me... with one snag. the file size. I have been pressed to use this format (a sqlite3 db) and python handles it natively like a champ, so my question is this... what is the best form of string compression that I can use for db entries when file size is the sole concern. zlib? base-n? 
Klingon?\nAny advice would help me loads, again just string compression for characters that are compliant for URLs.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2957,"Q_Id":1829256,"Users Score":0,"Answer":"Instead of inserting compression\/decompression code into your program, you could store the table itself on a compressed drive.","Q_Score":0,"Tags":"python,sqlite,compression","A_Id":1832688,"CreationDate":"2009-12-01T22:02:00.000","Title":"Python 3: Best string compression method to minimize the size of a sqlite3 db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the sqlite3 module in Python 2.6.4 to store a datetime in a SQLite database. Inserting it is very easy, because sqlite automatically converts the date to a string. The problem is, when reading it it comes back as a string, but I need to reconstruct the original datetime object. How do I do this?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":58748,"Q_Id":1829872,"Users Score":1,"Answer":"Note: In Python3, I had to change the SQL to something like:\nSELECT jobid, startedTime as \"st [timestamp]\" FROM job\n(I had to explicitly name the column.)","Q_Score":84,"Tags":"python,datetime,sqlite","A_Id":48429766,"CreationDate":"2009-12-02T00:15:00.000","Title":"How to read datetime back from sqlite as a datetime instead of string in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use Python-MySQLDB library on Mac so I have compiled the source code to get the _mysql.so under Mac10.5 with my Intel iMac (i386)\nThis _mysql.co works in 2 of my iMacs and another Macbook. But that's it, it doesn't work in any other Macs.\nDoes this mean some machine specific info got compiled into the file?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":106,"Q_Id":1831979,"Users Score":2,"Answer":"If you've only built one architecture (i386 \/ PPC) then it won't work on Macs with the opposite architecture. Are the machines that don't work PPC machines, by any chance?\nSometimes build configurations are set up to build only the current architecture by default - I haven't build Python-MySQLDB so I'm not sure if this is the case here, but it's worth checking.\nYou can find out which architectures have been built with the 'file' command in Terminal.\n(Incidentally do you mean \".so\"? I'm not familiar with \".co\" files.)","Q_Score":0,"Tags":"python,compilation,mysql","A_Id":1832065,"CreationDate":"2009-12-02T10:21:00.000","Title":"Why _mysql.co that compiled on one Mac doesn't work on another?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does there exist, or is there an intention to create, a universal database frontend for Python like Perl's DBI? 
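A minimal round-trip sketch of the column-name trick from the sqlite datetime answer a little earlier, assuming the default adapters and converters that ship with the sqlite3 module; the job table and its columns are invented for the example:

import sqlite3, datetime

# PARSE_DECLTYPES converts columns declared as "timestamp" on the way out;
# PARSE_COLNAMES honours the 'as "st [timestamp]"' alias used in the query below.
conn = sqlite3.connect(":memory:",
                       detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES)
cur = conn.cursor()
cur.execute("CREATE TABLE job (jobid INTEGER, startedTime timestamp)")
cur.execute("INSERT INTO job VALUES (?, ?)", (1, datetime.datetime.now()))
cur.execute('SELECT jobid, startedTime AS "st [timestamp]" FROM job')
row = cur.fetchone()
print(type(row[1]))   # a datetime.datetime object, not a string

Either flag alone is enough for this query; declaring the column as "timestamp" covers plain SELECTs, while the aliased column name covers expressions and joins where the declared type is lost.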
I am aware of Python's DB-API, but all the separate packages are leaving me somewhat aggravated.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1158,"Q_Id":1836061,"Users Score":2,"Answer":"Well...DBAPI is that frontend:\n\nThis API has been defined to encourage similarity between the\n Python modules that are used to access databases. By doing this,\n we hope to achieve a consistency leading to more easily understood\n modules, code that is generally more portable across databases,\n and a broader reach of database connectivity from Python.\n\nIt has always worked great for me atleast, care to elaborate the problems you are facing?","Q_Score":1,"Tags":"python,database","A_Id":1836125,"CreationDate":"2009-12-02T21:49:00.000","Title":"Python universal database interface?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an Excel spreadsheet with calculations I would like to use in a Django web application. I do not need to present the spreadsheet as it appears in Excel. I only want to use the formulae embedded in it. What is the best way to do this?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1592,"Q_Id":1883098,"Users Score":0,"Answer":"You need to use Excel to calculate the results? I mean, maybe you could run the Excel sheet from OpenOffice and use a pyUNO macro, which is somehow \"native\" python. \nA different approach will be to create a macro to generate some more friendly code to python, if you want Excel to perform the calculation is easy you end up with a very slow process.","Q_Score":2,"Tags":"python,django,excel","A_Id":1937261,"CreationDate":"2009-12-10T18:39:00.000","Title":"Importing Excel sheets, including formulae, into Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"PHP provides mysql_connect() and mysql_pconnect() which allow creating both temporary and persistent database connections.\nIs there a similar functionality in Python? The environment on which this will be used is lighttpd server with FastCGI.\nThank you!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3649,"Q_Id":1895089,"Users Score":0,"Answer":"Note: Persistent connections can have a very negative effect on your system performance. If you have a large number of web server processes all holding persistent connections to your DB server you may exhaust the DB server's limit on connections. This is one of those areas where you need to test it under heavy simulated loads to make sure you won't hit the wall at 100MPH.","Q_Score":2,"Tags":"python,mysql,web-services","A_Id":1895731,"CreationDate":"2009-12-12T23:49:00.000","Title":"Persistent MySQL connections in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using MySQLdb module of python on FC11 machine. Here, i have an issue. 
I have the following implementation for one of our requirement:\n\nconnect to mysqldb and get DB handle,open a cursor, execute a delete statement,commit and then close the cursor.\nAgain using the DB handle above, iam performing a \"select\" statement one some different table using the cursor way as described above.\n\nI was able to delete few records using Step1, but step2 select is not working. It simply gives no records for step2 though there are some records available under DB.\nBut, when i comment step1 and execute step2, i could see that step2 works fine. Why this is so?\nThough there are records, why the above sequence is failing to do so?\nAny ideas would be appreciated.\nThanks!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":626,"Q_Id":1922623,"Users Score":0,"Answer":"With no code, I can only make a guess: try not closing the cursor until you are done with that connection. I think that calling cursor() again after calling cursor.close() will just give you a reference to the same cursor, which can no longer be used for queries.\nI am not 100% sure if that is the intended behavior, but I haven't seen any MySQLDB examples of cursors being opened and closed within the same connection.","Q_Score":0,"Tags":"python,mysql","A_Id":1922710,"CreationDate":"2009-12-17T15:42:00.000","Title":"MYSQLDB python module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using MySQLdb module of python on FC11 machine. Here, i have an issue. I have the following implementation for one of our requirement:\n\nconnect to mysqldb and get DB handle,open a cursor, execute a delete statement,commit and then close the cursor.\nAgain using the DB handle above, iam performing a \"select\" statement one some different table using the cursor way as described above.\n\nI was able to delete few records using Step1, but step2 select is not working. It simply gives no records for step2 though there are some records available under DB.\nBut, when i comment step1 and execute step2, i could see that step2 works fine. 
Why this is so?\nThough there are records, why the above sequence is failing to do so?\nAny ideas would be appreciated.\nThanks!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":626,"Q_Id":1922623,"Users Score":0,"Answer":"It sounds as though the first cursor is being returned back to the second step.","Q_Score":0,"Tags":"python,mysql","A_Id":1924766,"CreationDate":"2009-12-17T15:42:00.000","Title":"MYSQLDB python module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have both, django and mysql set to work with UTF-8.\nMy base.html set utf-8 in head.\nrow on my db :\n\n\n+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+\n| id | psn_id | name | publisher | developer | release_date |\n+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+\n| 1 | 10945- | \u307e\u3044\u306b\u3061\u3044\u3063\u3057\u3087 | Sony Computer Entertainment | Sony Computer Entertainment | 2006-11-11 00:00:00 |\n+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+\n\n\nthe source code generated looks like :\n\nまいにちいっしょ\n\nand this is wat is displayed :\/\nwhy they are not showing the chars the way in this database?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":312,"Q_Id":1928087,"Users Score":0,"Answer":"As Dominic has said, the generated HTML source code is correct (these are your Japanese characters translated into HTML entities), but we're not sure, if you see the same code rendered in the page (in this case, you have probably set content-type to \"text\/plain\" instead of \"text\/html\" - do you use render_to_response() or HttpResponse() in the corresponding view.py method?), or your Japanese is rendered correctly but you just don't like the entities in the source code.\nSince we don't know your Django settings and how do you render and return the page, it's difficult to provide you the solution.","Q_Score":0,"Tags":"python,django,unicode","A_Id":1931067,"CreationDate":"2009-12-18T13:04:00.000","Title":"django + mysql + UTF-8 - Chars are not displayed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'd like to get busy with a winter programming project and am contemplating writing an online word game (with a server load of up to, say, 500 users simultaneously). I would prefer it to be platform independent. I intend to use Python, which I have some experience with. For user data storage, after previous experience with MySQL, a flat database design would be preferable but not essential. Okay, now the questions:\nIs it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?\nAre there any great advantages in using Python 3 for my particular project? 
Would I be better off looking at using other languages instead, such as Erlang?\nIs there any great advantage in using a relational database within a game server?\nAre there any open source game servers' source code out there that are worthy of study before starting?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":1430,"Q_Id":1937286,"Users Score":2,"Answer":"Is it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?\n\ndepends on which modules do you want to use. twisted is a \"swiss knife\" for the network programming and could be a choice for your project but unfortunately it does not support python3 yet.\n\nAre there any great advantages in using Python 3 for my particular project? Would I be better off looking at using other languages instead, such as Erlang?\n\nonly you can answer your question because only you know your knowledge. Using python3 instead of python2 you get all the advantages of new features the python3 brings with him and the disadvantage that non all libraries support python3 at the moment.\nnote that python2.6 should implements most (if not all) of the features of python3 while it should be compatible with python2.5 but i did not investigated a lot in this way.\nboth python and erlang are candidates for your needs, use what you know best and what you like most.\n\nIs there any great advantage in using a relational database within a game server?\n\nyou get all the advantages and disadvantage of having a ACID storage system.","Q_Score":2,"Tags":"python","A_Id":1937342,"CreationDate":"2009-12-20T22:18:00.000","Title":"Word game server in Python, design pros and cons?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to get busy with a winter programming project and am contemplating writing an online word game (with a server load of up to, say, 500 users simultaneously). I would prefer it to be platform independent. I intend to use Python, which I have some experience with. For user data storage, after previous experience with MySQL, a flat database design would be preferable but not essential. Okay, now the questions:\nIs it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?\nAre there any great advantages in using Python 3 for my particular project? Would I be better off looking at using other languages instead, such as Erlang?\nIs there any great advantage in using a relational database within a game server?\nAre there any open source game servers' source code out there that are worthy of study before starting?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":1430,"Q_Id":1937286,"Users Score":1,"Answer":"Related to your database choice, I'd seriously look at using Postgres instead of MySQL. In my experiance with the two Postgres has shown to be faster on most write operations while MySQL is slightly faster on the reads. \nHowever, MySQL also has many issues some of which are:\n\nLive backups are difficult at best, and impossible at worse, mostly you have to take the db offline or let it lock during the backups.\nIn the event of having to kill the server forcefully, either by kill -9, or due to power outage, postgres generally has better resilience to table corruption. 
\nFull support for ACID compliance, and other relational db features that support for, again imho and experiance, are weak or lacking in MySQL.\n\nYou can use a library such as SQLAlchemy to abstract away the db access though. This would let you test against both to see which you prefer dealing with.\nAs far as the language choice. \nIf you go with Python:\n\nMore librarys support Python 2.x rather than Python 3.x at this time, so I'd likely stick to 2.x. \nBeware multi-threading gotchas with Python's GIL. Utilizing Twisted can get around this. \n\nIf you go with Erlang:\n\nErlang's syntax and idioms can be very foreign to someone who's never used it.\nIf well written it not only scales, it SCALES. \nErlang has it's own highly concurrent web server named Yaws.\nErlang also has it's own highly scalable DBMS named Mnesia (Note it's not relational).\n\nSo I guess your choices could be really boiled down to how much you're willing to learn to do this project.","Q_Score":2,"Tags":"python","A_Id":1937370,"CreationDate":"2009-12-20T22:18:00.000","Title":"Word game server in Python, design pros and cons?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I worked on a PHP project earlier where prepared statements made the SELECT queries 20% faster.\nI'm wondering if it works on Python? I can't seem to find anything that specifically says it does or does NOT.","AnswerCount":7,"Available Count":1,"Score":0.1418931938,"is_accepted":false,"ViewCount":52874,"Q_Id":1947750,"Users Score":5,"Answer":"Using the SQL Interface as suggested by Amit can work if you're only concerned about performance. However, you then lose the protection against SQL injection that a native Python support for prepared statements could bring. Python 3 has modules that provide prepared statement support for PostgreSQL. For MySQL, \"oursql\" seems to provide true prepared statement support (not faked as in the other modules).","Q_Score":51,"Tags":"python,mysql,prepared-statement","A_Id":2539467,"CreationDate":"2009-12-22T17:06:00.000","Title":"Does Python support MySQL prepared statements?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python 2.6 + xlwt module to generate excel files.\nIs it possible to include an autofilter in the first row with xlwt or pyExcelerator or anything else besides COM? 
\nThanks","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":6820,"Q_Id":1948224,"Users Score":2,"Answer":"I have the same issue, running a linux server.\ni'm going to check creating an ODS or XLSX file with auto-filter by other means, and then convert them with a libreoffice command line to \"xls\".","Q_Score":6,"Tags":"python,excel,xlwt,pyexcelerator","A_Id":20838509,"CreationDate":"2009-12-22T18:21:00.000","Title":"How to create an excel file with an autofilter in the first row with xlwt?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a documentation system for a PHP project and I wanted it to be able to integrate external documentation (use cases, project scope etc.) with the documentation generated from code comments. It seems that phpDocumentor has exactly the right feature set, but external documentation must be written in DocBook which is too complex for our team.\nIf it were in python, sphinx would be just about perfect for this job (ReST is definitely simpler than docbook). Is there any way I can integrate external ReST documentation with the docs extracted from phpdoc? Should I just separate the external documentation (eg. use ReST for external and phpdoc for internal)? Or do you have a better suggestion for managing the external documentation?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":853,"Q_Id":1957787,"Users Score":2,"Answer":"You can convert ReST to DocBook using pandoc.","Q_Score":0,"Tags":"php,phpdoc,docbook,restructuredtext,python-sphinx","A_Id":2035342,"CreationDate":"2009-12-24T10:37:00.000","Title":"External documentation for PHP, no DocBook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I want to be able to test my application against a empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to a empty (not saved anywhere, or in saved to \/tmp) MySQL database?\nMy application is in Python, and I'm using unittest on Ubuntu 9.10.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":287,"Q_Id":1960155,"Users Score":0,"Answer":"You can try the Blackhole and Memory table types in MySQL.","Q_Score":2,"Tags":"python,mysql,unit-testing,ubuntu","A_Id":1960164,"CreationDate":"2009-12-25T00:25:00.000","Title":"Start a \"throwaway\" MySQL session for testing code?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I want to be able to test my application against a empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to a empty (not saved anywhere, or in saved to \/tmp) MySQL database?\nMy application is in Python, and I'm using unittest on Ubuntu 9.10.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":287,"Q_Id":1960155,"Users Score":1,"Answer":"--datadir for just the data or 
--basedir","Q_Score":2,"Tags":"python,mysql,unit-testing,ubuntu","A_Id":1960160,"CreationDate":"2009-12-25T00:25:00.000","Title":"Start a \"throwaway\" MySQL session for testing code?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking a way to automate schema migration for such databases like MongoDB or CouchDB.\nPreferably, this instument should be written in python, but any other language is ok.","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":5990,"Q_Id":1961013,"Users Score":19,"Answer":"Since a nosql database can contain huge amounts of data you can not migrate it in the regular rdbms sence. Actually you can't do it for rdbms as well as soon as your data passes some size threshold. It is impractical to bring your site down for a day to add a field to an existing table, and so with rdbms you end up doing ugly patches like adding new tables just for the field and doing joins to get to the data.\nIn nosql world you can do several things.\n\nAs others suggested you can write your code so that it will handle different 'versions' of the possible schema. this is usually simpler then it looks. Many kinds of schema changes are trivial to code around. for example if you want to add a new field to the schema, you just add it to all new records and it will be empty on the all old records (you will not get \"field doesn't exist\" errors or anything ;). if you need a 'default' value for the field in the old records it is too trivially done in code.\nAnother option and actually the only sane option going forward with non-trivial schema changes like field renames and structural changes is to store schema_version in EACH record, and to have code to migrate data from any version to the next on READ. i.e. if your current schema version is 10 and you read a record from the database with the version of 7, then your db layer should call migrate_8, migrate_9, and migrate_10. This way the data that is accessed will be gradually migrated to the new version. and if it is not accessed, then who cares which version is it;)","Q_Score":27,"Tags":"python,mongodb,couchdb,database,nosql","A_Id":3007620,"CreationDate":"2009-12-25T11:23:00.000","Title":"Are there any tools for schema migration for NoSQL databases?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking a way to automate schema migration for such databases like MongoDB or CouchDB.\nPreferably, this instument should be written in python, but any other language is ok.","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":5990,"Q_Id":1961013,"Users Score":2,"Answer":"One of the supposed benefits of these databases is that they are schemaless, and therefore don't need schema migration tools. 
Instead, you write your data handling code to deal with the variety of data stored in the db.","Q_Score":27,"Tags":"python,mongodb,couchdb,database,nosql","A_Id":1961090,"CreationDate":"2009-12-25T11:23:00.000","Title":"Are there any tools for schema migration for NoSQL databases?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking a way to automate schema migration for such databases like MongoDB or CouchDB.\nPreferably, this instument should be written in python, but any other language is ok.","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":5990,"Q_Id":1961013,"Users Score":2,"Answer":"If your data are sufficiently big, you will probably find that you cannot EVER migrate the data, or that it is not beneficial to do so. This means that when you do a schema change, the code needs to continue to be backwards compatible with the old formats forever.\nOf course if your data \"age\" and eventually expire anyway, this can do schema migration for you - simply change the format for newly added data, then wait for all data in the old format to expire - you can then retire the backward-compatibility code.","Q_Score":27,"Tags":"python,mongodb,couchdb,database,nosql","A_Id":1966375,"CreationDate":"2009-12-25T11:23:00.000","Title":"Are there any tools for schema migration for NoSQL databases?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking a way to automate schema migration for such databases like MongoDB or CouchDB.\nPreferably, this instument should be written in python, but any other language is ok.","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":5990,"Q_Id":1961013,"Users Score":1,"Answer":"When a project has a need for a schema migration in regards to a NoSQL database makes me think that you are still thinking in a Relational database manner, but using a NoSQL database.\nIf anybody is going to start working with NoSQL databases, you need to realize that most of the 'rules' for a RDBMS (i.e. MySQL) need to go out the window too. Things like strict schemas, normalization, using many relationships between objects. NoSQL exists to solve problems that don't need all the extra 'features' provided by a RDBMS.\nI would urge you to write your code in a manner that doesn't expect or need a hard schema for your NoSQL database - you should support an old schema and convert a document record on the fly when you access if if you really want more schema fields on that record.\nPlease keep in mind that NoSQL storage works best when you think and design differently compared to when using a RDBMS","Q_Score":27,"Tags":"python,mongodb,couchdb,database,nosql","A_Id":3007685,"CreationDate":"2009-12-25T11:23:00.000","Title":"Are there any tools for schema migration for NoSQL databases?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python --> SQLite --> ASP.NET C#\nI am looking for an in memory database application that does not have to write the data it receives to disc. 
Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.\nI want to stay away from writing to disc as it takes too long. The data is not important, if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.\nNext, another ASP.NET server must be able to connect to this in memory database via TCP\/IP at regular intervals, say once every second, or 10 seconds. It has to pull this data, and this will in turn update on a website that displays \"live\" game data.\nI'm looking at SQlite, and wondering, is this the right tool for the job, anyone have any suggestions?\nThanks!!!","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":309,"Q_Id":1962130,"Users Score":1,"Answer":"This sounds like a premature optimization (apologizes if you've already done the profiling). What I would suggest is go ahead and write the system in the simplest, cleanest way, but put a bit of abstraction around the database bits so they can easily by swapped out. Then profile it and find your bottleneck.\nIf it turns out it is the database, optimize the database in the usual way (indexes, query optimizations, etc...). If its still too slow, most databases support an in-memory table format. Or you can mount a RAM disk and mount individual tables or the whole database on it.","Q_Score":0,"Tags":"asp.net,python,sqlite,networking,udp","A_Id":1977499,"CreationDate":"2009-12-25T22:47:00.000","Title":"In memory database with socket capability","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python --> SQLite --> ASP.NET C#\nI am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.\nI want to stay away from writing to disc as it takes too long. The data is not important, if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.\nNext, another ASP.NET server must be able to connect to this in memory database via TCP\/IP at regular intervals, say once every second, or 10 seconds. It has to pull this data, and this will in turn update on a website that displays \"live\" game data.\nI'm looking at SQlite, and wondering, is this the right tool for the job, anyone have any suggestions?\nThanks!!!","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":309,"Q_Id":1962130,"Users Score":0,"Answer":"The application of SQlite depends on your data complexity. \nIf you need to perform complex queries on relational data, then it might be a viable option. If your data is flat (i.e. 
not relational) and processed as a whole, then some python-internal data structures might be applicable.","Q_Score":0,"Tags":"asp.net,python,sqlite,networking,udp","A_Id":1962162,"CreationDate":"2009-12-25T22:47:00.000","Title":"In memory database with socket capability","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install the module mySQLdb on a windows vista 64 (amd) machine.\nI've installed python on a different folder other than suggested by Python installer.\nWhen I try to install the .exe mySQLdb installer, it can't find python 2.5 and it halts the installation.\nIs there anyway to supply the installer with the correct python location (even thou the registry and path are right)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":431,"Q_Id":1980454,"Users Score":0,"Answer":"did you use an egg?\nif so, python might not be able to find it.\nimport os,sys\nos.environ['PYTHON_EGG_CACHE'] = 'C:\/temp'\nsys.path.append('C:\/path\/to\/MySQLdb.egg')","Q_Score":0,"Tags":"python,windows-installer,mysql","A_Id":2179175,"CreationDate":"2009-12-30T14:24:00.000","Title":"Problem installing MySQLdb on windows - Can't find python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using cherrypy's standalone server (cherrypy.quickstart()) and sqlite3 for a database.\nI was wondering how one would do ajax\/jquery asynchronous calls to the database while using cherrypy?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3741,"Q_Id":2015065,"Users Score":2,"Answer":"The same way you would do them using any other webserver - by getting your javascript to call a URL which is handled by the server-side application.","Q_Score":1,"Tags":"jquery,python,ajax,asynchronous,cherrypy","A_Id":2015344,"CreationDate":"2010-01-06T17:57:00.000","Title":"How does one do async ajax calls using cherrypy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm making a trivia webapp that will feature both standalone questions, and 5+ question quizzes. I'm looking for suggestions for designing this model.\nShould a quiz and its questions be stored in separate tables\/objects, with a key to tie them together, or am I better off creating the quiz as a standalone entity, with lists stored for each of a question's characteristics? Or perhaps someone has another idea...\nThank you in advance. 
It would probably help to say that I am using Google App Engine, which typically frowns upon relational db models, but I'm willing to go my own route if it makes sense.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":642,"Q_Id":2017930,"Users Score":1,"Answer":"My first cut (I assumed the questions were multiple choice):\n\nI'd have a table of Questions, with ID_Question as the PK, the question text, and a category (if you want).\nI'd have a table of Answers, with ID_Answer as the PK, QuestionID as a FK back to the Questions table, the answer text, and a flag as to whether it's the correct answer or not.\nI'd have a table of Quizzes, with ID_Quiz as the PK, and a description of the quiz, and a category (if you want).\nI'd have a table of QuizQuestions, with ID_QuizQuestion as the PK, QuizID as a FK back to the Quizzes table, and QuestionID as a FK back to the Questions table.\n\nThis model lets you:\n\nUse questions standalone or in quizzes\nLets you have as many or few questions in a quiz as you want\nLets you have as many of few choices for questions as you want (or even multiple correct answers)\nUse questions in several different quizzes","Q_Score":1,"Tags":"python,database-design,google-app-engine,schema","A_Id":2017958,"CreationDate":"2010-01-07T02:56:00.000","Title":"Database Design Inquiry","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm making a trivia webapp that will feature both standalone questions, and 5+ question quizzes. I'm looking for suggestions for designing this model.\nShould a quiz and its questions be stored in separate tables\/objects, with a key to tie them together, or am I better off creating the quiz as a standalone entity, with lists stored for each of a question's characteristics? Or perhaps someone has another idea...\nThank you in advance. It would probably help to say that I am using Google App Engine, which typically frowns upon relational db models, but I'm willing to go my own route if it makes sense.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":642,"Q_Id":2017930,"Users Score":0,"Answer":"Have a table of questions, a table of quizzes and a mapping table between them. That will give you the most flexibility. This is simple enough that you wouldn't even necessarily need a whole relational database management system. I think people tend to forget that relations are pretty simple mathematical\/logical concepts. An RDBMS just handles a lot of the messy book keeping for you.","Q_Score":1,"Tags":"python,database-design,google-app-engine,schema","A_Id":2017943,"CreationDate":"2010-01-07T02:56:00.000","Title":"Database Design Inquiry","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Two questions:\n\ni want to generate a View in my PostGIS-DB. How do i add this View to my geometry_columns Table?\nWhat i have to do, to use a View with SQLAlchemy? 
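The four-table layout suggested in the first quiz-design answer above could be sketched like this; the DDL is illustrative only (relational SQL, not Google App Engine datastore models), and the names and types are placeholders:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Questions     (ID_Question INTEGER PRIMARY KEY, question_text TEXT, category TEXT);
CREATE TABLE Answers       (ID_Answer   INTEGER PRIMARY KEY,
                            QuestionID  INTEGER REFERENCES Questions(ID_Question),
                            answer_text TEXT, is_correct INTEGER);
CREATE TABLE Quizzes       (ID_Quiz     INTEGER PRIMARY KEY, description TEXT, category TEXT);
CREATE TABLE QuizQuestions (ID_QuizQuestion INTEGER PRIMARY KEY,
                            QuizID      INTEGER REFERENCES Quizzes(ID_Quiz),
                            QuestionID  INTEGER REFERENCES Questions(ID_Question));
""")

The QuizQuestions mapping table is what lets a question stand alone or appear in any number of quizzes.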
Is there a difference between a Table and View to SQLAlchemy or could i use the same way to use a View as i do to use a Table?\n\nsorry for my poor english.\nIf there a questions about my question, please feel free to ask so i can try to explain it in another way maybe :)\nNico","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1758,"Q_Id":2026475,"Users Score":4,"Answer":"Table objects in SQLAlchemy have two roles. They can be used to issue DDL commands to create the table in the database. But their main purpose is to describe the columns and types of tabular data that can be selected from and inserted to.\nIf you only want to select, then a view looks to SQLAlchemy exactly like a regular table. It's enough to describe the view as a Table with the columns that interest you (you don't even need to describe all of the columns). If you want to use the ORM you'll need to declare for SQLAlchemy that some combination of the columns can be used as the primary key (anything that's unique will do). Declaring some columns as foreign keys will also make it easier to set up any relations. If you don't issue create for that Table object, then it is just metadata for SQLAlchemy to know how to query the database.\nIf you also want to insert to the view, then you'll need to create PostgreSQL rules or triggers on the view that redirect the writes to the correct location. I'm not aware of a good usage recipe to redirect writes on the Python side.","Q_Score":2,"Tags":"python,postgresql,sqlalchemy,postgis","A_Id":2027143,"CreationDate":"2010-01-08T09:05:00.000","Title":"Work with Postgres\/PostGIS View in SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"hmm, is there any reason why sa tries to add Nones to for varchar columns that have defaults set in in database schema ?, it doesnt do that for floats or ints (im using reflection).\nso when i try to add new row :\nlike\nu = User()\nu.foo = 'a'\nu.bar = 'b'\nsa issues a query that has a lot more cols with None values assigned to those, and db obviously bards and doesnt perform default substitution.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":900,"Q_Id":2036996,"Users Score":0,"Answer":"I've found its a bug in sa, this happens only for string fields, they dont get server_default property for some unknow reason, filed a ticket for this already","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":2037291,"CreationDate":"2010-01-10T12:28:00.000","Title":"Problem with sqlalchemy, reflected table and defaults for string fields","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have been in the RDBMS world for many years now but wish to explore the whole nosql movement. so here's my first question:\nis it bad practice to have the possibility of duplicate keys? for example, an address book keyed off of last name (most probably search item?) could have multiple entities. is it bad practice to use the last name then? is the key supposed to be the most \"searchable\" definition of the entity? 
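A sketch of the "describe the view as a Table" approach from the SQLAlchemy view answer above; the connection URL, view name and columns are placeholders, and only the columns you actually query need to be declared:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("postgresql://user:password@localhost/gisdb")  # placeholder URL
metadata = MetaData()

# No create_all() call here: this Table object is only metadata describing an existing view.
parcels_view = Table(
    "parcels_view", metadata,
    Column("parcel_id", Integer, primary_key=True),  # anything unique will do as the "key"
    Column("owner_name", String),
)

conn = engine.connect()
for row in conn.execute(parcels_view.select()):
    print(row)
conn.close()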
are there any resources for \"best practices\" in this whole new world (for me)?\ni'm intrigued by tokyo cabinet (and specifically the tc interface) but don't know how to iterate through different entities that have the same key (e.g. see above). i can only get the first entity. anyway, thanks in advance for the help","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":597,"Q_Id":2068473,"Users Score":1,"Answer":"This depend on no-sql implementation. Cassandra, for example, allows range queries, so you could model data to do queries on last name, or with full name (starting with last name, then first name).\nBeyond this, many simpler key-value stores would indeed require you to store a list structure (or such) for multi-valued entries. Whether this is feasible or not depends on expected number of \"duplicates\" -- with last name, number could be rather high I presume, so it does not sound like an ideal model for many cases.","Q_Score":1,"Tags":"python,tokyo-cabinet","A_Id":2384015,"CreationDate":"2010-01-15T00:04:00.000","Title":"key\/value (general) and tokyo cabinet (python tc-specific) question","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading\/writing to disk.\nNow I have a script that runs at basically the same speed but CPU utilization for that script never goes over 10%.\nProcessExplorer shows that mysqld is also not taking almost any CPU time or reading\/writing a lot to disk.\nWhat steps would you take to figure out where the bottleneck is?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":917,"Q_Id":2076582,"Users Score":1,"Answer":"It is \"well known\", so to speak, that svn update waits up to a whole second after it has finished running, so that file modification timestamps get \"in the past\" (since many filesystems don't have a timestamp granularity finer than one second). You can find more information about it by Googling for \"svn sleep_for_timestamps\".\nI don't have any obvious solution to suggest. If this is really performance critical you could either: 1) not update as often as you are doing 2) try to use a lower-level Subversion API (good luck).","Q_Score":2,"Tags":"python,mysql,performance,svn","A_Id":2077129,"CreationDate":"2010-01-16T07:40:00.000","Title":"Finding the performance bottleneck in a Python and MySQL script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. 
I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading\/writing to disk.\nNow I have a script that runs at basically the same speed but CPU utilization for that script never goes over 10%.\nProcessExplorer shows that mysqld is also not taking almost any CPU time or reading\/writing a lot to disk.\nWhat steps would you take to figure out where the bottleneck is?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":917,"Q_Id":2076582,"Users Score":4,"Answer":"Doing SQL queries in a for loop 15k times is a bottleneck in every language.. \nIs there any reason you query every time again ? If you do a single query before the for loop and then loop over the resultset and the SVN part, you will see a dramatic increase in speed.\nBut I doubt that you will get a higher CPU usage. The reason is that you are not doing calculations, but mostly IO. \nBtw, you can't measure that in mysqld cpu usage, as it's in the actual code not complexity of the queries, but their count and the latency of the server engine to answer. So you will see only very short, not expensive queries, that do sum up in time, though.","Q_Score":2,"Tags":"python,mysql,performance,svn","A_Id":2076639,"CreationDate":"2010-01-16T07:40:00.000","Title":"Finding the performance bottleneck in a Python and MySQL script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading\/writing to disk.\nNow I have a script that runs at basically the same speed but CPU utilization for that script never goes over 10%.\nProcessExplorer shows that mysqld is also not taking almost any CPU time or reading\/writing a lot to disk.\nWhat steps would you take to figure out where the bottleneck is?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":917,"Q_Id":2076582,"Users Score":1,"Answer":"Profile your Python code. That will show you how long each function\/method call takes. If that's the method call querying the MySQL database, you'll have a clue where to look. But it also may be something else. In any case, profiling is the usual approach to solve such problems.","Q_Score":2,"Tags":"python,mysql,performance,svn","A_Id":2076590,"CreationDate":"2010-01-16T07:40:00.000","Title":"Finding the performance bottleneck in a Python and MySQL script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently joined a new company and the development team was in the progress of a project to rebuild the database categories structure as follows:\nif we have category and subcategory for items, like food category and italian food category in food category.\nThey were building a table for each category, instead of having one table and a link to the category id.\nNow we have a table called food\nand another table called food_italian\nand both tables contain the same fields.\nI have asked around and it seems that some DBA prefers this design. I would like to know why? 
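Acting on the profiling suggestion above, a minimal cProfile sketch looks roughly like this; main_loop and the output file name are placeholders for the script's real entry point:

import cProfile
import pstats

def main_loop():
    # placeholder for the real 15k-iteration loop (DB query + svn update)
    pass

cProfile.run("main_loop()", "loop.prof")        # run once, dump raw timings to a file
stats = pstats.Stats("loop.prof")
stats.sort_stats("cumulative").print_stats(15)  # top 15 calls by cumulative time

If the cumulative time turns out to be dominated by the per-iteration execute() call, hoisting the query out of the loop, as the accepted answer suggests, is the first thing to try.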
and how this design can improve the performance?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":155,"Q_Id":2077522,"Users Score":2,"Answer":"First, the most obvious answer is that you should ask them, not us, since I can tell you this, that design seems bogus deluxe.\nThe only reason I can come up with is that you have inexperienced DBA's that does not know how to performance-tune a database, and seems to think that a table with less rows will always vastly outperform a table with more rows.\nWith good indices, that need not be the case.","Q_Score":2,"Tags":"python,mysql,database,django,performance","A_Id":2077536,"CreationDate":"2010-01-16T13:58:00.000","Title":"DB a table for the category and another table for the subcategory with similar fields, why?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question related to some guidances to solve a problem. I have with me an xml file, I have to populate it into a database system (whatever, it might be sqlite, mysql) using scripting language: Python.\nDoes anyone have any idea on how to proceed?\n\nWhich technologies I need to read further?\nWhich environments I have to install?\nAny tutorials on the same topic?\n\nI already tried to parse xml using both by tree-based and sax method in other language, but to start with Python, I don't know where to start. I already know how to design the database I need. \nAnother question, is Python alone possible of executing database ddl queries?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":15042,"Q_Id":2085430,"Users Score":1,"Answer":"If you are accustomed to DOM (tree) access to xml from other language, you may find useful these standard library modules (and their respective docs):\n\nxml.dom \nxml.dom.minidom\n\nTo save tha data to DB, you can use standard module sqlite3 or look for binding to mysql. Or you may wish to use something more abstract, like SQLAlchemy or Django's ORM.","Q_Score":7,"Tags":"python,xml,database,sqlite,parsing","A_Id":2085657,"CreationDate":"2010-01-18T10:55:00.000","Title":"populating data from xml file to a sqlite database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a written a Python module which due to its specifics needs to have a MySQL database connection. Right now, details of this connection (host, database, username and password to connect with) are stored in \/etc\/mymodule.conf in plaintext, which is obviously not a good idea.\nSupposedly, the \/etc\/mymodule.conf file is edited by the root user after the module is installed, since the module and its database may be used by all users of a Unix system.\nHow should I securely store the password instead?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1241,"Q_Id":2087920,"Users Score":4,"Answer":"Your constraints set a very difficult problem: every user on the system must be able to access that password (since that's the only way for users to access that database)... yet they must not (except when running that script, and presumably only when running it without e.g. 
a python -i session that would let them set a breakpoint just before the connect call and look all through memory, so definitely able to look at the password).\nYou could write a daemon process that runs as root (so can read mymodule.conf, which you'd make readable only by root) and accepts requests, somehow validates that the request comes from a \"good\" process (one that's running the exact module in question and not interactive), and only then supplies the password. That's fragile, mostly because of the need to determine whether a process may or may not have a breakpoint set at the crucial point of execution.\nAlternatively, you could further raise the technological stakes by having the daemon return, not the password, but rather the open socket ready to be wrapped in a DB-API compliant wrapper; some Unix systems allow open file descriptors to be sent between unrelated processes (a prereq for this approach) -- and of course you'd have to substantially rework the MySQL-based DB API to allow opening a connection around an already-open socket rather than a freshly made one. Note that a validated requesting process that happens to be interactive would still be able to get the connection object, once built, and send totally arbitrary requests -- they wouldn't be able to see the password, technically, but that's not much consolation. So it's unlikely that the large effort required here is warranted.\nSo the next possible architecture is to mediate all db interaction via the validating daemon: a process would \"log into\" the daemon, get validated, and, if all's OK, gain a proxy connection to (e.g.) an XMLRPC server exposing the DB connection and functionality (the daemon would probably fork each such proxy process, right after reading the password from the root-only-readable file, and drop privileges immediately, just on general security ground).\nThe plus wrt the previous alternative, in addition to probably easier implementation, is that the proxy would also get a look at every SQL request that's about to be sent to the MySQL db, and be able to validate and censor those requests as well (presumably on a default-deny basis, again for general security principles), thus seriously limiting the amount of damage a \"rogue\" client process (running interactively with a debugger) can do... one hopes;-).\nYes, no easy solutions here -- but then, the problem your constraints pose is so far from easy that it borders on a self-contradictory impossibility;-). BTW, the problem's not particularly Python-related, it's essentially about choosing a secure architecture that comes close to \"squaring the circle\"-hard contradictory constraints on access permissions!-)","Q_Score":0,"Tags":"python,security","A_Id":2088188,"CreationDate":"2010-01-18T17:30:00.000","Title":"Storing system-wide DB connection password for a Python module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to add a field to an existing mapped class, how would I update the sql table automatically. 
Does sqlalchemy provide a method to update the database with a new column, if a field is added to the class.","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":11257,"Q_Id":2103274,"Users Score":0,"Answer":"You can install 'DB Browser (SQLite)' and open your current database file and simple add\/edit table in your database and save it, and run your app\n(add script in your model after save above process)","Q_Score":15,"Tags":"python,sqlalchemy","A_Id":65265231,"CreationDate":"2010-01-20T17:01:00.000","Title":"SqlAlchemy add new Field to class and create corresponding column in table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available \/ searchable. (Some options include RDBMS, NoSQL stuff, using the grep\/awk and friends, etc.). \nProposal\nIn particular, one idea that appeals to me is \"indexing\" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. \nI realize this is a little ill-formed, and solutions are welcome.\nAdditional Details\n\nfiles long, not wide\n\n\nmillions of lines per hour, spread over 100 files per hour\ntab seperated, not many columns (~10) \nfields are short (say < 50 chars per field)\n\nqueries are on fields, combinations of fields, and can be historical\n\nDrawbacks to various solutions:\n(All of these are based on my observations and tests, but I'm open to correction)\nBDB\n\nhas problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)\nsingle writer (if it's possible to get around this, I want to see code!)\nhard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). \nsince it only stores strings, there is a serialize \/ deserialize step\n\nRDBMSes\nWins:\n\nflat table model is excellent for querying, indexing\n\nLosses:\n\nIn my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2801,"Q_Id":2110843,"Users Score":1,"Answer":"If the data is already organized in fields, it doesn't sound like a text searching\/indexing problem. It sounds like tabular data that would be well-served by a database.\nScript the file data into a database, index as you see fit, and query the data in any complex way the database supports.\nThat is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme.","Q_Score":4,"Tags":"python,algorithm,indexing,binary-tree","A_Id":2111067,"CreationDate":"2010-01-21T16:22:00.000","Title":"File indexing (using Binary trees?) 
in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available \/ searchable. (Some options include RDBMS, NoSQL stuff, using the grep\/awk and friends, etc.). \nProposal\nIn particular, one idea that appeals to me is \"indexing\" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. \nI realize this is a little ill-formed, and solutions are welcome.\nAdditional Details\n\nfiles long, not wide\n\n\nmillions of lines per hour, spread over 100 files per hour\ntab seperated, not many columns (~10) \nfields are short (say < 50 chars per field)\n\nqueries are on fields, combinations of fields, and can be historical\n\nDrawbacks to various solutions:\n(All of these are based on my observations and tests, but I'm open to correction)\nBDB\n\nhas problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)\nsingle writer (if it's possible to get around this, I want to see code!)\nhard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). \nsince it only stores strings, there is a serialize \/ deserialize step\n\nRDBMSes\nWins:\n\nflat table model is excellent for querying, indexing\n\nLosses:\n\nIn my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2801,"Q_Id":2110843,"Users Score":1,"Answer":"The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.\nTo reduce the time spent waiting for I\/O, your best bet is compression.\nCreate a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I\/O time, however, will dominate your processing, so reduce I\/O time by zipping everything.","Q_Score":4,"Tags":"python,algorithm,indexing,binary-tree","A_Id":2110912,"CreationDate":"2010-01-21T16:22:00.000","Title":"File indexing (using Binary trees?) in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available \/ searchable. (Some options include RDBMS, NoSQL stuff, using the grep\/awk and friends, etc.). 
\nProposal\nIn particular, one idea that appeals to me is \"indexing\" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. \nI realize this is a little ill-formed, and solutions are welcome.\nAdditional Details\n\nfiles long, not wide\n\n\nmillions of lines per hour, spread over 100 files per hour\ntab seperated, not many columns (~10) \nfields are short (say < 50 chars per field)\n\nqueries are on fields, combinations of fields, and can be historical\n\nDrawbacks to various solutions:\n(All of these are based on my observations and tests, but I'm open to correction)\nBDB\n\nhas problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)\nsingle writer (if it's possible to get around this, I want to see code!)\nhard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). \nsince it only stores strings, there is a serialize \/ deserialize step\n\nRDBMSes\nWins:\n\nflat table model is excellent for querying, indexing\n\nLosses:\n\nIn my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2801,"Q_Id":2110843,"Users Score":1,"Answer":"sqlite3 is fast, small, part of python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system.","Q_Score":4,"Tags":"python,algorithm,indexing,binary-tree","A_Id":12805622,"CreationDate":"2010-01-21T16:22:00.000","Title":"File indexing (using Binary trees?) in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Table structure - Data present for 5 min. slots -\ndata_point | point_date\n12 | 00:00\n14 | 00:05\n23 | 00:10\n10 | 00:15\n43 | 00:25\n10 | 00:40 \nWhen I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp). Simple Query -\nselect data_point\nfrom some_table\nwhere point_date >= start_date\nAND point_date < end_date\norder by point_date\nNow when I don't have an entry for a particular time slot (e.g. 
time slot 00:20 is missing), I want the \"data_point\" to be returned as 0\nThe REPLACE, IF, IFNULL, ISNULL don't work when there no rows returned.\nI thought Union with a default value would work, but it failed too or maybe I didn't use it correctly.\nIs there a way to get this done via sql only ?\nNote : Python 2.6 & mysql version 5.1","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1466,"Q_Id":2119153,"Users Score":0,"Answer":"You cannot query data you do not have.\nYou (as a thinking person) can claim that the 00:20 data is missing; but there's no easy way to define \"missing\" in some more formal SQL sense.\nThe best you can do is create a table with all of the expected times.\nThen you can do an outer join between expected times (including a 0 for 00:20) and actual times (missing the 00:20 sample) and you'll get kind of result you're expecting.","Q_Score":1,"Tags":"python,mysql,null","A_Id":2119402,"CreationDate":"2010-01-22T17:28:00.000","Title":"python : mysql : Return 0 when no rows found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Table structure - Data present for 5 min. slots -\ndata_point | point_date\n12 | 00:00\n14 | 00:05\n23 | 00:10\n10 | 00:15\n43 | 00:25\n10 | 00:40 \nWhen I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp). Simple Query -\nselect data_point\nfrom some_table\nwhere point_date >= start_date\nAND point_date < end_date\norder by point_date\nNow when I don't have an entry for a particular time slot (e.g. time slot 00:20 is missing), I want the \"data_point\" to be returned as 0\nThe REPLACE, IF, IFNULL, ISNULL don't work when there no rows returned.\nI thought Union with a default value would work, but it failed too or maybe I didn't use it correctly.\nIs there a way to get this done via sql only ?\nNote : Python 2.6 & mysql version 5.1","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1466,"Q_Id":2119153,"Users Score":0,"Answer":"I see no easy way to create non-existing records out of thin air, but you could create yourself a point_dates table containing all the timestamps you're interested in, and left join it on your data:\nselect pd.slot, IFNULL(data_point, 0)\nfrom point_dates pd\nleft join some_table st on st.point_date=pd.slot\nwhere point_date >= start_date\nAND point_date < end_date\norder by point_date","Q_Score":1,"Tags":"python,mysql,null","A_Id":2119384,"CreationDate":"2010-01-22T17:28:00.000","Title":"python : mysql : Return 0 when no rows found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database. \nI just discovered Google App Engine, however, but know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? 
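As an alternative to the calendar-table LEFT JOIN shown in the answer above, the missing 5-minute slots can also be filled in client side after the plain SELECT; a small sketch with hypothetical names:

    from datetime import datetime, timedelta

    def fill_slots(rows, start, end, step=timedelta(minutes=5)):
        # rows: iterable of (point_date, data_point) pairs returned by the SELECT
        found = dict(rows)
        result = []
        slot = start
        while slot < end:
            result.append((slot, found.get(slot, 0)))  # 0 for any missing slot
            slot += step
        return result

    start = datetime(2010, 1, 22, 0, 0)
    rows = [(start, 12), (start + timedelta(minutes=5), 14)]
    print(fill_slots(rows, start, start + timedelta(minutes=30)))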
I read on the site that it doesn't support join queries. Also is it truly relational? \nBasically I guess my question is can an app that typically uses SQL backend work just as well with Google's App Engine, with GQL?\nI hope that's clear... any guidance is great.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1021,"Q_Id":2124688,"Users Score":2,"Answer":"True, Google App Engine is a very cool product, but the datastore is a different beast than a regular mySQL database. That's not to say that what you need can't be done with the GAE datastore; however it may take some reworking on your end. \nThe most prominent different that you notice right off the start is that GAE uses an object-relational mapping for its data storage scheme. Essentially object graphs are persisted in the database, maintaining there attributes and relationships to other objects. In many cases ORM (object relational mappings) map fairly well on top of a relational database (this is how Hibernate works). The mapping is not perfect though and you will find that you need to make alterations to persist your data. Also, GAE has some unique contraints that complicate things a bit. One contraint that bothers me a lot is not being able to query for attribute paths: e.g. \"select ... where dog.owner.name = 'bob' \". It is these rules that force you to read and understand how GAE data store works before you jump in. \nI think GAE could work well in your situation. It just may take some time to understand ORM persistence in general, and GAE datastore in specifics.","Q_Score":2,"Tags":"iphone,python,google-app-engine,gql","A_Id":2124718,"CreationDate":"2010-01-23T20:55:00.000","Title":"iPhone app with Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database. \nI just discovered Google App Engine, however, but know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also is it truly relational? \nBasically I guess my question is can an app that typically uses SQL backend work just as well with Google's App Engine, with GQL?\nI hope that's clear... any guidance is great.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1021,"Q_Id":2124688,"Users Score":1,"Answer":"That's a pretty generic question :)\nShort answer: yes. It's going to involve some rethinking of your data model, but yes, changes are you can support it with the GAE Datastore API.\nWhen you create your Python models (think of these as tables), you can certainly define references to other models (so now we have a foreign key). 
When you select this model, you'll get back the referencing models (pretty much like a join).\nIt'll most likely work, but it's not a drop in replacement for a mySQL server.","Q_Score":2,"Tags":"iphone,python,google-app-engine,gql","A_Id":2124705,"CreationDate":"2010-01-23T20:55:00.000","Title":"iPhone app with Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database. \nI just discovered Google App Engine, however, but know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also is it truly relational? \nBasically I guess my question is can an app that typically uses SQL backend work just as well with Google's App Engine, with GQL?\nI hope that's clear... any guidance is great.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":1021,"Q_Id":2124688,"Users Score":2,"Answer":"GQL offers almost no functionality at all; it's only used for SELECT queries, and it only exists to make writing SELECT queries easier for SQL programmers. Behind the scenes, it converts your queries to db.Query objects.\nThe App Engine datastore isn't a relational database at all. You can do some stuff that looks relational, but my advice for anyone coming from an SQL background is to avoid GQL at all costs to avoid the trap of thinking the datastore is anything at all like an RDBMS, and to forget everything you know about database design. Specifically, if you're normalizing anything, you'll soon wish you hadn't.","Q_Score":2,"Tags":"iphone,python,google-app-engine,gql","A_Id":2125297,"CreationDate":"2010-01-23T20:55:00.000","Title":"iPhone app with Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API. \nDjango 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).\nI would like to use the Django models (so as to not have other developers learn yet-another-ORM), but do not want to include 3.6 megabytes of stuff most of which are not needed. (FYI - the application, final executable that is, actually bundles the install_requires from setup.py)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":191,"Q_Id":2126433,"Users Score":1,"Answer":"The Django ORM is usable on its own - you can use \"settings.configure()\" to set up the database settings. That said, you'll have to do the stripping down and repackaging yourself, and you'll have to experiment with how much you can actually strip away. 
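A minimal sketch of the settings.configure() approach mentioned in the answer above about using the Django ORM on its own, written against a recent Django rather than the 1.1.1 of the question; the app and model names are hypothetical:

    import django
    from django.conf import settings

    settings.configure(
        DATABASES={
            "default": {
                "ENGINE": "django.db.backends.sqlite3",
                "NAME": "app.db",
            }
        },
        INSTALLED_APPS=["myapp"],  # the package that defines your models
    )
    django.setup()  # needed on Django 1.7+

    from myapp.models import Thing  # import models only after setup()
    print(Thing.objects.count())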
I'm sure you can ditch contrib\/, forms\/, template\/, and probably several other unrelated pieces. The ORM definitely relies on conf\/, and quite likely on core\/ and util\/ as well. A few quick greps through db\/* should make it clear what's imported.","Q_Score":0,"Tags":"python,django,deployment,size,sqlalchemy","A_Id":2127512,"CreationDate":"2010-01-24T08:27:00.000","Title":"Using Django's Model API without having to *include* the full Django stack","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API. \nDjango 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).\nI would like to use the Django models (so as to not have other developers learn yet-another-ORM), but do not want to include 3.6 megabytes of stuff most of which are not needed. (FYI - the application, final executable that is, actually bundles the install_requires from setup.py)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":191,"Q_Id":2126433,"Users Score":1,"Answer":"You may be able to get a good idea of what is safe to strip out by checking which files don't have their access time updated when you run your application.","Q_Score":0,"Tags":"python,django,deployment,size,sqlalchemy","A_Id":2130014,"CreationDate":"2010-01-24T08:27:00.000","Title":"Using Django's Model API without having to *include* the full Django stack","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?\nWhich one should I be using?","AnswerCount":5,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":221892,"Q_Id":2128505,"Users Score":133,"Answer":"We actually had these merged together originally, i.e. there was a \"filter\"-like method that accepted *args and **kwargs, where you could pass a SQL expression or keyword arguments (or both). I actually find that a lot more convenient, but people were always confused by it, since they're usually still getting over the difference between column == expression and keyword = expression. 
So we split them up.","Q_Score":380,"Tags":"python,sqlalchemy","A_Id":2157930,"CreationDate":"2010-01-24T19:49:00.000","Title":"Difference between filter and filter_by in SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?\nWhich one should I be using?","AnswerCount":5,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":221892,"Q_Id":2128505,"Users Score":40,"Answer":"filter_by uses keyword arguments, whereas filter allows pythonic filtering arguments like filter(User.name==\"john\")","Q_Score":380,"Tags":"python,sqlalchemy","A_Id":2128567,"CreationDate":"2010-01-24T19:49:00.000","Title":"Difference between filter and filter_by in SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?\nWhich one should I be using?","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":221892,"Q_Id":2128505,"Users Score":494,"Answer":"filter_by is used for simple queries on the column names using regular kwargs, like\ndb.users.filter_by(name='Joe')\nThe same can be accomplished with filter, not using kwargs, but instead using the '==' equality operator, which has been overloaded on the db.users.name object:\ndb.users.filter(db.users.name=='Joe')\nYou can also write more powerful queries using filter, such as expressions like:\ndb.users.filter(or_(db.users.name=='Ryan', db.users.country=='England'))","Q_Score":380,"Tags":"python,sqlalchemy","A_Id":2128558,"CreationDate":"2010-01-24T19:49:00.000","Title":"Difference between filter and filter_by in SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?\nWhich one should I be using?","AnswerCount":5,"Available Count":4,"Score":0.1586485043,"is_accepted":false,"ViewCount":221892,"Q_Id":2128505,"Users Score":4,"Answer":"Apart from all the technical information posted before, there is a significant difference between filter() and filter_by() in its usability.\nThe second one, filter_by(), may be used only for filtering by something specifically stated - a string or some number value. So it's usable only for category filtering, not for expression filtering.\nOn the other hand filter() allows using comparison expressions (==, <, >, etc.) so it's helpful e.g. when 'less\/more than' filtering is needed. 
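A self-contained sketch of the filter vs filter_by difference described in the answers above, using an in-memory SQLite database and a hypothetical User model (SQLAlchemy 1.4+ style):

    from sqlalchemy import Column, Integer, String, create_engine, or_
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String)
        country = Column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([User(name="Joe", country="USA"),
                         User(name="Ryan", country="England")])
        session.commit()

        # filter_by: simple keyword equality on column names
        joe = session.query(User).filter_by(name="Joe").one()

        # filter: full expression syntax, so comparisons and OR are possible
        matches = session.query(User).filter(
            or_(User.name == "Ryan", User.country == "England")).all()
        print(joe.name, [u.name for u in matches])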
But can be used like filter_by() as well (when == used).\nJust to remember both functions have different syntax for argument typing.","Q_Score":380,"Tags":"python,sqlalchemy","A_Id":68331326,"CreationDate":"2010-01-24T19:49:00.000","Title":"Difference between filter and filter_by in SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"we are still pretty new to Postgres and came from Microsoft Sql Server.\nWe are wanting to write some stored procedures now. Well, after struggling to get something more complicated than a hello world to work in pl\/pgsql, we decided it's better if we are going to learn a new language we might as well learn Python because we got the same query working in it in about 15 minutes(note, none of us actually know python). \nSo I have some questions about it in comparison to pl\/psql. \n\nIs pl\/Pythonu slower than pl\/pgsql? \nIs there any kind of \"good\" reference for how to write good stored procedures using it? Five short pages in the Postgres documentation doesn't really tell us enough.\nWhat about query preparation? Should it always be used? \nIf we use the SD and GD arrays for a lot of query plans, will it ever get too full or have a negative impact on the server? Will it automatically delete old values if it gets too full?\nIs there any hope of it becoming a trusted language? \n\nAlso, our stored procedure usage is extremely light. Right now we only have 4, but we are still trying to convert little bits of code over from Sql Server specific syntax(such as variables, which can't be used in Postgres outside of stored procedures)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5869,"Q_Id":2141589,"Users Score":9,"Answer":"Depends on what operations you're doing.\nWell, combine that with a general Python documentation, and that's about what you have.\nNo. Again, depends on what you're doing. If you're only going to run a query once, no point in preparing it separately.\nIf you are using persistent connections, it might. But they get cleared out whenever a connection is closed.\nNot likely. Sandboxing is broken in Python and AFAIK nobody is really interested in fixing it. I heard someone say that python-on-parrot may be the most viable way, once we have pl\/parrot (which we don't yet).\n\nBottom line though - if your stored procedures are going to do database work, use pl\/pgsql. Only use pl\/python if you are going to do non-database stuff, such as talking to external libraries.","Q_Score":11,"Tags":"python,postgresql,stored-procedures,plpgsql","A_Id":2142128,"CreationDate":"2010-01-26T18:19:00.000","Title":"Stored Procedures in Python for PostgreSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a queryset with a few million records. I need to update a Boolean Value, fundamentally toggle it, so that in the database table the values are reset. What's the fastest way to do that?\nI tried traversing the queryset and updating and saving each record, that obviously takes ages? 
We need to do this very fast, any suggestions?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1711,"Q_Id":2141769,"Users Score":0,"Answer":"Actually, that didn't work out for me.\nThe following did:\nEntry.objects.all().update(value=(F('value')==False))","Q_Score":7,"Tags":"python,database,django,django-queryset","A_Id":4230081,"CreationDate":"2010-01-26T18:50:00.000","Title":"Fastest Way to Update a bunch of records in queryset in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Why is _mysql in the MySQLdb module a C file? When the module tries to import it, I get an import error. What should I do?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1242,"Q_Id":2169449,"Users Score":0,"Answer":"It's the adaptor that sits between the Python MySQLdb module and the C libmysqlclient library. One of the most common reasons for it not loading is that the appropriate libmysqlclient library is not in place.","Q_Score":1,"Tags":"python,mysql,c","A_Id":2169464,"CreationDate":"2010-01-30T21:12:00.000","Title":"Importing _mysql in MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to execute the following query via the mysqldb module in python:\nfor i in self.p.parameter_type:\n cursor.execute(\"\"\"UPDATE parameters SET %s = %s WHERE parameter_set_name = %s\"\"\" % (i,\n float(getattr(self.p, i)), self.list_box_parameter.GetStringSelection()))\nI keep getting the error: \"Unknown column 'M1' in 'where clause'\". I want to update columns i with the value getattr(self.p, i), but only in rows that have the column parameter_set_name equal to self.list_box_parameter.GetStringSelection(). The error suggests that my query is looking for columns by the name 'M1' in the WHERE clause. Why is the above query incorrect and how can I correct it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":375,"Q_Id":2171072,"Users Score":0,"Answer":"It looks like query is formed with wrong syntax.\nCould you display string parameter of cursor.execute?","Q_Score":0,"Tags":"python,mysql","A_Id":2171104,"CreationDate":"2010-01-31T08:45:00.000","Title":"Trouble with MySQL UPDATE syntax with the module mysqldb in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to implement the proper architecture for multiple databases under Python + Pylons. I can't put everything in the config files since one of the database connections requires the connection info from a previous database connection (sharding). \nWhat's the best way to implement such an infrastructure?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":988,"Q_Id":2205047,"Users Score":1,"Answer":"Pylons's template configures the database in config\/environment.py, probably with the engine_from_config method. 
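Regarding the "Unknown column 'M1'" UPDATE question above: a likely cause is that the WHERE value is interpolated with % formatting and so reaches MySQL unquoted, where it is read as a column name. A sketch of letting the driver quote the values while whitelisting the column identifier; the table, columns, and helper are hypothetical:

    ALLOWED_COLUMNS = {"m1", "m2", "m3"}  # identifiers cannot be bound, so whitelist them

    def update_parameter(conn, column, value, set_name):
        if column not in ALLOWED_COLUMNS:
            raise ValueError("unexpected column: %r" % column)
        sql = "UPDATE parameters SET %s = %%s WHERE parameter_set_name = %%s" % column
        cur = conn.cursor()
        try:
            # the driver quotes value and set_name, so 'M1' is no longer parsed as a column
            cur.execute(sql, (value, set_name))
            conn.commit()
        finally:
            cur.close()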
It finds all the config settings with a particular prefix and passes them as keyword arguments to create_engine.\nYou can just replace that with a few calls to sqlalchemy.create_engine() with the per-engine url, and common username, and password from your config file.","Q_Score":2,"Tags":"python,pylons","A_Id":2224250,"CreationDate":"2010-02-05T04:29:00.000","Title":"Multiple database connections with Python + Pylons + SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"TypeError: unsupported operand type(s) for \/: 'tuple' and 'tuple'\nI'm getting above error , while I fetched a record using query \"select max(rowid) from table\"\nand assigned it to variable and while performing \/ operation is throws above message.\nHow to resolve this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1106,"Q_Id":2220099,"Users Score":4,"Answer":"Sql query select max(rowid) would return Tuple data like records=(1000,)\nYou may need to do like numerator \/ records[0]","Q_Score":1,"Tags":"python,tuples","A_Id":2220107,"CreationDate":"2010-02-08T07:18:00.000","Title":"python tuple division","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose that I have a table Articles, which has fields article_id, content and it contains one article with id 1.\nI also have a table Categories, which has fields category_id (primary key), category_name, and it contains one category with id 10.\nNow suppose that I have a table ArticleProperties, that adds properties to Articles. This table has fields article_id, property_name, property_value.\nSuppose that I want to create a mapping from Categories to Articles via ArticleProperties table.\nI do this by inserting the following values in the ArticleProperties table: (article_id=1, property_name=\"category\", property_value=10).\nIs there any way in SQLAlchemy to express that rows in table ArticleProperties with property_name \"category\" are actually FOREIGN KEYS of table Articles to table Categories?\nThis is a complicated problem and I haven't found an answer myself.\nAny help appreciated!\nThanks, Boda Cydo.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":928,"Q_Id":2234030,"Users Score":1,"Answer":"Assuming I understand you question correctly, then No, you can't model that relationship as you have suggested. (It would help if you described your desired result, rather than your perceived solution)\nWhat I think you may want is a many-to-many mapping table called ArticleCategories, consisting of 2 int columns, ArticleID and CategoryID (with respective FKs)","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":2248806,"CreationDate":"2010-02-10T02:30:00.000","Title":"SQLAlchemy ForeignKey relation via an intermediate table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My Django app, deployed in mod_wsgi under Apache using Django's standard WSGIHandler, authenticates users via form login on the Django side. So to Apache, the user is anonymous. 
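A minimal sketch of the idea in the Pylons answer above: build one engine from ordinary config values, then create further engines from URLs discovered at runtime (SQLAlchemy 1.4+ style; the shard table and URLs are hypothetical):

    from sqlalchemy import create_engine, text

    # Engine for the "directory" database, normally built from the config file.
    primary = create_engine("sqlite:///primary.db")

    with primary.begin() as conn:
        conn.execute(text("CREATE TABLE IF NOT EXISTS shards (tenant TEXT, url TEXT)"))
        conn.execute(text("INSERT INTO shards VALUES ('acme', 'sqlite:///shard_acme.db')"))

    # The second engine's URL comes from data read through the first one (sharding).
    with primary.connect() as conn:
        shard_url = conn.execute(
            text("SELECT url FROM shards WHERE tenant = :t"), {"t": "acme"}
        ).scalar()

    shard = create_engine(shard_url)
    print(shard.url)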
This makes the Apache access log less useful.\nIs there a way to pass the username back through the WSGI wrapper to Apache after handling the request, so that it appears in the Apache access log?\n(Versions: Django 1.1.1, mod_wsgi 2.5, Apache 2.2.9)","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":2209,"Q_Id":2244244,"Users Score":1,"Answer":"This probably isn't what you're expecting, but you could use the username in your URL scheme. That way the user will be in the path section of your apache logs.\nYou'd need to modify your authentication so that auth-required responses are obvious in the apache logs, otherwise when viewing the logs you may attribute unauthenticated requests to authenticated users. E.g. return a temporary redirect to the login page if the request isn't authenticated.","Q_Score":9,"Tags":"python,django,apache,authentication,mod-wsgi","A_Id":2244295,"CreationDate":"2010-02-11T12:03:00.000","Title":"WSGI\/Django: pass username back to Apache for access log","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My Django app, deployed in mod_wsgi under Apache using Django's standard WSGIHandler, authenticates users via form login on the Django side. So to Apache, the user is anonymous. This makes the Apache access log less useful.\nIs there a way to pass the username back through the WSGI wrapper to Apache after handling the request, so that it appears in the Apache access log?\n(Versions: Django 1.1.1, mod_wsgi 2.5, Apache 2.2.9)","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":2209,"Q_Id":2244244,"Users Score":1,"Answer":"Correct me if I'm wrong, but what's stopping you from creating some custom middleware that sets a cookie equal to the display name of the current user logged in. This middleware will run on every view, so even though technically the user could spoof his username to display whatever he wants it to display, it'll just be reset anyway and it's not like its a security risk because the username itself is just for log purposes, not at all related to the actual user logged in. This seems like a simple enough solution, and then Apache log can access cookies so that gives you easiest access. I know some people wouldn't like the idea of a given user spoofing his own username, but i think this is the most trivial solution that gets the job done. Especially, in my case, when it's an iPhone app and the user doesn't have any direct access to a javascript console or the cookies itself.","Q_Score":9,"Tags":"python,django,apache,authentication,mod-wsgi","A_Id":10406967,"CreationDate":"2010-02-11T12:03:00.000","Title":"WSGI\/Django: pass username back to Apache for access log","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using PostgreSQL 8.4. I really like the new unnest() and array_agg() features; it is about time they realize the dynamic processing potential of their Arrays!\nAnyway, I am working on web server back ends that uses long Arrays a lot. Their will be two successive processes which will each occur on a different physical machine. 
Each such process is a light python application which ''manage'' SQL queries to the database on each of their machines as well as requests from the front ends. \nThe first process will generate an Array which will be buffered into an SQL Table. Each such generated Array is accessible via a Primary Key. When its done the first python app sends the key to the second python app. Then the second python app, which is running on a different machine, uses it to go get the referenced Array found in the first machine. It then sends it to it's own db for generating a final result.\nThe reason why I send a key is because I am hopping that this will make the two processes go faster. But really what I would like is for a way to have the second database send a query to the first database in the hope of minimizing serialization delay and such.\nAny help\/advice would be appreciated.\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":785,"Q_Id":2263132,"Users Score":0,"Answer":"I am thinking either listen\/notify or something with a cache such as memcache. You would send the key to memcache and have the second python app retrieve it from there. You could even do it with listen\/notify... e.g; send the key and notify your second app that the key is in memcache waiting to be retrieved.","Q_Score":3,"Tags":"python,arrays,postgresql,database-connection","A_Id":2277362,"CreationDate":"2010-02-14T22:40:00.000","Title":"Inter-database communications in PostgreSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing some Python modules that use a mysql database to insert some data and produce various types of report. I'm doing test driven development and so far I run:\n\nsome CREATE \/ UPDATE \/ DELETE tests against a temporary database that is thrown away at the end of each test case, and \nsome report generation tests doing exclusively read only operations, mainly SELECT, against a copy of the production database, written on the (valid, in this case) assumption that some things in my database aren't going to change.\n\nSome of the SELECT operations are running slow, so that my tests are taking more than 30 seconds, which spoils the flow of test driven development. I can see two choices:\n\nonly put a small fraction of my data into the copy of the production database that I use for testing the report generation so that the tests go fast enough for test driven development (less than about 3 seconds suits me best), or I can regard the tests as failures. I'd then need to do separate performance testing.\nfill the production database copy with as much data as the main test database, and add timing code that fails a test if it is taking too long.\n\nI'm not sure which approach to take. Any advice?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":2273414,"Users Score":1,"Answer":"I'd do both. Run against the small set first to make sure all the code works, then run against the large dataset for things which need to be tested for time, this would be selects, searches and reports especially. If you are doing inserts or deletes or updates on multiple row sets, I'd test those as well against the large set. It is unlikely that simple single row action queries will take too long, but if they involve a lot alot of joins, I'd test them as well. 
If the queries won't run on prod within the timeout limits, that's a fail and far, far better to know as soon as possible so you can fix before you bring prod to it's knees.","Q_Score":1,"Tags":"python,sql,mysql,tdd,automated-tests","A_Id":2273471,"CreationDate":"2010-02-16T14:06:00.000","Title":"Should pre-commit tests use a big data set and fail if queries take too long, or use a small test database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing some Python modules that use a mysql database to insert some data and produce various types of report. I'm doing test driven development and so far I run:\n\nsome CREATE \/ UPDATE \/ DELETE tests against a temporary database that is thrown away at the end of each test case, and \nsome report generation tests doing exclusively read only operations, mainly SELECT, against a copy of the production database, written on the (valid, in this case) assumption that some things in my database aren't going to change.\n\nSome of the SELECT operations are running slow, so that my tests are taking more than 30 seconds, which spoils the flow of test driven development. I can see two choices:\n\nonly put a small fraction of my data into the copy of the production database that I use for testing the report generation so that the tests go fast enough for test driven development (less than about 3 seconds suits me best), or I can regard the tests as failures. I'd then need to do separate performance testing.\nfill the production database copy with as much data as the main test database, and add timing code that fails a test if it is taking too long.\n\nI'm not sure which approach to take. Any advice?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":107,"Q_Id":2273414,"Users Score":1,"Answer":"The problem with testing against real data is that it contains lots of duplicate values, and not enough edge cases. It is also difficult to know what the expected values ought to be (especially if your live database is very big). Oh, and depending on what the live application does, it can be illegal to use the data for the purposes of testing or development. \nGenerally the best thing is to write the test data to go with the tests. This is labourious and boring, which is why so many TDD practitioners abhor databases. But if you have a live data set (which you can use for testing) then take a very cut-down sub-set of data for your tests. If you can write valid assertions against a dataset of thirty records, running your tests against a data set of thirty thousand is just a waste of time.\nBut definitely, once you have got the queries returning the correct results put the queries through some performance tests.","Q_Score":1,"Tags":"python,sql,mysql,tdd,automated-tests","A_Id":2273476,"CreationDate":"2010-02-16T14:06:00.000","Title":"Should pre-commit tests use a big data set and fail if queries take too long, or use a small test database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm going to write my first non-Access project, and I need advice on choosing the platform. 
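A minimal sketch of the "add timing code that fails a test if it is taking too long" option from the test-driven-development question above, using only the standard library; the report function and the 3-second threshold are hypothetical:

    import time
    import unittest

    def run_monthly_report():
        # stand-in for the real report query against the test database copy
        time.sleep(0.1)
        return ["row"]

    class ReportPerformanceTest(unittest.TestCase):
        def test_monthly_report_is_fast_enough(self):
            start = time.time()
            rows = run_monthly_report()
            elapsed = time.time() - start
            self.assertTrue(rows)
            self.assertLess(elapsed, 3.0, "report query took %.1fs" % elapsed)

    if __name__ == "__main__":
        unittest.main()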
I will be installing it on multiple friends' and family's computers, so (since I'm sure many, many platforms would suffice just fine for my app), my highest priority has two parts: 1) ease of install for the non-technical user and, 2) minimizing compatibility problems. I want to be able to fix bugs and make changes and roll them out without having to troubleshoot OS and program conflicts on their computers (or at least keeping those things to the absolute minimum-this is why these concerns are my highest priority in choosing a platform.)\nI have narrowed it down to Python or Java. I like Java's use of the JVM, which seems like would serve to protect against incompatibilities on individual computers nicely. And I've heard a lot of good things about Python, but I don't know how much more prone to incompatibilities it is vs Java. In case it is important, I know the app will definitely use some flavor of a free server-enabled SQL db (server-enabled because I want to be able to run the app from multiple computers), but I don't know which to use yet. I thought I could decide that next.\nMy experience level: I've taken a C++ (console app only) class and done some VBA in Access, but mostly I'm going to have to jump in and learn as I go. So of course I don't know much about all of this. I'm not in the computer field, this is just a hobby.\nSo, which would be better for this app, Java or Python? \n(In case it comes up, I don't want to make it browser-based at all. I've dealt with individual computers' browser settings breaking programs, and that goes against part 2 of my top priority - maximum compatibility.)\nThank you.\nUpdate: It will need a gui, and I'd like to be able to do a little bit of customization on it (or use a non-standard, or maybe a non-built-in one) to make it pop a little.\nUpdate 2: Truthfully, I really am only concerned with Windows computers. I am considering Java only for its reliability as a platform.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":594,"Q_Id":2282360,"Users Score":1,"Answer":"The largest issue I can think of is the need to install an interpreter.\nWith Java, a lot of people will already have that interpreter installed, although you won't necessarily know which version. It may be wise to include the installer for Java with the program.\nWith Python, you're going to have to install the interpreter on each computer, too.\nOne commenter mentioned .NET. .NET 2.0 has a fairly high likelyhood of being installed than either Java or Python on Windows machines. The catch is that you can't (easily) install it on OSX or Linux.","Q_Score":0,"Tags":"java,python","A_Id":2282470,"CreationDate":"2010-02-17T16:20:00.000","Title":"Help for novice choosing between Java and Python for app with sql db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm going to write my first non-Access project, and I need advice on choosing the platform. I will be installing it on multiple friends' and family's computers, so (since I'm sure many, many platforms would suffice just fine for my app), my highest priority has two parts: 1) ease of install for the non-technical user and, 2) minimizing compatibility problems. 
I want to be able to fix bugs and make changes and roll them out without having to troubleshoot OS and program conflicts on their computers (or at least keeping those things to the absolute minimum-this is why these concerns are my highest priority in choosing a platform.)\nI have narrowed it down to Python or Java. I like Java's use of the JVM, which seems like would serve to protect against incompatibilities on individual computers nicely. And I've heard a lot of good things about Python, but I don't know how much more prone to incompatibilities it is vs Java. In case it is important, I know the app will definitely use some flavor of a free server-enabled SQL db (server-enabled because I want to be able to run the app from multiple computers), but I don't know which to use yet. I thought I could decide that next.\nMy experience level: I've taken a C++ (console app only) class and done some VBA in Access, but mostly I'm going to have to jump in and learn as I go. So of course I don't know much about all of this. I'm not in the computer field, this is just a hobby.\nSo, which would be better for this app, Java or Python? \n(In case it comes up, I don't want to make it browser-based at all. I've dealt with individual computers' browser settings breaking programs, and that goes against part 2 of my top priority - maximum compatibility.)\nThank you.\nUpdate: It will need a gui, and I'd like to be able to do a little bit of customization on it (or use a non-standard, or maybe a non-built-in one) to make it pop a little.\nUpdate 2: Truthfully, I really am only concerned with Windows computers. I am considering Java only for its reliability as a platform.","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":594,"Q_Id":2282360,"Users Score":1,"Answer":"If you're going to install only (or mostly) on Windows, I'd go with .Net. \nIf you have experience with C++, then C# would be natural to you, but if you're comfortable with VBA, you can try VB.NET, but if you prefer Python, then there is IronPython or can give a try to IronRuby, but the best of all is you can mix them all as they apply to different parts of your project.\nIn the database area you'll have excellent integration with SQL Server Express, and in the GUI area, Swing can't beat the ease of use of WinForms nor the sophistication of WPF\/Silverlight.\nAs an added bonus, you can have your application automatically updated with ClickOnce.","Q_Score":0,"Tags":"java,python","A_Id":2283347,"CreationDate":"2010-02-17T16:20:00.000","Title":"Help for novice choosing between Java and Python for app with sql db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I completely reset my Django (1.2 alpha) DB (dropping all tables, rather than just clearing them)?\nmanage.py flush does too little (won't work if there are schema changes) and manage.py reset requires me to specify all apps (and appears to take a format that is different from just \" \".join(INSTALLED_APPS)). I can obviously achieve this in a DB specific way, but I figured there must be a sane, DB backend agnostic way to do this.\n[Edit: I'm looking for something that I can call from a script, e.g. 
a Makefile and that continues to work if I change the backend DB or add to settings.INSTALLED_APPS]","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":24412,"Q_Id":2289187,"Users Score":0,"Answer":"Hm, maybe you lie to manage.py, pretending to make fixtures, but only to look for apps:\n\napps=$(python manage.py makefixture 2>&1 | egrep -v '(^Error|^django)'|awk -F . '{print $2}'|uniq); for i in $apps; do python manage.py sqlreset $i; done| grep DROP\n\nThat prints out a list of DROP TABLE statements for all apps tables of your project, excluding django tables itself. If you want to include them, remove the |^django pattern vom egrep.\nBut how to feed the correct database backend? sed\/awk-ing through settings.conf? Or better by utilizing a little settings.conf-reading python script itself.","Q_Score":20,"Tags":"python,django","A_Id":2289931,"CreationDate":"2010-02-18T14:16:00.000","Title":"Complete django DB reset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I completely reset my Django (1.2 alpha) DB (dropping all tables, rather than just clearing them)?\nmanage.py flush does too little (won't work if there are schema changes) and manage.py reset requires me to specify all apps (and appears to take a format that is different from just \" \".join(INSTALLED_APPS)). I can obviously achieve this in a DB specific way, but I figured there must be a sane, DB backend agnostic way to do this.\n[Edit: I'm looking for something that I can call from a script, e.g. a Makefile and that continues to work if I change the backend DB or add to settings.INSTALLED_APPS]","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":24412,"Q_Id":2289187,"Users Score":0,"Answer":"Just assign a new database and drop this db from the db console. Seems to me to be the simplest.","Q_Score":20,"Tags":"python,django","A_Id":2289445,"CreationDate":"2010-02-18T14:16:00.000","Title":"Complete django DB reset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I completely reset my Django (1.2 alpha) DB (dropping all tables, rather than just clearing them)?\nmanage.py flush does too little (won't work if there are schema changes) and manage.py reset requires me to specify all apps (and appears to take a format that is different from just \" \".join(INSTALLED_APPS)). I can obviously achieve this in a DB specific way, but I figured there must be a sane, DB backend agnostic way to do this.\n[Edit: I'm looking for something that I can call from a script, e.g. 
a Makefile and that continues to work if I change the backend DB or add to settings.INSTALLED_APPS]","AnswerCount":7,"Available Count":3,"Score":-0.057080742,"is_accepted":false,"ViewCount":24412,"Q_Id":2289187,"Users Score":-2,"Answer":"take a look at reset command in django's code, and write your own which drops\/creates DB first.","Q_Score":20,"Tags":"python,django","A_Id":2289727,"CreationDate":"2010-02-18T14:16:00.000","Title":"Complete django DB reset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown:\n\nException _mysql_exceptions.ProgrammingError: (2014, \"Commands out of sync; you can't run this command now\") in > ignored\n\nI have many \"try\" and \"exception\" blocks in my code--if the exception occurred within one of those, then I would see my own debugging messages. The above Exception is obviously being caught somewhere since my program does not abort when the Exception is thrown.\nI'm very puzzled, can someone help me out?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":6125,"Q_Id":2291714,"Users Score":0,"Answer":"The exceptions in object destructors (__del__) are ignored, which this message indicates. If you execute some MySQL command without fetching results from the cursor (e.g. 'create procedure' or 'insert') then the exception is unnoticed until the cursor is destroyed.\nIf you want to raise and catch an exception, call explicitly cursor.close() somewhere before going out of the scope.","Q_Score":8,"Tags":"python,mysql,django,exception","A_Id":55394190,"CreationDate":"2010-02-18T19:50:00.000","Title":"Who is throwing (and catching) this MySQL Exception?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown:\n\nException _mysql_exceptions.ProgrammingError: (2014, \"Commands out of sync; you can't run this command now\") in > ignored\n\nI have many \"try\" and \"exception\" blocks in my code--if the exception occurred within one of those, then I would see my own debugging messages. The above Exception is obviously being caught somewhere since my program does not abort when the Exception is thrown.\nI'm very puzzled, can someone help me out?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":6125,"Q_Id":2291714,"Users Score":2,"Answer":"After printing out a bunch of stuff and debugging, I figured out the problem I think. One of the libraries that I used didn't close the connection or the cursor. But this problem only shows up if I iterate through a large amount of data. The problem is also very intermittent and I still don't know who's throwing the \"command out of sync\" exception. 
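The advice in the answers above amounts to closing the cursor (and the connection) explicitly instead of leaving the cleanup to __del__, which is where the ignored "Commands out of sync" message comes from. A minimal MySQLdb sketch of that pattern; the connection details and the orders table are placeholders, not anything from the original posts.

```python
import MySQLdb

def fetch_rows(query, params=()):
    """Run a read-only query, closing the cursor and connection explicitly."""
    # Hypothetical connection details; substitute your own.
    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="billing")
    try:
        cur = conn.cursor()
        try:
            cur.execute(query, params)
            return cur.fetchall()  # drain the result set completely
        finally:
            cur.close()  # any error surfaces here, not in a destructor
    finally:
        conn.close()

rows = fetch_rows("SELECT id, status FROM orders WHERE status = %s", ("open",))
```

Wrapping the same pattern in contextlib.closing() works just as well; the point is simply that cleanup should not be left to garbage collection.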
But now that we closed both the connection and cursor, I don't see the errors anymore.","Q_Score":8,"Tags":"python,mysql,django,exception","A_Id":2300154,"CreationDate":"2010-02-18T19:50:00.000","Title":"Who is throwing (and catching) this MySQL Exception?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i'm using Python with MySQL and Django. I keep seeing this error and I can't figure out where the exception is being thrown:\n\nException _mysql_exceptions.ProgrammingError: (2014, \"Commands out of sync; you can't run this command now\") in > ignored\n\nI have many \"try\" and \"exception\" blocks in my code--if the exception occurred within one of those, then I would see my own debugging messages. The above Exception is obviously being caught somewhere since my program does not abort when the Exception is thrown.\nI'm very puzzled, can someone help me out?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":6125,"Q_Id":2291714,"Users Score":2,"Answer":"I believe this error can occur if you are using the same connection\/cursor from multiple threads. \nHowever, I dont think the creators of Django has made such a mistake, but if you are doing something by yourself it can easily happen.","Q_Score":8,"Tags":"python,mysql,django,exception","A_Id":2292145,"CreationDate":"2010-02-18T19:50:00.000","Title":"Who is throwing (and catching) this MySQL Exception?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm planning on building a Django log-viewing app with powerful filters. I'd like to enable the user to finely filter the results with some custom (possibly DB-specific) SELECT queries.\nHowever, I dislike giving the user write access to the database. Is there a way to make sure a query doesn't change anything in the database? Like a 'dry run' flag? Or is there a way to filter SELECT queries so that they can't be harmful in any way?\nI thought about running the queries as a separate MySQL user but I'd rather avoid the hassle. I also thought about using Google App Engine's GQL 'language', but if there is a cleaner solution, I'd certainly like to hear it :)\nThanks.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":543,"Q_Id":2305353,"Users Score":1,"Answer":"Create and use non-modifiable views.","Q_Score":2,"Tags":"python,sql,django,security,sql-injection","A_Id":2305359,"CreationDate":"2010-02-21T08:48:00.000","Title":"How can I limit an SQL query to be nondestructive?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm planning on building a Django log-viewing app with powerful filters. I'd like to enable the user to finely filter the results with some custom (possibly DB-specific) SELECT queries.\nHowever, I dislike giving the user write access to the database. Is there a way to make sure a query doesn't change anything in the database? Like a 'dry run' flag? 
Or is there a way to filter SELECT queries so that they can't be harmful in any way?\nI thought about running the queries as a separate MySQL user but I'd rather avoid the hassle. I also thought about using Google App Engine's GQL 'language', but if there is a cleaner solution, I'd certainly like to hear it :)\nThanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":543,"Q_Id":2305353,"Users Score":14,"Answer":"Connect with a user that has only been granted SELECT permissions. Situations like this is why permissions exist in the first place.","Q_Score":2,"Tags":"python,sql,django,security,sql-injection","A_Id":2305379,"CreationDate":"2010-02-21T08:48:00.000","Title":"How can I limit an SQL query to be nondestructive?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?","AnswerCount":6,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":4303,"Q_Id":2306048,"Users Score":24,"Answer":"I'd highly recommend using the same database backend in production as in development, and all stages in between. Django will abstract the database stuff, but having different environments will leave you open to horrible internationalisation, configuration issues, and nasty tiny inconsistencies that won't even show up until you push it live.\nPersonally, I'd stick to mysql, but I never got on with postgres :)","Q_Score":20,"Tags":"python,mysql,django,sqlite,dev-to-production","A_Id":2306070,"CreationDate":"2010-02-21T13:45:00.000","Title":"Django: sqlite for dev, mysql for prod?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?","AnswerCount":6,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":4303,"Q_Id":2306048,"Users Score":7,"Answer":"Use the same database in all environments.\nAs much as the ORM tries to abstract the differences between databases, there will always be certain features that behave differently based on the database. 
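The accepted answer to the log-viewer question above is to connect with an account that has only been granted SELECT. A rough sketch of what that looks like from Python, assuming a hypothetical log_reader account and log_entries table; the GRANT statements in the comment are the standard MySQL ones, run once by whoever administers the database.

```python
import MySQLdb

# Assumes a DBA has already created a read-only account, e.g.:
#   CREATE USER 'log_reader'@'%' IDENTIFIED BY 'read-only-password';
#   GRANT SELECT ON logs.* TO 'log_reader'@'%';
# All names here (account, database, table) are illustrative.

def run_user_filter(where_clause, params):
    conn = MySQLdb.connect(host="localhost", user="log_reader",
                           passwd="read-only-password", db="logs")
    try:
        cur = conn.cursor()
        # The WHERE fragment comes from the UI; it is the account's privileges,
        # not string inspection, that keep the query non-destructive.
        cur.execute("SELECT ts, level, message FROM log_entries "
                    "WHERE " + where_clause, params)
        return cur.fetchall()
    finally:
        conn.close()

rows = run_user_filter("level = %s AND message LIKE %s", ("ERROR", "%timeout%"))
```

The safety comes from the account's privileges rather than from inspecting the SQL string, which is exactly what the answer is getting at.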
Database portability is a complete myth.\nPlus, it seems pretty insane to test and develop against code paths that you will never use in production, doesn't it?","Q_Score":20,"Tags":"python,mysql,django,sqlite,dev-to-production","A_Id":9401789,"CreationDate":"2010-02-21T13:45:00.000","Title":"Django: sqlite for dev, mysql for prod?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":4303,"Q_Id":2306048,"Users Score":3,"Answer":"In short, no; unless you want to unnecessarily double development time.","Q_Score":20,"Tags":"python,mysql,django,sqlite,dev-to-production","A_Id":2306069,"CreationDate":"2010-02-21T13:45:00.000","Title":"Django: sqlite for dev, mysql for prod?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Quick question: is it a good idea to use sqlite while developing a Django project, and use MySQL on the production server?","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":4303,"Q_Id":2306048,"Users Score":3,"Answer":"I just made this major mistake: I started off with sqlite, and when I tried to deploy on the production server with mysql, things didn't work as smoothly as I expected. I tried dumpdata\/loaddata with various switches but somehow kept getting errors thrown one after another. Do yourself a big favor and use the same db for both production and development.","Q_Score":20,"Tags":"python,mysql,django,sqlite,dev-to-production","A_Id":12684980,"CreationDate":"2010-02-21T13:45:00.000","Title":"Django: sqlite for dev, mysql for prod?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there another way to connect to a MySQL database with what came included in the version of Python (2.5.1) that is bundled with Mac OS 10.5.x? I unfortunately cannot add the MySQLdb module to the client machines I am working with...I need to work with the stock version of Python that shipped with Leopard.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":3698,"Q_Id":2313307,"Users Score":1,"Answer":"If the problem, as so many people have mentioned, is that the MySQLdb module cannot be installed, a simpler way is:\n1. install the MySQL database\n2. install the pyodbc module\n3. load and configure the ODBC MySQL driver\n4.
perform sql manipulations with pyodbc, which is very mature and full functional.\nhope this helps","Q_Score":3,"Tags":"python,mysql,macos","A_Id":9170459,"CreationDate":"2010-02-22T18:54:00.000","Title":"Python: Access a MySQL db without MySQLdb module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the best way to access sql server from python is it DB-API ?\nAlso could someone provide a such code using the DB-API how to connect to sql server from python and excute query ?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":28195,"Q_Id":2314178,"Users Score":1,"Answer":"ODBC + freetds + a python wrapper library for ODBC.","Q_Score":26,"Tags":"python,sql,sql-server","A_Id":2314282,"CreationDate":"2010-02-22T21:04:00.000","Title":"Python & sql server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm seeking a way to let the python logger module to log to database and falls back to file system when the db is down.\nSo basically 2 things: How to let the logger log to database and how to make it fall to file logging when the db is down.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":48780,"Q_Id":2314307,"Users Score":2,"Answer":"Old question, but dropping this for others. If you want to use python logging, you can add two handlers. One for writing to file, a rotating file handler. This is robust, and can be done regardless if the dB is up or not. \nThe other one can write to another service\/module, like a pymongo integration. \nLook up logging.config on how to setup your handlers from code or json.","Q_Score":48,"Tags":"python,database,logging","A_Id":46617613,"CreationDate":"2010-02-22T21:22:00.000","Title":"python logging to database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a Python web app using sqlalchemy to communicate with mysql database. So far I have mostly been using sqlalchemy's ORM layer to speak with the database. The greatest benefit to me of ORM has been the speed of development, not having to write all these sql queries and then map them to models.\nRecently, however, I've been required to change my design to communicate with the database through stored procedures. Does any one know if there is any way to use sqlalchemy ORM layer to work with my models through the stored procedures? Is there another Python library which would allow me to do this?\nThe way I see it I should be able to write my own select, insert, update and delete statements, attach them to the model and let the library do the rest. I've gone through sqlalchemy's documentation multiple times but can't seem to find a way to do this.\nAny help with this would be great!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1754,"Q_Id":2330278,"Users Score":3,"Answer":"SQLAlchemy doesn't have any good way to convert inserts, updates and deletes to stored procedure calls. 
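The logging answer a little further up suggests attaching two handlers, a RotatingFileHandler plus one that writes to a database, so the file copy survives whenever the database write fails. A small sketch of that idea using the standard logging module; it writes to sqlite3 only to stay self-contained (the original answer mentions pymongo), and every file and table name here is invented.

```python
import logging
import logging.handlers
import sqlite3

class DatabaseHandler(logging.Handler):
    """Best-effort database handler; failures are swallowed because the
    rotating file handler attached below already has every record."""

    def __init__(self, path="log.db"):
        logging.Handler.__init__(self)
        self.path = path

    def emit(self, record):
        try:
            conn = sqlite3.connect(self.path)
            with conn:  # commits on success
                conn.execute("CREATE TABLE IF NOT EXISTS log "
                             "(created REAL, level TEXT, message TEXT)")
                conn.execute("INSERT INTO log VALUES (?, ?, ?)",
                             (record.created, record.levelname, record.getMessage()))
            conn.close()
        except Exception:
            self.handleError(record)  # never let logging take the app down

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1 << 20, backupCount=3))
logger.addHandler(DatabaseHandler())
logger.info("both handlers receive this record")
```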
It probably wouldn't be that hard to add the capability to have instead_{update,insert,delete} extensions on mappers, but no one has bothered yet. I consider the requirement to have simple DML statements go through stored procedures rather silly. It really doesn't offer anything that you couldn't do with triggers.\nIf you can't avoid the silliness, there are some ways that you can use SQLAlchemy to go along with it. You'll lose some of the ORM functionality though. You can build ORM objects from stored procedure results using query(Obj).from_statement(text(\"...\")), just have the column labels in the statement match the column names that you told SQLAlchemy to map.\nOne option to cope with DML statements is to turn autoflush off and instead of flushing go through the sessions .new, .dirty and .deleted attributes to see what has changed, issue corresponding statements as stored procedure calls and expunge the objects before committing. \nOr you can just forgo SQLAlchemy state tracking and issue the stored procedure calls directly.","Q_Score":4,"Tags":"python,mysql,database,stored-procedures,sqlalchemy","A_Id":2338360,"CreationDate":"2010-02-24T22:51:00.000","Title":"Keeping ORM with stored procedures","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is the scenario. In your function you're executing statements using a cursor, but one of them fails and an exception is thrown. Your program exits out of the function before closing the cursor it was working with. Will the cursor float around taking up space? Do I have to close the cursor?\nAdditionally, the Python documentation has an example of cursor use and says: \"We can also close the cursor if we are done with it.\" The keyword being \"can,\" not \"must.\" What do they mean precisely by this?","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":27863,"Q_Id":2330344,"Users Score":13,"Answer":"You're not obliged to call close() on the cursor; it can be garbage collected like any other object.\nBut even if waiting for garbage collection sounds OK, I think it would be good style still to ensure that a resource such as a database cursor gets closed whether or not there is an exception.","Q_Score":48,"Tags":"python,sqlite","A_Id":2330380,"CreationDate":"2010-02-24T23:02:00.000","Title":"In Python with sqlite is it necessary to close a cursor?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is the scenario. In your function you're executing statements using a cursor, but one of them fails and an exception is thrown. Your program exits out of the function before closing the cursor it was working with. Will the cursor float around taking up space? Do I have to close the cursor?\nAdditionally, the Python documentation has an example of cursor use and says: \"We can also close the cursor if we are done with it.\" The keyword being \"can,\" not \"must.\" What do they mean precisely by this?","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":27863,"Q_Id":2330344,"Users Score":7,"Answer":"I haven't seen any effect for the sqlite3.Cursor.close() operation yet. 
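The stored-procedure answer above points at query(Obj).from_statement(text(...)) with column labels matching the mapped columns. A sketch of that pattern, assuming a simple User model and a hypothetical get_users() procedure; the connection URL is equally made up.

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

# Hypothetical database and stored procedure; the labels returned by the
# procedure must match the mapped column names (id, name).
engine = create_engine("mysql://app:secret@localhost/appdb")
session = sessionmaker(bind=engine)()

users = (session.query(User)
         .from_statement(text("CALL get_users(:min_id)"))
         .params(min_id=10)
         .all())
```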
\nAfter closing, you can still call fetch(all|one|many) which will return the remaining results from the previous execute statement. Even running Cursor.execute() still works ...","Q_Score":48,"Tags":"python,sqlite","A_Id":2416354,"CreationDate":"2010-02-24T23:02:00.000","Title":"In Python with sqlite is it necessary to close a cursor?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is the scenario. In your function you're executing statements using a cursor, but one of them fails and an exception is thrown. Your program exits out of the function before closing the cursor it was working with. Will the cursor float around taking up space? Do I have to close the cursor?\nAdditionally, the Python documentation has an example of cursor use and says: \"We can also close the cursor if we are done with it.\" The keyword being \"can,\" not \"must.\" What do they mean precisely by this?","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":27863,"Q_Id":2330344,"Users Score":0,"Answer":"Yes, we should close our cursor. I once encountered an error when I used my cursor to configure my connection object: 'PRAGMA synchronous=off' and 'PRAGMA journal_mode=off' for faster insertion. Once I closed the cursor, the error went away. I forgot what type of error I encountered.","Q_Score":48,"Tags":"python,sqlite","A_Id":71683829,"CreationDate":"2010-02-24T23:02:00.000","Title":"In Python with sqlite is it necessary to close a cursor?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a python script to select, insert, update, and delete data in SimpleDB.\nI've been using the simpledb module written by sixapart so far, and it's working pretty well.\nI've found one potential bug\/feature that is problematic for me when running select queries with \"limit\", and I'm thinking of trying it with the boto module to see if it works better.\nHas anyone used these two modules? Care to offer an opinion on which is better?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":833,"Q_Id":2336822,"Users Score":3,"Answer":"I've found boto to be effective and straight forward and I've never had any trouble with queries with limits. Although I've never used the sixapart module.","Q_Score":4,"Tags":"python,amazon-simpledb","A_Id":2336902,"CreationDate":"2010-02-25T19:10:00.000","Title":"What's the best module to access SimpleDB in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"SQLAlchemy seems really heavyweight if all I use is MySQL.\nWhy are convincing reasons for\/against the use of SQLAlchemy in an application that only uses MySQL.","AnswerCount":3,"Available Count":3,"Score":0.2605204458,"is_accepted":false,"ViewCount":275,"Q_Id":2358822,"Users Score":4,"Answer":"I don't think performance should be much of a factor in your choice. The layer that an ORM adds will be insignificant compared to the speed of the database. 
Databases always end up being a bottleneck.\nUsing an ORM may allow you to develop faster with less bugs. You can still access the DB directly if you have a query that doesn't work well with the ORM layer.","Q_Score":3,"Tags":"python,mysql,sqlalchemy,pylons","A_Id":2359697,"CreationDate":"2010-03-01T20:24:00.000","Title":"If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"SQLAlchemy seems really heavyweight if all I use is MySQL.\nWhy are convincing reasons for\/against the use of SQLAlchemy in an application that only uses MySQL.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":275,"Q_Id":2358822,"Users Score":0,"Answer":"sqlalchemy provides more than just an orm, you can select\/insert\/update\/delete from table objects, join them etc.... the benefit of using those things over building strings with sql in them is guarding against sql injection attacks for one. You also get some decent connection management that you don't have to write yourself. \nThe orm part may not be appropriate for your application, but rolling your own sql handling and connection handling would be really really stupid in my opinion.","Q_Score":3,"Tags":"python,mysql,sqlalchemy,pylons","A_Id":2359777,"CreationDate":"2010-03-01T20:24:00.000","Title":"If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"SQLAlchemy seems really heavyweight if all I use is MySQL.\nWhy are convincing reasons for\/against the use of SQLAlchemy in an application that only uses MySQL.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":275,"Q_Id":2358822,"Users Score":7,"Answer":"ORM means that your OO application actually makes sense when interpreted as the interaction of objects.\nNo ORM means that you must wallow in the impedance mismatch between SQL and Objects. Working without an ORM means lots of redundant code to map between SQL query result sets, individual SQL statements and objects. \nSQLAchemy partitions your application cleanly into objects that interact and a persistence mechanism that (today) happens to be a relational database. \nWith SQLAlchemy you stand a fighting chance of separating the core model and processing from the odd limitations and quirks of a SQL RDBMS.","Q_Score":3,"Tags":"python,mysql,sqlalchemy,pylons","A_Id":2358852,"CreationDate":"2010-03-01T20:24:00.000","Title":"If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m trying to bulk insert data to SQL server express database. 
When doing bcp from Windows XP command prompt, I get the following error:\nC:\\temp>bcp in -T -f -S \n\nStarting copy...\nSQLState = S1000, NativeError = 0\nError = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file\n\n0 rows copied.\nNetwork packet size (bytes): 4096\nClock Time (ms.) Total : 4391\nSo, there is a problem with EOF. How to append a correct EOF character to this file using Perl or Python?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3283,"Q_Id":2371645,"Users Score":1,"Answer":"This is not a problem with missing EOF, but with EOF that is there and is not expected by bcp.\nI am not a bcp tool expert, but it looks like there is some problem with format of your data files.","Q_Score":0,"Tags":"python,sql-server,perl,bcp","A_Id":2371680,"CreationDate":"2010-03-03T13:35:00.000","Title":"How to append EOF to file using Perl or Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019m trying to bulk insert data to SQL server express database. When doing bcp from Windows XP command prompt, I get the following error:\nC:\\temp>bcp in -T -f -S \n\nStarting copy...\nSQLState = S1000, NativeError = 0\nError = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file\n\n0 rows copied.\nNetwork packet size (bytes): 4096\nClock Time (ms.) Total : 4391\nSo, there is a problem with EOF. How to append a correct EOF character to this file using Perl or Python?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3283,"Q_Id":2371645,"Users Score":3,"Answer":"EOF is End Of File. What probably occurred is that the file is not complete; the software expects data, but there is none to be had anymore.\nThese kinds of things happen when:\n\nthe export is interrupted (quit dump software while dumping)\nwhile copying the dumpfile aborting the copy\ndisk full during dump\n\nthese kinds of things. \nBy the way, though EOF is usually just an end of file, there does exist an EOF character. This is used because terminal (command line) input doesn't really end like a file does, but it sometimes is necessary to pass an EOF to such a utility. I don't think it's used in real files, at least not to indicate an end of file. The file system knows perfectly well when the file has ended, it doesn't need an indicator to find that out.\nEDIT shamelessly copied from a comment provided by John Machin\nIt can happen (uninentionally) in real files. All it needs is (1) a data-entry user to type Ctrl-Z by mistake, see nothing on the screen, type the intended Shift-Z, and keep going and (2) validation software (written by e.g. the company president's nephew) which happily accepts Ctrl-anykey in text fields and your database has a little bomb in it, just waiting for someone to produce a query to a flat file.","Q_Score":0,"Tags":"python,sql-server,perl,bcp","A_Id":2371725,"CreationDate":"2010-03-03T13:35:00.000","Title":"How to append EOF to file using Perl or Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a big DBF file (~700MB). I'd like to select only a few lines from it using a python script. 
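As the answer above explains, the bcp complaint usually means a truncated file or a stray Ctrl-Z (the literal EOF character) that slipped into a text field. A quick, hypothetical Python check that scans the data file for embedded 0x1A bytes before the bulk load; the path is only an example.

```python
def find_ctrl_z(path):
    """Return byte offsets of embedded 0x1A (Ctrl-Z) characters."""
    with open(path, "rb") as handle:
        data = handle.read()
    return [i for i in range(len(data)) if data[i:i + 1] == b"\x1a"]

# Example path only; point it at the file you feed to bcp.
positions = find_ctrl_z(r"C:\temp\export.dat")
if positions:
    print("Ctrl-Z found at byte offsets: %s" % positions)
else:
    print("No embedded EOF characters; the file is probably just truncated.")
```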
I've seen that dbfpy is a nice module that allows to open this type of database, but for now I haven't found any querying capability. Iterating through all the elements from python is simply too slow.\nCan I do what I want from python in a reasonable time?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":5441,"Q_Id":2373086,"Users Score":2,"Answer":"Chances are, your performance is more I\/O bound than CPU bound. As such, the best way to speed it up is to optimize your search. You probably want to build some kind of index keyed by whatever your search predicate is.","Q_Score":9,"Tags":"python,performance,python-3.x,dbf,xbase","A_Id":2375874,"CreationDate":"2010-03-03T16:38:00.000","Title":"Python: Fast querying in a big dbf (xbase) file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to get the feed from the spreadsheet and worksheet ID. I want to capture the data from each cell. \ni.e, I am able to get the feed from the worksheet. Now I need to get data(string type?) from each of the cells to make a comparison and for input. \nHow exactly can I do that?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":14275,"Q_Id":2377301,"Users Score":1,"Answer":"gspread is probably the fastest way to begin this process, however there are some speed limitations on updating data using gspread from your localhost. If you're moving large sets of data with gspread - for instance moving 20 columns of data over a column, you may want to automate the process using a CRON job.","Q_Score":6,"Tags":"python,google-sheets,gspread","A_Id":22048019,"CreationDate":"2010-03-04T06:32:00.000","Title":"How to write a python script to manipulate google spreadsheet data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"1.I have a list of data and a sqlite DB filled with past data along with some stats on each data. I have to do the following operations with them.\n\nCheck if each item in the list is present in DB. if no then collect some stats on the new item and add them to DB.\nCheck if each item in DB is in the list. if no delete it from DB.\n\nI cannot just create a new DB, coz I have other processing to do on the new items and the missing items.\nIn short, i have to update the DB with the new data in list. What is best way to do it? \n2.I had to use sqlite with python threads. So I put a lock for every DB read and write operation. Now it has slowed down the DB access. What is the overhead for thread lock operation? And Is there any other way to use the DB with multiple threads?\nCan someone help me on this?I am using python3.1.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":227,"Q_Id":2378364,"Users Score":0,"Answer":"It does not need to check anything, just use INSERT OR IGNORE in first case (just make sure you have corresponding unique fields so INSERT would not create duplicates) and DELETE FROM tbl WHERE data NOT IN ('first item', 'second item', 'third item') in second case.\nAs it is stated in the official SQLite FAQ, \"Threads are evil. Avoid them.\" As far as I remember there were always problems with threads+sqlite. 
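The answer above reduces the list-to-table sync to INSERT OR IGNORE plus DELETE ... NOT IN. A minimal sqlite3 sketch of that, assuming an items table keyed on the data value; the schema, file name and sample list are all invented for illustration.

```python
import sqlite3

def sync_items(db_path, items):
    """Make the items table match the in-memory list."""
    conn = sqlite3.connect(db_path)
    with conn:  # single transaction, committed on success
        conn.execute("CREATE TABLE IF NOT EXISTS items "
                     "(data TEXT PRIMARY KEY, hits INTEGER DEFAULT 0)")
        # New values are inserted; existing ones are skipped via the unique key.
        conn.executemany("INSERT OR IGNORE INTO items (data) VALUES (?)",
                         [(item,) for item in items])
        # Rows no longer present in the list are removed.
        if items:
            placeholders = ", ".join("?" for _ in items)
            conn.execute("DELETE FROM items WHERE data NOT IN (%s)" % placeholders,
                         items)
        else:
            conn.execute("DELETE FROM items")
    conn.close()

sync_items("cache.db", ["alpha", "beta", "gamma"])
```

The empty-list branch is there because SQLite treats NOT IN () as a syntax error.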
It's not that sqlite is not working with threads at all, just don't rely much on this feature. You can also make single thread working with database and pass all queries to it first, but effectiveness of such approach is heavily dependent on style of database usage in your program.","Q_Score":0,"Tags":"python,sqlite,multithreading","A_Id":2378530,"CreationDate":"2010-03-04T10:14:00.000","Title":"Need help on python sqlite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Writing an app in Python, and been playing with various ORM setups and straight SQL. All of which are ugly as sin.\nI have been looking at ZODB as an object store, and it looks a promising alternative... would you recommend it? What are your experiences, problems, and criticism, particularly regarding developer's perspectives, scalability, integrity, long-term maintenance and alternatives? Anyone start a project with it and ditch it? Why?\nWhilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":10874,"Q_Id":2388870,"Users Score":15,"Answer":"Compared to \"any key-value store\", the key features for ZODB would be automatic integration of attribute changes with real ACID transactions, and clean, \"arbitrary\" references to other persistent objects.\nThe ZODB is bigger than just the FileStorage used by default in Zope:\n\nThe RelStorage backend lets you put your data in an RDBMS which can be backed up, replicated, etc. using standard tools.\nZEO allows easy scaling of appservers and off-line jobs.\nThe two-phase commit support allows coordinating transactions among multiple databases, including RDBMSes (assuming that they provide a TPC-aware layer).\nEasy hierarchy based on object attributes or containment: you don't need to write recursive self-joins to emulate it.\nFilesystem-based BLOB support makes serving large files trivial to implement.\n\nOverall, I'm very happy using ZODB for nearly any problem where the shape of the data is not obviously \"square\".","Q_Score":43,"Tags":"python,zodb","A_Id":2390062,"CreationDate":"2010-03-05T18:01:00.000","Title":"ZODB In Real Life","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Writing an app in Python, and been playing with various ORM setups and straight SQL. All of which are ugly as sin.\nI have been looking at ZODB as an object store, and it looks a promising alternative... would you recommend it? What are your experiences, problems, and criticism, particularly regarding developer's perspectives, scalability, integrity, long-term maintenance and alternatives? Anyone start a project with it and ditch it? Why?\nWhilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(","AnswerCount":5,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":10874,"Q_Id":2388870,"Users Score":5,"Answer":"I would recommend it.\nI really don't have any criticisms. If it's an object store your looking for, this is the one to use. 
I've stored 2.5 million objects in it before and didn't feel a pinch.","Q_Score":43,"Tags":"python,zodb","A_Id":2391063,"CreationDate":"2010-03-05T18:01:00.000","Title":"ZODB In Real Life","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Writing an app in Python, and been playing with various ORM setups and straight SQL. All of which are ugly as sin.\nI have been looking at ZODB as an object store, and it looks a promising alternative... would you recommend it? What are your experiences, problems, and criticism, particularly regarding developer's perspectives, scalability, integrity, long-term maintenance and alternatives? Anyone start a project with it and ditch it? Why?\nWhilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":10874,"Q_Id":2388870,"Users Score":2,"Answer":"ZODB has been used for plenty of large databases\nMost ZODB usage is\/was probably Zope users who migrated away if they migrate away from Zope\nPerformance is not so good as relatonal database+ORM especially if you have lots of writes.\nLong term maintenance is not so bad, you want to pack the database from time to time, but that can be done live.\nYou have to use ZEO if you are going to use more than one process with your ZODB which is quite a lot slower than using ZODB directly\nI have no idea how ZODB performs on flash disks.","Q_Score":43,"Tags":"python,zodb","A_Id":2389155,"CreationDate":"2010-03-05T18:01:00.000","Title":"ZODB In Real Life","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I process a lot of text\/data that I exchange between Python, R, and sometimes Matlab.\nMy go-to is the flat text file, but also use SQLite occasionally to store the data and access from each program (not Matlab yet though). I don't use GROUPBY, AVG, etc. in SQL as much as I do these operations in R, so I don't necessarily require the database operations.\nFor such applications that requires exchanging data among programs to make use of available libraries in each language, is there a good rule of thumb on which data exchange format\/method to use (even XML or NetCDF or HDF5)?\nI know between Python -> R there is rpy or rpy2 but I was wondering about this question in a more general sense - I use many computers which all don't have rpy2 and also use a few other pieces of scientific analysis software that require access to the data at various times (the stages of processing and analysis are also separated).","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3563,"Q_Id":2392017,"Users Score":15,"Answer":"If all the languages support SQLite - use it. 
The power of SQL might not be useful to you right now, but it probably will be at some point, and it saves you having to rewrite things later when you decide you want to be able to query your data in more complicated ways.\nSQLite will also probably be substantially faster if you only want to access certain bits of data in your datastore - since doing that with a flat-text file is challenging without reading the whole file in (though it's not impossible).","Q_Score":8,"Tags":"python,sql,database,r,file-format","A_Id":2392026,"CreationDate":"2010-03-06T09:30:00.000","Title":"SQLite or flat text file?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"They will also search part of their name. Not only words with spaces.\nIf they type \"Matt\", I expect to retrieve \"Matthew\" too.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":291,"Q_Id":2394870,"Users Score":0,"Answer":"If you are trying to search for the names through any development Language, you can use the Regular expression package in Java.\nSome thing like java.util.regex.*;","Q_Score":3,"Tags":"python,mysql,database,search,indexing","A_Id":2395473,"CreationDate":"2010-03-07T01:54:00.000","Title":"Suppose I have 400 rows of people's names in a database. What's the best way to do a search for their names?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I have a simple table that contains username, firstname, lastname.\nHow do I express this in berkeley Db?\nI'm currently using bsddb as the interface.\nCheers.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1199,"Q_Id":2399643,"Users Score":4,"Answer":"You have to pick one \"column\" as the key (must be unique; I imagine that would be \"username\" in your case) -- the only way searches will ever possibly happen. The other columns can be made to be the single string value of that key by any way you like, from pickling to simple joining with a character that's guaranteed to never occur in any of the columns, such as `\\0' for many kind of \"readable text strings\".\nIf you need to be able to search by different keys you'll need other, supplementary and separate bsddb databases set up as \"indices\" into your main table -- it's lots of work, and there's lots of literature on the subject. (Alternatively, you move to a higher-abstraction technology, such as sqlite, which handles the indexing neatly on your behalf;-).","Q_Score":1,"Tags":"python,berkeley-db,bsddb,okvs","A_Id":2399691,"CreationDate":"2010-03-08T06:25:00.000","Title":"Expressing multiple columns in berkeley db in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have 500 rows of data, each with a paragraph of text (like this paragraph). That's it.I want to do a search that matches part of words. (%LIKE%, not FULL_TEXT)\nWhat would be faster?\n\nSELECT * FROM ...WHERE LIKE \"%query%\"; This would put load on the database server.\nSelect all. 
Then, go through each one and do .find >= 0 This would put load on the web server.\n\nThis is a website, and people will be searching frequently.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":105,"Q_Id":2401508,"Users Score":1,"Answer":"This is very hard for us to determine without knowing:\n\nthe amount of text to search\nthe load and configuration on the database server\nthe load and configuration on on the webserver\netc etc ...\n\nWith that said i would conceptually definitely go for the first scenario. It should be lightening-fast when searching only 500 rows.","Q_Score":0,"Tags":"python,mysql,database,regex,search","A_Id":2401635,"CreationDate":"2010-03-08T13:18:00.000","Title":"What would be the most efficient way to do this search (mysql or text)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an idea for a product that I want to be web-based. But because I live in a part of the world where the internet is not always available, there needs to be a client desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as Frontpage. \nSo from my research, I think I have the following options; PHP, Ruby on Rails, Python or .NET for the programming side. MySQL for the DB. And Apache, or possibly IIS, for the webserver. \nI will probably start with a local ISP provider for the cloud servce. But then maybe move to something more \"robust\" and universal in the future, ie. Amazon, or Azure, or something along that line. \nMy question then is this. What would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but the ones I have researched and thought of.\nThanks everyone,\nCraig","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":2428077,"Users Score":0,"Answer":"If you wan't to run a version of the server on desktops, your best options would be Python, Rails, or Java servlets, all of which can be easily packaged into small self contained servers with no dependencies.\nMy recommendation for the desktop would be HTML 5 local storage. The standard hasn't been finalized, but there is experimental support in Google Chrome. If you can force your users to use a specific browser version, you should be OK, until it is finalized.\nI would recommend looking at Django and Rails before any other framework. They have different design philosophies, so one of them might be better suited for your application. Another framework to consider is Grails, which is essentially a clone of Rails in the groovy language.","Q_Score":1,"Tags":"php,python,ruby-on-rails,programming-languages,saas","A_Id":2430572,"CreationDate":"2010-03-11T19:38:00.000","Title":"Old desktop programmer wants to create S+S project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an idea for a product that I want to be web-based. 
But because I live in a part of the world where the internet is not always available, there needs to be a client desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as Frontpage. \nSo from my research, I think I have the following options; PHP, Ruby on Rails, Python or .NET for the programming side. MySQL for the DB. And Apache, or possibly IIS, for the webserver. \nI will probably start with a local ISP provider for the cloud servce. But then maybe move to something more \"robust\" and universal in the future, ie. Amazon, or Azure, or something along that line. \nMy question then is this. What would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but the ones I have researched and thought of.\nThanks everyone,\nCraig","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":2428077,"Users Score":0,"Answer":"The languages you list are all serverside components. The big question is whether you can sensibly build a thick client - effectively you could develop a multi-tier application where the webserver sits on the client and uses a webservice as a datafeed if\/when its available but the solution is not very portable.\nYou could build a purely ajax driven website in javascript then deploy it to the client as signed javascripts on the local filesystem (they need to be signed to get around the restriction that javscripts can only connect back to the server where they served from normally).\nAnother approach would be to use Google Gears - but that would be a single browser solution.\nC.","Q_Score":1,"Tags":"php,python,ruby-on-rails,programming-languages,saas","A_Id":2429484,"CreationDate":"2010-03-11T19:38:00.000","Title":"Old desktop programmer wants to create S+S project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an idea for a product that I want to be web-based. But because I live in a part of the world where the internet is not always available, there needs to be a client desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as Frontpage. \nSo from my research, I think I have the following options; PHP, Ruby on Rails, Python or .NET for the programming side. MySQL for the DB. And Apache, or possibly IIS, for the webserver. \nI will probably start with a local ISP provider for the cloud servce. But then maybe move to something more \"robust\" and universal in the future, ie. Amazon, or Azure, or something along that line. \nMy question then is this. What would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but the ones I have researched and thought of.\nThanks everyone,\nCraig","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":2428077,"Users Score":0,"Answer":"If you want a 'desktop component' that is available for you to do development on whenever your internet is out, you could really choose any of those technologies. 
You can always have a local server (like apache) running on your machine, as well as a local sql database, though if your database contains a large amount of data you may need to scale it down.\nRuby on Rails may be the easiest for you to get started with, though, since it comes packaged with WEBrick (a ruby library that provides HTTP services), and SQLite, a lightweight SQL database management system. Ruby on Rails is configured by default to use these.","Q_Score":1,"Tags":"php,python,ruby-on-rails,programming-languages,saas","A_Id":2428452,"CreationDate":"2010-03-11T19:38:00.000","Title":"Old desktop programmer wants to create S+S project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Let's say I have an user registration form. In this form, I have the option for the user to upload a photo. I have an User table and Photo table. My User table has a \"PathToPhoto\" column. My question is how do I fill in the \"PathToPhoto\" column if the photo is uploaded and inserted into Photo table before the user is created? Another way to phrase my question is how to get the newly uploaded photo to be associated to the user that may or may not be created next. \nI'm using python and postgresql.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":397,"Q_Id":2435281,"Users Score":0,"Answer":"To make sure we're on the same page, is the following correct?\n\nYou're inserting the photo information into the Photo table immediately after the user uploads the photo but before he\/she submits the form;\nWhen the user submits the form, you're inserting a row into the User table;\nOne of the items in that row is information about the previously created photo entry.\n\nIf so, you should be able to store the \"path to photo\" information in a Python variable until the user submits the form, and then use the value from that variable in your User-table insert.","Q_Score":0,"Tags":"python,database,postgresql","A_Id":2435639,"CreationDate":"2010-03-12T19:32:00.000","Title":"Database: storing data from user registration form","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am planning to make some big project (1 000 000 users, approximately 500 request pre second - in hot time).\nFor performance I'm going to use no relational dbms (each request could cost lot of instructions in relational dbms like mysql) - so i can't use DAL.\nMy question is:\n\nhow web2py is working with a big traffic, is it work concurrently? I'm consider to use web2py or Gork - Zope,\nHow is working zodb(Z Object Database) with a lot of data? Is there some comparison with object-relational postgresql?\n\nCould you advice me please.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1388,"Q_Id":2459549,"Users Score":1,"Answer":"Zope and the ZODB have been used with big applications, but I'd still consider linking Zope with MySQL or something like that for serious large-scale applications. Even though Zope has had a lot of development cycles, it is usually used with another database engine for good reason. 
As far as I know, the argument applies doubly for web2py.","Q_Score":3,"Tags":"python,zope,web2py,zodb,grok","A_Id":9985357,"CreationDate":"2010-03-17T02:26:00.000","Title":"web2py or grok (zope) on a big portal,","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am planning to make some big project (1 000 000 users, approximately 500 request pre second - in hot time).\nFor performance I'm going to use no relational dbms (each request could cost lot of instructions in relational dbms like mysql) - so i can't use DAL.\nMy question is:\n\nhow web2py is working with a big traffic, is it work concurrently? I'm consider to use web2py or Gork - Zope,\nHow is working zodb(Z Object Database) with a lot of data? Is there some comparison with object-relational postgresql?\n\nCould you advice me please.","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1388,"Q_Id":2459549,"Users Score":7,"Answer":"First, don't assume that a data abstraction layer will have unacceptable performance, until you actually see it in practice. It is pretty easy to switch to RAW sql if and when you run into a problem.\nSecond, most users who worry about there server technology handling a million users never finish their applications. Pick whatever technology you think will enable you to build the best application in the shortest time. Any technology can be scaled, at the very least, through clustering.","Q_Score":3,"Tags":"python,zope,web2py,zodb,grok","A_Id":2459620,"CreationDate":"2010-03-17T02:26:00.000","Title":"web2py or grok (zope) on a big portal,","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python MySQLDB, and I want to insert this into DATETIME field in Mysql . How do I do that with cursor.execute?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":9981,"Q_Id":2460491,"Users Score":1,"Answer":"Solved.\nI just did this:\ndatetime.datetime.now() ...insert that into the column.","Q_Score":5,"Tags":"python,mysql,database,datetime,date","A_Id":2460546,"CreationDate":"2010-03-17T07:28:00.000","Title":"In Python, if I have a unix timestamp, how do I insert that into a MySQL datetime field?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to store the images related to a each person's profile in the DB and retrieve them\nwhen requested and save it as .jpg file - and display it to the users.\nHow could I render the image data stored in the DB as an image and store it locally??","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":19327,"Q_Id":2477045,"Users Score":1,"Answer":"Why don't you simply store the images on the file system, and only store their references on the database. 
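The timestamp answer above simply inserts datetime.datetime.now(); for the unix timestamp mentioned in that question's title, datetime.datetime.fromtimestamp() gives an object that MySQLdb binds to a DATETIME column directly. A short sketch with placeholder connection details and a hypothetical events table.

```python
import datetime
import MySQLdb

unix_ts = 1268809680  # example value
when = datetime.datetime.fromtimestamp(unix_ts)  # or datetime.datetime.now()

# Placeholder connection details and table.
conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
cur = conn.cursor()
# Passing the datetime object as a parameter lets MySQLdb format the
# DATETIME literal; no manual strftime() is needed.
cur.execute("INSERT INTO events (happened_at) VALUES (%s)", (when,))
conn.commit()
cur.close()
conn.close()
```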
That's a lot more elegant, and won't consume loads of your database.\nAlso, you won't have to use any kind of binary functions to read them from the DB, saving memory and loading time.\nIs there a very specific reason why you wanna store it on the DB?\nCheers","Q_Score":3,"Tags":"python,image-manipulation","A_Id":2477074,"CreationDate":"2010-03-19T12:08:00.000","Title":"Storing and Retrieving Images from Database using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A legacy web application written using PHP and utilizing MySql database needs to be rewritten completely. However, the existing database structure must not be changed at all.\nI'm looking for suggestions on which framework would be most suitable for this task? Language candidates are Python, PHP, Ruby and Java.\nAccording to many sources it might be challenging to utilize rails effectively with existing database. Also I have not found a way to automatically generate models out of the database.\nWith Django it's very easy to generate models automatically. However I'd appreciate first hand experience on its suitability to work with legacy DBs. The database in question contains all kinds of primary keys, including lots of composite keys.\nAlso I appreciate suggestions of other frameworks worth considering.","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1525,"Q_Id":2507463,"Users Score":0,"Answer":"There are no clear cut winners when picking a web framework. Each platform you mentioned has its benefits and drawbacks (cost of hardware, professional support, community support, etc.). Depending on your time table, project requirements, and available hardware resources you are probably going to need some different answers.Personally, I would start your investigation with a platform where you and your team are most experienced. \nLike many of the other posters I can only speak to what I'm actively using now, and in my case it is Java. If Java seems to match your projects requirements, you probably want to go with one of the newer frameworks with an active community. Currently Spring Web MVC, Struts2, and Stripes seem to be fairly popular. These frameworks are mostly, if not totally, independent of the persistence layer, but all integrate well with technologies like hibernate and jpa; although you have to do most, if not all, of the wiring yourself. \nIf you want to take the Java road there are also pre-built application stacks that take care of most of wiring issues for you. For an example you might want to look at Matt Raible's AppFuse. He has built an extensible starter application with many permutations of popular java technologies.\nIf you are interested in the JVM as a platform, you may also want to look at complete stack solutions like Grails, or tools that help you build your stack quickly like Spring Roo. \nAlmost all of the full stack solutions I've seen allow for integration with a legacy database schema. As long as your database is well designed, you should be able to map your tables. The mention of composite keys kind of scares me, but depending on your persistence technology this may or may not be an issue. Hibernate in Java\/.NET supports mapping to composite keys, as does GORM in grails (built on hibernate). 
In almost all cases these mappings are discouraged, but people who build persistence frameworks know you can't always scorch earth and completely recreate your model.","Q_Score":1,"Tags":"java,php,python,ruby","A_Id":2512975,"CreationDate":"2010-03-24T12:11:00.000","Title":"Web framework for an application utilizing existing database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"A legacy web application written using PHP and utilizing MySql database needs to be rewritten completely. However, the existing database structure must not be changed at all.\nI'm looking for suggestions on which framework would be most suitable for this task? Language candidates are Python, PHP, Ruby and Java.\nAccording to many sources it might be challenging to utilize rails effectively with existing database. Also I have not found a way to automatically generate models out of the database.\nWith Django it's very easy to generate models automatically. However I'd appreciate first hand experience on its suitability to work with legacy DBs. The database in question contains all kinds of primary keys, including lots of composite keys.\nAlso I appreciate suggestions of other frameworks worth considering.","AnswerCount":8,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1525,"Q_Id":2507463,"Users Score":2,"Answer":"I have very good experience with Django. Every time I needed it was up to the task for interfacing with existing database. \nAutogenerated models are the start, as MySQL is not the strictest with its schema. Not that it will not work only that usually some of the db restrictions are held in app itself.","Q_Score":1,"Tags":"java,php,python,ruby","A_Id":2507492,"CreationDate":"2010-03-24T12:11:00.000","Title":"Web framework for an application utilizing existing database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am in need of a lightweight way to store dictionaries of data into a database. What I need is something that: \n\nCreates a database table from a simple type description (int, float, datetime etc)\nTakes a dictionary object and inserts it into the database (including handling datetime objects!)\nIf possible: Can handle basic references, so the dictionary can reference other tables\n\nI would prefer something that doesn't do a lot of magic. I just need an easy way to setup and get data into an SQL database. \nWhat would you suggest? There seems to be a lot of ORM software around, but I find it hard to evaluate them.","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":1069,"Q_Id":2539147,"Users Score":3,"Answer":"SQLAlchemy offers an ORM much like django, but does not require that you work within a web framework.","Q_Score":1,"Tags":"python,sql,orm","A_Id":2539235,"CreationDate":"2010-03-29T15:30:00.000","Title":"Lightweight Object->Database in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently i have developed a billing application for my company with Python\/Django. 
For few months everything was fine but now i am observing that the performance is dropping because of more and more users using that applications. Now the problem is that the application is now very critical for the finance team. Now the finance team are after my life for sorting out the performance issue. I have no other option but to find a way to increase the performance of the billing application. \nSo do you guys know any performance optimization techniques in python that will really help me with the scalability issue\nGuys we are using mysql database and its hosted on apache web server on Linux box. Secondly what i have noticed more is the over all application is slow and not the database transactional part. For example once the application is loaded then it works fine but if they navigate to other link on that application then it takes a whole lot of time.\nAnd yes we are using HTML, CSS and Javascript","AnswerCount":9,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":1942,"Q_Id":2545820,"Users Score":6,"Answer":"ok, not entirely to the point, but before you go and start fixing it, make sure everyone understands the situation. it seems to me that they're putting some pressure on you to fix the \"problem\".\nwell first of all, when you wrote the application, have they specified the performance requirements? did they tell you that they need operation X to take less than Y secs to complete? Did they specify how many concurrent users must be supported without penalty to the performance? If not, then tell them to back off and that it is iteration (phase, stage, whatever) one of the deployment, and the main goal was the functionality and testing. phase two is performance improvements. let them (with your help obviously) come up with some non functional requirements for the performance of your system.\nby doing all this, a) you'll remove the pressure applied by the finance team (and i know they can be a real pain in the bum) b) both you and your clients will have a clear idea of what you mean by \"performance\" c) you'll have a base that you can measure your progress and most importantly d) you'll have some agreed time to implement\/fix the performance issues.\nPS. that aside, look at the indexing... :)","Q_Score":11,"Tags":"python","A_Id":2545940,"CreationDate":"2010-03-30T14:07:00.000","Title":"Optimization Techniques in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Recently i have developed a billing application for my company with Python\/Django. For few months everything was fine but now i am observing that the performance is dropping because of more and more users using that applications. Now the problem is that the application is now very critical for the finance team. Now the finance team are after my life for sorting out the performance issue. I have no other option but to find a way to increase the performance of the billing application. \nSo do you guys know any performance optimization techniques in python that will really help me with the scalability issue\nGuys we are using mysql database and its hosted on apache web server on Linux box. Secondly what i have noticed more is the over all application is slow and not the database transactional part. 
For example once the application is loaded then it works fine but if they navigate to other link on that application then it takes a whole lot of time.\nAnd yes we are using HTML, CSS and Javascript","AnswerCount":9,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1942,"Q_Id":2545820,"Users Score":4,"Answer":"A surprising feature of Python is that the pythonic code is quite efficient... So a few general hints:\n\nUse built-ins and standard functions whenever possible, they're already quite well optimized.\nTry to use lazy generators instead one-off temporary lists.\nUse numpy for vector arithmetic.\nUse psyco if running on x86 32bit.\nWrite performance critical loops in a lower level language (C, Pyrex, Cython, etc.).\nWhen calling the same method of a collection of objects, get a reference to the class function and use it, it will save lookups in the objects dictionaries (this one is a micro-optimization, not sure it's worth)\n\nAnd of course, if scalability is what matters:\n\nUse O(n) (or better) algorithms! Otherwise your system cannot be linearly scalable.\nWrite multiprocessor aware code. At some point you'll need to throw more computing power at it, and your software must be ready to use it!","Q_Score":11,"Tags":"python","A_Id":2546955,"CreationDate":"2010-03-30T14:07:00.000","Title":"Optimization Techniques in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Recently i have developed a billing application for my company with Python\/Django. For few months everything was fine but now i am observing that the performance is dropping because of more and more users using that applications. Now the problem is that the application is now very critical for the finance team. Now the finance team are after my life for sorting out the performance issue. I have no other option but to find a way to increase the performance of the billing application. \nSo do you guys know any performance optimization techniques in python that will really help me with the scalability issue\nGuys we are using mysql database and its hosted on apache web server on Linux box. Secondly what i have noticed more is the over all application is slow and not the database transactional part. For example once the application is loaded then it works fine but if they navigate to other link on that application then it takes a whole lot of time.\nAnd yes we are using HTML, CSS and Javascript","AnswerCount":9,"Available Count":3,"Score":0.0444152037,"is_accepted":false,"ViewCount":1942,"Q_Id":2545820,"Users Score":2,"Answer":"before you can \"fix\" something you need to know what is \"broken\". In software development that means profiling, profiling, profiling. Did I mention profiling. Without profiling you don't know where CPU cycles and wall clock time is going. Like others have said to get any more useful information you need to post the details of your entire stack. Python version, what you are using to store the data in (mysql, postgres, flat files, etc), what web server interface cgi, fcgi, wsgi, passenger, etc. how you are generating the HTML, CSS and assuming Javascript. 
Then you can get more specific answers to those tiers.","Q_Score":11,"Tags":"python","A_Id":2546996,"CreationDate":"2010-03-30T14:07:00.000","Title":"Optimization Techniques in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Why do people use SQLAlchemy instead of MySQLdb? What advantages does it offer?","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":19410,"Q_Id":2550292,"Users Score":6,"Answer":"In addition to what Alex said...\n\n\"Not wanting to learn SQL\" is probably a bad thing. However, if you want to get more non-technical people involved as part of the development process, ORMs do a pretty good job at it because it does push this level of complexity down a level. One of the elements that has made Django successful is its ability to let \"newspaper journalists\" maintain a website, rather than software engineers.\n\nOne of the limitations of ORMs is that they are not as scalable as using raw SQL. At a previous job, we wanted to get rid of a lot of manual SQL generation and switched to an ORM for ease-of-use (SQLAlchemy, Elixir, etc.), but months later, I ended up having to write raw SQL again to get around the inefficient or high latency queries that were generated by the ORM system.","Q_Score":24,"Tags":"python,sql,mysql,sqlalchemy","A_Id":2550578,"CreationDate":"2010-03-31T03:19:00.000","Title":"Purpose of SQLAlchemy over MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why do people use SQLAlchemy instead of MySQLdb? What advantages does it offer?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":19410,"Q_Id":2550292,"Users Score":32,"Answer":"You don't use SQLAlchemy instead of MySQLdb\u2014you use SQLAlchemy to access something like MySQLdb, oursql (another MySQL driver that I hear is nicer and has better performance), the sqlite3 module, psycopg2, or whatever other database driver you are using. \nAn ORM (like SQLAlchemy) helps abstract away the details of the database you are using. This allows you to keep from the miry details of the database system you're using, avoiding the possibility of errors some times (and introducing the possibility of others), and making porting trivial (at least in theory).","Q_Score":24,"Tags":"python,sql,mysql,sqlalchemy","A_Id":2550364,"CreationDate":"2010-03-31T03:19:00.000","Title":"Purpose of SQLAlchemy over MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why do people use SQLAlchemy instead of MySQLdb? What advantages does it offer?","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":19410,"Q_Id":2550292,"Users Score":12,"Answer":"Easier portability among different DB engines (say that tomorrow you decide you want to move to sqlite, or PostgreSQL, or...), and higher level of abstraction (and thus potentially higher productivity).\nThose are some of the good reasons. 
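A minimal sketch of the portability point above, assuming SQLAlchemy and placeholder connection URLs; only the URL changes when the underlying driver is swapped:

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///local_test.db")
# engine = create_engine("mysql://user:secret@localhost/appdb")        # MySQLdb underneath
# engine = create_engine("postgresql://user:secret@localhost/appdb")   # psycopg2 underneath

conn = engine.connect()
value = conn.execute(text("SELECT 1")).scalar()   # same application code either way
conn.close()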
There are also some bad reasons for using an ORM, such as not wanting to learn SQL, but I suspect SQLAlchemy in particular is not really favored by people for such bad reasons for wanting an ORM rather than bare SQL;-).","Q_Score":24,"Tags":"python,sql,mysql,sqlalchemy","A_Id":2550304,"CreationDate":"2010-03-31T03:19:00.000","Title":"Purpose of SQLAlchemy over MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm going to write the web portal using Cassandra databases.\nCan you advise me which python interface to use? thrift, lazygal or pycassa?\nAre there any benefits to use more complicated thrift then cleaner pycassa?\nWhat about performace - is the same (all of them are just the layer)?\nThanks for any advice.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1065,"Q_Id":2561804,"Users Score":4,"Answer":"Use pycassa if you don't know what to use.\nUse lazyboy if you want it to maintain indexes for you. It's significantly more complex.","Q_Score":5,"Tags":"python,database,cassandra,thrift","A_Id":2567396,"CreationDate":"2010-04-01T16:05:00.000","Title":"Cassandra database, which python interface?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have a SQL database of passwords stored in MD5. The server needs to generate a unique key, then sends to the client. In the client, it will use the key as a salt then hash together with the password and send back to the server.\nThe only problem is that the the SQL DB has the passwords in MD5 already. Therefore for this to work, I would have to MD5 the password client side, then MD5 it again with the salt. Am I doing this wrong, because it doesn't seem like a proper solution. Any information is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":260,"Q_Id":2564312,"Users Score":1,"Answer":"You should use SSL to encrypt the connection, then send the password over plain text from the client. The server will then md5 and compare with the md5 hash in the database to see if they are the same. 
If so auth = success.\nMD5'ing the password on the client buys you nothing because a hacker with the md5 password can get in just as easy as if it was in plain text.","Q_Score":2,"Tags":"python,sql,database,authorization,md5","A_Id":2564367,"CreationDate":"2010-04-01T23:42:00.000","Title":"Server authorization with MD5 and SQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an easy way (without downloading any plugins) to connect to a MySQL database in Python?\nAlso, what would be the difference from calling a PHP script to retrieve the data from the database and hand it over to Python and importing one of these third-parties plugins that requires some additional software in the server.\nEDIT: the server has PHP and Python installed by default.","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":332,"Q_Id":2569427,"Users Score":-1,"Answer":"No, there is no way that I've ever heard of or can think of to connect to a MySQL database with vanilla python. Just install the MySqldb python package-\nYou can typically do:\n\nsudo easy_install MySqldb","Q_Score":0,"Tags":"php,python,mysql","A_Id":2569567,"CreationDate":"2010-04-02T22:00:00.000","Title":"Python and MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an easy way (without downloading any plugins) to connect to a MySQL database in Python?\nAlso, what would be the difference from calling a PHP script to retrieve the data from the database and hand it over to Python and importing one of these third-parties plugins that requires some additional software in the server.\nEDIT: the server has PHP and Python installed by default.","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":332,"Q_Id":2569427,"Users Score":-1,"Answer":"If you don't want to download the python libraries to connect to MySQL, the effective answer is no, not trivially.","Q_Score":0,"Tags":"php,python,mysql","A_Id":2569448,"CreationDate":"2010-04-02T22:00:00.000","Title":"Python and MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. 
I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":0.022218565,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":1,"Answer":"It has been a couple of months since I posted this question and I wanted to let you all know how I solved this problem. I am using Berkeley DB with the module bsddb instead loading all the data in a Python dictionary. I am not fully happy, but my users are.\nMy next step is trying to get a shared server with redis, but unless users starts complaining about speed, I doubt I will get it.\nMany thanks everybody who helped here, and I hope this question and answers are useful to somebody else.","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2981162,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":0.0,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":0,"Answer":"Take a look at mongodb.","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2581460,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. 
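For readers curious what the Berkeley DB route mentioned in the answer above looks like, a minimal Python 2 sketch with the standard-library bsddb module (the file name is arbitrary):

import bsddb

db = bsddb.hashopen("precomputed.db", "c")   # open or create a hash-based Berkeley DB file
db["some-key"] = "some-value"                # behaves much like a dict, but lives on disk
value = db["some-key"]
db.close()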
I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":1.0,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":12,"Answer":"You probably do need a full relational DBMS, if not right now, very soon. If you start now while your problems and data are simple and straightforward then when they become complex and difficult you will have plenty of experience with at least one DBMS to help you. You probably don't need MySQL on all desktops, you might install it on a server for example and feed data out over your network, but you perhaps need to provide more information about your requirements, toolset and equipment to get better suggestions.\nAnd, while the other DBMSes have their strengths and weaknesses too, there's nothing wrong with MySQL for large and complex databases. I don't know enough about SQLite to comment knowledgeably about it.\nEDIT: @Eric from your comments to my answer and the other answers I form even more strongly the view that it is time you moved to a database. I'm not surprised that trying to do database operations on a 900MB Python dictionary is slow. I think you have to first convince yourself, then your management, that you have reached the limits of what your current toolset can cope with, and that future developments are threatened unless you rethink matters.\nIf your network really can't support a server-based database than (a) you really need to make your network robust, reliable and performant enough for such a purpose, but (b) if that is not an option, or not an early option, you should be thinking along the lines of a central database server passing out digests\/extracts\/reports to other users, rather than simultaneous, full RDBMS working in a client-server configuration.\nThe problems you are currently experiencing are problems of not having the right tools for the job. They are only going to get worse. I wish I could suggest a magic way in which this is not the case, but I can't and I don't think anyone else will.","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2577979,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. 
I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":0.022218565,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":1,"Answer":"It sounds like each department has their own feudal database, and this implies a lot of unnecessary redundancy and inefficiency.\nInstead of transferring hundreds of megabytes to everyone across your network, why not keep your data in MySQL and have the departments upload their data to the database, where it can be normalized and accessible by everyone?\nAs your organization grows, having completely different departmental databases that are unaware of each other, and contain potentially redundant or conflicting data, is going to become very painful.","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2578659,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":0.0,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":0,"Answer":"If you have that problem with a CSV file, maybe you can just pickle the dictionary and generate a pickle \"binary\" file with pickle.HIGHEST_PROTOCOL option. It can be faster to read and you get a smaller file. You can load the CSV file once and then generate the pickled file, allowing faster load in next accesses.\nAnyway, with 900 Mb of information, you're going to deal with some time loading it in memory. 
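A minimal sketch of the pickling idea just described, assuming Python 2's cPickle and an invented cache file name: parse the CSV once, dump the dictionary in binary form, and reload that file on later runs:

import cPickle as pickle

def save_cache(data, path="sales_cache.pkl"):
    f = open(path, "wb")
    pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)   # binary protocol: smaller file, faster load
    f.close()

def load_cache(path="sales_cache.pkl"):
    f = open(path, "rb")
    data = pickle.load(f)
    f.close()
    return data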
Another approach is not loading it on one step on memory, but load only the information when needed, maybe making different files by date, or any other category (company, type, etc..)","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2578310,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":0.022218565,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":1,"Answer":"Have you done any bench marking to confirm that it is the text files that are slowing you down? If you haven't, there's a good chance that tweaking some other part of the code will speed things up so that it's fast enough.","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2578080,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an occasional Python programer who only have worked so far with MYSQL or SQLITE databases. I am the computer person for everything in a small company and I have been started a new project where I think it is about time to try new databases. \nSales department makes a CSV dump every week and I need to make a small scripting application that allow people form other departments mixing the information, mostly linking the records. I have all this solved, my problem is the speed, I am using just plain text files for all this and unsurprisingly it is very slow.\nI thought about using mysql, but then I need installing mysql in every desktop, sqlite is easier, but it is very slow. I do not need a full relational database, just some way of play with big amounts of data in a decent time.\nUpdate: I think I was not being very detailed about my database usage thus explaining my problem badly. I am working reading all the data ~900 Megas or more from a csv into a Python dictionary then working with it. 
My problem is storing and mostly reading the data quickly.\nMany thanks!","AnswerCount":9,"Available Count":7,"Score":0.022218565,"is_accepted":false,"ViewCount":11416,"Q_Id":2577967,"Users Score":1,"Answer":"Does the machine this process runs on have sufficient memory and bandwidth to handle this efficiently? Putting MySQL on a slow machine and recoding the tool to use MySQL rather than text files could potentially be far more costly than simply adding memory or upgrading the machine.","Q_Score":15,"Tags":"python,database,nosql,data-mining","A_Id":2578751,"CreationDate":"2010-04-05T10:59:00.000","Title":"Best DataMining Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a database of all DVDs I have at home.\nOne of the fields, actors, I would like it to be a set of values from an other table, which is storing actors. So for every film I want to store a list of actors, all of which selected from a list of actors, taken from a different table.\nIs it possible? How do I do this? It would be a set of foreign keys basically.\nI'm using a MySQL database for a Django application (python), so any hint in SQL or Python would be much appreciated.\nI hope the question is clear,\nmany thanks.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":85,"Q_Id":2579866,"Users Score":1,"Answer":"The answer is clear too. You will need not a field, but another films_actors table. This table would act as your field, but much more reliable. This is called many-to-many relation.","Q_Score":3,"Tags":"python,sql,mysql,django-models","A_Id":2579922,"CreationDate":"2010-04-05T17:31:00.000","Title":"Using set with values from a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a set of .csv files that I want to process. It would be far easier to process it with SQL queries. I wonder if there is some way to load a .csv file and use SQL language to look into it with a scripting language like python or ruby. Loading it with something similar to ActiveRecord would be awesome. \nThe problem is that I don't want to have to run a database somewhere prior to running my script. I souldn't have additionnal installations needed outside of the scripting language and some modules.\nMy question is which language and what modules should I use for this task. I looked around and can't find anything that suits my need. 
Is it even possible?","AnswerCount":7,"Available Count":1,"Score":0.0855049882,"is_accepted":false,"ViewCount":12118,"Q_Id":2580497,"Users Score":3,"Answer":"CSV files are not databases--they have no indices--and any SQL simulation you imposed upon them would amount to little more than searching through the entire thing over and over again.","Q_Score":26,"Tags":"python,sql,database,sqlite,sqlalchemy","A_Id":2580542,"CreationDate":"2010-04-05T19:10:00.000","Title":"Database on the fly with scripting languages","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while as the items were books. Now I want to add non-books. some of the non-books have EANs or ISSNs, some do not.\nIt's in PostgreSQL with django apps for the frontend and JSON api, plus a few supporting python command-line tools for management. the items in question are mostly books and artist prints, some of which are self-published.\nWhat is nice about using ISBNs as primary keys is that in on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etcetera, many of which I've taken advantage. some such tools are off-the-shelf (PyISBN, PyAWS etc) and some are hand-rolled -- I tried to keep all of these parts nice and decoupled, but you know how things can get.\nI couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs' but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers.\nshould I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? if anyone has any experience with working with these systems, I'd love to hear about it, your advice is most welcome.","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":1516,"Q_Id":2610000,"Users Score":3,"Answer":"I don't know postgres but normally ISBM would be a unique index key but not the primary. It's better to have an integer as primary\/foreign key. That way you only need to add a new field EAN\/ISSN as nullable.","Q_Score":2,"Tags":"python,django,postgresql,isbn","A_Id":2610094,"CreationDate":"2010-04-09T18:45:00.000","Title":"ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while as the items were books. Now I want to add non-books. some of the non-books have EANs or ISSNs, some do not.\nIt's in PostgreSQL with django apps for the frontend and JSON api, plus a few supporting python command-line tools for management. 
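One route often suggested for the CSV-querying question above is the standard-library sqlite3 module with an in-memory database; the two-column layout below is an assumption made only for the sketch:

import csv
import sqlite3

conn = sqlite3.connect(":memory:")           # throwaway database, no server to install or run
conn.execute("CREATE TABLE rows (name TEXT, qty INTEGER)")

reader = csv.reader(open("data.csv", "rb"))  # assumes a name,qty column layout
conn.executemany("INSERT INTO rows VALUES (?, ?)", reader)

for name, total in conn.execute("SELECT name, SUM(qty) FROM rows GROUP BY name"):
    pass                                     # process the aggregated rows here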
the items in question are mostly books and artist prints, some of which are self-published.\nWhat is nice about using ISBNs as primary keys is that in on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etcetera, many of which I've taken advantage. some such tools are off-the-shelf (PyISBN, PyAWS etc) and some are hand-rolled -- I tried to keep all of these parts nice and decoupled, but you know how things can get.\nI couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs' but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers.\nshould I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? if anyone has any experience with working with these systems, I'd love to hear about it, your advice is most welcome.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1516,"Q_Id":2610000,"Users Score":1,"Answer":"A simple solution (although arguably whether good) would be to use (isbn,title) or (isbn,author) which should pretty much guarantee uniqueness. Ideology is great but practicality also serves a purpose.","Q_Score":2,"Tags":"python,django,postgresql,isbn","A_Id":2614029,"CreationDate":"2010-04-09T18:45:00.000","Title":"ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Does anyone know if Python's shelve module uses memory-mapped IO?\nMaybe that question is a bit misleading. I realize that shelve uses an underlying dbm-style module to do its dirty work. What are the chances that the underlying module uses mmap?\nI'm prototyping a datastore, and while I realize premature optimization is generally frowned upon, this could really help me understand the trade-offs involved in my design.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":902,"Q_Id":2618921,"Users Score":3,"Answer":"I'm not sure what you're trying to learn by asking this question, since you already seem to know the answer: it depends on the actual dbm store being used. Some of them will use mmap -- I expect everything but dumbdbm to use mmap -- but so what? The overhead in shelve is almost certainly not in the mmap-versus-fileIO choice, but in the pickling operation. You can't mmap the dbm file sensibly yourself in either case, as the dbm module may have its own fancy locking (and it may not be a single file anyway, like when it uses bsddb.)\nIf you're just looking for inspiration for your own datastore, well, don't look at shelve, since all it does is pickle-and-pass-along to another datastore.","Q_Score":2,"Tags":"python,mmap,shelve,dbm","A_Id":2618963,"CreationDate":"2010-04-11T22:18:00.000","Title":"Does Python's shelve module use memory-mapped IO?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know if Python's shelve module uses memory-mapped IO?\nMaybe that question is a bit misleading. 
I realize that shelve uses an underlying dbm-style module to do its dirty work. What are the chances that the underlying module uses mmap?\nI'm prototyping a datastore, and while I realize premature optimization is generally frowned upon, this could really help me understand the trade-offs involved in my design.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":902,"Q_Id":2618921,"Users Score":4,"Answer":"Existing dbm implementations in the Python standard library all use \"normal\" I\/O, not memory mapping. You'll need to code your own dbmish implementation with memory mapping, and integrate it with shelve (directly, or, more productively, through anydbm).","Q_Score":2,"Tags":"python,mmap,shelve,dbm","A_Id":2618981,"CreationDate":"2010-04-11T22:18:00.000","Title":"Does Python's shelve module use memory-mapped IO?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm from Brazil and study at FATEC (college located in Brazil).\nI'm trying to learn about AppEngine.\nNow, I'm trying to load a large database from MySQL to AppEngine to perform some queries, but I don't know how i can do it. I did some testing with CSV files,but is there any way to perform the direct import from MySQL?\nThis database is from Pentaho BI Server (www.pentaho.com).\nThank you for your attention.\nRegards,\nDaniel Naito","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1278,"Q_Id":2650499,"Users Score":0,"Answer":"If you're using Pentaho BI Server as your data source, why don't you consider using Pentaho Data Integration (ETL tool) to move the data over? At the very least PDI automate any movement of data between your data source and any AppEngine bulk loader tool (it can easily trigger any app with a shell step).","Q_Score":0,"Tags":"python,mysql,google-app-engine,bulk-load","A_Id":2662880,"CreationDate":"2010-04-16T03:57:00.000","Title":"MySQL to AppEngine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"There is a m2m relation in my models, User and Role. \nI want to merge a role, but i DO NOT want this merge has any effect on user and role relation-ship. Unfortunately, for some complicate reason, role.users if not empty. \nI tried to set role.users = None, but SA complains None is not a list.\nAt this moment, I use sqlalchemy.orm.attributes.del_attribute, but I don't know if it's provided for this purpose.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":396,"Q_Id":2665253,"Users Score":0,"Answer":"You'd better fix your code to avoid setting role.users for the item you are going to merge. But there is another way - setting cascade='none' for this relation. 
Then you lose an ability to save relationship from Role side, you'll have to save User with roles attribute set.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":2667004,"CreationDate":"2010-04-19T05:01:00.000","Title":"In SqlAlchemy, how to ignore m2m relationship attributes when merge?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am looking for a way to connect to a MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that?\nSomeone with a similar question going pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place.\nIdeas? Thanks.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":14434,"Q_Id":2670887,"Users Score":4,"Answer":"I am completely ignorant about Python, but if it can call DLLs then it ought to be able to use Microsoft's ADOMD object. This is the best option I can think of.\nYou could look at Office Web Components (OWC) as that has a OLAP control than can be embedded on a web page. I think you can pass MDX to it, but perhaps you want Python to see the results too, which I don't think it allows.\nOtherwise perhaps you can build your own 'proxy' in another language. This program\/webpage could accept MDX in, and return you XML showing the results. Python could then consume this XML.","Q_Score":7,"Tags":"python,database,olap","A_Id":2743692,"CreationDate":"2010-04-19T21:05:00.000","Title":"MS Analysis Services OLAP API for Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python in Linux to automate an excel. I have finished writing data into excel by using pyexcelerator package. \nNow comes the real challenge. I have to add another tab to the existing sheet and that tab should contain the macro run in the first tab. All these things should be automated. I Googled a lot and found win32come to do a job in macro, but that was only for windows.\nAnyone have any idea of how to do this, or can you guide me with few suggestions.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1093,"Q_Id":2697701,"Users Score":0,"Answer":"Excel Macros are per sheets, so, I am afraid, you need to copy the macros explicitly if you created new sheet, instead of copying existing sheet to new one.","Q_Score":1,"Tags":"python,linux,excel,automation","A_Id":2697769,"CreationDate":"2010-04-23T10:08:00.000","Title":"Automating Excel macro using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python in Linux to automate an excel. I have finished writing data into excel by using pyexcelerator package. \nNow comes the real challenge. I have to add another tab to the existing sheet and that tab should contain the macro run in the first tab. All these things should be automated. 
I Googled a lot and found win32come to do a job in macro, but that was only for windows.\nAnyone have any idea of how to do this, or can you guide me with few suggestions.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1093,"Q_Id":2697701,"Users Score":0,"Answer":"Maybe manipulating your .xls with Openoffice and pyUno is a better way. Way more powerful.","Q_Score":1,"Tags":"python,linux,excel,automation","A_Id":3596123,"CreationDate":"2010-04-23T10:08:00.000","Title":"Automating Excel macro using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Last night I upgraded my machine to Ubuntu 10.04 from 9.10.\nIt seems to have cluttered my python module. Whenever I run python manage.py I get this error:\n\nImportError: No module named postgresql_psycopg2.base\n\nCan any one throw any light on this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1092,"Q_Id":2711737,"Users Score":1,"Answer":"Couple of things. I ran into the same kind of error - but for a different thing (ie. \"ImportError: No module named django\") when I reinstalled some software. Essentially, it messed up my Python paths.\nSo, you're issue is very reminiscent of the one I had. The issue for me ended up being that the installed I used altered my .profile file (.bash_profile on some systems) in my home directory that messed up the Path environment variable to point to the incorrect Python binaries. This includes, of course, pointing to the wrong site-packages (where many Python extensions are installed). \nTo verify this, I used two Linux shell commands that saved the day for me where:\n\"which python\" and \"whereis python\" \nThe first tells you which version of Python you are running, and the second tells you where it is located. This is important since you can have multiple versions of Python installed on your machine.\nHopefully, this is will help you troubleshoot your issue. You may also want to try \"$echo Path\" (at the command line \/ terminal) to see where the paths to resolve commands.\nYou can fix your issue either by:\n1- fixing your Path variable, and exporting Path, in .profile (or .bash_profile)\n2- creating a sym link to the appropriate Python binary \nGood luck :)\n~Aki","Q_Score":2,"Tags":"python,django,postgresql,psycopg2","A_Id":4505549,"CreationDate":"2010-04-26T07:33:00.000","Title":"Some problem with postgres_psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use sqlite memory database for all my testing and Postgresql for my development\/production server.\nBut the SQL syntax is not same in both dbs. for ex: SQLite has autoincrement, and Postgresql has serial \nIs it easy to port the SQL script from sqlite to postgresql... what are your solutions?\nIf you want me to use standard SQL, how should I go about generating primary key in both the databases?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2892,"Q_Id":2716847,"Users Score":12,"Answer":"Don't do it. Don't test in one environment and release and develop in another. 
Your asking for buggy software using this process.","Q_Score":5,"Tags":"python,sqlite,postgresql,sqlalchemy","A_Id":2721100,"CreationDate":"2010-04-26T20:59:00.000","Title":"SQLAlchemy - SQLite for testing and Postgresql for development - How to port?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use sqlite memory database for all my testing and Postgresql for my development\/production server.\nBut the SQL syntax is not same in both dbs. for ex: SQLite has autoincrement, and Postgresql has serial \nIs it easy to port the SQL script from sqlite to postgresql... what are your solutions?\nIf you want me to use standard SQL, how should I go about generating primary key in both the databases?","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":2892,"Q_Id":2716847,"Users Score":19,"Answer":"My suggestion would be: don't. The capabilities of Postgresql are far beyond what SQLite can provide, particularly in the areas of date\/numeric support, functions and stored procedures, ALTER support, constraints, sequences, other types like UUID, etc., and even using various SQLAlchemy tricks to try to smooth that over will only get you a slight bit further. In particular date and interval arithmetic are totally different beasts on the two platforms, and SQLite has no support for precision decimals (non floating-point) the way PG does. PG is very easy to install on every major OS and life is just easier if you go that route.","Q_Score":5,"Tags":"python,sqlite,postgresql,sqlalchemy","A_Id":2717071,"CreationDate":"2010-04-26T20:59:00.000","Title":"SQLAlchemy - SQLite for testing and Postgresql for development - How to port?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have searched high and low for an answer to why query results returned in this format and how to convert to a list.\ndata = cursor.fetchall()\nWhen I print data, it results in:\n(('car',), ('boat',), ('plane',), ('truck',))\nI want to have the results in a list as [\"car\", \"boat\", \"plane\", \"truck\"]","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1007,"Q_Id":2723432,"Users Score":1,"Answer":"The result for fetchall() returns an array of rows, where each row is an array with one value per column.\nEven if you are selecting only one column, you will still get an array of arrays, but only one value for each row.","Q_Score":3,"Tags":"python,mysql,list,recordset","A_Id":2723548,"CreationDate":"2010-04-27T17:21:00.000","Title":"Why is recordset result being returned in this way for Python database query?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a django project that uses a sqlite database that can be written to by an external tool. The text is supposed to be UTF-8, but in some cases there will be errors in the encoding. The text is from an external source, so I cannot control the encoding. 
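A short follow-up to the fetchall() answer above: the nested tuples flatten into the requested list with a comprehension (cursor as in the question):

data = cursor.fetchall()               # e.g. (('car',), ('boat',), ('plane',), ('truck',))
names = [row[0] for row in data]       # ['car', 'boat', 'plane', 'truck']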
Yes, I know that I could write a \"wrapping layer\" between the external source and the database, but I prefer not having to do this, especially since the database already contains a lot of \"bad\" data.\nThe solution in sqlite is to change the text_factory to something like:\n\nlambda x: unicode(x, \"utf-8\", \"ignore\")\n\nHowever, I don't know how to tell the Django model driver this.\nThe exception I get is:\n\n'Could not decode to UTF-8 column 'Text' with text'\nin\n\/var\/lib\/python-support\/python2.5\/django\/db\/backends\/sqlite3\/base.py in execute\n\nSomehow I need to tell the sqlite driver not to try to decode the text as UTF-8 (at least not using the standard algorithm, but it needs to use my fail-safe variant).","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4701,"Q_Id":2744632,"Users Score":0,"Answer":"Incompatible Django version. Check Django version for solving this error first. I was running on Django==3.0.8 and it was producing an error. Than I ran virtualenv where I have Django==3.1.2 and the error was removed.","Q_Score":6,"Tags":"python,django,sqlite,pysqlite","A_Id":64263492,"CreationDate":"2010-04-30T13:00:00.000","Title":"Change text_factory in Django\/sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm curious about how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA? What I mean, basically, is that if someone wants to make a change to the database, what are some strategies to doing that? (i.e. I've created a 'Car' model and now I want to apply the appropriate DDL to the database, etc..)\nWe're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way to create the models using our ORM, but we recently ditched this because:\n\nWe couldn't track changes using the ORM\nThe state of the ORM wasn't in sync with the database (e.g. lots of differences primarily related to indexes and unique constraints)\nThere was no way to audit database changes unless the developer documented the database change via email to the team.\n\nOur solution to this problem was to basically have a \"gatekeeper\" individual who checks every change into the database and applies all accepted database changes to an accepted_db_changes.sql file, whereby the developers who need to make any database changes put their requests into a proposed_db_changes.sql file. We check this file in, and, when it's updated, we all apply the change to our personal database on our development machine. 
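To illustrate the text_factory idea from the question above on a plain sqlite3 connection (wiring it into Django's sqlite3 backend is the part the question leaves open, and the table name here is an assumption):

import sqlite3

conn = sqlite3.connect("data.sqlite3")
# decode column data leniently instead of raising on malformed UTF-8
conn.text_factory = lambda b: unicode(b, "utf-8", "ignore")
rows = conn.execute("SELECT Text FROM some_table").fetchall()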
We don't create indexes or constraints on the models, they are applied explicitly on the database.\nI would like to know what are some strategies to maintain database schemas and if ours seems reasonable.\nThanks!","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":930,"Q_Id":2748946,"Users Score":2,"Answer":"The solution is rather administrative then technical :)\nThe general rule is easy, there should only be tree-like dependencies in the project:\n- There should always be a single master source of schema, stored together with the project source code in the version control\n- Everything affected by the change in the master source should be automatically re-generated every time the master source is updated, no manual intervention allowed never, if automatic generation does not work -- fix either master source or generator, don't manually update the source code\n- All re-generations should be performed by the same person who updated the master source and all changes including the master source change should be considered a single transaction (single source control commit, single build\/deployment for every affected environment including DBs update)\nBeing enforced, this gives 100% reliable result.\nThere are essentially 3 possible choices of the master source\n1) DB metadata, sources are generated after DB update by some tool connecting to the live DB\n2) Source code, some tool is generating SQL scheme from the sources, annotated in a special way and then SQL is run on the DB\n3) DDL, both SQL schema and source code are generated by some tool\n4) some other description is used (say a text file read by a special Perl script generating both SQL schema and the source code)\n1,2,3 are equally good, providing that the tool you need exists and is not over expensive\n4 is a universal approach, but it should be applied from the very beginning of the project and has an overhead of couple thousands lines of code in a strange language to maintain","Q_Score":9,"Tags":"python,database,postgresql,sqlalchemy,database-schema","A_Id":2768187,"CreationDate":"2010-05-01T04:57:00.000","Title":"What are some strategies for maintaining a common database schema with a team of developers and no DBA?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an Fast-CGI application that makes use of sqlAlchemy & MySQL for persistent data storage. I have no problem connecting to the DB and setting up ORM (so that tables get mapped to classes); I can even add data to tables (in memory). \nBut, as soon as I query the DB (and push any changes from memory to storage) I get a 500 Internal Server Error and my error.log records malformed header from script. Bad header=FROM tags : index.py, when tags is the table name.\nAny idea what could be causing this?\nAlso, I don't think it matters, but its a Linux development server talking to an off-site (across the country) MySQL server.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":921,"Q_Id":2751957,"Users Score":2,"Answer":"Looks like SQLalchemy is pushing or echoing the query to your output (where fast-cgi) is instead looking for headers, then body. 
Maybe setting sqlalchemy.echo to False can help.","Q_Score":0,"Tags":"python,mysql,apache,sqlalchemy,fastcgi","A_Id":2751989,"CreationDate":"2010-05-01T23:44:00.000","Title":"Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would one go about authenticating against a single db using Python and openfire? Is there a simple module that will do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":2752047,"Users Score":0,"Answer":"Openfire uses a SQL database. So talking to the database from Python is probably the easiest way.\nYou could also try to connect\/authenticate via XMPP - there's probably an XMPP library for Python somewhere.","Q_Score":0,"Tags":"python,database,openfire","A_Id":2766455,"CreationDate":"2010-05-02T00:35:00.000","Title":"I need to authenticate against one db with python and openfire. How do I do this?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to access a MySQL database with Python through Pydev Eclipse. I have installed the necessary files to access MySQL from Python and I can access the database only when I write code in the Python IDLE environment and run it from the command prompt. However I am not able to run my applications from Pydev. \nWhen I use \"import MysqlDB\" I get an error, but in IDLE there are no errors and my code runs very smoothly.\nDoes anyone know where the problem is?\nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6768,"Q_Id":2775095,"Users Score":0,"Answer":"If the connector works in IDLE but not in PyDev, open Eclipse preferences, open the PyDev section and go to the interpreter screen. Remove the interpreter and add it again from the location on your computer (usually the C drive). Close and reload Eclipse and now it should work.","Q_Score":2,"Tags":"python,mysql,eclipse,pydev,mysql-python","A_Id":23798598,"CreationDate":"2010-05-05T16:36:00.000","Title":"Using MySQL in Pydev Eclipse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to access a MySQL database with python through Pydev Eclipse. I have installed the necessary files to access MysQL from python and I can access the database only when I write code in Python IDLE environment and run it from command prompt. However I am not able to run my applications from Pydev.
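A short sketch of the echo suggestion above; the connection URL is a placeholder, and the second form assumes an engine object already exists:

    from sqlalchemy import create_engine

    # echo=False keeps generated SQL off stdout, which FastCGI would
    # otherwise try to parse as response headers
    engine = create_engine("mysql://user:password@dbhost/mydb", echo=False)

    # or, for an engine that is already constructed:
    engine.echo = False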
\nwhen I use this \"import MysqlDB\" i get an error, but in IDLE no errors and my code runs very smoothly.\nDoes anyone know were the problem is?\nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6768,"Q_Id":2775095,"Users Score":0,"Answer":"Posting Answer in case URL changed in future\nFrom Eclipse, choose Window \/ Preferences \/ PyDev \/ Interpreters \/ Python Interpreter, click on Manage with pip and enter the command:\ninstall mysql-connector-python","Q_Score":2,"Tags":"python,mysql,eclipse,pydev,mysql-python","A_Id":70125088,"CreationDate":"2010-05-05T16:36:00.000","Title":"Using MySQL in Pydev Eclipse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"If so, how can I do this?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":848,"Q_Id":2810235,"Users Score":1,"Answer":"when you create a prepared statement, the \"template\" SQL code is sent to the DBMS already, which compiles it into an expression tree. When you pass the values, the corresponding library (python sqlite3 module in your case) doesn't merge the values into the statement. The DBMS does.\nIf you still want to produce a normal SQL string, you can use string replace functions to replace the placeholders by the values (after escaping them).\nWhat do you need this for?","Q_Score":0,"Tags":"python,sqlite,pysqlite","A_Id":2810300,"CreationDate":"2010-05-11T11:28:00.000","Title":"Can I get the raw SQL generated by a prepared statement in Python\u2019s sqlite3 module?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If so, how can I do this?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":848,"Q_Id":2810235,"Users Score":2,"Answer":"When executing a prepared statement, no new SQL is generated.\nThe idea of prepared statements is that the SQL query and its data are transmitted separately (that's why you don't have to escape any arguments) - the query is most likely only stored in an optimized form after preparing it.","Q_Score":0,"Tags":"python,sqlite,pysqlite","A_Id":2810250,"CreationDate":"2010-05-11T11:28:00.000","Title":"Can I get the raw SQL generated by a prepared statement in Python\u2019s sqlite3 module?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large sql dump file ... with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into a SQLAlchemy sqlite database at once. I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains: sqlite3.Warning: You can only execute one statement at a time.\nIs there a way to work around this issue. Perhaps splitting the file with a regexp or some kind of parser, but I don't know enough SQL to get all of the cases for the regexp.\nAny help would be greatly appreciated.\nWill\nEDIT:\nSince this seems important ... 
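To illustrate the prepared-statement answers above with sqlite3: the statement and its values travel separately, so there is no merged SQL string to retrieve; if one is needed for logging, it has to be built by hand (a sketch only, not a safe general-purpose quoting routine):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (name TEXT)")

    sql, params = "INSERT INTO items (name) VALUES (?)", ("widget",)
    conn.execute(sql, params)          # values are bound by the driver, not interpolated

    merged = sql.replace("?", repr(params[0]))   # for display/logging only
    print(merged)                      # INSERT INTO items (name) VALUES ('widget')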
The dump file was created with a MySQL database and so it has quite a few commands\/syntax that sqlite3 does not understand correctly.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3406,"Q_Id":2824244,"Users Score":2,"Answer":"\"or some kind of parser\"\nI've found MySQL to be a great parser for MySQL dump files :)\nYou said it yourself: \"so it has quite a few commands\/syntax that sqlite3 does not understand correctly.\" Clearly then, SQLite is not the tool for this task.\nAs for your particular error: without context (i.e. a traceback) there's nothing I can say about it. Martelli or Skeet could probably reach across time and space and read your interpreter's mind, but me, not so much.","Q_Score":5,"Tags":"python,sql,sqlalchemy","A_Id":2828580,"CreationDate":"2010-05-13T03:23:00.000","Title":"How can I load a sql \"dump\" file into sql alchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large sql dump file ... with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into a SQLAlchemy sqlite database at once. I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains: sqlite3.Warning: You can only execute one statement at a time.\nIs there a way to work around this issue. Perhaps splitting the file with a regexp or some kind of parser, but I don't know enough SQL to get all of the cases for the regexp.\nAny help would be greatly appreciated.\nWill\nEDIT:\nSince this seems important ... The dump file was created with a MySQL database and so it has quite a few commands\/syntax that sqlite3 does not understand correctly.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3406,"Q_Id":2824244,"Users Score":0,"Answer":"The SQL recognized by MySQL and the SQL in SQLite are quite different. I suggest dumping the data of each table individually, then loading the data into equivalent tables in SQLite.\nCreate the tables in SQLite manually, using a subset of the \"CREATE TABLE\" commands given in your raw-dump file.","Q_Score":5,"Tags":"python,sql,sqlalchemy","A_Id":2828621,"CreationDate":"2010-05-13T03:23:00.000","Title":"How can I load a sql \"dump\" file into sql alchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to delete all records in a mysql db except the record id's I have in a list. The length of that list can vary and could easily contain 2000+ id's, ...\nCurrently I convert my list to a string so it fits in something like this:\ncursor.execute(\"\"\"delete from table where id not in (%s)\"\"\",(list))\nWhich doesn't feel right and I have no idea how long list is allowed to be, ....\nWhat's the most efficient way of doing this from python?\nAltering the structure of table with an extra field to mark\/unmark records for deletion would be great but not an option.\nHaving a dedicated table storing the id's would indeed be helpful then this can just be done through a sql query... 
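One detail worth adding to the dump-loading discussion: sqlite3's executescript() accepts many statements at once, which sidesteps the "one statement at a time" warning, although a MySQL dump still has to be translated into SQLite-compatible SQL first. A sketch with hypothetical file names:

    import sqlite3

    conn = sqlite3.connect("converted.db")
    with open("dump_sqlite.sql") as f:
        conn.executescript(f.read())   # runs every statement in the file
    conn.commit()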
but I would really like to avoid these options if possible.\nThanks,","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3327,"Q_Id":2826387,"Users Score":0,"Answer":"I'd add a \"todelete tinyint(1) not null default 1\" column to the table, update it to 0 for those id's which have to be kept, then delete from table where todelete;. It's faster than not in.\nOr, create a table with the same structure as yours, insert the kept rows there and rename tables. Then, drop the old one.","Q_Score":3,"Tags":"python,mysql","A_Id":2827845,"CreationDate":"2010-05-13T11:37:00.000","Title":"delete all records except the id I have in a python list","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently, i am querying with this code: meta.Session.query(Label).order_by(Label.name).all()\nand it returns me objects sorted by Label.name in this manner ['1','7','1a','5c']. Is there a way i can have the objects returned in the order with their Label.name sorted like this ['1','1a','5c','7']\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1746,"Q_Id":2863748,"Users Score":1,"Answer":"Sorting is done by the database. If you database doesn't support natural sorting your are out of luck and have to sort your rows manually after retrieving them via sqlalchemy.","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":2863830,"CreationDate":"2010-05-19T07:59:00.000","Title":"sqlalchemy natural sorting","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a relatively extensive sqlite database that I'd like to import into my Google App Engine python app.\nI've created my models using the appengine API which are close, but not quite identical to the existing schema. I've written an import script to load the data from sqlite and create\/save new appengine objects, but the appengine environment blocks me from accessing the sqlite library. This script is only to be run on my local app engine instance, and from there I hope to push the data to google.\nAm I approaching this problem the wrong way, or is there a way to import the sqlite library while running in the local instance's environment?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2231,"Q_Id":2870379,"Users Score":0,"Answer":"I have not had any trouble importing pysqlite2, reading data, then transforming it and writing it to AppEngine using the remote_api. 
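A sketch of the NOT IN approach from the question, with the placeholder string built to match the length of the id list (table and column names are hypothetical). The statement size is roughly bounded by the server's max_allowed_packet, so the temporary-table approach from the answer scales better for very large lists:

    keep_ids = [3, 17, 42]                         # ids to preserve
    placeholders = ",".join(["%s"] * len(keep_ids))
    sql = "DELETE FROM mytable WHERE id NOT IN (%s)" % placeholders
    cursor.execute(sql, keep_ids)                  # values are escaped by the driver
    connection.commit()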
\nWhat error are you seeing?","Q_Score":5,"Tags":"python,google-app-engine,sqlite","A_Id":2873946,"CreationDate":"2010-05-20T00:47:00.000","Title":"Importing Sqlite data into Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am working on a personal project where I need to manipulate values in a database-like format.\nUp until now I have been using dictionaries, tuples, and lists to store and consult those values.\nI am thinking about starting to use SQL to manipulate those values, but I don't know if it's worth the effort, because I don't know anything about SQL, and I don't want to use something that won't bring me any benefits (if I can do it in a simpler way, I don't want to complicate things)\nIf I am only storing and consulting values, what would be the benefit of using SQL? \nPS: the numbers of rows goes between 3 and 100 and the number of columns is around 10 (some may have 5 some may have 10 etc.)","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":291,"Q_Id":2870815,"Users Score":2,"Answer":"No, I think you just stick to dictionaries or tuples if you only have rows around 100","Q_Score":1,"Tags":"python,sql,database","A_Id":2870821,"CreationDate":"2010-05-20T03:24:00.000","Title":"Python and database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a personal project where I need to manipulate values in a database-like format.\nUp until now I have been using dictionaries, tuples, and lists to store and consult those values.\nI am thinking about starting to use SQL to manipulate those values, but I don't know if it's worth the effort, because I don't know anything about SQL, and I don't want to use something that won't bring me any benefits (if I can do it in a simpler way, I don't want to complicate things)\nIf I am only storing and consulting values, what would be the benefit of using SQL? \nPS: the numbers of rows goes between 3 and 100 and the number of columns is around 10 (some may have 5 some may have 10 etc.)","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":291,"Q_Id":2870815,"Users Score":7,"Answer":"SQL is nice and practical for many kinds of problems, is not that hard to learn at a simple \"surface\" level, and can be very handy to use in Python with its embedded sqlite. 
But if you don't know SQL, have no intrinsic motivation to learn it right now, and are already doing all you need to do to\/with your data without problems, then the immediate return on the investment of learning SQL (relatively small as that investment may be) seem like it would be pretty meager indeed for you.","Q_Score":1,"Tags":"python,sql,database","A_Id":2870832,"CreationDate":"2010-05-20T03:24:00.000","Title":"Python and database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a personal project where I need to manipulate values in a database-like format.\nUp until now I have been using dictionaries, tuples, and lists to store and consult those values.\nI am thinking about starting to use SQL to manipulate those values, but I don't know if it's worth the effort, because I don't know anything about SQL, and I don't want to use something that won't bring me any benefits (if I can do it in a simpler way, I don't want to complicate things)\nIf I am only storing and consulting values, what would be the benefit of using SQL? \nPS: the numbers of rows goes between 3 and 100 and the number of columns is around 10 (some may have 5 some may have 10 etc.)","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":291,"Q_Id":2870815,"Users Score":2,"Answer":"SQL is useful in many applications. But it is an overkill in this case. You can easily store your data in CSV, pickle or JSON format. Get this job done in 5 minutes and then learn SQL when you have time.","Q_Score":1,"Tags":"python,sql,database","A_Id":2871090,"CreationDate":"2010-05-20T03:24:00.000","Title":"Python and database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install postgrepsql to cygwin on a windows 7 machine and want it to work with django. \nAfter built and installed postgrepsql in cygwin, I built and installed psycopg2 in cygwin as well and got no error, but when use it in python with cygwin, I got the \"no such process\" error:\n\n\n\nimport psycopg2\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"\/usr\/lib\/python2.5\/site-packages\/psycopg2\/init.py\", line 60, in \n from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID\n ImportError: No such process\n\n\n\nany clues?\nThanks!\nJerry","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1204,"Q_Id":2879246,"Users Score":1,"Answer":"In my case, I had to reinstall libpq5.","Q_Score":0,"Tags":"python,django,postgresql,psycopg2","A_Id":14780956,"CreationDate":"2010-05-21T02:34:00.000","Title":"psycopg2 on cygwin: no such process","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install postgrepsql to cygwin on a windows 7 machine and want it to work with django. 
\nAfter built and installed postgrepsql in cygwin, I built and installed psycopg2 in cygwin as well and got no error, but when use it in python with cygwin, I got the \"no such process\" error:\n\n\n\nimport psycopg2\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"\/usr\/lib\/python2.5\/site-packages\/psycopg2\/init.py\", line 60, in \n from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID\n ImportError: No such process\n\n\n\nany clues?\nThanks!\nJerry","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1204,"Q_Id":2879246,"Users Score":0,"Answer":"Why? There is native psycopg2 for Win.","Q_Score":0,"Tags":"python,django,postgresql,psycopg2","A_Id":2885759,"CreationDate":"2010-05-21T02:34:00.000","Title":"psycopg2 on cygwin: no such process","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After running a bunch of simulations I'm going to be outputting the results into a table created using SQLAlchemy. I plan to use this data to generate statistics - mean and variance being key. These, in turn, will be used to generate some graphs - histograms\/line graphs, pie-charts and box-and-whisker plots specifically.\nI'm aware of the Python graphing libraries like matplotlib. The thing is, I'm not sure how to have this integrate with the information contained within the database tables. \nAny suggestions on how to make these two play with each other?\nThe main problem is that I'm not sure how to supply the information as \"data sets\" to the graphing library.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1415,"Q_Id":2890564,"Users Score":1,"Answer":"It looks like matplotlib takes simple python data types -- lists of numbers, etc, so you'll be need to write custom code to massage what you pull out of mysql\/sqlalchemy for input into the graphing functions...","Q_Score":1,"Tags":"python,matplotlib,sqlalchemy","A_Id":2891001,"CreationDate":"2010-05-23T03:10:00.000","Title":"How to generate graphs and statistics from SQLAlchemy tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If you wanted to manipulate the data in a table in a postgresql database using some python (maybe running a little analysis on the result set using scipy) and then wanted to export that data back into another table in the same database, how would you go about the implementation?\nIs the only\/best way to do this to simply run the query, have python store it in an array, manipulate the array in python and then run another sql statement to output to the database?\nI'm really just asking, is there a more efficient way to deal with the data?\nThanks,\nIan","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1582,"Q_Id":2905097,"Users Score":0,"Answer":"I agree with the SQL Alchemy suggestions or using Django's ORM. 
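A minimal sketch of the "massaging" step described in the matplotlib answer above, assuming a declarative model Result with a numeric value column and an open session (all names hypothetical):

    import matplotlib.pyplot as plt

    # hand matplotlib a plain Python list rather than ORM objects
    values = [v for (v,) in session.query(Result.value)]
    plt.hist(values, bins=20)
    plt.savefig("value_histogram.png")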
Your needs seem to simple for PL\/Python to be used.","Q_Score":4,"Tags":"python,postgresql","A_Id":2906866,"CreationDate":"2010-05-25T13:35:00.000","Title":"Python and Postgresql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the planning stages of rewriting an Access db I wrote several years ago in a full fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help on my next decision.\nHere is my situation\n1) It'll be run locally on each machine, all Windows computers\n2) I would really like a nice looking GUI with colors, nice screens, menus, lists, etc, \n3) I'm thinking about using a browser interface because (a) from what I've read, browser apps \ncan look really great, and (b) I understand there are lots of free tools to assist in setting up the GUI\/GUI code with drag and drop tools, so that helps my \"keep it simple\" goal.\n4) I want the program to be totally portable so it runs completely from one single folder on a user's PC, with no installation(s) needed for it to run\n(If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app. How likely is this?) \nFor my situation, should\/could I make it a browser app? What would be the pros and cons for my situation?","AnswerCount":8,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5638,"Q_Id":2924231,"Users Score":0,"Answer":"You question is a little broad. I'll try to cover as much as I can. \nFirst, what I understood and my assumptions.\nIn your situation, the sqlite database is just a data store. Only one process (unless your application is multiprocess) will be accessing it so you won't need to worry about locking issues. The application doesn't need to communicate with other instances etc. over the network. It's a single desktop app. The platform is Windows. \nHere are some thoughts that come to mind. \n\nIf you develop an application in Python (either web based or desktop), you will have to package it as a single executable and distribute it to your users. They might have to install the Python runtime as well as any extra modules that you might be using.\nGuis are in my experience easier to develop using a standalone widget system than in a browser with Javascript. There are things like Pyjamas that make this better but it's still hard. \nWhile it's not impossible to have local web applications running on each computer, your real benefits come if you centralise it. One place to update software. No need to \"distribute\" etc. This of course entails that you use a more powerful database system and you can actually manage multiple users. It will also require that you worry about browser specific quirks. \n\nI'd go with a simple desktop app that uses a prepackaged toolkit (perhaps Tkinter which ships with Python). It's not the best of approaches but it will avoid problems for you. I'd also consider using a language that's more \"first class\" on windows like C# so that the runtimes and other things are already there. You requirement for a fancy GUI is secondary and I'd recommend that you get the functionality working fine before you focus on the bells and whistles. 
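For the read-compute-write-back question above, a sketch using psycopg2 plus numpy; the DSN, table and column names are placeholders, and the "analysis" is a stand-in for whatever scipy work is actually needed:

    import numpy as np
    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()
    cur.execute("SELECT id, value FROM measurements")
    ids, values = zip(*cur.fetchall())

    scaled = np.asarray(values) * 2.0              # placeholder for the real analysis

    cur.executemany(
        "INSERT INTO results (source_id, value) VALUES (%s, %s)",
        [(i, float(v)) for i, v in zip(ids, scaled)],  # cast numpy floats back to Python floats
    )
    conn.commit()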
\nGood luck.","Q_Score":5,"Tags":"python,sqlite,browser","A_Id":2979467,"CreationDate":"2010-05-27T19:28:00.000","Title":"Python\/Sqlite program, write as browser app or desktop app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When using SQL Alchemy for abstracting your data access layer and using controllers as the way to access objects from that abstraction layer, how should joins be handled?\nSo for example, say you have an Orders controller class that manages Order objects such that it provides getOrder, saveOrder, etc methods and likewise a similar controller for User objects. \nFirst of all do you even need these controllers? Should you instead just treat SQL Alchemy as \"the\" thing for handling data access. Why bother with object oriented controller stuff there when you instead have a clean declarative way to obtain and persist objects without having to write SQL directly either. \nWell one reason could be that perhaps you may want to replace SQL Alchemy with direct SQL or Storm or whatever else. So having controller classes there to act as an intermediate layer helps limit what would need to change then. \nAnyway - back to the main question - so assuming you have these two controllers, now lets say you want the list of orders for a certain set of users meeting some criteria. How do you go about doing this? Generally you don't want the controllers crossing domains - the Orders controllers knows only about Orders and the User controller just about Users - they don't mess with each other. You also don't want to go fetch all the Users that match and then feed a big list of user ids to the Orders controller to go find the matching Orders. \nWhat's needed is a join. Here's where I'm stuck - that seems to mean either the controllers must cross domains or perhaps they should be done away with altogether and you simply do the join via SQL Alchemy directly and get the resulting User and \/ or Order objects as needed. Thoughts?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":430,"Q_Id":2933796,"Users Score":2,"Answer":"Controllers are meant to encapsulate features for your convienience. Not to bind your hands. If you want to join, simply join. Use the controller that you think is logically fittest to make the query.","Q_Score":1,"Tags":"python,model-view-controller,sqlalchemy,dns,controllers","A_Id":2934084,"CreationDate":"2010-05-29T04:25:00.000","Title":"SQL Alchemy MVC and cross controller joins","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing\/averaging + additional transformations) over observations\/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. 
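Following the controller answer above, the join itself is a single query against the two declarative models; a sketch with hypothetical Order/User models, columns and session:

    matching_orders = (
        session.query(Order)
        .join(User, Order.user_id == User.id)
        .filter(User.country == "CA")    # whatever the user criteria are
        .all()
    )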
Some candidate solutions are to \n\nwrite the whole thing in C (or\nFortran)\nimport the files (tables) into a\nrelational database directly and\nthen pull off chunks in R or Python\n(some of the transformations are not\namenable for pure SQL solutions)\nwrite the whole thing in Python\n\nWould (3) be a bad idea? I know you can wrap C routines in Python but in this case since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I\/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks\nEdit Thanks for your responses. There seems to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...","AnswerCount":8,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":5360,"Q_Id":2937619,"Users Score":2,"Answer":"When you say \"accumulating\" then solution (2) looks most suitable to problem.\nAfter initial load up to database you only update database with new files (daily, weekly? depends how often you need this).\nIn cases (1) and (3) you need to process files each time (what was stated earlier as most time\/resources-consuming), unless you find a way to stored results and update them with new files.\nYou could use R to process files from csv to, for example, SQLite database.","Q_Score":32,"Tags":"python,sql,r,large-files,large-data-volumes","A_Id":2942419,"CreationDate":"2010-05-30T05:06:00.000","Title":"large amount of data in many text files - how to process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing\/averaging + additional transformations) over observations\/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are to \n\nwrite the whole thing in C (or\nFortran)\nimport the files (tables) into a\nrelational database directly and\nthen pull off chunks in R or Python\n(some of the transformations are not\namenable for pure SQL solutions)\nwrite the whole thing in Python\n\nWould (3) be a bad idea? I know you can wrap C routines in Python but in this case since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I\/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks\nEdit Thanks for your responses. 
There seems to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...","AnswerCount":8,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":5360,"Q_Id":2937619,"Users Score":4,"Answer":"With terabytes, you will want to parallelize your reads over many disks anyway; so might as well go straight into Hadoop.\nUse Pig or Hive to query the data; both have extensive support for user-defined transformations, so you should be able to implement what you need to do using custom code.","Q_Score":32,"Tags":"python,sql,r,large-files,large-data-volumes","A_Id":2937664,"CreationDate":"2010-05-30T05:06:00.000","Title":"large amount of data in many text files - how to process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing\/averaging + additional transformations) over observations\/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are to \n\nwrite the whole thing in C (or\nFortran)\nimport the files (tables) into a\nrelational database directly and\nthen pull off chunks in R or Python\n(some of the transformations are not\namenable for pure SQL solutions)\nwrite the whole thing in Python\n\nWould (3) be a bad idea? I know you can wrap C routines in Python but in this case since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I\/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks\nEdit Thanks for your responses. There seems to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...","AnswerCount":8,"Available Count":4,"Score":0.024994793,"is_accepted":false,"ViewCount":5360,"Q_Id":2937619,"Users Score":1,"Answer":"Yes. You are right! I\/O would cost most of your processing time. I don't suggest you to use distributed systems, like hadoop, for this task. \nYour task could be done in a modest workstation. I am not an Python expert, I think it has support for asynchronous programming. In F#\/.Net, the platform has well support for that. I was once doing an image processing job, loading 20K images on disk and transform them into feature vectors only costs several minutes in parallel. \nall in all, load and process your data in parallel and save the result in memory (if small), in database (if big).","Q_Score":32,"Tags":"python,sql,r,large-files,large-data-volumes","A_Id":2937660,"CreationDate":"2010-05-30T05:06:00.000","Title":"large amount of data in many text files - how to process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have large amounts of data (a few terabytes) and accumulating... 
They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing\/averaging + additional transformations) over observations\/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are to \n\nwrite the whole thing in C (or\nFortran)\nimport the files (tables) into a\nrelational database directly and\nthen pull off chunks in R or Python\n(some of the transformations are not\namenable for pure SQL solutions)\nwrite the whole thing in Python\n\nWould (3) be a bad idea? I know you can wrap C routines in Python but in this case since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I\/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks\nEdit Thanks for your responses. There seems to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...","AnswerCount":8,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":5360,"Q_Id":2937619,"Users Score":14,"Answer":"(3) is not necessarily a bad idea -- Python makes it easy to process \"CSV\" file (and despite the C standing for Comma, tab as a separator is just as easy to handle) and of course gets just about as much bandwidth in I\/O ops as any other language. As for other recommendations, numpy, besides fast computation (which you may not need as per your statements) provides very handy, flexible multi-dimensional arrays, which may be quite handy for your tasks; and the standard library module multiprocessing lets you exploit multiple cores for any task that's easy to parallelize (important since just about every machine these days has multi-cores;-).","Q_Score":32,"Tags":"python,sql,r,large-files,large-data-volumes","A_Id":2937630,"CreationDate":"2010-05-30T05:06:00.000","Title":"large amount of data in many text files - how to process?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Should I invest a lot of time trying to figure out an ORM style implementation, or is it still common to just stick with standard SQL queries in python\/pylons\/sqlalchemy?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1032,"Q_Id":2947172,"Users Score":8,"Answer":"ORMs are very popular, for several reasons -- e.g.: some people would rather not learn SQL, ORMs can ease porting among different SQL dialects, they may fit in more smoothly with the mostly-OOP style of applications, indeed might even ease some porting to non-SQL implementations (e.g, moving a Django app to Google App Engine would be much more work if the storage access layer relied on SQL statements -- as it relies on the ORM, that reduces, a bit, the needed porting work).\nSQLAlchemy is the most powerful ORM I know of for Python -- it lets you work at several possible levels, from a pretty abstract declarative one all the way down to injecting actual SQL in some queries where your profiling work has determined it makes a big difference (I think most people use it mostly at the intermediate level where it essentially 
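A sketch of option (3) as described in the accepted answer above: the csv module for the tab-delimited input and multiprocessing to spread files over cores. The column index, predicate and glob pattern are stand-ins:

    import csv
    import glob
    from multiprocessing import Pool

    def summarize(path):
        total = count = 0
        with open(path) as f:
            for row in csv.reader(f, delimiter="\t"):
                value = float(row[2])          # hypothetical column of interest
                if value > 0:                  # stand-in for the real predicates
                    total += value
                    count += 1
        return total, count

    if __name__ == "__main__":
        results = Pool().map(summarize, glob.glob("data/*.tsv"))
        total = sum(t for t, _ in results)
        count = sum(c for _, c in results)
        print(total / count if count else float("nan"))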
mediates between OOP and relational styles, just like other ORMs).\nYou haven't asked for my personal opinion in the matter, which is somewhat athwart of the popular one I summarized above -- I've never really liked \"code generators\" of any kind (they increase your productivity a bit when everything goes smoothly... but you can pay that back with interest when you find yourself debugging problems [[including performance bottlenecks]] due to issues occurring below the abstraction levels that generators strive to provide).\nWhen I get a chance to use a good relational engine, such as PostgreSQL, I believe I'm overall more productive than I would be with any ORM in between (incuding SQLAlchemy, despite its many admirable qualities). However, I have to admit that the case is different when the relational engine is not all that good (e.g., I've never liked MySQL), or when porting to non-relational deployments is an important consideration.\nSo, back to your actual question, I do think that, overall, investing time in mastering SQLAlchemy is a good idea, and time well-spent.","Q_Score":1,"Tags":"python,sql,orm,sqlalchemy","A_Id":2947182,"CreationDate":"2010-06-01T03:49:00.000","Title":"Transitioning from php to python\/pylons\/SQLAlchemy -- Are ORMs the standard now?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Should I invest a lot of time trying to figure out an ORM style implementation, or is it still common to just stick with standard SQL queries in python\/pylons\/sqlalchemy?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1032,"Q_Id":2947172,"Users Score":1,"Answer":"If you have never use an ORM like SqlAlchemy before, I would suggest that you learn it - as long as you are learning the Python way. If nothing else, you will be better able to decide where\/when to use it vs plain SQL. I don't think you should have to invest a lot of time on it. Documentation for SQLAlchemy is decent, and you can always ask for help if you get stuck.","Q_Score":1,"Tags":"python,sql,orm,sqlalchemy","A_Id":2947191,"CreationDate":"2010-06-01T03:49:00.000","Title":"Transitioning from php to python\/pylons\/SQLAlchemy -- Are ORMs the standard now?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table formatted similar to this:\nDate | ID | Value | Difference\nI need to get the difference between a record's value column, and the previous record's value column based off of the date.\nI.E\n2 days ago | cow | 1 | Null\nYesterday | cow | 2 | Null\nToday | cow | 3 | Null\nYesterdays difference would be 1, and today's difference would be 1.\nbasically, I need to get the previous record based off the date, I don't know the interval's between each record. I've been stumped on this for a while. I am using Mysql, and Python to do the majority of the calculations.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":530,"Q_Id":2960481,"Users Score":0,"Answer":"Use a SELECT... 
WHERE date <= NOW() && date >= ( NOW() - 90000 ) (90,000 is 25 hours, giving you a little leeway with the insert time), and then take the difference between the rows in python.","Q_Score":0,"Tags":"python,mysql","A_Id":2960589,"CreationDate":"2010-06-02T18:35:00.000","Title":"Get the previous date in Mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In one of my Django projects that use MySQL as the database, I need to have a date fields that accept also \"partial\" dates like only year (YYYY) and year and month (YYYY-MM) plus normal date (YYYY-MM-DD).\nThe date field in MySQL can deal with that by accepting 00 for the month and the day. So 2010-00-00 is valid in MySQL and it represent 2010. Same thing for 2010-05-00 that represent May 2010.\nSo I started to create a PartialDateField to support this feature. But I hit a wall because, by default, and Django use the default, MySQLdb, the python driver to MySQL, return a datetime.date object for a date field AND datetime.date() support only real date. So it's possible to modify the converter for the date field used by MySQLdb and return only a string in this format 'YYYY-MM-DD'. Unfortunately the converter use by MySQLdb is set at the connection level so it's use for all MySQL date fields. But Django DateField rely on the fact that the database return a datetime.date object, so if I change the converter to return a string, Django is not happy at all.\nSomeone have an idea or advice to solve this problem? How to create a PartialDateField in Django ?\nEDIT\nAlso I should add that I already thought of 2 solutions, create 3 integer fields for year, month and day (as mention by Alison R.) or use a varchar field to keep date as string in this format YYYY-MM-DD.\nBut in both solutions, if I'm not wrong, I will loose the special properties of a date field like doing query of this kind on them: Get all entries after this date. I can probably re-implement this functionality on the client side but that will not be a valid solution in my case because the database can be query from other systems (mysql client, MS Access, etc.)","AnswerCount":5,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4080,"Q_Id":2971198,"Users Score":7,"Answer":"First, thanks for all your answers. None of them, as is, was a good solution for my problem, but, for your defense, I should add that I didn't give all the requirements. But each one help me think about my problem and some of your ideas are part of my final solution.\nSo my final solution, on the DB side, is to use a varchar field (limited to 10 chars) and storing the date in it, as a string, in the ISO format (YYYY-MM-DD) with 00 for month and day when there's no month and\/or day (like a date field in MySQL). This way, this field can work with any databases, the data can be read, understand and edited directly and easily by a human using a simple client (like mysql client, phpmyadmin, etc.). That was a requirement. It can also be exported to Excel\/CSV without any conversion, etc. The disadvantage is that the format is not enforce (except in Django). Someone could write 'not a date' or do a mistake in the format and the DB will accept it (if you have an idea about this problem...).\nThis way it's also possible to do all of the special queries of a date field relatively easily. 
For queries with WHERE: <, >, <=, >= and = work directly. The IN and BETWEEN queries work directly also. For querying by day or month you just have to do it with EXTRACT (DAY|MONTH ...). Ordering work also directly. So I think it covers all the query needs and with mostly no complication.\nOn the Django side, I did 2 things. First, I have created a PartialDate object that look mostly like datetime.date but supporting date without month and\/or day. Inside this object I use a datetime.datetime object to keep the date. I'm using the hours and minutes as flag that tell if the month and day are valid when they are set to 1. It's the same idea that steveha propose but with a different implementation (and only on the client side). Using a datetime.datetime object gives me a lot of nice features for working with dates (validation, comparaison, etc.).\nSecondly, I have created a PartialDateField that mostly deal with the conversion between the PartialDate object and the database.\nSo far, it works pretty well (I have mostly finish my extensive unit tests).","Q_Score":8,"Tags":"python,mysql,database,django,date","A_Id":3027410,"CreationDate":"2010-06-04T02:49:00.000","Title":"How to deal with \"partial\" dates (2010-00-00) from MySQL in Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing an app to do a file conversion and part of that is replacing old account numbers with a new account numbers.\nRight now I have a CSV file mapping the old and new account numbers with around 30K records. I read this in and store it as dict and when writing the new file grab the new account from the dict by key.\nMy question is what is the best way to do this if the CSV file increases to 100K+ records?\nWould it be more efficient to convert the account mappings from a CSV to a sqlite database rather than storing them as a dict in memory?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":109,"Q_Id":2980257,"Users Score":1,"Answer":"As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer.\nYou are right that switching to an SQLite database is a good choice when the number of records gets very large.","Q_Score":3,"Tags":"python,database,sqlite,dictionary,csv","A_Id":2980269,"CreationDate":"2010-06-05T12:08:00.000","Title":"Efficient way to access a mapping of identifiers in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a project with reasonalby big DataBase. It's not a probper DB file, but a class with format as follows:\nDataBase.Nodes.Data=[[] for i in range(1,1000)] f.e. this DataBase is all together something like few thousands rows. Fisrt question - is the way I'm doing efficient, or is it better to use SQL, or any other \"proper\" DB, which I've never used actually. \nAnd the main question - I'd like to save my DataBase class with all record, and then re-open it with Python in another session. Is that possible, what tool should I use? 
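Because the format described above is fixed-width 'YYYY-MM-DD' with zero padding, plain string comparison orders the values chronologically, so the usual range queries keep working; a sketch against a hypothetical events table:

    # everything in May 2010, including month-only partials ('2010-05-00')
    cursor.execute(
        "SELECT id, partial_date FROM events "
        "WHERE partial_date BETWEEN %s AND %s",
        ("2010-05-00", "2010-05-31"),
    )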
cPickle - it seems to be only for strings, any other?\nIn matlab there's very useful functionality named save workspace - it saves all Your variables to a file that You can open at another session - this would be vary useful in python!","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":306,"Q_Id":2990995,"Users Score":3,"Answer":"Pickle (cPickle) can handle any (picklable) Python object. So as long, as you're not trying to pickle thread or filehandle or something like that, you're ok.","Q_Score":3,"Tags":"python,serialization,pickle,object-persistence","A_Id":2991030,"CreationDate":"2010-06-07T15:52:00.000","Title":"How to save big \"database-like\" class in python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a web application, and I need to use an architecture that allows me to run it over two servers. The application scrapes information from other sites periodically, and on input from the end user. To do this I'm using Php+curl to scrape the information, Php or python to parse it and store the results in a MySQLDB. \nThen I will use Python to run some algorithms on the data, this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB and sometimes if it is specific to the user, skip storing the data and serve it to the user. \nI'm think of using Php for the website front end on a separate web server, running the Php spider, MySQL DB and python on another server. \nWhat frame work(s) should I use for this kind of job? Is MVC and Cakephp a good solution? If so will I be able to control and monitor the Python code using it?\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1161,"Q_Id":3021921,"Users Score":2,"Answer":"How do go about implementing this? \n\nToo big a question for an answer here. Certainly you don't want 2 sets of code for the scraping (1 for scheduled, 1 for demand) in addition to the added complication, you really don't want to be running job which will take an indefinite time to complete within the thread generated by a request to your webserver - user requests for a scrape should be run via the scheduling mechanism and reported back to users (although if necessary you could use Ajax polling to give the illusion that it's happening in the same thread).\n\nWhat frame work(s) should I use?\n\nFrameworks are not magic bullets. And you shouldn't be choosing a framework based primarily on the nature of the application you are writing. Certainly if specific, critical functionality is precluded by a specific framework, then you are using the wrong framework - but in my experience that has never been the case - you just need to write some code yourself. \n\nusing something more complex than a cron job\n\nYes, a cron job is probably not the right way to go for lots of reasons. If it were me I'd look at writing a daemon which would schedule scrapes (and accept connections from web page scripts to enqueue additional scrapes). But I'd run the scrapes as separate processes. \n\nIs MVC a good architecture for this? (I'm new to MVC, architectures etc.)\n\nNo. 
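A minimal sketch of the pickle suggestion for the "save workspace" use case above, where db stands for the DataBase instance from the question (cPickle as in the era of the question; plain pickle on Python 3):

    import cPickle as pickle

    # save the whole object graph to disk
    with open("workspace.pkl", "wb") as f:
        pickle.dump(db, f, pickle.HIGHEST_PROTOCOL)

    # ...later, in another session
    with open("workspace.pkl", "rb") as f:
        db = pickle.load(f)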
Don't start by thinking whether a pattern fits the application - patterns are a useful tool for teaching but describe what code is not what it will be\n(Your application might include some MVC patterns - but it should also include lots of other ones).\nC.","Q_Score":3,"Tags":"php,python,model-view-controller,cakephp,application-server","A_Id":3022395,"CreationDate":"2010-06-11T10:12:00.000","Title":"Web application architecture, and application servers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Criteria for 'better': fast in math and simple (few fields, many records) db transactions, convenient to develop\/read\/extend, flexible, connectible.\nThe task is to use a common web development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting\/inserting sets of floats and doing maths with them).\nThe choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js).\nAll the data is to be stored in a relational database (due to its heavily multidimensional nature); all the communication with outer world is to be done by means of web services.","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":451,"Q_Id":3022232,"Users Score":4,"Answer":"The best option is probably the language you're most familiar with. My second consideration would be if you need to use any special maths libraries and whether they're supported in each of the languages.","Q_Score":4,"Tags":"php,python,ruby,performance,math","A_Id":3022304,"CreationDate":"2010-06-11T11:13:00.000","Title":"What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Criteria for 'better': fast in math and simple (few fields, many records) db transactions, convenient to develop\/read\/extend, flexible, connectible.\nThe task is to use a common web development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting\/inserting sets of floats and doing maths with them).\nThe choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js).\nAll the data is to be stored in a relational database (due to its heavily multidimensional nature); all the communication with outer world is to be done by means of web services.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":451,"Q_Id":3022232,"Users Score":10,"Answer":"I would suggest Python with it's great Scientifical\/Mathematical libraries (SciPy, NumPy). Otherwise the languages are not differing so much, although I doubt that Ruby, PHP or JS can keep up with the speed of Python or Perl.\nAnd what the comments below here say: at this moment, go for the latest Python2 (which is Python2.7). 
This has mature versions of all needed libraries, and if you follow the coding guidelines, transferring some day to Python 3 will be only a small pain.","Q_Score":4,"Tags":"php,python,ruby,performance,math","A_Id":3022242,"CreationDate":"2010-06-11T11:13:00.000","Title":"What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing some queries in Python on a large database to get some stats out of the database. I want these stats to be in-memory so other programs can use them without going to a database. \nI was thinking of how to structure them, and after trying to set up some complicated nested dictionaries, I realized that a good representation would be an SQL table. I don't want to store the data back into the persistent database, though. Are there any in-memory implementations of an SQL database that supports querying the data with SQL syntax?","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":44308,"Q_Id":3047412,"Users Score":1,"Answer":"In-memory databases usually do not support memory paging option (for the whole database or certain tables), i,e, total size of the database should be smaller than the available physical memory or maximum shared memory size.\nDepending on your application, data-access pattern, size of database and available system memory for database, you have a few choices:\na. Pickled Python Data in File System\nIt stores structured Python data structure (such as list of dictionaries\/lists\/tuples\/sets, dictionary of lists\/pandas dataframes\/numpy series, etc.) in pickled format so that they could be used immediately and convienently upon unpickled. AFAIK, Python does not use file system as backing store for Python objects in memory implicitly but host operating system may swap out Python processes for higher priority processes. This is suitable for static data, having smaller memory size compared to available system memory. These pickled data could be copied to other computers, read by multiple dependent or independent processes in the same computer. The actual database file or memory size has higher overhead than size of the data. It is the fastest way to access the data as the data is in the same memory of the Python process, and without a query parsing step.\nb. In-memory Database\nIt stores dynamic or static data in the memory. Possible in-memory libraries that with Python API binding are Redis, sqlite3, Berkeley Database, rqlite, etc. Different in-memory databases offer different features\n\nDatabase may be locked in the physical memory so that it is not swapped to memory backing store by the host operating system. 
However the actual implementation for the same libray may vary across different operating systems.\nThe database may be served by a database server process.\nThe in-memory may be accessed by multiple dependent or independent processes.\nSupport full, partial or no ACID model.\nIn-memory database could be persistent to physical files so that it is available when the host operating is restarted.\nSupport snapshots or\/and different database copies for backup or database management.\nSupport distributed database using master-slave, cluster models.\nSupport from simple key-value lookup to advanced query, filter, group functions (such as SQL, NoSQL)\n\nc. Memory-map Database\/Data Structure\nIt stores static or dynamic data which could be larger than physical memory of the host operating system. Python developers could use API such as mmap.mmap() numpy.memmap() to map certain files into process memory space. The files could be arranged into index and data so that data could be lookup\/accessed via index lookup. This is actually the mechanism used by various database libraries. Python developers could implement custom techniques to access\/update data efficiency.","Q_Score":32,"Tags":"python,sql,database,in-memory-database","A_Id":65153849,"CreationDate":"2010-06-15T17:13:00.000","Title":"in-memory database in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running a Django project on Postgresql 8.1.21 (using Django 1.1.1, Python2.5, psycopg2, Apache2 with mod_wsgi 3.2). We've recently encountered this lovely error:\nOperationalError: FATAL: connection limit exceeded for non-superusers\nI'm not the first person to run up against this. There's a lot of discussion about this error, specifically with psycopg, but much of it centers on older versions of Django and\/or offer solutions involving edits to code in Django itself. I've yet to find a succinct explanation of how to solve the problem of the Django ORM (or psycopg, whichever is really responsible, in this case) leaving open Postgre connections.\nWill simply adding connection.close() at the end of every view solve this problem? Better yet, has anyone conclusively solved this problem and kicked this error's ass? \nEdit: we later upped Postgresql's limit to 500 connections; this prevented the error from cropping up, but replaced it with excessive memory usage.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2568,"Q_Id":3049625,"Users Score":1,"Answer":"This could be caused by other things. For example, configuring Apache\/mod_wsgi in a way that theoretically it could accept more concurrent requests than what the database itself may be able to accept at the same time. Have you reviewed your Apache\/mod_wsgi configuration and compared limit on maximum clients to that of PostgreSQL to make sure something like that hasn't been done. 
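The in-memory database discussion above lists sqlite3 among the libraries with Python bindings (option b). A minimal sketch of that route is below, assuming nothing beyond the standard library; the table and the sample statistics are made up for illustration.

```python
# In-memory SQL database using the standard-library sqlite3 module and the
# special ":memory:" filename; the data lives only for this process.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (name TEXT, value REAL)")
conn.executemany(
    "INSERT INTO stats (name, value) VALUES (?, ?)",
    [("mean_latency", 12.5), ("error_rate", 0.02)],
)
conn.commit()

for name, value in conn.execute("SELECT name, value FROM stats ORDER BY name"):
    print("%s = %s" % (name, value))

conn.close()
```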
Obviously this presumes though that you have managed to reach that limit in Apache some how and also depends on how any database connection pooling is set up.","Q_Score":3,"Tags":"python,database,django,postgresql,django-orm","A_Id":3049796,"CreationDate":"2010-06-15T22:45:00.000","Title":"Django ORM and PostgreSQL connection limits","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"THE TASK:\nI am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS ACCESS and merge them into one. The output must be in the form of a TAB or CSV file. Which will then be imported into Maximizer.\nTHE PROBLEM:\nAccess is unable to perform a query that is so complex it seems, as it crashes any time I run the query.\nALTERNATIVES:\nI have thought about a few alternatives, and would like to do the least time-consuming one, out of these, while also taking advantage of any opportunities to learn something new.\n\nExport each table into CSVs and import into SQLight and then make a query with it to do the same as what ACCESS fails to do (merge 64 tables).\nExport each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.\nSomehow connect to the MS ACCESS DB (API), and write a script to pull data from each table and merge them into a CSV file.\n\nQUESTION:\nWhat do you recommend?\nCLARIFICATIONS:\n\nI am merging tables, not concatenating. Each table has a different structure and different data. It is a normalized CRM database. Companies->contacts->details = ~ 60 tables of details.\nAs the Access db will be scuttled after the db is migrated, I want to spend as little time in Access as possible.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4410,"Q_Id":3064830,"Users Score":0,"Answer":"I'm not even clear on what you're trying to do. I assume your problem is that Jet\/ACE can't handle a UNION with that many SELECT statements. \nIf you have 64 identically-structured tables and you want them in a single CSV, I'd create a temp table in Access, append each table in turn, then export from the temp table to CSV. This is a simple solution and shouldn't be slow, either. The only possible issue might be if there are dupes, but if there are, you can export from a SELECT DISTINCT saved QueryDef.\nTangentially, I'm surprised Maximizer still exists. I had a client who used to use it, and the db structure was terribly unnormalized, just like all the other sales software like ACT.","Q_Score":2,"Tags":"python,sql,ms-access,crm","A_Id":3073339,"CreationDate":"2010-06-17T19:13:00.000","Title":"Query crashes MS Access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"THE TASK:\nI am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS ACCESS and merge them into one. The output must be in the form of a TAB or CSV file. 
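Alternative #1 from the Access-to-Maximizer question above (export each table to CSV, load everything into SQLite, then run the merge query there) can be sketched roughly as follows. The file names, table names and the final JOIN are placeholders; the real query depends on the CRM schema.

```python
# Load each exported CSV into its own SQLite table so the 60-odd tables can
# be joined with SQL outside of Access. Assumes trusted header rows.
import csv
import sqlite3

def load_csv(conn, table, path):
    with open(path, "rb") as f:            # use "r", newline="" on Python 3
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join('"%s" TEXT' % h.strip() for h in header)
        conn.execute('CREATE TABLE "%s" (%s)' % (table, cols))
        marks = ", ".join("?" for _ in header)
        conn.executemany(
            'INSERT INTO "%s" VALUES (%s)' % (table, marks),
            (row for row in reader if row),
        )

conn = sqlite3.connect("crm_merge.db")
for table, path in [("companies", "companies.csv"), ("contacts", "contacts.csv")]:
    load_csv(conn, table, path)
conn.commit()

# Placeholder join; the real statement would pull in all the detail tables.
for row in conn.execute(
    "SELECT c.Name, p.ContactName FROM companies c "
    "JOIN contacts p ON p.CompanyID = c.CompanyID"
):
    print(row)
```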
Which will then be imported into Maximizer.\nTHE PROBLEM:\nAccess is unable to perform a query that is so complex it seems, as it crashes any time I run the query.\nALTERNATIVES:\nI have thought about a few alternatives, and would like to do the least time-consuming one, out of these, while also taking advantage of any opportunities to learn something new.\n\nExport each table into CSVs and import into SQLight and then make a query with it to do the same as what ACCESS fails to do (merge 64 tables).\nExport each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.\nSomehow connect to the MS ACCESS DB (API), and write a script to pull data from each table and merge them into a CSV file.\n\nQUESTION:\nWhat do you recommend?\nCLARIFICATIONS:\n\nI am merging tables, not concatenating. Each table has a different structure and different data. It is a normalized CRM database. Companies->contacts->details = ~ 60 tables of details.\nAs the Access db will be scuttled after the db is migrated, I want to spend as little time in Access as possible.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":4410,"Q_Id":3064830,"Users Score":1,"Answer":"I would recommend #2 if the merge is fairly simple and straightforward, and doesn't need the power of an RDBMS. I'd go with #1 if the merge is more complex and you will need to write some actual queries to get the data merged properly.","Q_Score":2,"Tags":"python,sql,ms-access,crm","A_Id":3064852,"CreationDate":"2010-06-17T19:13:00.000","Title":"Query crashes MS Access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've just started learning Python Django and have a lot of experience building high traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these Foreign Key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that its impossible to NOT write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that Foreign Keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance.\nIs there a way in Django to do this in the models file? create relationships without creating actual DB FKs? Or is there a way to start at the database level, design\/create my database, and then have Django reverse engineer the models file?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":640,"Q_Id":3066255,"Users Score":0,"Answer":"I concur with the 'no foreign keys' advice (with the disclaimer: I also work for Percona).\nThe reason why it is is recommended is for concurrency \/ reducing locking internally.\nIt can be a difficult \"optimization\" to sell, but if you consider that the database has transactions (and is more or less ACID compliant) then it should only be application-logic errors that cause foreign-key violations. Not to say they don't exist, but if you enable foreign keys in development hopefully you should find at least a few bugs.\nIn terms of whether or not you need to write custom SQL:\nThe explanation I usually give is that \"optimization rarely decreases complexity\". 
I think it is okay to stick with an ORM by default, but if in a profiler it looks like one particular piece of functionality is taking a lot more time than you suspect it would when written by hand, then you need to be prepared to fix it (assuming the code is called often enough).\nThe real secret here is that you need good instrumentation \/ profiling in order to be frugal with your complexity adding optimization(s).","Q_Score":2,"Tags":"python,mysql,django","A_Id":3320441,"CreationDate":"2010-06-17T23:07:00.000","Title":"Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've just started learning Python Django and have a lot of experience building high traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these Foreign Key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that its impossible to NOT write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that Foreign Keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance.\nIs there a way in Django to do this in the models file? create relationships without creating actual DB FKs? Or is there a way to start at the database level, design\/create my database, and then have Django reverse engineer the models file?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":640,"Q_Id":3066255,"Users Score":0,"Answer":"django-admin inspectdb allows you to reverse engineer a models file from existing tables. That is only a very partial response to your question ;)","Q_Score":2,"Tags":"python,mysql,django","A_Id":3066274,"CreationDate":"2010-06-17T23:07:00.000","Title":"Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've just started learning Python Django and have a lot of experience building high traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these Foreign Key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that its impossible to NOT write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that Foreign Keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance.\nIs there a way in Django to do this in the models file? create relationships without creating actual DB FKs? 
Or is there a way to start at the database level, design\/create my database, and then have Django reverse engineer the models file?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":640,"Q_Id":3066255,"Users Score":0,"Answer":"You can just create the model.py and avoid having SQL Alchemy automatically create the tables leaving it up to you to define the actual tables as you please. So although there are foreign key relationships in the model.py this does not mean that they must exist in the actual tables. This is a very good thing considering how ludicrously foreign key constraints are implemented in MySQL - MyISAM just ignores them and InnoDB creates a non-optional index on every single one regardless of whether it makes sense.","Q_Score":2,"Tags":"python,mysql,django","A_Id":3066360,"CreationDate":"2010-06-17T23:07:00.000","Title":"Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How would I go around creating a MYSQL table schema inspecting an Excel(or CSV) file.\nAre there any ready Python libraries for the task?\nColumn headers would be sanitized to column names. Datatype would be estimated based on the contents of the spreadsheet column. When done, data would be loaded to the table.\nI have an Excel file of ~200 columns that I want to start normalizing.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":7011,"Q_Id":3070094,"Users Score":1,"Answer":"As far as I know, there is no tool that can automate this process (I would love for someone to prove me wrong as I've had this exact problem before).\nWhen I did this, I came up with two options: \n(1) Manually create the columns in the db with the appropriate types and then import, or \n(2) Write some kind of filter that could \"figure out\" what data types the columns should be.\nI went with the first option mainly because I didn't think I could actually write a program to do the type inference.\nIf you do decide to write a type inference tool\/conversion, here are a couple of issues you may have to deal with:\n(1) Excel dates are actually stored as the number of days since December 31st, 1899; how does one infer then that a column is dates as opposed to some piece of numerical data (population for example)?\n(2) For text fields, do you just make the columns of type varchar(n) where n is the longest entry in that column, or do you make it an unbounded char field if one of the entries is longer than some upper limit? If so, what's a good upper limit?\n(3) How do you automatically convert a float to a decimal with the correct precision and without loosing any places?\nObviously, this doesn't mean that you won't be able to (I'm a pretty bad programmer). 
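A rough sketch of the kind of type-inference "filter" described in the answer above. It deliberately sidesteps the thornier points it raises (Excel dates, float precision, unbounded text) by recognising only integers, decimals and text, and by sizing VARCHAR to the longest value seen; everything here is illustrative rather than a finished tool.

```python
# Guess a MySQL-ish column type for each CSV column: INT if every non-empty
# value parses as an integer, DECIMAL if some parse only as decimals, and
# otherwise VARCHAR sized to the widest value encountered.
import csv
from decimal import Decimal, InvalidOperation

def infer_column_types(path):
    with open(path, "rb") as f:            # "r", newline="" on Python 3
        reader = csv.reader(f)
        header = next(reader)
        kinds = ["INT"] * len(header)      # most specific guess first
        widths = [1] * len(header)
        for row in reader:
            for i, value in enumerate(row[: len(header)]):
                value = value.strip()
                widths[i] = max(widths[i], len(value))
                if value == "":
                    continue
                try:
                    int(value)
                except ValueError:
                    try:
                        Decimal(value)
                        if kinds[i] == "INT":
                            kinds[i] = "DECIMAL(20,6)"
                    except InvalidOperation:
                        kinds[i] = None    # not numeric: fall back to VARCHAR
    return [(name, kind or "VARCHAR(%d)" % width)
            for name, kind, width in zip(header, kinds, widths)]

if __name__ == "__main__":
    for name, sql_type in infer_column_types("data.csv"):
        print("%s %s" % (name, sql_type))
```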
I hope you do, because it'd be a really useful tool to have.","Q_Score":6,"Tags":"python,mysql,excel,csv,import-from-excel","A_Id":3072109,"CreationDate":"2010-06-18T13:40:00.000","Title":"Generate table schema inspecting Excel(CSV) and import data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I go around creating a MYSQL table schema inspecting an Excel(or CSV) file.\nAre there any ready Python libraries for the task?\nColumn headers would be sanitized to column names. Datatype would be estimated based on the contents of the spreadsheet column. When done, data would be loaded to the table.\nI have an Excel file of ~200 columns that I want to start normalizing.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":7011,"Q_Id":3070094,"Users Score":1,"Answer":"Quick and dirty workaround with phpmyadmin:\n\nCreate a table with the right amount of columns. Make sure the data fits the columns.\nImport the CSV into the table.\nUse the propose table structure.","Q_Score":6,"Tags":"python,mysql,excel,csv,import-from-excel","A_Id":3071074,"CreationDate":"2010-06-18T13:40:00.000","Title":"Generate table schema inspecting Excel(CSV) and import data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I go around creating a MYSQL table schema inspecting an Excel(or CSV) file.\nAre there any ready Python libraries for the task?\nColumn headers would be sanitized to column names. Datatype would be estimated based on the contents of the spreadsheet column. When done, data would be loaded to the table.\nI have an Excel file of ~200 columns that I want to start normalizing.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":7011,"Q_Id":3070094,"Users Score":1,"Answer":"Just for (my) reference, I documented below what I did:\n\nXLRD is practical, however I've just saved the Excel data as CSV, so I can use LOAD DATA INFILE\nI've copied the header row and started writing the import and normalization script\nScript does: CREATE TABLE with all columns as TEXT, except for Primary key\nquery mysql: LOAD DATA LOCAL INFILE loading all CSV data into TEXT fields.\nbased on the output of PROCEDURE ANALYSE, I was able to ALTER TABLE to give columns the right types and lengths. PROCEDURE ANALYSE returns ENUM for any column with few distinct values, which is not what I needed, but I found that useful later for normalization. Eye-balling 200 columns was a breeze with PROCEDURE ANALYSE. Output from PhpMyAdmin propose table structure was junk.\nI wrote some normalization mostly using SELECT DISTINCT on columns and INSERTing results to separate tables. I have added to the old table a column for FK first. Just after the INSERT, I've got its ID and UPDATEed the FK column. When loop finished I've dropped old column leaving only FK column. Similarly with multiple dependent columns. It was much faster than I expected.\nI ran (django) python manage.py inspctdb, copied output to models.py and added all those ForeignkeyFields as FKs do not exist on MyISAM. 
Wrote a little python views.py, urls.py, few templates...TADA","Q_Score":6,"Tags":"python,mysql,excel,csv,import-from-excel","A_Id":3169710,"CreationDate":"2010-06-18T13:40:00.000","Title":"Generate table schema inspecting Excel(CSV) and import data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently work with Google's AppEngine and I could not find out, whether a Google DataStorage Object Entry has an ID by default, and if not, how I add such a field and let it increase automatically?\nregards,","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":282,"Q_Id":3077156,"Users Score":4,"Answer":"An object has a Key, part of which is either an automatically-generated numeric ID, or an assigned key name. IDs are not guaranteed to be increasing, and they're almost never going to be consecutive because they're allocated to an instance in big chunks, and IDs unused by the instance to which they're allocated will never be used by another instance (at least, not currently). They're also only unique within the same entity group for a kind; they're not unique to the entire kind if you have parent relationships.","Q_Score":1,"Tags":"python,google-app-engine,gql","A_Id":3078018,"CreationDate":"2010-06-19T20:38:00.000","Title":"Does GQL automatically add an \"ID\" Property","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I currently work with Google's AppEngine and I could not find out, whether a Google DataStorage Object Entry has an ID by default, and if not, how I add such a field and let it increase automatically?\nregards,","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":282,"Q_Id":3077156,"Users Score":3,"Answer":"Yes, they have id's by default, and it is named ID as you mentioned.","Q_Score":1,"Tags":"python,google-app-engine,gql","A_Id":3077170,"CreationDate":"2010-06-19T20:38:00.000","Title":"Does GQL automatically add an \"ID\" Property","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to use the SimpleDB in following way. \nI want to keep 48 hrs worth data at anytime into simpledb and query it for different purposes. \nEach domain has 1 hr worth data, so at any time there are 48 domains present in the simpledb.\nAs the new data is constantly uploaded, I delete the oldest domain and create a new domain for each new hour.\nEach domain is about 50MB in size, the total size of all the domains is around 2.2 GB.\nThe item in the domain has following type of attributes\nidentifier - around 50 characters long -- 1 per item\ntimestamp - timestamp value -- 1 per item\nserial_n_data - 500-1000 bytes data -- 200 per item \nI'm using python boto library to upload and query the data. \nI send 1 item\/sec with around 200 attributes in the domain.\nFor one of the application of this data, I need to get all the data from all the 48 domains. 
\nThe Query looks like, \"SELECT * FROM domain\", for all the domains.\nI use 8 threads to query data with each thread taking responsibility of few domains.\ne.g domain 1-6 thread 1\n domain 7-12 thread 2 and so on \nIt takes close to 13 minutes to get the entire data.I am using boto's select method for this.I need much more faster performance than this. Any suggestions on speed up the querying process? Is there any other language that I can use, which can speed up the things?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1743,"Q_Id":3103145,"Users Score":0,"Answer":"I have had the same issue as you Charlie. After profiling the code, I have narrowed the performance problem down to SSL. It seems like that is where it is spending most of it's time and hence CPU cycles.\nI have read of a problem in the httplib library (which boto uses for SSL) where the performance doesn't increase unless the packets are over a certain size, though that was for Python 2.5 and may have already been fixed.","Q_Score":2,"Tags":"python,amazon-simpledb,boto","A_Id":9012699,"CreationDate":"2010-06-23T15:38:00.000","Title":"SimpleDB query performance improvement using boto","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can I somehow work with remote databases (if they can do it) with the Django ORM?\nIt is understood that the sitting has spelled out the local database. And periodically to make connection to various external databases and perform any sort of commands such as load dump.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":121,"Q_Id":3123801,"Users Score":1,"Answer":"If you can connect to the database remotely, then you can simply specify its host\/port in settings.py exactly as you would a local one.","Q_Score":0,"Tags":"python,django,orm","A_Id":3125012,"CreationDate":"2010-06-26T12:05:00.000","Title":"Remote execution of commands using the Django ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My two main requirements for the site are related to degrees of separation and graph matching (given two graphs, return some kind of similarity score).\nMy first thought was to use MySql to do it, which would probably work out okay for storing how I want to manage 'friends' (similar to Twitter), but I'm thinking if I want to show users results which will make use of graphing algorithms (like shortest path between two people) maybe it isn't the way to go for that.\nMy language of choice for the front end, would be Python using something like Pylons but I haven't committed to anything specific yet and would be willing to budge if it fitted well with a good backend solution.\nI'm thinking of using MySQL for storing user profile data, neo4j for the graph information of relations between users and then have a Python application talk to both of them.\nMaybe there is a simpler\/more efficient way to do this kind of thing. 
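For the remote-database answer above (point the Django settings at the remote host and the ORM will use it), a hypothetical settings.py fragment is sketched below. It assumes Django 1.2+ with the DATABASES dictionary and multi-database support; every name, host and credential is a placeholder.

```python
# Hypothetical settings.py fragment: the ORM connects to whatever host/port
# is configured here, remote or local.
DATABASES = {
    # Normal local database used by the site itself.
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "localdb",
        "USER": "app",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
    },
    # Remote database reachable over the network, e.g. for periodic jobs.
    "remote": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "legacy",
        "USER": "report",
        "PASSWORD": "secret",
        "HOST": "db.example.com",
        "PORT": "3306",
    },
}

# Queries can then target it explicitly with the ORM's using():
#   Entry.objects.using("remote").filter(...)
```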
At the moment for me it's more getting a suitable prototype done than worrying about scalability but I'm willing to invest some time learning something new if it'll save me time rewriting\/porting in the future.\nPS: I'm more of a programmer than a database designer, so I'd prefer having rewrite the frontend later rather than say porting over the database, which is the main reason I'm looking for advice.","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":749,"Q_Id":3126155,"Users Score":2,"Answer":"MySQL is really your best choice for the database unless you want to go proprietary.\nAs for the actual language, pick whatever you are familiar with. While Youtube and Reddit are written in python, many of the other large sites use Ruby (Hulu, Twitter, Techcrunch) or C++ (Google) or PHP (Facebook, Yahoo, etc).","Q_Score":4,"Tags":"python,sql,mysql,database","A_Id":3126208,"CreationDate":"2010-06-27T02:19:00.000","Title":"What should I use for the backend of a 'social' website?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"With PostgreSQL, one of my tables has an 'interval' column, values of which I would like to extract as something I can manipulate (datetime.timedelta?); however I am using PyGreSQL which seems to be returning intervals as strings, which is less than helpful.\nWhere should I be looking to either parse the interval or make PyGreSQL return it as a ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1424,"Q_Id":3134699,"Users Score":3,"Answer":"Use Psycopg 2. It correctly converts between Postgres's interval data type and Python's timedelta.","Q_Score":0,"Tags":"python,sql,postgresql,pygresql","A_Id":3137124,"CreationDate":"2010-06-28T17:33:00.000","Title":"python sql interval","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've decided to give Python a try on Netbeans. The problem so far is when try to run program I know works, i.e. if I ran it through the terminal. For the project I selected the correct Python version (2.6.5). And received the following error:\n\nTraceback (most recent call last): File\n \"\/Users\/XXX\/NetBeansProjects\/NewPythonProject3\/src\/newpythonproject3.py\",\n line 4, in \n import sqlite3 ImportError: No module named sqlite3","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":184,"Q_Id":3149370,"Users Score":0,"Answer":"Search for PYTHONPATH. 
You probably have different settings in your OS and Netbeans.","Q_Score":0,"Tags":"python,sqlite","A_Id":3151119,"CreationDate":"2010-06-30T12:50:00.000","Title":"Netbeans + sqlite3 = Fail?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script for exporting some data.\nSome details about the environment:\n\nThe project is Django based\nI'm using raw\/custom SQL for the export\nThe database engine is MySQL.\nThe database and code are on the same box.-\n\nDetails about the SQL:\n\nA bunch of inner joins\nA bunch of columns selected, some with a basic multiplication calculation.\nThe sql result has about 55K rows\n\nWhen I run the SQL statement in the mysql command line, it takes 3-4 seconds\nWhen I run the SQL in my python script the line cursor.execute(sql, [id]) takes over 60 seconds.\nAny ideas on what might be causing this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":705,"Q_Id":3188289,"Users Score":0,"Answer":"Two ideas: \n\nMySQL may have query caching enabled, which makes it difficult to get accurate timing when you run the same query repeatedly. Try changing the ID in your query to make sure that it really does run in 3-4 seconds consistently.\nTry using strace on the python process to see what it is doing during this time.","Q_Score":0,"Tags":"python,mysql,performance","A_Id":3188555,"CreationDate":"2010-07-06T16:39:00.000","Title":"Python MySQL Performance: Runs fast in mysql command line, but slow with cursor.execute","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As is mentioned in the doc for google app engine, it does not support group by and other aggregation functions. Is there any alternatives to implement the same functionality?\nI am working on a project where I need it on urgent basis, being a large database its not efficient to iterate the result set and then perform the logic.\nPlease suggest.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":322,"Q_Id":3210577,"Users Score":1,"Answer":"The best way is to populate the summaries (aggregates) at the time of write. This way your reads will be faster, since they just read - at the cost of writes which will have to update the summaries if its likely to be effected by the write. \nHopefully you will be reading more often than writing\/updating summaries.","Q_Score":0,"Tags":"python,google-app-engine","A_Id":3211471,"CreationDate":"2010-07-09T07:21:00.000","Title":"Google application engine Datastore - any alternatives to aggregate functions and group by?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using Google Appengine to store a list of favorites, linking a Facebook UserID to one or more IDs from Bing. I need function calls returning the number of users who have favorited an item, and the number of times an item has been favorited (and by whom).\nMy question is, should I resolve this relationship into two tables for efficiency? 
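The aggregation answer above (maintain summaries at write time instead of grouping at read time) can be sketched with the old google.appengine.ext.db API these questions refer to. The ItemStats model and the counter layout are invented for illustration, so treat this as an assumption-laden sketch rather than the answerer's code.

```python
# Keep a per-item summary up to date inside the write path, so reads only
# fetch one small entity instead of aggregating over many rows.
from google.appengine.ext import db

class ItemStats(db.Model):
    favorite_count = db.IntegerProperty(default=0)

def record_favorite(bing_id):
    """Bump the per-item counter in the same request that stores the favorite."""
    def txn():
        stats = ItemStats.get_by_key_name(bing_id)
        if stats is None:
            stats = ItemStats(key_name=bing_id)
        stats.favorite_count += 1
        stats.put()
    db.run_in_transaction(txn)

def favorite_count(bing_id):
    """Read the pre-computed summary instead of counting at query time."""
    stats = ItemStats.get_by_key_name(bing_id)
    return stats.favorite_count if stats else 0
```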
If I have a table with columns for Facebook ID and Bing ID, I can easily use select queries for both of the functions above, however this will require that each row is searched in each query. The alternative is having two tables, one for each Facebook user's favorites and the other for each Bing item's favorited users, and using transactions to keep them in sync. The two tables option has the advantage of being able to use JSON or CSV in the database so that only one row needs to be fetched, and little manipulation needs to be done for an API.\nWhich option is better, in terms of efficiency and minimising cost?\nThanks,\nMatt","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":280,"Q_Id":3210994,"Users Score":0,"Answer":"I don't think there's a hard and fast answer to questions like this. \"Is this optimization worth it\" always depends on many variables such as, is the lack of optimization actually a problem to start with? How much of a problem is it? What's the cost in terms of extra time and effort and risk of bugs of a more complex optimized implementation, relative to the benefits? What might be the extra costs of implementing the optimization later, such as data migration to a new schema?","Q_Score":1,"Tags":"python,google-app-engine,performance,many-to-many","A_Id":3213988,"CreationDate":"2010-07-09T08:21:00.000","Title":"Many-to-many relationships in Google AppEngine - efficient?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using MySQLdb and when I perform an UPDATE to a table row I sometimes get an infinite process hang.\nAt first I thought, maybe its COMMIT since the table is Innodb, but even with autocommit(True) and db.commit() after each update I still get the hang.\nIs it possible there is a row lock and the query just fails to carry out? Is there a way to handle potential row locks or maybe even handle slow queries?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":544,"Q_Id":3216027,"Users Score":1,"Answer":"Depending on your user privileges, you can execute SHOW PROCESSLIST or SELECT from information_schema.processlist while the UPDATE hangs to see if there is a contention issue with another query. Also do an EXPLAIN on a SELECT of the WHERE clause used in the UPDATE to see if you need to change the statement. \nIf it's a lock contention, then you should eventually encounter a Lock Wait Timeout (default = 50 sec, I believe). Otherwise, if you have timing constraints, you can make use of KILL QUERY and KILL CONNECTION to unblock the cursor execution.","Q_Score":0,"Tags":"python,mysql","A_Id":3216500,"CreationDate":"2010-07-09T19:47:00.000","Title":"MySQLdb Handle Row Lock","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a good step by step online guide to install xampp (apache server,mysql server) together with zope-plone on the same linux machine and make it play nicely or do I have to go through their confusing documentations?\nOr how can I install this configuration in the best way? I can install and use both seperately but in tandem is an issue for me. 
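The row-lock answer above mentions SHOW PROCESSLIST and the InnoDB lock wait timeout; a small MySQLdb diagnostic along those lines is sketched below, with placeholder connection parameters.

```python
# List running statements and check the InnoDB lock wait timeout while an
# UPDATE appears to hang.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
conn.autocommit(True)          # avoid holding an implicit transaction open
cur = conn.cursor()

cur.execute("SHOW FULL PROCESSLIST")
for row in cur.fetchall():
    print(row)                 # look for long-running UPDATEs waiting on locks

cur.execute("SHOW VARIABLES LIKE 'innodb_lock_wait_timeout'")
print(cur.fetchone())          # default is 50 seconds

cur.close()
conn.close()
```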
Any help is appreciated.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":486,"Q_Id":3233246,"Users Score":0,"Answer":"sorry for wrong site but I just figured out that it was not a problem at all. I installed XAMPP (a snap) and downloaded and ran the plone install script. Both sites XAMPP on port 80 and zope\/plone on 8080 are working without problems. Just to let everyone know. I don't know why I got nervous about this :)","Q_Score":0,"Tags":"python,apache,xampp,plone,zope","A_Id":3247954,"CreationDate":"2010-07-13T00:09:00.000","Title":"Guide to install xampp with zope-plone on the same linux machine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I never thought I'd ever say this but I'd like to have something like the report generator in Microsoft Access. Very simple, just list data from a SQL query.\nI don't really care what language is used as long as I can get it done fast. \nC#,C++,Python,Javascript...\nI want to know the quickest (development sense) way to display data from a database.\nedit :\nI'm using MySQL with web interface for data input. I would be much better if the user had some kind of GUI.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":325,"Q_Id":3242448,"Users Score":0,"Answer":"Some suggestions:\n1) ASP.NET Gridview\n ---use the free Visual Studio to create an asp.net page\n ...can do VB, C#, etc.\n ---drag\/drop a gridview control on your page, then connect it to your data and display fields, all via wizard (you did say quick and dirty, correct?). No coding required if you can live within the wizard's limitations (which aren't too bad actually).\nThe type of database (mySQL or otherwise) isn't relevant.\nOther quick and dirty approach might be Access itself -- it can create 'pages', I think, that are web publishable.\nIf you want to put a little more work into it, ASP.NET has some other great controls \/ layout capability (non-wizard derived).\nAlso, you could look at SSRS if you have access to it. More initial setup work, but has the option to let your users create their own reports in a semi-Access-like fashion. Web accessible.\nGood luck.","Q_Score":0,"Tags":"c#,javascript,python,sql,database","A_Id":3249163,"CreationDate":"2010-07-13T23:59:00.000","Title":"Quick and dirty reports based on a SQL query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Installed Django from source (python setup.py install and such), installed MySQLdb from source (python setup.py build, python setup.py install). Using Python 2.4 which came installed on the OS (CentOS 5.5). Getting the following error message after launching the server:\nError loading MySQLdb module: No module named MySQLdb\nThe pythonpath the debug info provides includes\n'\/usr\/lib\/python2.4\/site-packages'\nand yet, if I ls that directory, I can plainly see\nMySQL_python-1.2.3-py2.4-linux-i686.egg\nUsing the python interactive shell, I can type import MySQLdb and it produces no errors. 
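For the "No module named MySQLdb" / PYTHONPATH problems discussed above, a quick check is to print the interpreter and sys.path from both the interactive shell and the web or IDE process and compare; a minimal sketch:

```python
# Show which interpreter and import path the current process actually uses.
# Run it from the interactive shell and from the failing environment, then diff.
import sys

print("executable: " + sys.executable)
print("version   : " + sys.version.split()[0])
for p in sys.path:
    print("path      : " + p)

try:
    import MySQLdb
    print("MySQLdb from: " + MySQLdb.__file__)
except ImportError:
    print("MySQLdb is not importable from this interpreter")
```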
This leads me to believe it's a Django pathing issue, but I haven't the slightest clue where to start looking as I'm new to both Django and python.\nEDIT: And to be a bit more specific, everything is currently running as root. I haven't setup any users yet on the machine, so none exist other than root.\nEDITx2: And to be even more specific, web server is Cherokee, and deploying using uWSGI. All installed from source.","AnswerCount":10,"Available Count":1,"Score":0.0199973338,"is_accepted":false,"ViewCount":38547,"Q_Id":3243073,"Users Score":1,"Answer":"Try this if you are using \nlinux:- sudo apt-get install python-mysqldb\nwindows:- pip install python-mysqldb or\n easy_install python-mysqldb\nHope this should work","Q_Score":19,"Tags":"python,django,mysql","A_Id":23076238,"CreationDate":"2010-07-14T02:49:00.000","Title":"Django unable to find MySQLdb python module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a PHP guy. In PHP I mainly use Doctrine ORM to deal with database issues. I am considering move to Python + Django recently. I know Python but don't have experience with Django. Can anyone who has good knowledge of both Doctrine and ORM in Django give me a comparison of features of these two ORM implementations?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":5103,"Q_Id":3249977,"Users Score":1,"Answer":"Ive used Doctrine over a 2 year project that ended 1.5 years ago, since then i've been doing mostly Django.\nI prefer Djangos ORM over Doctrine any day, more features, more consistency, faster and shinier.","Q_Score":3,"Tags":"php,python,django,orm,doctrine","A_Id":8543708,"CreationDate":"2010-07-14T20:01:00.000","Title":"ORM in Django vs. PHP Doctrine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a PHP guy. In PHP I mainly use Doctrine ORM to deal with database issues. I am considering move to Python + Django recently. I know Python but don't have experience with Django. Can anyone who has good knowledge of both Doctrine and ORM in Django give me a comparison of features of these two ORM implementations?","AnswerCount":4,"Available Count":3,"Score":0.2449186624,"is_accepted":false,"ViewCount":5103,"Q_Id":3249977,"Users Score":5,"Answer":"I am a rare person who had to switch from Django 1.4 to Symfony 2.1 so I had to use Doctrine 2 instead of current Django ORM.\nMaybe Doctrine can do many things but let me tell you that it is a nightmare for me to use it coming from Django.\nI'm bored with the verbosity of php\/Symfony\/Doctrine ...\nAlso I never needed something that Django's ORM didn't manage already (maybe projects not big enough to reach the limits).\nSimply compare the description of data between both orms (including setters & getters)...","Q_Score":3,"Tags":"php,python,django,orm,doctrine","A_Id":12267439,"CreationDate":"2010-07-14T20:01:00.000","Title":"ORM in Django vs. PHP Doctrine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a PHP guy. 
In PHP I mainly use Doctrine ORM to deal with database issues. I am considering move to Python + Django recently. I know Python but don't have experience with Django. Can anyone who has good knowledge of both Doctrine and ORM in Django give me a comparison of features of these two ORM implementations?","AnswerCount":4,"Available Count":3,"Score":-0.049958375,"is_accepted":false,"ViewCount":5103,"Q_Id":3249977,"Users Score":-1,"Answer":"Django isn't just an orm. It is a web framework like symfony. The form framework in symfony is modeled on django for example. It's orm part is more like doctrine 2 I think, but I haven't played with either much.","Q_Score":3,"Tags":"php,python,django,orm,doctrine","A_Id":3250203,"CreationDate":"2010-07-14T20:01:00.000","Title":"ORM in Django vs. PHP Doctrine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We need to be able to inform a Delphi application in case there are changes to some of our tables in MySQL.\nDelphi clients are in the Internet behind a firewall, and they have to be authenticated before connecting to the notification server we need to implement. The server can be programmed using for example Java, PHP or Python, and it has to support thousands of clients.\nTypically one change in the database needs to be informed only to a single client, and I don't believe performance will be a bottleneck. It just has to be possible to inform any of those thousands of clients when a change affecting the specific client occurs.\nI have been thinking of a solution where:\n\nMySQL trigger would inform to notification server\nDelphi client connects to a messaging queue and gets the notification using it\n\nMy questions:\n\nWhat would be the best to way from the trigger to inform the external server of the change\nWhich message queue solution to pick?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1596,"Q_Id":3255330,"Users Score":0,"Answer":"Why not use the XMPP protocol (aka Jabbber) ?","Q_Score":4,"Tags":"java,php,python,mysql,delphi","A_Id":3255344,"CreationDate":"2010-07-15T12:02:00.000","Title":"How to create a notification server which informs Delphi application when database changes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We need to be able to inform a Delphi application in case there are changes to some of our tables in MySQL.\nDelphi clients are in the Internet behind a firewall, and they have to be authenticated before connecting to the notification server we need to implement. The server can be programmed using for example Java, PHP or Python, and it has to support thousands of clients.\nTypically one change in the database needs to be informed only to a single client, and I don't believe performance will be a bottleneck. 
It just has to be possible to inform any of those thousands of clients when a change affecting the specific client occurs.\nI have been thinking of a solution where:\n\nMySQL trigger would inform to notification server\nDelphi client connects to a messaging queue and gets the notification using it\n\nMy questions:\n\nWhat would be the best to way from the trigger to inform the external server of the change\nWhich message queue solution to pick?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1596,"Q_Id":3255330,"Users Score":1,"Answer":"There is apache camel and spring intergration, both provides some ways to send messages across.","Q_Score":4,"Tags":"java,php,python,mysql,delphi","A_Id":3255395,"CreationDate":"2010-07-15T12:02:00.000","Title":"How to create a notification server which informs Delphi application when database changes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Due to the nature of my application, I need to support fast inserts of large volumes of data into the database. Using executemany() increases performance, but there's a caveat. For example, MySQL has a configuration parameter called max_allowed_packet, and if the total size of my insert queries exceeds its value, MySQL throws an error.\nQuestion #1: Is there a way to tell SQLAlchemy to split the packet into several smaller ones?\nQuestion #2: If other RDBS have similar constraints, how should I work around them as well?\n\nP.S. I had posted this question earlier but deleted it when I wrongly assumed that likely I will not encounter this problem after all. Sadly, that's not the case.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2103,"Q_Id":3267580,"Users Score":2,"Answer":"I had a similar problem recently and used the - not very elegant - work-around:\n\nFirst I parsed my.cnf for a value for max_allow_packets, if I can't find it, the maximum is set to a default value.\nAll data items are stored in a list.\nNext, for each data item I count the approximate byte length (with strings, it's the length of the string in bytes, for other data types I take the maximum bytes used to be safe.)\nI add them up, committing after I have reached approx. 75% of max_allow_packets (as SQL queries will take up space as well, just to be on the safe side).\n\nThis approach is not really beautiful, but it worked flawlessly for me.","Q_Score":1,"Tags":"python,mysql,sqlalchemy,large-query","A_Id":3271153,"CreationDate":"2010-07-16T17:54:00.000","Title":"SQLAlchemy and max_allowed_packet problem","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to setup a website in django which allows the user to send queries to a database containing information about their representatives in the European Parliament. 
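The max_allowed_packet workaround above boils down to batching the inserts and committing per batch. A generic sketch with fixed-size chunks is below (the answer's refinement of counting approximate bytes against the my.cnf value is left out); `conn` is any DB-API connection and the table and column names are placeholders.

```python
# Split a large executemany() into fixed-size chunks so no single packet
# approaches MySQL's max_allowed_packet limit.
def chunked(rows, size):
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

def bulk_insert(conn, rows, chunk_size=1000):
    sql = "INSERT INTO measurements (series_id, value) VALUES (%s, %s)"
    cur = conn.cursor()
    try:
        for chunk in chunked(rows, chunk_size):
            cur.executemany(sql, chunk)
            conn.commit()      # keep each packet well under the limit
    finally:
        cur.close()

# Usage (with e.g. MySQLdb):
#   import MySQLdb
#   conn = MySQLdb.connect(host="localhost", user="app", passwd="x", db="mydb")
#   bulk_insert(conn, [(1, 0.5), (1, 0.7), (2, 1.2)])
```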
I have the data in a comma seperated .txt file with the following format:\n\nParliament, Name, Country, Party_Group, National_Party, Position\n7, Marta Andreasen, United Kingdom, Europe of freedom and democracy Group, United Kingdom Independence Party, Member\netc....\n\nI want to populate a SQLite3 database with this data, but so far all the tutorials I have found only show how to do this by hand. Since I have 736 observations in the file I dont really want to do this.\nI suspect this is a simple matter, but I would be very grateful if someone could show me how to do this.\nThomas","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":7034,"Q_Id":3270952,"Users Score":2,"Answer":"You asked what the create(**dict(zip(fields, row))) line did.\nI don't know how to reply directly to your comment, so I'll try to answer it here.\nzip takes multiple lists as args and returns a list of their correspond elements as tuples. \nzip(list1, list2) => [(list1[0], list2[0]), (list1[1], list2[1]), .... ]\ndict takes a list of 2-element tuples and returns a dictionary mapping each tuple's first element (key) to its second element (value).\ncreate is a function that takes keyword arguments. You can use **some_dictionary to pass that dictionary into a function as keyword arguments. \ncreate(**{'name':'john', 'age':5}) => create(name='john', age=5)","Q_Score":13,"Tags":"python,django,sqlite","A_Id":3275298,"CreationDate":"2010-07-17T09:38:00.000","Title":"Populating a SQLite3 database from a .txt file with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm having all sorts of trouble trying to instal MySQLdb (1.2.2) on snow leopard. I am running python 2.5.1 and MySQL 5.1 32bit.\nPython and MySQL are running just fine.\nI've also installed django 1.2.1, although I don't think thats all that important, but wanted to give an idea of the stack i'm trying to install. 
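A fuller version of the create(**dict(zip(fields, row))) pattern explained above, for loading the MEP text file through the Django ORM. The Representative model and the myapp import are assumed for illustration; the field list mirrors the header row given in the question.

```python
# Load the comma-separated MEP file into the database via the ORM, pairing
# header fields with row values and expanding them as keyword arguments.
import csv

from myapp.models import Representative   # hypothetical app and model

FIELDS = ["parliament", "name", "country", "party_group",
          "national_party", "position"]

def load_representatives(path):
    with open(path, "rb") as f:            # "r", newline="" on Python 3
        reader = csv.reader(f)
        next(reader)                       # skip the header row
        for row in reader:
            values = [cell.strip() for cell in row]
            # dict(zip(...)) maps field names to row values; ** turns that
            # dict into keyword arguments for create().
            Representative.objects.create(**dict(zip(FIELDS, values)))
```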
I am using python 2.5.x as my webhost only has that version as an option and I want to be as close to my production env as possible.\nanyway...\nAfter following many of the existing articles and tutorials which mention modifying _mysql.c and setup_posix.py etc, I am still running into trouble.\nHere is my stack trace:\nxxxxxxx-mbp:MySQL-python-1.2.2 xxxxxxx$ sudo ARCHFLAGS=\"-arch x86_64\" python setup.py build\nrunning build\nrunning build_py\ncreating build\ncreating build\/lib.macosx-10.3-i386-2.5\ncopying _mysql_exceptions.py -> build\/lib.macosx-10.3-i386-2.5\ncreating build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncopying MySQLdb\/init.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncopying MySQLdb\/converters.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncopying MySQLdb\/connections.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncopying MySQLdb\/cursors.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncopying MySQLdb\/release.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncopying MySQLdb\/times.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\ncreating build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/init.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/CR.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/FIELD_TYPE.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/ER.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/FLAG.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/REFRESH.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\ncopying MySQLdb\/constants\/CLIENT.py -> build\/lib.macosx-10.3-i386-2.5\/MySQLdb\/constants\nrunning build_ext\nbuilding '_mysql' extension\ncreating build\/temp.macosx-10.3-i386-2.5\ngcc -isysroot \/Developer\/SDKs\/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -Dversion_info=(1,2,2,'final',0) -D__version__=1.2.2 -I\/usr\/local\/mysql-5.1.48-osx10.6-x86\/include -I\/Library\/Frameworks\/Python.framework\/Versions\/2.5\/include\/python2.5 -c _mysql.c -o build\/temp.macosx-10.3-i386-2.5\/_mysql.o -g -Os -arch i386 -fno-common -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL\nIn file included from \/Developer\/SDKs\/MacOSX10.4u.sdk\/usr\/include\/wchar.h:112,\n from \/Library\/Frameworks\/Python.framework\/Versions\/2.5\/include\/python2.5\/unicodeobject.h:118,\n from \/Library\/Frameworks\/Python.framework\/Versions\/2.5\/include\/python2.5\/Python.h:83,\n from pymemcompat.h:10,\n from _mysql.c:29:\n\/Developer\/SDKs\/MacOSX10.4u.sdk\/usr\/include\/stdarg.h:4:25: error: stdarg.h: No such file or directory\nIn file included from _mysql.c:35:\n\/usr\/local\/mysql-5.1.48-osx10.6-x86\/include\/my_config.h:1062:1: warning: \"HAVE_WCSCOLL\" redefined\nIn file included from \/Library\/Frameworks\/Python.framework\/Versions\/2.5\/include\/python2.5\/Python.h:8,\n from pymemcompat.h:10,\n from _mysql.c:29:\n\/Library\/Frameworks\/Python.framework\/Versions\/2.5\/include\/python2.5\/pyconfig.h:724:1: warning: this is the location of the previous definition\nerror: command 'gcc' failed with exit status 1\nDoes anyone have any ideas?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":461,"Q_Id":3285631,"Users Score":0,"Answer":"I tried to solve this one for days myself 
and finally gave up. \nI switched to postgres. It works pretty well with django on snow leopard,\nwith one minor problem. For some reason auto_inc pk ids don't get assigned\nto some models. I solved the problem by randomly assigning an id from a large\nrandom range, and relying on the unique column designation to prevent collisions.\nMy production server is linux. Mysql and postgres install fine on it.\nIn fact, many on the #django irc channel recommended running a virtual linux\ninstance on the mac to get around my mysql install problems on it.","Q_Score":2,"Tags":"python,mysql,django","A_Id":3285926,"CreationDate":"2010-07-19T22:34:00.000","Title":"Installing MySQLdb on Snow Leopard","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm playing around with a little web app in web.py, and am setting up a url to return a JSON object. What's the best way to convert a SQL table to JSON using python?","AnswerCount":14,"Available Count":1,"Score":0.0142847425,"is_accepted":false,"ViewCount":157377,"Q_Id":3286525,"Users Score":1,"Answer":"If you are using an MSSQL Server 2008 and above, you can perform your SELECT query to return json by using the FOR JSON AUTO clause E.G\nSELECT name, surname FROM users FOR JSON AUTO\nWill return Json as \n[{\"name\": \"Jane\",\"surname\": \"Doe\" }, {\"name\": \"Foo\",\"surname\": \"Samantha\" }, ..., {\"name\": \"John\", \"surname\": \"boo\" }]","Q_Score":54,"Tags":"python,sql,json","A_Id":55329857,"CreationDate":"2010-07-20T02:16:00.000","Title":"return SQL table as JSON in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 5 python cgi pages. I can navigate from one page to another. All pages get their data from the same database table just that they use different queries.\nThe problem is that the application as a whole is slow. Though they connect to the same database, each page creates a new handle every time I visit it and handles are not shared by the pages.\nI want to improve performance.\nCan I do that by setting up sessions for the user?\nSuggestions\/Advices are welcome.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":3289330,"Users Score":0,"Answer":"Django and Pylons are both frameworks that solve this problem quite nicely, namely by abstracting the DB-frontend integration. They are worth considering.","Q_Score":1,"Tags":"python,cgi","A_Id":3289546,"CreationDate":"2010-07-20T11:15:00.000","Title":"Improving performance of cgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to save an image file into sqlite database in python. I could not find a solution. How can I do it?\nThanks in advance.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":15288,"Q_Id":3309957,"Users Score":0,"Answer":"It's never a good idea to record raw types in databases. 
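For the "return SQL table as JSON in python" question above, a hedged sketch that works with any DB-API cursor rather than relying on server-side JSON support; the sqlite3 backend and the "users" table are assumptions.

    # Minimal sketch of returning a SQL table as JSON from Python.
    import json
    import sqlite3

    conn = sqlite3.connect("app.db")
    cur = conn.cursor()
    cur.execute("SELECT name, surname FROM users")

    # cursor.description carries the column names, so each row can be turned
    # into a dict and the whole result dumped as a JSON array of objects
    columns = [col[0] for col in cur.description]
    payload = json.dumps([dict(zip(columns, row)) for row in cur.fetchall()])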
Couldn't you just save the file on the filesystem, and record the path to it in database?","Q_Score":8,"Tags":"python,image,sqlite,blob,pysqlite","A_Id":3310034,"CreationDate":"2010-07-22T14:29:00.000","Title":"pysqlite - how to save images","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to save an image file into sqlite database in python. I could not find a solution. How can I do it?\nThanks in advance.","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":15288,"Q_Id":3309957,"Users Score":11,"Answer":"write - cursor.execute('insert into File \n(id, name, bin) values (?,?,?)', (id, name, sqlite3.Binary(file.read())))\nread - file = cursor.execute('select bin from File where id=?', (id,)).fetchone()\nif you need to return bin data in web app - return cStringIO.StringIO(file['bin'])","Q_Score":8,"Tags":"python,image,sqlite,blob,pysqlite","A_Id":3310995,"CreationDate":"2010-07-22T14:29:00.000","Title":"pysqlite - how to save images","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a tricky question, we've been talking about this for a while (days) and haven't found a convincingly good solution. This is the situation:\n\nWe have users and groups. A user can belong to many groups (many to many relation)\nThere are certain parts of the site that need access control, but:\nThere are certain ROWS of certain tables that need access control, ie. a certain user (or certain group) should not be able to delete a certain row, but other rows of the same table could have a different permission setting for that user (or group)\n\nIs there an easy way to acomplish this? Are we missing something?\nWe need to implement this in python (if that's any help).","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":214,"Q_Id":3327279,"Users Score":2,"Answer":"This problem is not really new; it's basically the general problem of authorization and access rights\/control.\nIn order to avoid having to model and maintain a complete graph of exactly what objects each user can access in each possible way, you have to make decisions (based on what your application does) about how to start reigning in the multiplicative scale factors. So first: where do users get their rights? If each user is individually assigned rights, you're going to pose a significant ongoig management challenge to whoever needs to add users, modify users, etc.\nPerhaps users can get their rights from the groups they're members of. Now you have a scale factor that simplifies management and makes the system easier to understand. Changing a group changes the effective rights for all users who are members.\nNow, what do these rights look like? It's still probably not wise to assign rights on a target object by object basis. Thus maybe rights should be thought of as a set of abstract \"access cards\". Objects in the system can be marked as requiring \"blue\" access for read, \"red\" access for update, and \"black\" access for delete. 
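Expanding the accepted pysqlite answer above into a runnable sketch for storing and reading an image BLOB; the table name and file names are assumptions.

    # Hedged sketch: saving and loading an image as a BLOB with sqlite3.
    import sqlite3

    conn = sqlite3.connect("images.db")
    conn.execute("CREATE TABLE IF NOT EXISTS File (id INTEGER PRIMARY KEY, name TEXT, bin BLOB)")

    # write: wrap the raw bytes in sqlite3.Binary so they are stored as a BLOB
    with open("photo.png", "rb") as f:
        conn.execute("INSERT INTO File (id, name, bin) VALUES (?, ?, ?)",
                     (1, "photo.png", sqlite3.Binary(f.read())))
    conn.commit()

    # read: fetch the BLOB back and write it out (or wrap it in StringIO/BytesIO
    # if it has to be served from a web app, as the answer suggests)
    data = conn.execute("SELECT bin FROM File WHERE id = ?", (1,)).fetchone()[0]
    with open("copy.png", "wb") as out:
        out.write(data)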
Those abstract rights might be arranged in some sort of topology, such that having \"black\" access means you implicitly also have \"red\" and \"blue\", or maybe they're all disjoint; it's up to you and how your application has to work. (Note also that you may want to consider that object types \u2014 tables, if you like \u2014 may need their own access rules, at least for \"create\".\nBy introducing collection points in the graph pictures you draw relating actors in the system to objects they act upon, you can handle scale issues and keep the complexity of authorization under control. It's never easy, however, and often it's the case that voiced customer desires result in something that will never work out and never in fact achieve what the customer (thinks she) wants.\nThe implementation language doesn't have a lot to do with the architectural decisions you need to make.","Q_Score":2,"Tags":"python,access-control","A_Id":3327313,"CreationDate":"2010-07-24T23:11:00.000","Title":"Control access to parts of a system, but also to certain pieces of information","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is a tricky question, we've been talking about this for a while (days) and haven't found a convincingly good solution. This is the situation:\n\nWe have users and groups. A user can belong to many groups (many to many relation)\nThere are certain parts of the site that need access control, but:\nThere are certain ROWS of certain tables that need access control, ie. a certain user (or certain group) should not be able to delete a certain row, but other rows of the same table could have a different permission setting for that user (or group)\n\nIs there an easy way to acomplish this? Are we missing something?\nWe need to implement this in python (if that's any help).","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":214,"Q_Id":3327279,"Users Score":0,"Answer":"It's hard to be specific without knowing more about your setup and about why exactly you need different users to have different permissions on different rows. But generally, I would say that whenever you access any data in the database in your code, you should precede it by an authorization check, which examines the current user and group and the row being inserted\/updated\/deleted\/etc. and decides whether the operation should be allowed or not. Consider designing your system in an encapsulated manner - for example you could put all the functions that directly access the database in one module, and make sure that each of them contains the proper authorization check. (Having them all in one file makes it less likely that you'll miss one)\nIt might be helpful to add a permission_class column to the table, and have another table specifying which users or groups have which permission classes. 
Then your authorization check simply has to take the value of the permission class for the current row, and see if the permissions table contains an association between that permission class and either the current user or any of his\/her groups.","Q_Score":2,"Tags":"python,access-control","A_Id":3327325,"CreationDate":"2010-07-24T23:11:00.000","Title":"Control access to parts of a system, but also to certain pieces of information","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is a tricky question, we've been talking about this for a while (days) and haven't found a convincingly good solution. This is the situation:\n\nWe have users and groups. A user can belong to many groups (many to many relation)\nThere are certain parts of the site that need access control, but:\nThere are certain ROWS of certain tables that need access control, ie. a certain user (or certain group) should not be able to delete a certain row, but other rows of the same table could have a different permission setting for that user (or group)\n\nIs there an easy way to acomplish this? Are we missing something?\nWe need to implement this in python (if that's any help).","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":214,"Q_Id":3327279,"Users Score":0,"Answer":"Add additional column \"category\" or \"type\" to the table(s), that will categorize the rows (or if you will, group\/cluster them) - and then create a pivot table that defines the access control between (rowCategory, userGroup). So for each row, by its category you can pull which userGroups have access (and what kind of access).","Q_Score":2,"Tags":"python,access-control","A_Id":3327726,"CreationDate":"2010-07-24T23:11:00.000","Title":"Control access to parts of a system, but also to certain pieces of information","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using MongoDB an nosql database. Basically as a result of a query I have a list of dicts which themselves contains lists of dictionaries... which I need to work with.\nUnfortunately dealing with all this data within Python can be brought to a crawl when the data is too much.\n\nI have never had to deal with this problem, and it would be great if someone with experience could give a few suggestions. =)","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":406,"Q_Id":3330668,"Users Score":1,"Answer":"Are you loading all the data into memory at once? If so you could be causing the OS to swap memory to disk, which can bring any system to a crawl. Dictionaries are hashtables so even an empty dict will use up a lot of memory, and from what you say you are creating a lot of them at once. I don't know the MongoDB API, but I presume there is a way of iterating through the results one at a time instead of reading in the entire set of result at once - try using that. Or rewrite your query to return a subset of the data.\nIf disk swapping is not the problem then profile the code to see what the bottleneck is, or put some sample code in your question. 
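A rough sketch of the row-level check described in the answer above: each row carries a permission_class, and a separate permissions table maps (permission_class, group, action) to an allowed operation. All table and column names here are assumptions.

    # Hypothetical authorization check for one row and one action.
    def can_delete(cursor, user_groups, row_permission_class):
        # user_groups: ids of the groups the current user belongs to
        if not user_groups:
            return False
        placeholders = ",".join("?" for _ in user_groups)
        cursor.execute(
            "SELECT 1 FROM permissions "
            "WHERE permission_class = ? AND action = 'delete' "
            "AND group_id IN (%s)" % placeholders,
            [row_permission_class] + list(user_groups))
        return cursor.fetchone() is not None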
Without more specific information it is hard to give a more specific answer.","Q_Score":0,"Tags":"python,parsing,list,sorting,dictionary","A_Id":3333193,"CreationDate":"2010-07-25T19:23:00.000","Title":"Speeding up parsing of HUGE lists of dictionaries - Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using MongoDB an nosql database. Basically as a result of a query I have a list of dicts which themselves contains lists of dictionaries... which I need to work with.\nUnfortunately dealing with all this data within Python can be brought to a crawl when the data is too much.\n\nI have never had to deal with this problem, and it would be great if someone with experience could give a few suggestions. =)","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":406,"Q_Id":3330668,"Users Score":3,"Answer":"Do you really want all of that data back in your Python program? If so fetch it back a little at a time, but if all you want to do is summarise the data then use mapreduce in MongoDB to distribute the processing and just return the summarised data.\nAfter all, the point about using a NoSQL database that cleanly shards all the data across multiple machines is precisely to avoid having to pull it all back onto a single machine for processing.","Q_Score":0,"Tags":"python,parsing,list,sorting,dictionary","A_Id":3333236,"CreationDate":"2010-07-25T19:23:00.000","Title":"Speeding up parsing of HUGE lists of dictionaries - Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was using Python 2.6.5 to build my application, which came with sqlite3 3.5.9. Apparently though, as I found out in another question of mine, foreign key support wasn't introduced in sqlite3 until version 3.6.19. However, Python 2.7 comes with sqlite3 3.6.21, so this work -- I decided I wanted to use foreign keys in my application, so I tried upgrading to python 2.7.\nI'm using twisted, and I couldn't for the life of me get it to build. Twisted relies on zope.interface and I can't find zope.interface for python 2.7 -- I thought it might just \"work\" anyway, but I'd have to just copy all the files over myself, and get everything working myself, rather than just using the self-installing packages.\nSo I thought it might be wiser to just re-build python 2.6 and link it against a new version of sqlite3. 
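A hedged sketch of the iteration advice above: walk the MongoDB cursor instead of materialising every document in memory at once. It assumes a reasonably recent pymongo and a database/collection named "mydb"/"events".

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["events"]

    total = 0
    # the cursor fetches documents lazily, batch by batch, so memory use stays
    # bounded no matter how many documents match the query
    for doc in coll.find({"status": "open"}).batch_size(500):
        total += len(doc.get("items", []))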
But I don't know how--\nHow would I do this?\nI have Visual Studio 2008 installed as a compiler, I read that that is the only one that is really supported for Windows, and I am running a 64 bit operating system","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":4281,"Q_Id":3333095,"Users Score":6,"Answer":"download the latest version of sqlite3.dll from sqlite website and replace the the sqlite3.dll in the python dir.","Q_Score":9,"Tags":"python,build,linker,sqlite","A_Id":3341117,"CreationDate":"2010-07-26T07:54:00.000","Title":"How can I upgrade the sqlite3 package in Python 2.6?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was using Python 2.6.5 to build my application, which came with sqlite3 3.5.9. Apparently though, as I found out in another question of mine, foreign key support wasn't introduced in sqlite3 until version 3.6.19. However, Python 2.7 comes with sqlite3 3.6.21, so this work -- I decided I wanted to use foreign keys in my application, so I tried upgrading to python 2.7.\nI'm using twisted, and I couldn't for the life of me get it to build. Twisted relies on zope.interface and I can't find zope.interface for python 2.7 -- I thought it might just \"work\" anyway, but I'd have to just copy all the files over myself, and get everything working myself, rather than just using the self-installing packages.\nSo I thought it might be wiser to just re-build python 2.6 and link it against a new version of sqlite3. But I don't know how--\nHow would I do this?\nI have Visual Studio 2008 installed as a compiler, I read that that is the only one that is really supported for Windows, and I am running a 64 bit operating system","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4281,"Q_Id":3333095,"Users Score":1,"Answer":"I decided I'd just give this a shot when I realized that every library I've ever installed in python 2.6 resided in my site-packages folder. I just... copied site-packages to my 2.7 installation, and it works so far. 
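A quick sanity check (a sketch) that a swapped-in SQLite library actually took effect: sqlite3.sqlite_version reports the C library, not the Python wrapper, and foreign keys still need an explicit pragma even on new versions.

    import sqlite3

    print(sqlite3.version)         # version of the pysqlite wrapper
    print(sqlite3.sqlite_version)  # version of the underlying SQLite library

    # foreign key enforcement needs SQLite >= 3.6.19 *and* an explicit pragma
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    print(conn.execute("PRAGMA foreign_keys").fetchone())  # (1,) when supported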
This is by far the easiest route for me if this works -- I'll look further into it but at least I can continue to develop now.\nI won't accept this answer, because it doesn't even answer my question, but it does solve my problem, as far as I can tell so far.","Q_Score":9,"Tags":"python,build,linker,sqlite","A_Id":3333348,"CreationDate":"2010-07-26T07:54:00.000","Title":"How can I upgrade the sqlite3 package in Python 2.6?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some MySQL database server information that needs to be shared between a Python backend and a PHP frontend.\nWhat is the best way to go about storing the information in a manner wherein it can be read easily by Python and PHP?\nI can always brute force it with a bunch of str.replace() calls in Python and hope it works if nobody has a solution, or I can just maintain two separate files, but it would be a bunch easier if I could do this automatically.\nI assume it would be easiest to store the variables in PHP format directly and do conversions in Python, and I know there exist Python modules for serializing and unserializing PHP, but I haven't been able to get it all figured out.\nAny help is appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":200,"Q_Id":3349445,"Users Score":4,"Answer":"Store the shared configuration in a plain text file, preferably in a standard format.\nYou might consider yaml, ini, or json. \nI'm pretty sure both PHP and python can very trivially read and parse all three of those formats.","Q_Score":0,"Tags":"php,python,mysql,variables,share","A_Id":3349485,"CreationDate":"2010-07-28T02:23:00.000","Title":"Python - PHP Shared MySQL server connection info?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I store groups of entities in the google app engine Data Store with the same ancestor\/parent\/entityGroup. This is so that the entities can be updated in one atomic datastore transaction.\nThe problem is as follows:\n\nI start a db transaction\nI update entityX by setting entityX.flag = True\nI save entityX\nI query for entity where flag == True. BUT, here is the problem. This query does NOT return any results. It should have returned entityX, but it did not.\n\nWhen I remove the transaction, my code works perfectly, so it must be the transaction that is causing this strange behavior.\nShould updates to entities in the entity group not be visible elsewhere in the same transaction?\nPS: I am using Python. And GAE tells me I can't use nested transactions :(","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":3350068,"Users Score":0,"Answer":"Looks like you are not doing a commit on the transaction before querying\n\nstart a db transaction\nupdate entityX by setting entityX.flag = True\nsave entityX\nCOMMIT TRANSACTION\nquery for entity where flag == True. BUT, here is the problem. This query does NOT return any results. 
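A sketch of the accepted shared-configuration answer above: keep the MySQL settings in a plain JSON file that both sides read. The file name and keys are assumptions; PHP would load the same file with json_decode(file_get_contents(...)).

    import json
    import MySQLdb

    with open("db_config.json") as f:
        cfg = json.load(f)

    conn = MySQLdb.connect(host=cfg["host"], user=cfg["user"],
                           passwd=cfg["password"], db=cfg["database"])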
It should have returned entityX, but it did not.\n\nIn a transaction, entities will not be persisted until the transaction is commited","Q_Score":2,"Tags":"python,google-app-engine","A_Id":3350082,"CreationDate":"2010-07-28T05:06:00.000","Title":"On the google app engine, why do updates not reflect in a transaction?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"We need to bulk load many long strings (>4000 Bytes, but <10,000 Bytes) using cx_Oracle. The data type in the table is CLOB. We will need to load >100 million of these strings. Doing this one by one would suck. Doing it in a bulk fashion, ie using cursor.arrayvar() would be ideal. However, CLOB does not support arrays. BLOB, LOB, LONG_STRING LONG_RAW don't either. Any help would be greatly appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1252,"Q_Id":3358666,"Users Score":0,"Answer":"In the interest of getting shit done that is good enough, we did the abuse of the CLOB I mentioned in my comment. It took less than 30 minutes to get coded up, runs fast and works.","Q_Score":0,"Tags":"python,oracle,cx-oracle","A_Id":3373296,"CreationDate":"2010-07-29T00:41:00.000","Title":"Passing an array of long strings ( >4000 bytes) to an Oracle (11gR2) stored procedure using cx_Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am connecting to an MS SQL Server db from Python in Linux. I am connecting via pyodbc using the FreeTDS driver. When I return a money field from MSSQL it comes through as a float, rather than a Python Decimal.\nThe problem is with FreeTDS. If I run the exact same Python code from Windows (where I do not need to use FreeTDS), pyodbc returns a Python Decimal.\nHow can I get back a Python Decimal when I'm running the code in Linux?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1284,"Q_Id":3371795,"Users Score":1,"Answer":"You could always just convert it to Decimal when it comes back...","Q_Score":0,"Tags":"python,sql-server,pyodbc,freetds","A_Id":3372035,"CreationDate":"2010-07-30T13:19:00.000","Title":"FreeTDS translating MS SQL money type to python float, not Decimal","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I set up Mysql5, mysql5-server and py26-mysql using Macports. I then started the mysql server and was able to start the prompt with mysql5\nIn my settings.py i changed database_engine to \"mysql\" and put \"dev.db\" in database_name.\nI left the username and password blank as the database doesnt exist yet.\nWhen I ran python manage.py syncdb, django raised an error\n'django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dynamic module does not define init function (init_mysql)`\nHow do I fix this? Do I have to create the database first? 
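A hedged sketch of the FreeTDS money workaround suggested above: convert the float back to a Decimal as soon as it leaves the cursor; going through str() avoids dragging the float's binary noise along. The sample rows stand in for a real cursor.fetchall().

    from decimal import Decimal

    def money_to_decimal(value):
        # SQL Server money has 4 decimal places
        return Decimal(str(value)).quantize(Decimal("0.0001"))

    rows = [(1, 19.99), (2, 0.1)]          # stand-in for cursor.fetchall()
    rows = [(rid, money_to_decimal(amount)) for rid, amount in rows]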
is it something else?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5039,"Q_Id":3376673,"Users Score":1,"Answer":"syncdb will not create a database for you -- it only creates tables that don't already exist in your schema. You need to:\n\nCreate a user to 'own' the database (root is a bad choice).\nCreate the database with that user.\nUpdate the Django database settings with the correct database name, user, and password.","Q_Score":0,"Tags":"python,mysql,django","A_Id":3377350,"CreationDate":"2010-07-31T03:34:00.000","Title":"Django MySql setup","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I had my sqlalchemy related code in my main() method in my script.\nBut then when I created a function, I wasn't able to reference my 'products' mapper because it was in the main() method.\nShould I be putting the sqlalchemy related code (session, mapper, and classes) in global scope so all functions in my single file script can refer to it?\nI was told a script is usually layout out as:\nglobals\nfunctions\nclasses\nmain\nBut if I put sqlalchemy at the top to make it global, I have to move my classes to the top also.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":3382739,"Users Score":2,"Answer":"Typical approach is to define all mappings in separate model module, with one file per class\/table.\nThen you just import needed classes whenever need them.","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":3382810,"CreationDate":"2010-08-01T16:20:00.000","Title":"Where to put my sqlalchemy code in my script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say I have a database table which consists of three columns: id, field1 and field2. This table may have anywhere between 100 and 100,000 rows in it. I have a python script that should insert 10-1,000 new rows into this table. However, if the new field1 already exists in the table, it should do an UPDATE, not an INSERT.\nWhich of the following approaches is more efficient?\n\nDo a SELECT field1 FROM table (field1 is unique) and store that in a list. Then, for each new row, use list.count() to determine whether to INSERT or UPDATE\nFor each row, run two queries. Firstly, SELECT count(*) FROM table WHERE field1=\"foo\" then either the INSERT or UPDATE.\n\nIn other words, is it more efficient to perform n+1 queries and search a list, or 2n queries and get sqlite to search?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2505,"Q_Id":3404556,"Users Score":0,"Answer":"You appear to be comparing apples with oranges.\nA python list is only useful if your data fit into the address-space of the process. 
Once the data get big, this won't work any more.\nMoreover, a python list is not indexed - for that you should use a dictionary.\nFinally, a python list is non-persistent - it is forgotten when the process quits.\nHow can you possibly compare these?","Q_Score":2,"Tags":"python,performance,sqlite","A_Id":3536835,"CreationDate":"2010-08-04T10:18:00.000","Title":"Python performance: search large list vs sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say I have a database table which consists of three columns: id, field1 and field2. This table may have anywhere between 100 and 100,000 rows in it. I have a python script that should insert 10-1,000 new rows into this table. However, if the new field1 already exists in the table, it should do an UPDATE, not an INSERT.\nWhich of the following approaches is more efficient?\n\nDo a SELECT field1 FROM table (field1 is unique) and store that in a list. Then, for each new row, use list.count() to determine whether to INSERT or UPDATE\nFor each row, run two queries. Firstly, SELECT count(*) FROM table WHERE field1=\"foo\" then either the INSERT or UPDATE.\n\nIn other words, is it more efficient to perform n+1 queries and search a list, or 2n queries and get sqlite to search?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2505,"Q_Id":3404556,"Users Score":0,"Answer":"I imagine using a python dictionary would allow for much faster searching than using a python list. (Just set the values to 0, you won't need them, and hopefully a '0' stores compactly.)\nAs for the larger question, I'm curious too. :)","Q_Score":2,"Tags":"python,performance,sqlite","A_Id":3404589,"CreationDate":"2010-08-04T10:18:00.000","Title":"Python performance: search large list vs sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally. \ncan i continue to use python on a 2gig file or should i move the data into a database?","AnswerCount":5,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":1357,"Q_Id":3419624,"Users Score":4,"Answer":"I don't know exactly what you are doing. But a database will just change how the data is stored. and in fact it might take longer since most reasonable databases may have constraints put on columns and additional processing for the checks. In many cases having the whole file local, going through and doing calculations is going to be more efficient than querying and writing it back to the database (subject to disk speeds, network and database contention, etc...). But in some cases the database may speed things up, especially because if you do indexing it is easy to get subsets of the data.\nAnyway you mentioned logs, so before you go database crazy I have the following ideas for you to check out. 
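A sketch of option 1 from the insert-or-update question above, using a set (O(1) membership tests) rather than a list; "table1" stands in for the question's table name, since "table" itself is a reserved word.

    import sqlite3

    conn = sqlite3.connect("data.db")
    existing = set(row[0] for row in conn.execute("SELECT field1 FROM table1"))

    new_rows = [("foo", "bar"), ("baz", "qux")]   # assumed incoming data
    for field1, field2 in new_rows:
        if field1 in existing:
            conn.execute("UPDATE table1 SET field2 = ? WHERE field1 = ?", (field2, field1))
        else:
            conn.execute("INSERT INTO table1 (field1, field2) VALUES (?, ?)", (field1, field2))
            existing.add(field1)
    conn.commit()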
Anyway I'm not sure if you have to keep going through every log since the beginning of time to download charts and you expect it to grow to 2 GB or if eventually you are expecting 2 GB of traffic per day\/week.\n\nARCHIVING -- you can archive old logs, say every few months. Copy the production logs to an archive location and clear the live logs out. This will keep the file size reasonable. If you are wasting time accessing the file to find the small piece you need then this will solve your issue.\nYou might want to consider converting to Java or C. Especially on loops and calculations you might see a factor of 30 or more speedup. This will probably reduce the time immediately. But over time as data creeps up, some day this will slow down as well. if you have no bound on the amount of data, eventually even hand optimized Assembly by the world's greatest programmer will be too slow. But it might give you 10x the time...\nYou also may want to think about figuring out the bottleneck (is it disk access, is it cpu time) and based on that figuring out a scheme to do this task in parallel. If it is processing, look into multi-threading (and eventually multiple computers), if it is disk access consider splitting the file among multiple machines...It really depends on your situation. But I suspect archiving might eliminate the need here.\nAs was suggested, if you are doing the same calculations over and over again, then just store them. Whether you use a database or a file this will give you a huge speedup. \nIf you are downloading stuff and that is a bottleneck, look into conditional gets using the if modified request. Then only download changed items. If you are just processing new charts then ignore this suggestion.\nOh and if you are sequentially reading a giant log file, looking for a specific place in the log line by line, just make another file storing the last file location you worked with and then do a seek each run.\nBefore an entire database, you may want to think of SQLite.\nFinally a \"couple of years\" seems like a long time in programmer time. Even if it is just 2, a lot can change. Maybe your department\/division will be laid off. Maybe you will have moved on and your boss. Maybe the system will be replaced by something else. Maybe there will no longer be a need for what you are doing. If it was 6 months I'd say fix it. but for a couple of years, in most cases, I'd say just use the solution you have now and once it gets too slow then look to do something else. You could make a comment in the code with your thoughts on the issue and even an e-mail to your boss so he knows it as well. But as long as it works and will continue doing so for a reasonable amount of time, I would consider it \"done\" for now. No matter what solution you pick, if data grows unbounded you will need to reconsider it. Adding more machines, more disk space, new algorithms\/systems\/developments. Solving it for a \"couple of years\" is probably pretty good.","Q_Score":4,"Tags":"python,sql","A_Id":3419835,"CreationDate":"2010-08-05T22:13:00.000","Title":"python or database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. 
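A small sketch of the "store the last file location and seek" idea from the answer above, so each run only processes lines appended since the previous run; the offset file name is an assumption.

    import os

    OFFSET_FILE = "lastpos.txt"

    def process_new_lines(path, handle_line):
        start = 0
        if os.path.exists(OFFSET_FILE):
            start = int(open(OFFSET_FILE).read().strip() or 0)
        with open(path) as f:
            f.seek(start)
            for line in f:
                handle_line(line)
            # remember where we stopped so the next run can resume here
            with open(OFFSET_FILE, "w") as out:
                out.write(str(f.tell()))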
the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally. \ncan i continue to use python on a 2gig file or should i move the data into a database?","AnswerCount":5,"Available Count":5,"Score":0.0798297691,"is_accepted":false,"ViewCount":1357,"Q_Id":3419624,"Users Score":2,"Answer":"I always reach for a database for larger datasets. \nA database gives me some stuff for \"free\"; that is, I don't have to code it.\n\nsearching \nsorting\nindexing\nlanguage-independent connections\n\nSomething like SQLite might be the answer for you. \nAlso, you should investigate the \"nosql\" databases; it sounds like your problem might fit well into one of them.","Q_Score":4,"Tags":"python,sql","A_Id":3419871,"CreationDate":"2010-08-05T22:13:00.000","Title":"python or database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally. \ncan i continue to use python on a 2gig file or should i move the data into a database?","AnswerCount":5,"Available Count":5,"Score":0.1586485043,"is_accepted":false,"ViewCount":1357,"Q_Id":3419624,"Users Score":4,"Answer":"If you need to go through all lines each time you perform the \"fiddling\" it wouldn't really make much difference, assuming the actual \"fiddling\" is whats eating your cycles.\nPerhaps you could store the results of your calculations somehow, then a database would probably be nice. Also, databases have methods for ensuring data integrity and stuff like that, so a database is often a great place for storing large sets of data (duh! ;)).","Q_Score":4,"Tags":"python,sql","A_Id":3419726,"CreationDate":"2010-08-05T22:13:00.000","Title":"python or database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally. 
\ncan i continue to use python on a 2gig file or should i move the data into a database?","AnswerCount":5,"Available Count":5,"Score":0.1586485043,"is_accepted":false,"ViewCount":1357,"Q_Id":3419624,"Users Score":4,"Answer":"I'd only put it into a relational database if:\n\nThe data is actually relational and expressing it that way helps shrink the size of the data set by normalizing it.\nYou can take advantage of triggers and stored procedures to offload some of the calculations that your Python code is performing now.\nYou can take advantage of queries to only perform calculations on data that's changed, cutting down on the amount of work done by Python.\n\nIf neither of those things is true, I don't see much difference between a database and a file. Both ultimately have to be stored on the file system.\nIf Python has to process all of it, and getting it into memory means loading an entire data set, then there's no difference between a database and a flat file.\n2GB of data in memory could mean page swapping and thrashing by your application. I would be careful and get some data before I blamed the problem on the file. Just because you access the data from a database won't solve a paging problem.\nIf your data's flat, I see less advantage in a database, unless \"flat\" == \"highly denormalized\".\nI'd recommend some profiling to see what's consuming CPU and memory before I made a change. You're guessing about the root cause right now. Better to get some data so you know where the time is being spent.","Q_Score":4,"Tags":"python,sql","A_Id":3419718,"CreationDate":"2010-08-05T22:13:00.000","Title":"python or database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am reading a csv file into a list of a list in python. it is around 100mb right now. in a couple of years that file will go to 2-5gigs. i am doing lots of log calculations on the data. the 100mb file is taking the script around 1 minute to do. after the script does a lot of fiddling with the data, it creates URL's that point to google charts and then downloads the charts locally. \ncan i continue to use python on a 2gig file or should i move the data into a database?","AnswerCount":5,"Available Count":5,"Score":0.0399786803,"is_accepted":false,"ViewCount":1357,"Q_Id":3419624,"Users Score":1,"Answer":"At 2 gigs, you may start running up against speed issues. I work with model simulations for which it calls hundreds of csv files and it takes about an hour to go through 3 iterations, or about 20 minutes per loop. \nThis is a matter of personal preference, but I would go with something like PostGreSql because it integrates the speed of python with the capacity of a sql-driven relational database. I encountered the same issue a couple of years ago when my Access db was corrupting itself and crashing on a daily basis. It was either MySQL or PostGres and I chose Postgres because of its python friendliness. Not to say MySQL would not work with Python, because it does, which is why I say its personal preference. 
\nHope that helps with your decision-making!","Q_Score":4,"Tags":"python,sql","A_Id":3419687,"CreationDate":"2010-08-05T22:13:00.000","Title":"python or database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an sqlite database whose data I need to transfer over the network, the server needs to modify the data, and then I need to get the db back and either update my local version or overwrite it with the new db. How should I do this? My coworker at first wanted to scrap the db and just use an .ini file, but this is going to be data that we have to parse pretty frequently (it's a user defined schedule that can change at the user's will, as well as the server's). I said we should just transfer the entire .db as a binary file and let them do with it what they will and then take it back. Or is there a way in sqlite to dump the db to a .sql file like you can do in MySQL so we can transfer it as text?\nAny other solutions? This is in python if it makes a difference\nupdate: This is on an embedded platform running linux (I'm not sure what version\/kernel or what OS commands we have except the basics that are obvious)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":471,"Q_Id":3451708,"Users Score":3,"Answer":"Use the copy command in your OS. No reason to overthink this.","Q_Score":0,"Tags":"python,sqlite,embedded,binary-data","A_Id":3451733,"CreationDate":"2010-08-10T17:29:00.000","Title":"Sending sqlite db over network","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to copy an excel sheet with python, but I keep getting \"access denied\" error message. The file is closed and is not shared. It has macros though.\nIs their anyway I can copy the file forcefully with python?\nthanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":94,"Q_Id":3465231,"Users Score":0,"Answer":"If you do not have sufficient file permissions you will not be able to access the file. In that case you will have to execute your Python program as an user with sufficient permissions.\nIf on the other hand the file is locked using other means specific to Excel then I am not sure what exactly is the solution. You might have to work around the protection using other means which will require a fair amount of understanding of how Excel sheets are \"locked\". I don't know of any Python libraries that will do this for you.","Q_Score":0,"Tags":"python,excel-2003","A_Id":3466751,"CreationDate":"2010-08-12T06:32:00.000","Title":"Copying a file with access locks, forcefully with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there a python ORM (object relational mapper) that has a tool for automatically creating python classes (as code so I can expand them) from a given database schema?\nI'm frequently faced with small tasks involving different databases (like importing\/exporting from various sources etc.) 
and I thought python together with the abovementioned tool would be perfect for that.\nIt should work like Visual Studios ADO.NET\/Linq for SQL designer, where I can just drop DB tables and VS creates classes for me ...\nThanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1411,"Q_Id":3478780,"Users Score":3,"Answer":"You do not need to produce a source code representation of your classes to be able to expand them.\nThe only trick is that you need the ORM to generate the classes BEFORE importing the module that defines the derived classes.\nEven better, don't use derivation, but use __getattr__ and __setattr__ to implement transparent delegation to the ORM classes.","Q_Score":2,"Tags":"python,orm,code-generation","A_Id":3481115,"CreationDate":"2010-08-13T16:18:00.000","Title":"Python ORM that automatically creates classes from DB schema","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a couple of sqlite dbs (i'd say about 15GBs), with about 1m rows in total - so not super big. I was looking at mongodb, and it looks pretty easy to work with, especially if I want to try and do some basic natural language processing on the documents which make up the databases.\nI've never worked with Mongo in the past, no would have to learn from scratch (will be working in python). After googling around a bit, I came across a number of somewhat horrific stories about Mongodb re. reliability. Is this still a major problem ? In a crunch, I will of course retain the sqlite backups, but I'd rather not have to reconstruct my mongo databases constantly.\nJust wondering what sort data corruption issues people have actually faced recently with Mongo ? Is this a big concern?\nThanks!","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":3302,"Q_Id":3487456,"Users Score":10,"Answer":"As others have said, MongoDB does not have single-server durability right now. Fortunately, it's dead easy to set up multi-node replication. You can even set up a second machine in another data center and have data automatically replicated to it live!\nIf a write must succeed, you can cause Mongo to not return from an insert\/update until that data has been replicated to n slaves. This ensures that you have at least n copies of the data. Replica sets allow you to add and remove nodes from your cluster on the fly without any significant work; just add a new node and it'll automatically sync a copy of the data. Remove a node and the cluster rebalances itself. It is very much designed to be used across multiple machines, with multiple nodes acting in parallel; this is it's preferred default setup, compared to something like MySQL, which expects one giant machine to do its work on, which you can then pair slaves against when you need to scale out. 
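Related to the ORM-from-schema question above: a hedged sketch (classic, pre-2.0 SQLAlchemy API) of letting SQLAlchemy reflect a table definition straight out of an existing database and map a plain class onto it, instead of generating class source code. The connection URL and table name are assumptions.

    from sqlalchemy import create_engine, MetaData, Table
    from sqlalchemy.orm import mapper, sessionmaker

    engine = create_engine("sqlite:///legacy.db")
    metadata = MetaData()
    users = Table("users", metadata, autoload=True, autoload_with=engine)

    class User(object):
        pass

    mapper(User, users)   # classic mapping onto the reflected table

    session = sessionmaker(bind=engine)()
    print(session.query(User).count())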
It's a different approach to data storage and scaling, but a very comfortable one if you take the time to understand its difference in assumptions, and how to build an architecture that capitalizes on its strengths.","Q_Score":12,"Tags":"python,sqlite,mongodb","A_Id":3491117,"CreationDate":"2010-08-15T13:00:00.000","Title":"Mongodb - are reliability issues significant still?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a couple of sqlite dbs (i'd say about 15GBs), with about 1m rows in total - so not super big. I was looking at mongodb, and it looks pretty easy to work with, especially if I want to try and do some basic natural language processing on the documents which make up the databases.\nI've never worked with Mongo in the past, no would have to learn from scratch (will be working in python). After googling around a bit, I came across a number of somewhat horrific stories about Mongodb re. reliability. Is this still a major problem ? In a crunch, I will of course retain the sqlite backups, but I'd rather not have to reconstruct my mongo databases constantly.\nJust wondering what sort data corruption issues people have actually faced recently with Mongo ? Is this a big concern?\nThanks!","AnswerCount":5,"Available Count":3,"Score":0.1194272985,"is_accepted":false,"ViewCount":3302,"Q_Id":3487456,"Users Score":3,"Answer":"Mongo does not have ACID properties, specifically durability. So you can face issues if the process does not shut down cleanly or the machine loses power. You are supposed to implement backups and redundancy to handle that.","Q_Score":12,"Tags":"python,sqlite,mongodb","A_Id":3488244,"CreationDate":"2010-08-15T13:00:00.000","Title":"Mongodb - are reliability issues significant still?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a couple of sqlite dbs (i'd say about 15GBs), with about 1m rows in total - so not super big. I was looking at mongodb, and it looks pretty easy to work with, especially if I want to try and do some basic natural language processing on the documents which make up the databases.\nI've never worked with Mongo in the past, no would have to learn from scratch (will be working in python). After googling around a bit, I came across a number of somewhat horrific stories about Mongodb re. reliability. Is this still a major problem ? In a crunch, I will of course retain the sqlite backups, but I'd rather not have to reconstruct my mongo databases constantly.\nJust wondering what sort data corruption issues people have actually faced recently with Mongo ? Is this a big concern?\nThanks!","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":3302,"Q_Id":3487456,"Users Score":2,"Answer":"I don't see the problem if you have the same data also in the sqlite backups. You can always refill your MongoDb databases. 
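A hedged sketch of the durability point above, assuming pymongo 3+ talking to a replica set named "rs0": with w=2 a write is not acknowledged until it has reached at least one secondary, and j=True additionally asks for a journal flush.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0", w=2, j=True)
    events = client["mydb"]["events"]
    events.insert_one({"type": "signup", "user": "alice"})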
Refilling will only take a few minutes.","Q_Score":12,"Tags":"python,sqlite,mongodb","A_Id":3490547,"CreationDate":"2010-08-15T13:00:00.000","Title":"Mongodb - are reliability issues significant still?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to encrypt a string using RSA algorithm and then store that string into postgres database using SQLAlchemy in python. Then Retrieve the encrypted string and decrypt it using the same key. My problem is that the value gets stored in the database is not same as the actual encrypted string. The datatype of column which is storing the encrypted value is bytea. I am using pycrypto library. Do I need to change the data in a particular format before inserting it to database table?\nAny suggestions please.\nThanks,\nTara Singh","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3897,"Q_Id":3507543,"Users Score":1,"Answer":"By \"same key\" you mean \"the other key\", right? RSA gives you a keypair, if you encrypt with one you decrypt with the other ...\nOther than that, it sounds like a encoding problem. Try storing the data as binary or encode the string with your databases collation.\nBasically encryption gives you bytes but you store them as a string (encoded bytes).","Q_Score":1,"Tags":"python,postgresql,sqlalchemy,rsa,pycrypto","A_Id":3507558,"CreationDate":"2010-08-17T22:39:00.000","Title":"Inserting Encrypted Data in Postgres via SQLALchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to query the database and get read-only objects with session object. I need to save the objects in my server and use them through the user session. If I use a object outside of the function that calls the database, I get this error:\n\"DetachedInstanceError: Parent instance is not bound to a Session; lazy load operation of attribute 'items' cannot proceed\"\nI don't need to make any change in those objects, so I don't need to load them again.\nIs there any way that I can get that?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":244,"Q_Id":3513433,"Users Score":0,"Answer":"You must load the parent object again.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":3513490,"CreationDate":"2010-08-18T14:57:00.000","Title":"How to get read-only objects from database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm in the settings.py module, and I'm supposed to add the directory to the sqlite database. 
How do I know where the database is and what the full directory is?\nI'm using Windows 7.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3575,"Q_Id":3524236,"Users Score":1,"Answer":"If you don't provide a full path, it will use the current directory of settings.py,\nand if you wish to specify a static path you can specify it like: c:\/projects\/project1\/my_proj.db\nor in case you want to make it dynamic you can use the os.path module,\nso os.path.dirname(__file__) will give you the path of settings.py and accordingly you can alter the path for your database like os.path.join(os.path.dirname(__file__),'my_proj.db')","Q_Score":5,"Tags":"python,database,django,sqlite","A_Id":3524305,"CreationDate":"2010-08-19T17:02:00.000","Title":"Trouble setting up sqlite3 with django! :\/","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working with two databases, a local version and the version on the server. The server is the most up to date version and instead of recopying all values on all tables from the server to my local version, \nI would like to enter each table and only insert\/update the values that have changed on the server, and copy those values to my local version.\nIs there some simple method of handling such a case? Some sort of batch insert\/update? Googling for the answer isn't working and I've tried my hand at coding one but am starting to get tied up in error handling..\nI'm using Python and MySQLDB... Thanks for any insight\nSteve","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1035,"Q_Id":3526629,"Users Score":0,"Answer":"If all of your tables' records had timestamps, you could identify \"the values that have changed in the server\" -- otherwise, it's not clear how you plan to do that part (which has nothing to do with insert or update, it's a question of \"selecting things right\").\nOnce you have all the important values, somecursor.executemany will let you apply them all as a batch. Depending on your indexing it may be faster to put them into a non-indexed auxiliary temporary table, then insert\/update from all of that table into the real one (before dropping the aux\/temp one), the latter of course being a single somecursor.execute.\nYou can reduce wall-clock time for the whole job by using one (or a few) threads to do the selects and put the results onto a Queue.Queue, and a few worker threads to apply results plucked from the queue into the internal\/local server.
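A sketch of the executemany idea from the answer above, applying a batch of changed rows to the local copy with MySQL's INSERT ... ON DUPLICATE KEY UPDATE so a single statement covers both insert and update; table and column names are assumptions.

    import MySQLdb

    local = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    cur = local.cursor()

    changed_rows = [(1, "alice", "2010-08-19 10:00:00"),
                    (2, "bob",   "2010-08-19 11:30:00")]   # fetched from the server

    cur.executemany(
        "INSERT INTO accounts (id, name, updated_at) VALUES (%s, %s, %s) "
        "ON DUPLICATE KEY UPDATE name = VALUES(name), updated_at = VALUES(updated_at)",
        changed_rows)
    local.commit()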
(Best balance of reading vs writing threads is best obtained by trying a few and measuring -- writing per se is slower than reading, but your bandwidth to your local server may be higher than to the other one, so it's difficult to predict).\nHowever, all of this is moot unless you do have a strategy to identify \"the values that have changed in the server\", so it's not necessarily very useful to enter into more discussion about details \"downstream\" from that identification.","Q_Score":1,"Tags":"python,mysql,batch-file","A_Id":3527732,"CreationDate":"2010-08-19T21:59:00.000","Title":"Python + MySQLDB Batch Insert\/Update command for two of the same databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sometimes, when fetching data from the database either through the python shell or through a python script, the python process dies, and one single word is printed to the terminal: Killed\nThat's literally all it says. It only happens with certain scripts, but it always happens for those scripts. It consistently happens with this one single query that takes a while to run, and also with a south migration that adds a bunch of rows one-by-one to the database.\nMy initial hunch was that a single transaction was taking too long, so I turned on autocommit for Postgres. Didn't solve the problem.\nI checked the Postgres logs, and this is the only thing in there:\n2010-08-19 22:06:34 UTC LOG: could not receive data from client: Connection reset by peer\n2010-08-19 22:06:34 UTC LOG: unexpected EOF on client connection\nI've tried googling, but as you might expect, a one-word error message is tough to google for. \nI'm using Django 1.2 with Postgres 8.4 on a single Ubuntu 10.4 rackspace cloud VPS, stock config for everything.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1944,"Q_Id":3526748,"Users Score":6,"Answer":"Only one thing I could think of that will kill automatically a process on Linux - the OOM killer. What's in the system logs?","Q_Score":7,"Tags":"python,django,postgresql","A_Id":3529637,"CreationDate":"2010-08-19T22:19:00.000","Title":"Why do some Django ORM queries end abruptly with the message \"Killed\"?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been diving into MongoDB with kind help of MongoKit and MongoEngine, but then I started thinking whether the data mappers are necessary here. Both mappers I mentioned enable one to do simple things without any effort. But is any effort required to do simple CRUD? It appears to me that in case of NoSQL the mappers just substitute one api with another (but of course there is data validation, more strict schema, automatic referencing\/dereferencing) \nDo you use Data Mappers in your applications? How big are they (apps)? Why yes, why no?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":366,"Q_Id":3533064,"Users Score":1,"Answer":"We are running a production site using Mongodb for the backend (no direct queries to Mongo, we have a search layer in between). We wrote our own business \/ object layer, i suppose it just seemed natural enough for the programmers to write in the custom logic. 
We did separate the database and business layers, but they just didn't see a need to go for a separate library. As the software keeps evolving I think it makes sense. We have 15 million records.","Q_Score":2,"Tags":"python,orm,mongodb,mongoengine,mongokit","A_Id":3553262,"CreationDate":"2010-08-20T16:54:00.000","Title":"Do you use data mappers with MongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an SQL database and am wondering what command you use to just get a list of the table names within that database.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":62222,"Q_Id":3556305,"Users Score":10,"Answer":"SHOW tables \n15 chars","Q_Score":35,"Tags":"python,mysql,mysql-python","A_Id":3556313,"CreationDate":"2010-08-24T12:18:00.000","Title":"How to retrieve table names in a mysql database with Python and MySQLdb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While I see a bunch of links\/binaries for mysql connector for python 2.6, I don't see one for 2.7\nTo use django, should I just revert to 2.6 or is there a way out ?\nI'm using windows 7 64bit\ndjango - 1.1\nMysql 5.1.50\nAny pointers would be great.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2224,"Q_Id":3562406,"Users Score":1,"Answer":"For Python 2.7 on specific programs:\n\nsudo chown -R $USER \/Library\/Python\/2.7\nbrew install mysql@5.7\nbrew install mysql-connector-c\nbrew link --overwrite mysql@5.7\necho 'export PATH=\"\/usr\/local\/opt\/mysql@5.7\/bin:$PATH\"' >> ~\/.bash_profile\nsed -i -e 's\/libs=\"$libs -l \"\/libs=\"$libs -lmysqlclient -lssl -lcrypto\"\/g' \/usr\/local\/bin\/mysql_config\npip install MySql-python\n\nThis solved all issues I was having running a program that ran on Python 2.7 on and older version of MySql","Q_Score":1,"Tags":"mysql,python-2.7","A_Id":58359370,"CreationDate":"2010-08-25T02:15:00.000","Title":"Is there no mysql connector for python 2.7 on windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers.\nMany of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. 
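Fleshing out the terse "SHOW tables" answer above into runnable MySQLdb code; the connection parameters are placeholders.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SHOW TABLES")
table_names = [row[0] for row in cur.fetchall()]  # each row comes back as a 1-tuple
print(table_names)
conn.close()
```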
So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary.\nIt would need to be:\n\nDynamic (so don't suggest Java or C!)\nEasily available on each platform (Windows, Solaris, Linux, perhaps AIX)\nRequire very little in the way of setup (root access not always available!)\nBe easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers.\nBe easy to understand other people's code\nFriendly with SQL Server and Oracle, without messing around.\nA few nice XML features wouldn't go amiss.\n\nIt would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here.\nI have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. \nNo one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction?\nWhat are people's opinions on this?\nthanks!\nChris","AnswerCount":6,"Available Count":5,"Score":0.0333209931,"is_accepted":false,"ViewCount":3213,"Q_Id":3564177,"Users Score":1,"Answer":"Although I prefer working on the JVM, one thing that turns me off is having to spin up a JVM to run a script. If you can work in a REPL this is not such a big deal, but it really slows you down when doing edit-run-debug scripting. \nNow of course Oracle has a lot of Java stuff where interaction moght be needed, but that is something only you can estimate how important it is. For plain Oracle DB work I have seen very little Java and lots fo PLSQL\/SQL.\nIf your dba now do their work in bash, then they will very likely pickup perl in a short time as there is a nice, logical progression path.\nSince ruby was designed to be an improved version of perl, it might fit in that category too. Actually python also. \nScala is statically typed like Java, albeit with much better type inference.\nMy recommendation would be to go the Perl route. The CPAN is its ace in the hole, you do not have to deal with the OO stuff which might turn off some DBA's (although it is there for the power users).","Q_Score":6,"Tags":"python,scala,groovy,shell,jython","A_Id":3564251,"CreationDate":"2010-08-25T08:47:00.000","Title":"Which cross platform scripting language should we adopt for a group of DBAs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. 
We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers.\nMany of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary.\nIt would need to be:\n\nDynamic (so don't suggest Java or C!)\nEasily available on each platform (Windows, Solaris, Linux, perhaps AIX)\nRequire very little in the way of setup (root access not always available!)\nBe easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers.\nBe easy to understand other people's code\nFriendly with SQL Server and Oracle, without messing around.\nA few nice XML features wouldn't go amiss.\n\nIt would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here.\nI have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. \nNo one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction?\nWhat are people's opinions on this?\nthanks!\nChris","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":3213,"Q_Id":3564177,"Users Score":0,"Answer":"I've been in a similar situation, though on a small scale. The previous situation was that any automation on the SQL Server DBs was done with VBScript, which I did start out using. As I wanted something cross-platform (and less annoying than VBScript) I went with Python. \nWhat I learnt is:\n\nObviously you want a language that comes with libraries to access your databases comfortably. I wasn't too concerned with abstracting the differences away (ie, I still wrote SQL queries in the relevant dialect, with parameters). However, I'd be a bit less happy with PHP, for example, which has only very vendor-specific libraries and functions for certain databases. I see it's not on your list.\nTHE major obstacle was authentication. If your SQL Server uses Windows domain authentication, you'll have to work to get in. Another system also had specific needs as it required RSA tokens to be supported. \n\nFor the second point, Python is quite versatile enough to work around the difficulties, but it was getting into \"badly supported\" territory, especially on Windows. It was easy to work around the first problem from a Windows host, and for a Unix host it is possible though not easy. If you're using SQL Server authentication, it becomes a lot easier.\nFrom your other choices, I'd expect various ways of authenticating and DB drivers to exist for Perl, which philosophically would be easier for DBAs used to shell scripting. Ruby - no experience, but it tends to have spotty support for some of the odder authentication methods and connectors. 
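To make the authentication point concrete: with a driver such as pyodbc, SQL Server authentication versus Windows domain (trusted) authentication is mostly a connection-string difference, as the sketch below shows. Server, database and login names are placeholders, and the trusted-connection form is easiest from a domain-joined Windows host, as the answer says.

```python
import pyodbc

# SQL Server authentication: works from any host with an ODBC driver configured.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=dbhost.example.com;DATABASE=mydb;"
    "UID=script_user;PWD=secret")

# Windows domain (trusted) authentication: simplest from a domain-joined Windows host.
trusted = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=dbhost.example.com;DATABASE=mydb;"
    "Trusted_Connection=yes")

cur = conn.cursor()
cur.execute("SELECT name FROM sys.databases")
for row in cur.fetchall():
    print(row.name)
```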
Scala I'd expect to be a bit too much of a \"programmer's programming language\" -- OOO and FP? It's a very interesting language, but maybe not the one I'd chose at first. As for the rest of the Java-based options, I don't have an opinion, but do check that all the connection types you want to make are solidly supported.","Q_Score":6,"Tags":"python,scala,groovy,shell,jython","A_Id":3564285,"CreationDate":"2010-08-25T08:47:00.000","Title":"Which cross platform scripting language should we adopt for a group of DBAs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers.\nMany of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary.\nIt would need to be:\n\nDynamic (so don't suggest Java or C!)\nEasily available on each platform (Windows, Solaris, Linux, perhaps AIX)\nRequire very little in the way of setup (root access not always available!)\nBe easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers.\nBe easy to understand other people's code\nFriendly with SQL Server and Oracle, without messing around.\nA few nice XML features wouldn't go amiss.\n\nIt would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here.\nI have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. \nNo one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction?\nWhat are people's opinions on this?\nthanks!\nChris","AnswerCount":6,"Available Count":5,"Score":0.1325487884,"is_accepted":false,"ViewCount":3213,"Q_Id":3564177,"Users Score":4,"Answer":"The XML thing almost calls for Scala. 
Now, I love Scala, but I suggest Python here.","Q_Score":6,"Tags":"python,scala,groovy,shell,jython","A_Id":3565446,"CreationDate":"2010-08-25T08:47:00.000","Title":"Which cross platform scripting language should we adopt for a group of DBAs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers.\nMany of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary.\nIt would need to be:\n\nDynamic (so don't suggest Java or C!)\nEasily available on each platform (Windows, Solaris, Linux, perhaps AIX)\nRequire very little in the way of setup (root access not always available!)\nBe easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers.\nBe easy to understand other people's code\nFriendly with SQL Server and Oracle, without messing around.\nA few nice XML features wouldn't go amiss.\n\nIt would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here.\nI have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. \nNo one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction?\nWhat are people's opinions on this?\nthanks!\nChris","AnswerCount":6,"Available Count":5,"Score":0.1651404129,"is_accepted":false,"ViewCount":3213,"Q_Id":3564177,"Users Score":5,"Answer":"I think your best three options are Groovy, Python, and Scala. All three let you write code at a high level (compared to C\/Java). Python has its own perfectly adequate DB bindings, and Groovy and Scala can use ones made for Java.\nThe advantages of Python are that it is widely used already, so there are tons of tools, libraries, expertise, etc. available around it. It has a particularly clean syntax, which makes working with it aesthetically pleasing. 
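As a small illustration of the Python route these answers keep coming back to (standard-library XML plus a uniform DB-API across drivers), here is a sketch of the kind of DBA task in question. The inventory.xml layout, the credentials and the choice of cx_Oracle are all assumptions; the same shape works with pyodbc or pymssql for SQL Server.

```python
import xml.etree.ElementTree as ET

import cx_Oracle  # the same DB-API shape applies to pyodbc/pymssql for SQL Server

# inventory.xml is assumed to look like:
# <databases><db dsn="orcl1" user="monitor" password="secret"/>...</databases>
tree = ET.parse("inventory.xml")

for db in tree.getroot().findall("db"):
    conn = cx_Oracle.connect(db.get("user"), db.get("password"), db.get("dsn"))
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID'")
    invalid_count, = cur.fetchone()
    print("%s: %d invalid objects" % (db.get("dsn"), invalid_count))
    conn.close()
```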
The disadvantages are that it is slow (which may not be an issue for you), untyped (so you have runtime errors instead of compile-time errors), and you can't really switch back and forth between Jython and Python, so you have to pick whether you want the large amount of Python stuff, or the huge amount of Java stuff, minus a lot of the nice Python stuff.\nThe advantages of Groovy are that you know it already and it interoperates well with Java libraries. Its disadvantages are also slowness and lack of static typing. (So in contrast to Python, the choice is: do you value Python's clean syntax and wide adoption more, or do you value the vast set of Java libraries more in a language made to work well in that environment?)\nThe advantages of Scala are that it is statically typed (i.e. if the code gets past the compiler, it has a greater chance of working), is fast (as fast as Java if you care to work hard enough), and interoperates well with Java libraries. The disadvantages are that it imposes a bit more work on you to make the static typing work (though far, far less than Java while simultaneously being more safe), and that the canonical style for Scala is a hybrid object\/functional blend that feels more different than the other two (and thus requires more training to use at full effectiveness IMO). In contrast to Groovy, the question would be whether familiarity and ease of getting started is more important than speed and correctness.\nPersonally, I now do almost all of my work in Scala because my work requires speed and because the compiler catches those sort of errors in coding that I commonly make (so it is the only language I've used where I am not surprised when large blocks of code run correctly once I get them to compile). But I've had good experiences with Python in other contexts--interfacing with large databases seems like a good use-case.\n(I'd rule out Perl as being harder to maintain with no significant benefits over e.g. Python, and I'd rule out Ruby as being not enough more powerful than Python to warrant the less-intuitive syntax and lower rate of adoption\/tool availability.)","Q_Score":6,"Tags":"python,scala,groovy,shell,jython","A_Id":3568609,"CreationDate":"2010-08-25T08:47:00.000","Title":"Which cross platform scripting language should we adopt for a group of DBAs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers.\nMany of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. 
So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary.\nIt would need to be:\n\nDynamic (so don't suggest Java or C!)\nEasily available on each platform (Windows, Solaris, Linux, perhaps AIX)\nRequire very little in the way of setup (root access not always available!)\nBe easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers.\nBe easy to understand other people's code\nFriendly with SQL Server and Oracle, without messing around.\nA few nice XML features wouldn't go amiss.\n\nIt would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here.\nI have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. \nNo one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction?\nWhat are people's opinions on this?\nthanks!\nChris","AnswerCount":6,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":3213,"Q_Id":3564177,"Users Score":6,"Answer":"You can opt for Python. Its dynamic(interpreted) , is available on Windows\/Linux\/Solaris, has easy to read syntax so that your code maintenance is easy. There modules\/libraries for Oracle interaction and various other database servers as well. there are also library support for XML. All 7 points are covered.","Q_Score":6,"Tags":"python,scala,groovy,shell,jython","A_Id":3564413,"CreationDate":"2010-08-25T08:47:00.000","Title":"Which cross platform scripting language should we adopt for a group of DBAs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been developing under Python\/Snowleopard happily for the part 6 months. I just upgraded Python to 2.6.5 and a whole bunch of libraries, including psycopg2 and Turbogears. I can start up tg-admin and run some queries with no problems. Similarly, I can run my web site from the command line with no problems. \nHowever, if I try to start my application under Aptana Studio, I get the following exception while trying to import psychopg2:\n('dlopen(\/Library\/Frameworks\/Python.framework\/Versions\/2.6\/lib\/python2.6\/site-packages\/psycopg2\/_psycopg.so, 2): Symbol not found: _PQbackendPID\\n Referenced from: \/Library\/Frameworks\/Python.framework\/Versions\/2.6\/lib\/python2.6\/site-packages\/psycopg2\/_psycopg.so\\n Expected in: flat namespace\\n in \/Library\/Frameworks\/Python.framework\/Versions\/2.6\/lib\/python2.6\/site-packages\/psycopg2\/_psycopg.so',)\nThis occurs after running the following code:\n try:\n import psycopg2 as psycopg\n except ImportError as ex:\n print \"import failed :-( xxxxxxxx = \"\n print ex.args\nI have confirmed that the same version of python is being run as follows:\n import sys\n print \"python version: \", sys.version_info\nDoes anyone have any ideas? 
I've seem some references alluding to this being a 64-bit issue.\n- dave","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":296,"Q_Id":3571495,"Users Score":0,"Answer":"Problem solved (to a point). I was running 64 bit python from Aptana Studio and 32 bit python on the command line. By forcing Aptana to use 32 bit python, the libraries work again and all is happy.","Q_Score":0,"Tags":"python,turbogears,psycopg","A_Id":3571749,"CreationDate":"2010-08-26T01:41:00.000","Title":"Psycopg2 under osx works on commandline but fails in Aptana studio","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using a linux machine to make a little python program that needs to input its result in a SQL Server 2000 DB.\nI'm new to python so I'm struggling quite a bit to find what's the best solution to connect to the DB using python 3, since most of the libs I looked only work in python 2.\nAs an added bonus question, the finished version of this will be compiled to a windows program using py2exe. Is there anything I should be aware of, any changes to make?\nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1786,"Q_Id":3571819,"Users Score":0,"Answer":"If you want to have portable mssql server library, you can try the module from www.pytds.com. It works with 2.5+ AND 3.1, have a good stored procedure support. It's api is more \"functional\", and has some good features you won't find anywhere else.","Q_Score":1,"Tags":"python,sql-server,python-3.x,py2exe","A_Id":4062244,"CreationDate":"2010-08-26T03:18:00.000","Title":"How to access a MS SQL Server using Python 3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using a linux machine to make a little python program that needs to input its result in a SQL Server 2000 DB.\nI'm new to python so I'm struggling quite a bit to find what's the best solution to connect to the DB using python 3, since most of the libs I looked only work in python 2.\nAs an added bonus question, the finished version of this will be compiled to a windows program using py2exe. Is there anything I should be aware of, any changes to make?\nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1786,"Q_Id":3571819,"Users Score":0,"Answer":"I can't answer your question directly, but given that many popular Python packages and frameworks are not yet fully supported on Python 3, you might consider just using Python 2.x. Unless there are features you absolutely cannot live without in Python 3, of course.\nAnd it isn't clear from your post if you plan to deploy to Windows only, or Windows and Linux. 
If it's only Windows, then you should probably just develop on Windows to start with: the native MSSQL drivers are included in most recent versions so you don't have anything extra to install, and it gives you more options, such as adodbapi.","Q_Score":1,"Tags":"python,sql-server,python-3.x,py2exe","A_Id":3573005,"CreationDate":"2010-08-26T03:18:00.000","Title":"How to access a MS SQL Server using Python 3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Since mongo doesn't have a schema, does that mean that we won't have to do migrations when we change the models?\nWhat does the migration process look like with a non-relational db?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":5660,"Q_Id":3604565,"Users Score":1,"Answer":"What does the migration process look like with a non-relational db?\n\nDepends on if you need to update all the existing data or not. \nIn many cases, you may not need to touch the old data, such as when adding a new optional field. If that field also has a default value, you may also not need to update the old documents, if your application can handle a missing field correctly. However, if you want to build an index on the new field to be able to search\/filter\/sort, you need to add the default value back into the old documents.\nSomething like field renaming (trivial in a relational db, because you only need to update the catalog and not touch any data) is a major undertaking in MongoDB (you need to rewrite all documents).\nIf you need to update the existing data, you usually have to write a migration function that iterates over all the documents and updates them one by one (although this process can be shared and run in parallel). For large data sets, this can take a lot of time (and space), and you may miss transactions (if you end up with a crashed migration that went half-way through).","Q_Score":18,"Tags":"python,django,mongodb","A_Id":3605615,"CreationDate":"2010-08-30T22:01:00.000","Title":"Does django with mongodb make migrations a thing of the past?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Since mongo doesn't have a schema, does that mean that we won't have to do migrations when we change the models?\nWhat does the migration process look like with a non-relational db?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":5660,"Q_Id":3604565,"Users Score":2,"Answer":"There is no silver bullet. 
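To put the field-rename cost described above in concrete terms, a rename really is a rewrite of every affected document; with a reasonably recent pymongo it can at least be pushed server-side with the $rename operator. The collection and field names below are invented for the example.

```python
from pymongo import MongoClient

coll = MongoClient().mydb.articles  # placeholder database and collection

# Server-side migration: rewrite every document that still carries the old field name.
result = coll.update_many(
    {"author_name": {"$exists": True}},
    {"$rename": {"author_name": "author"}})
print("migrated %d documents" % result.modified_count)
```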
Adding or removing fields is easier with non-relational db (just don't use unneeded fields or use new fields), renaming a field is easier with traditional db (you'll usually have to change a lot of data in case of field rename in schemaless db), data migration is on par - depending on task.","Q_Score":18,"Tags":"python,django,mongodb","A_Id":3604687,"CreationDate":"2010-08-30T22:01:00.000","Title":"Does django with mongodb make migrations a thing of the past?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it even possible to create an abstraction layer that can accommodate relational and non-relational databases? The purpose of this layer is to minimize repetition and allows a web application to use any kind of database by just changing\/modifying the code in one place (ie, the abstraction layer). The part that sits on top of the abstraction layer must not need to worry whether the underlying database is relational (SQL) or non-relational (NoSQL) or whatever new kind of database that may come out later in the future.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1721,"Q_Id":3606215,"Users Score":0,"Answer":"Thank you for all the answers. To summarize the answers, currently only web2py and Django supports this kind of abstraction. \nIt is not about a SQL-NoSQL holy grail, using abstraction can make the apps more flexible. Lets assume that you started a project using NoSQL, and then later on you need to switch over to SQL. It is desirable that you only make changes to the codes in a few spots instead of all over the place. For some cases, it does not really matter whether you store the data in a relational or non-relational db. For example, storing user profiles, text content for dynamic page, or blog entries.\nI know there must be a trade off by using the abstraction, but my question is more about the existing solution or technical insight, instead of the consequences.","Q_Score":2,"Tags":"python,sql,database,google-app-engine,nosql","A_Id":3649176,"CreationDate":"2010-08-31T05:18:00.000","Title":"Is there any python web app framework that provides database abstraction layer for SQL and NoSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it even possible to create an abstraction layer that can accommodate relational and non-relational databases? The purpose of this layer is to minimize repetition and allows a web application to use any kind of database by just changing\/modifying the code in one place (ie, the abstraction layer). The part that sits on top of the abstraction layer must not need to worry whether the underlying database is relational (SQL) or non-relational (NoSQL) or whatever new kind of database that may come out later in the future.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":1721,"Q_Id":3606215,"Users Score":1,"Answer":"Regarding App Engine, all existing attempts limit you in some way (web2py doesn't support transactions or namespaces and probably many other stuff, for example). If you plan to work with GAE, use what GAE provides and forget looking for a SQL-NoSQL holy grail. 
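For reference, the web2py DAL mentioned in the accepted summary declares the model the same way whichever backend is plugged in; a minimal standalone sketch, with the connection strings and table definition made up for illustration (and the import path can vary between web2py versions):

```python
from gluon.dal import DAL, Field  # bundled with web2py; newer releases also ship it as pydal

# Swap the connection string to move between backends; the model code stays the same.
db = DAL("sqlite://storage.db")          # or e.g. "mysql://user:pass@host/mydb"
# db = DAL("google:datastore")           # on App Engine, with the caveats noted here

db.define_table("page",
                Field("title"),
                Field("body", "text"))

page_id = db.page.insert(title="Hello", body="First entry")
rows = db(db.page.title == "Hello").select()
```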
Existing solutions are inevitably limited and affect performance negatively.","Q_Score":2,"Tags":"python,sql,database,google-app-engine,nosql","A_Id":3609648,"CreationDate":"2010-08-31T05:18:00.000","Title":"Is there any python web app framework that provides database abstraction layer for SQL and NoSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it even possible to create an abstraction layer that can accommodate relational and non-relational databases? The purpose of this layer is to minimize repetition and allows a web application to use any kind of database by just changing\/modifying the code in one place (ie, the abstraction layer). The part that sits on top of the abstraction layer must not need to worry whether the underlying database is relational (SQL) or non-relational (NoSQL) or whatever new kind of database that may come out later in the future.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":1721,"Q_Id":3606215,"Users Score":1,"Answer":"Yo may also check web2py, they support relational databases and GAE on the core.","Q_Score":2,"Tags":"python,sql,database,google-app-engine,nosql","A_Id":3606610,"CreationDate":"2010-08-31T05:18:00.000","Title":"Is there any python web app framework that provides database abstraction layer for SQL and NoSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can you recommend a high-performance, thread-safe and stable ORM for Python? The data I need to work with isn't complex, so SQLAlchemy is probably an overkill.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3631,"Q_Id":3607285,"Users Score":6,"Answer":"If you are looking for something thats high performance, and based on one of your comments \"something that can handle >5k queries per second\". You need to keep in mind that an ORM is not built specifically for speed and performance, it is built for maintainability and ease of use. If the data is so basic that even SqlAlchemy might be overkill, and your mostly doing writes, it might be easier to just do straight inserts and skip the ORM altogether.","Q_Score":3,"Tags":"python,orm","A_Id":3609616,"CreationDate":"2010-08-31T08:25:00.000","Title":"Fast, thread-safe Python ORM?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm using IronPython 2.6 for .Net4 to build an GUI logging application.\nThis application received data via serialport and stores these data in an sqlite3 database while showing the last 100 received items in an listview. The listview gathers it's data via an SQL SELECT from the database every 100ms. It only querys data that is not already visible in the listview.\nAt first, the useage of the sqlite3 module was good and solid but i'm now stuck with several issues that i can't solve. 
\nThe sqlite3 module throws after a while exceptions like: \n\ndatabase disk image is malformed\ndatabase or disk is full.\n\nThese errors occur sporadic and never under high system load.\nI stuck with this kind if issues for some weeks now and i'm looking for an alternative way to store binary and ascii data in a database-like object.\nPlease, does somebody know a good database solution i could use with IronPython 2.6 for .Net4?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":495,"Q_Id":3616078,"Users Score":0,"Answer":"good \n\nThat is highly subjective without far more detailed requirements.\nYou should be able to use any database with .NET support, whether out of the box (notably SQL Server Express and Compact) or installed separately (SQL Server-other editions, DB2, MySQL, Oracle, ...).\nTen select commands per second should be easily in each of any of the databases above, unless there is some performance issue (e.g. huge amount of data and not able to use an index).","Q_Score":0,"Tags":"database,ironpython","A_Id":3616111,"CreationDate":"2010-09-01T08:04:00.000","Title":"IronPython - What kind of database is useable","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use Python and MySQLdb to download web pages and store them into database. The problem I have is that I can't save complicated strings in the database because they are not properly escaped.\nIs there a function in Python that I can use to escape a string for MySQL? I tried with ''' (triple simple quotes) and \"\"\", but it didn't work. I know that PHP has mysql_escape_string(), is something similar in Python?\nThanks.","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":144313,"Q_Id":3617052,"Users Score":0,"Answer":"One other way to work around this is using something like this when using mysqlclient in python.\nsuppose the data you want to enter is like this
  1. Saurav\\'s List<\/strong><\/li><\/ol>. It contains both double qoute and single quote.\nYou can use the following method to escape the quotes:\n\nstatement = \"\"\" Update chats set html='{}' \"\"\".format(html_string.replace(\"'\",\"\\\\\\'\"))\n\nNote: three \\ characters are needed to escape the single quote which is there in unformatted python string.","Q_Score":77,"Tags":"python,mysql,escaping","A_Id":61042304,"CreationDate":"2010-09-01T10:23:00.000","Title":"Escape string Python for MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having a right old nightmare with JPype. I have got my dev env on Windows and so tried installing it there with no luck. I then tried on Ubunto also with no luck. I'm getting a bit desperate now. I am using Mingw32 since I tried installing VS2008 but it told me I had to install XP SP2 but I am on Vista. I tried VS2010 but no luck, I got the 'error: Unable to find vcvarsall.bat' error. Anyway, I am now on Mingw32\nUltimately I am trying to use Neo4j and Python hence my need to use JPype. I have found so many references to the problem on the net for MySQL etc but they don't help me with JPype. \nIf I could fix unix or windows I could get going so help on either will be really appreciated. \nHere's the versions..\nWindows: Vista 64\nPython: 2.6\nCompiler Mingw32: latest version\nJpype: 0.5.4.1\nJava info: \njava version \"1.6.0_13\"\nJava(TM) SE Runtime Environment (build 1.6.0_13-b03)\nJava HotSpot(TM) 64-Bit Server VM (build 11.3-b02, mixed mode)\nI run:\npython setup.py install --compiler=wingw32 \nand get the following output.\nChoosing the Windows profile\nrunning install\nrunning build\nrunning build_py\nrunning build_ext\nbuilding '_jpype' extension\nC:\\MinGW\\bin\\gcc.exe -mno-cygwin -mdll -O -Wall -DWIN32=1 \"-IC:\\Program Files (x86)\\Java\\jdk1.6.0_21\/include\" \"-IC:\\Program Files (x86)\\Java\\jdk1.6.0_21\/include\/win32\" -Isrc\/native\/common\/include -Isrc\/native\/python\/include -Ic:\\Python26\\include -Ic:\\Python26\\PC -c src\/native\/common\/jp_array.cpp -o build\\temp.win32-2.6\\Release\\src\\native\\common\\jp_array.o \/EHsc\nsrc\/native\/common\/jp_array.cpp: In member function 'void JPArray::setRange(int, int, std::vector&)':\nsrc\/native\/common\/jp_array.cpp:56:13: warning: comparison between signed and unsigned integer expressions\nsrc\/native\/common\/jp_array.cpp:68:4: warning: deprecated conversion from string constant to 'char*'\nsrc\/native\/common\/jp_array.cpp: In member function 'void JPArray::setItem(int, HostRef*)':\nsrc\/native\/common\/jp_array.cpp:80:3: warning: deprecated conversion from string constant to 'char*'\ngcc: \/EHsc: No such file or directory\nerror: command 'gcc' failed with exit status 1\nSo on unix Ubunto the problem is as follows:\nJava version: 1.6.0_18\nJPype: 0.5.4.1\nPython: 2.6\nJava is in the path and I did apt-get install build-essentials just now so have latest GCC etc. \nI won't paste all the output as it's massive. So many errors it's like I have missed the install of Java or similar but I haven't. typing java takes me into version above. 
This is the beginning:\nrunning install\nrunning build\nrunning build_py\nrunning build_ext\nbuilding '_jpype' extension\ngcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I\/usr\/lib\/jvm\/java-1.5.0-sun-1.5.0.08\/include -I\/usr\/lib\/jvm\/java-1.5.0-sun-1.5.0.08\/include\/linux -Isrc\/native\/common\/include -Isrc\/native\/python\/include -I\/usr\/include\/python2.6 -c src\/native\/common\/jp_javaenv_autogen.cpp -o build\/temp.linux-i686-2.6\/src\/native\/common\/jp_javaenv_autogen.o\ncc1plus: warning: command line option \"-Wstrict-prototypes\" is valid for Ada\/C\/ObjC but not for C++\nIn file included from src\/native\/common\/jp_javaenv_autogen.cpp:21:\nsrc\/native\/common\/include\/jpype.h:45:17: error: jni.h: No such file or directory\nIn file included from src\/native\/common\/jp_javaenv_autogen.cpp:21:\nsrc\/native\/common\/include\/jpype.h:77: error: ISO C++ forbids declaration of \u2018jchar\u2019 with no type\nsrc\/native\/common\/include\/jpype.h:77: error: expected \u2018,\u2019 or \u2018...\u2019 before \u2018\u2019 token\nsrc\/native\/common\/include\/jpype.h:82: error: ISO C++ forbids declaration of \u2018jchar\u2019 with no type\nsrc\/native\/common\/include\/jpype.h:82: error: expected \u2018;\u2019 before \u2018\u2019 token\nsrc\/native\/common\/include\/jpype.h:86: error: ISO C++ forbids declaration of \u2018jchar\u2019 with no type\nsrc\/native\/common\/include\/jpype.h:86: error: expected \u2018;\u2019 before \u2018&\u2019 token\nsrc\/native\/common\/include\/jpype.h:88: error: expected \u2018;\u2019 before \u2018private\u2019\nsrc\/native\/common\/include\/jpype.h:89: error: ISO C++ forbids declaration of \u2018jchar\u2019 with no type\nsrc\/native\/common\/include\/jpype.h:89: error: expected \u2018;\u2019 before \u2018*\u2019 token\nIn file included from src\/native\/common\/include\/jpype.h:96,\n from src\/native\/common\/jp_javaenv_autogen.cpp:21:\nAnd this is the end:\nsrc\/native\/common\/include\/jp_monitor.h:27: error: \u2018jobject\u2019 does not name a type\nsrc\/native\/common\/jp_javaenv_autogen.cpp:30: error: \u2018jbyte\u2019 does not name a type\nsrc\/native\/common\/jp_javaenv_autogen.cpp:38: error: \u2018jbyte\u2019 does not name a type\nsrc\/native\/common\/jp_javaenv_autogen.cpp:45: error: variable or field \u2018SetStaticByteField\u2019 declared void\nsrc\/native\/common\/jp_javaenv_autogen.cpp:45: error: \u2018jclass\u2019 was not declared in this scope\nsrc\/native\/common\/jp_javaenv_autogen.cpp:45: error: \u2018jfieldID\u2019 was not declared in this scope\nsrc\/native\/common\/jp_javaenv_autogen.cpp:45: error: \u2018jbyte\u2019 was not declared in this scope\nerror: command 'gcc' failed with exit status 1","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3736,"Q_Id":3649577,"Users Score":1,"Answer":"Edit the Setup.py and remove the \/EHsc option.","Q_Score":3,"Tags":"java,python","A_Id":6258169,"CreationDate":"2010-09-06T06:54:00.000","Title":"JPype compile problems","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Twisted application that runs in an x86 64bit machine with Win 2008 server. 
\nIt needs to be connected to a SQL Server database that runs in another machine (in a cloud actually but I have IP, port, db name, credentials).\nDo I need to install anything more that Twisted to my machine?\nAnd which API should be used?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1128,"Q_Id":3657271,"Users Score":1,"Answer":"If you want to have portable mssql server library, you can try the module from www.pytds.com.\nIt works with 2.5+ and 3.1, have a good stored procedure support. It's api is more \"functional\", and has some good features you won't find anywhere else.","Q_Score":0,"Tags":"python,sql-server,twisted","A_Id":4059366,"CreationDate":"2010-09-07T09:07:00.000","Title":"Twisted and connection to SQL Server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm creating a basic database utility class in Python. I'm refactoring an old module into a class. I'm now working on an executeQuery() function, and I'm unsure of whether to keep the old design or change it. Here are the 2 options:\n\n(The old design:) Have one generic executeQuery method that takes the query to execute and a boolean commit parameter that indicates whether to commit (insert, update, delete) or not (select), and determines with an if statement whether to commit or to select and return.\n(This is the way I'm used to, but that might be because you can't have a function that sometimes returns something and sometimes doesn't in the languages I've worked with:) Have 2 functions, executeQuery and executeUpdateQuery (or something equivalent). executeQuery will execute a simple query and return a result set, while executeUpdateQuery will make changes to the DB (insert, update, delete) and return nothing.\n\nIs it accepted to use the first way? It seems unclear to me, but maybe it's more Pythonistic...? Python is very flexible, maybe I should take advantage of this feature that can't really be accomplished in this way in more strict languages...\nAnd a second part of this question, unrelated to the main idea - what is the best way to return query results in Python? Using which function to query the database, in what format...?","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":174,"Q_Id":3662134,"Users Score":4,"Answer":"It's propably just me and my FP fetish, but I think a function executed solely for side effects is very different from a non-destructive function that fetches some data, and therefore have different names. Especially if the generic function would do something different depending on exactly that (the part on the commit parameter seems to imply that).\nAs for how to return results... I'm a huge fan of generators, but if the library you use for database connections returns a list anyway, you might as well pass this list on - a generator wouldn't buy you anything in this case. 
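A minimal sketch of the generator idea from this answer: wrap the fetch loop so callers consume rows lazily instead of materialising the whole result. The connection object, query and chunk size are placeholders.

```python
def iter_query(conn, sql, params=(), chunk_size=500):
    """Yield rows one at a time, fetching them from the cursor in modest chunks."""
    cur = conn.cursor()
    try:
        cur.execute(sql, params)
        while True:
            rows = cur.fetchmany(chunk_size)
            if not rows:
                break
            for row in rows:
                yield row
    finally:
        cur.close()

# Usage: nothing is pulled from the server until the caller starts iterating.
# for row in iter_query(conn, "SELECT id, name FROM big_table"):
#     process(row)
```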
But if it allows you to iterate over the results (one at a time), seize the opportunity to save a lot of memory on larger queries.","Q_Score":3,"Tags":"python,oop","A_Id":3662258,"CreationDate":"2010-09-07T19:47:00.000","Title":"Design question in Python: should this be one generic function or two specific ones?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"AFAIK SQLite returns unicode objects for TEXT in Python. Is it possible to get SQLite to return string objects instead?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7275,"Q_Id":3666328,"Users Score":0,"Answer":"Use Python 3.2+. It will automatically return string instead of unicode (as in Python 2.7)","Q_Score":3,"Tags":"python,string,sqlite,unicode","A_Id":25273292,"CreationDate":"2010-09-08T09:31:00.000","Title":"Can I get SQLite to string instead of unicode for TEXT in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"AFAIK SQLite returns unicode objects for TEXT in Python. Is it possible to get SQLite to return string objects instead?","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":7275,"Q_Id":3666328,"Users Score":4,"Answer":"TEXT is intended to store text. Use BLOB if you want to store bytes.","Q_Score":3,"Tags":"python,string,sqlite,unicode","A_Id":3666433,"CreationDate":"2010-09-08T09:31:00.000","Title":"Can I get SQLite to string instead of unicode for TEXT in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a new Pylons project, and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use cassandra 0.7beta.\nUnfortunately, I don't know where to instantiate the connection to make it available in my application. \nThe goal would be to :\n\nCreate a pool when the application is launched\nGet a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). The best would be to get a connexion from the pool \"lazily\", i.e. only if needed\nIf a connexion has been used, release it when the request has been processed\n\nAdditionally, is there something important I should know about it ? When I see some comments like \"Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.\", what does it mean exactly ?\nThanks.\n--\nPierre","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":885,"Q_Id":3671535,"Users Score":2,"Answer":"Well. I worked a little more. In fact, using a connection manager was probably not a good idea as this should be the template context. Additionally, opening a connection for each thread is not really a big deal. 
Opening a connection per request would be.\nI ended up with just pycassa.connect_thread_local() in app_globals, and there I go.","Q_Score":10,"Tags":"python,pylons,cassandra","A_Id":3687133,"CreationDate":"2010-09-08T20:14:00.000","Title":"How to connect to Cassandra inside a Pylons app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have a Django project which runs on Google App Engine and used db.UserProperty in several models. We don't have an own User model.\nMy boss would like to use RPXNow (Janrain) for authentication, but after I integrated it, the users.get_current_user() method returned None. It makes sense, because not Google authenticated me. But what should I use for db.UserProperty attributes? Is it possible to use rpxnow and still can have Google's User object as well?\nAfter this I tried to use OpenID authentication (with federated login) in my application, and it works pretty good: I still have users.get_current_user() object. As far as I know, rpxnow using openID as well, which means (for me) that is should be possible to get User objects with rpxnow. But how?\nCheers,\npsmith","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":389,"Q_Id":3699751,"Users Score":1,"Answer":"You can only get a User object if you're using one of the built-in authentication methods. User objects provide an interface to the Users API, which is handled by the App Engine infrastructure. If you're using your own authentication library, regardless of what protocol it uses, you will have to store user information differently.","Q_Score":0,"Tags":"python,google-app-engine,rpxnow","A_Id":3707639,"CreationDate":"2010-09-13T10:55:00.000","Title":"Google App Engine's db.UserProperty with rpxnow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Short story\nI have a technical problem with a third-party library at my hands that I seem to be unable to easily solve in a way other than creating a surrogate key (despite the fact that I'll never need it). I've read a number of articles on the Net discouraging the use of surrogate keys, and I'm a bit at a loss if it is okay to do what I intend to do.\nLong story\nI need to specify a primary key, because I use SQLAlchemy ORM (which requires one), and I cannot just set it in __mapper_args__, since the class is being built with classobj, and I have yet to find a way to reference the field of a not-yet-existing class in the appropriate PK definition argument. Another problem is that the natural equivalent of the PK is a composite key that is too long for the version of MySQL I use (and it's generally a bad idea to use such long primary keys anyway).","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":698,"Q_Id":3712949,"Users Score":2,"Answer":"I always make surrogate keys when using ORMs (or rather, I let the ORMs make them for me). 
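A sketch of the compromise these answers settle on: give the ORM its integer surrogate key, but keep a unique constraint on the natural key so duplicates stay impossible. The model and column names are invented, and this uses ordinary SQLAlchemy declarative syntax rather than the asker's classobj-built classes.

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Measurement(Base):
    __tablename__ = "measurements"
    __table_args__ = (
        # The natural key stays enforced even though it is not the primary key.
        UniqueConstraint("station", "sensor", "taken_at", name="uq_measurement_natural"),
    )

    id = Column(Integer, primary_key=True)         # surrogate key for the ORM's benefit
    station = Column(String(32), nullable=False)
    sensor = Column(String(32), nullable=False)
    taken_at = Column(String(32), nullable=False)  # kept short because of MySQL key-length limits
    value = Column(String(255))
```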
They solve a number of problems, and don't introduce any (major) problems.\nSo, you've done your job by acknowledging that there are \"papers on the net\" with valid reasons to avoid surrogate keys, and that there's probably a better way to do it.\nNow, write \"# TODO: find a way to avoid surrogate keys\" somewhere in your source code and go get some work done.","Q_Score":2,"Tags":"python,sqlalchemy,primary-key","A_Id":3713061,"CreationDate":"2010-09-14T21:11:00.000","Title":"How badly should I avoid surrogate primary keys in SQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Short story\nI have a technical problem with a third-party library at my hands that I seem to be unable to easily solve in a way other than creating a surrogate key (despite the fact that I'll never need it). I've read a number of articles on the Net discouraging the use of surrogate keys, and I'm a bit at a loss if it is okay to do what I intend to do.\nLong story\nI need to specify a primary key, because I use SQLAlchemy ORM (which requires one), and I cannot just set it in __mapper_args__, since the class is being built with classobj, and I have yet to find a way to reference the field of a not-yet-existing class in the appropriate PK definition argument. Another problem is that the natural equivalent of the PK is a composite key that is too long for the version of MySQL I use (and it's generally a bad idea to use such long primary keys anyway).","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":698,"Q_Id":3712949,"Users Score":0,"Answer":"I use surrogate keys in a db that I use reflection on with sqlalchemy. The pro is that you can more easily manage the foreign keys \/ relationships that exists in your tables \/ models. Also, the rdbms is managing the data more efficiently. The con is the data inconsistency: duplicates. To avoid this - always use the unique constraint on your natural key.\nNow, I understand from your long story that you can't enforce this uniqueness because of your mysql limitations. For long composite keys mysql causes problems. I suggest you move to postgresql.","Q_Score":2,"Tags":"python,sqlalchemy,primary-key","A_Id":4160811,"CreationDate":"2010-09-14T21:11:00.000","Title":"How badly should I avoid surrogate primary keys in SQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Short story\nI have a technical problem with a third-party library at my hands that I seem to be unable to easily solve in a way other than creating a surrogate key (despite the fact that I'll never need it). I've read a number of articles on the Net discouraging the use of surrogate keys, and I'm a bit at a loss if it is okay to do what I intend to do.\nLong story\nI need to specify a primary key, because I use SQLAlchemy ORM (which requires one), and I cannot just set it in __mapper_args__, since the class is being built with classobj, and I have yet to find a way to reference the field of a not-yet-existing class in the appropriate PK definition argument. 
Another problem is that the natural equivalent of the PK is a composite key that is too long for the version of MySQL I use (and it's generally a bad idea to use such long primary keys anyway).","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":698,"Q_Id":3712949,"Users Score":0,"Answer":"\"Using a surrogate key allows duplicates to be created when using a natural key would have prevented such problems\" Exactly, so you should have both keys, not just a surrogate. The error you seem to be making is not that you are using a surrogate, it's that you are assuming the table only needs one key. Make sure you create all the keys you need to ensure the integrity of your data.\nHaving said that, in this case it seems like a deficiency of the ORM software (apparently not being able to use a composite key) is the real cause of your problems. It's unfortunate that a software limitation like that should force you to create keys you don't otherwise need. Maybe you could consider using different software.","Q_Score":2,"Tags":"python,sqlalchemy,primary-key","A_Id":3713270,"CreationDate":"2010-09-14T21:11:00.000","Title":"How badly should I avoid surrogate primary keys in SQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a database in PostgreSQL, let's call it testdb.\nI have a generic set of tables inside this database, xxx_table_one, xxx_table_two and xxx_table_three.\nNow, I have Python code where I want to dynamically create and remove \"sets\" of these 3 tables to my database with a unique identifier in the table name distinguishing different \"sets\" from each other, e.g. \nSet 1\ntestdb.aaa_table_one\ntestdb.aaa_table_two\ntestdb.aaa_table_three \nSet 2\ntestdb.bbb_table_one\ntestdb.bbb_table_two\ntestdb.bbb_table_three \nThe reason I want to do it this way is to keep multiple LARGE data collections of related data separate from each other. I need to regularly overwrite individual data collections, and it's easy if we can just drop the data collections table and recreate a complete new set of tables. Also, I have to mention, the different data collections fit into the same schemas, so I could save all the data collections in 1 set of tables using an identifier to distinguish data collections instead of separating them by using different tables.\nI want to know, a few things \n Does PostgreSQL limit the number of tables per database?\nWhat is the effect on performance, if any, of having a large number of tables in 1 database?\nWhat is the effect on performance of saving the data collections in different sets of tables compared to saving them all in the same set, e.g. I guess would need to write more queries if I want to query multiple data collections at once when the data is spread accross tables as compared to just 1 set of tables.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":5102,"Q_Id":3715456,"Users Score":3,"Answer":"PostgreSQL doesn't impose a direct limit on this, your OS does (it depends on maximum directory size)\nThis may depend on your OS as well. Some filesystems get slower with large directories.\nPostgreSQL won't be able to optimize queries if they're across different tables. 
So using less tables (or a single table) should be more efficient","Q_Score":7,"Tags":"python,mysql,database,database-design,postgresql","A_Id":3715621,"CreationDate":"2010-09-15T07:15:00.000","Title":"Is there a limitation on the number of tables a PostgreSQL database can have?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a database in PostgreSQL, let's call it testdb.\nI have a generic set of tables inside this database, xxx_table_one, xxx_table_two and xxx_table_three.\nNow, I have Python code where I want to dynamically create and remove \"sets\" of these 3 tables to my database with a unique identifier in the table name distinguishing different \"sets\" from each other, e.g. \nSet 1\ntestdb.aaa_table_one\ntestdb.aaa_table_two\ntestdb.aaa_table_three \nSet 2\ntestdb.bbb_table_one\ntestdb.bbb_table_two\ntestdb.bbb_table_three \nThe reason I want to do it this way is to keep multiple LARGE data collections of related data separate from each other. I need to regularly overwrite individual data collections, and it's easy if we can just drop the data collections table and recreate a complete new set of tables. Also, I have to mention, the different data collections fit into the same schemas, so I could save all the data collections in 1 set of tables using an identifier to distinguish data collections instead of separating them by using different tables.\nI want to know, a few things \n Does PostgreSQL limit the number of tables per database?\nWhat is the effect on performance, if any, of having a large number of tables in 1 database?\nWhat is the effect on performance of saving the data collections in different sets of tables compared to saving them all in the same set, e.g. I guess would need to write more queries if I want to query multiple data collections at once when the data is spread accross tables as compared to just 1 set of tables.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5102,"Q_Id":3715456,"Users Score":0,"Answer":"If your data were not related, I think your tables could be in different schema, and then you would use SET search_path TO schema1, public for example, this way you wouldn't have to dynamically generate table names in your queries. I am planning to try this structure on a large database which stores logs and other tracking information.\nYou can also change your tablespace if your os has a limit or suffers from large directory size.","Q_Score":7,"Tags":"python,mysql,database,database-design,postgresql","A_Id":5603789,"CreationDate":"2010-09-15T07:15:00.000","Title":"Is there a limitation on the number of tables a PostgreSQL database can have?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am into a project where zope web server is used. With this PostgreSQL database is used. But I am not able to add a new PostgreSQL connection via zope. Actually, I am not aware of what else I need to install so that I can use PostgreSQL dB with zope. From whatever I have explored about this I have come to know that I will require a Zope Database Adapter so that I can use PostgreSQL dB with Zope. But still I am not confirmed about this. 
Also I don't know which version of Zope Database Adapter will I require to install? The zope version I am using is 2.6 and PostgreSQL dB version is 7.4.13 and the Python version is 2.1.3 . Also from where should I download that Zope Database Adapter?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":3719145,"Users Score":0,"Answer":"Look at psycopg, it ships with a Zope Database Adapter.","Q_Score":3,"Tags":"python,zope","A_Id":3719408,"CreationDate":"2010-09-15T15:24:00.000","Title":"What are the essentials I need to install if I want to use PostgreSQL DB with zope? for eg: Zope Database Adapter?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running a web crawler that gets called as a separate thread via Django. When it tries to store the scraped information I get this error:\nFile \"\/usr\/lib\/pymodules\/python2.6\/MySQLdb\/cursors.py\", line 147, in execute\n charset = db.character_set_name()\nInterfaceError: (0, '')\nIf I manually run the script from the command line I don't get this error. Any ideas?\nMy guess is that I do about 4 cursor.execute()s in one iteration of a loop. Could this be throwing something off?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1854,"Q_Id":3722120,"Users Score":0,"Answer":"Since it mentions the character set, my gut says you are running a different Django\/Python\/something from the command line than you are from the webserver. In your settings file, turn on DEBUG=True, restart the server, and then run this again. In particular, look at the list of paths shown. If they are not exactly what you expect them to be, then this is a Red Flag.","Q_Score":0,"Tags":"python,mysql,django,multithreading","A_Id":3722799,"CreationDate":"2010-09-15T21:49:00.000","Title":"mySQL interface error only occuring if ran in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have zope 2.11 installed. Now i want to use Posgresql 7.4.13 DB with it. So i know i need to install psycopg2 Database Adapter. Can any one tell me Is psycopg2 compatible with zope2??","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":142,"Q_Id":3725699,"Users Score":1,"Answer":"Yes, you can use psycopg2 with Zope2. \nJust install it in your Python with easy_install or setup.py. You will also need a matching ZPsycopgDA Product in Zope. You find the ZPsycopgDA folder in the psycopg2 source distribution tarball.","Q_Score":1,"Tags":"python,database,zope","A_Id":4018666,"CreationDate":"2010-09-16T10:19:00.000","Title":"Is Zpsycopg2 compatible with zope 2?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building an application with objects which have their data stored in mysql tables (across multiple tables). When I need to work with the object (retrieve object attributes \/ change the attributes) I am querying the sql database using mysqldb (select \/ update). 
However, since the application is quite computation intensive, the execution time is killing me.\nWanted to understand if there are approaches where all of the data is loaded into python, the computations \/ modifications are done on those objects and then subsequently a full data update is done to the mysql database? Will loading the data initially into lists of those objects in one go from the database improve the performance? Also since the db size is close to around 25 mb, will it cause any memory problems.\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":1287,"Q_Id":3770394,"Users Score":5,"Answer":"25Mb is tiny. Microscopic. SQL is slow. Glacial.\nDo not waste time on SQL unless you have transactions (with locking and multiple users).\nIf you're doing \"analysis\", especially computationally-intensive analysis, load all the data into memory.\nIn the unlikely event that data doesn't fit into memory, then do this.\n\nQuery data into flat files. This can be fast. It's fastest if you don't use Python, but use the database native tools to extract data into CSV or something small.\nRead flat files and do computations, writing flat files. This is really fast.\nDo bulk updates from the flat files. Again, this is fastest if you use database native toolset for insert or update.\n\n\nIf you didn't need SQL in the first place, consider the data as you originally received it and what you're going to do with it.\n\nRead the original file once, parse it, create your Python objects and pickle the entire list or dictionary. This means that each subsequent program can simply load the pickled file and start doing analysis. However. You can't easily update the pickled file. You have to create a new one. This is not a bad thing. It gives you complete processing history.\nRead the original file once, parse it, create your Python objects using shelve. This means you can \nupdate the file. \nRead the original file once, parse it, create your Python objects and save the entire list or dictionary as a JSON or YAML file. This means that each subsequent program can simply load the JSON (or YAML) file and start doing analysis. However. You can't easily update the file. You have to create a new one. This is not a bad thing. It gives you complete processing history. \nThis will probably be slightly slower than pickling. And it will require that you write some helpers so that the JSON objects are dumped and loaded properly. However, you can read JSON (and YAML) giving you some advantages in working with the file.","Q_Score":2,"Tags":"python,mysql,optimization","A_Id":3770439,"CreationDate":"2010-09-22T14:43:00.000","Title":"Optimizing Python Code for Database Access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We've worked hard to work up a full dimensional database model of our problem, and now it's time to start coding. 
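A rough sketch of the "read once, pickle, analyse later" idea from the answer above, assuming MySQLdb with placeholder connection details and a hypothetical table:

import pickle
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SELECT id, name, value FROM measurements")   # hypothetical table
rows = [{"id": r[0], "name": r[1], "value": r[2]} for r in cur.fetchall()]
conn.close()

# First run: save the parsed objects so later runs skip the database entirely.
with open("measurements.pickle", "wb") as f:
    pickle.dump(rows, f)

# Later runs: load the pickle and go straight to the computations.
with open("measurements.pickle", "rb") as f:
    rows = pickle.load(f)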
Our previous projects have used hand-crafted queries constructed by string manipulation.\nIs there any best\/standard practice for interfacing between python and a complex database layout?\nI've briefly evaluated SQLAlchemy, SQLObject, and Django-ORM, but (I may easily be missing something) they seem tuned for tiny web-type (OLTP) transactions, where I'm doing high-volume analytical (OLAP) transactions.\nSome of my requirements, that may be somewhat different than usual:\n\nload large amounts of data relatively quickly\nupdate\/insert small amounts of data quickly and easily\nhandle large numbers of rows easily (300 entries per minute over 5 years)\nallow for modifications in the schema, for future requirements\n\nWriting these queries is easy, but writing the code to get the data all lined up is tedious, especially as the schema evolves. This seems like something that a computer might be good at?","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":3376,"Q_Id":3782386,"Users Score":3,"Answer":"I'm using SQLAlchemy with a pretty big datawarehouse and I'm using it for the full ETL process with success. Specially in certain sources where I have some complex transformation rules or with some heterogeneous sources (such as web services). I'm not using the Sqlalchemy ORM but rather using its SQL Expression Language because I don't really need to map anything with objects in the ETL process. Worth noticing that when I'm bringing a verbatim copy of some of the sources I rather use the db tools for that -such as PostgreSQL dump utility-. You can't beat that.\nSQL Expression Language is the closest you will get with SQLAlchemy (or any ORM for the matter) to handwriting SQL but since you can programatically generate the SQL from python you will save time, specially if you have some really complex transformation rules to follow.\nOne thing though, I rather modify my schema by hand. I don't trust any tool for that job.","Q_Score":10,"Tags":"python,django-models,sqlalchemy,data-warehouse,olap","A_Id":3782627,"CreationDate":"2010-09-23T20:40:00.000","Title":"Python: interact with complex data warehouse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We've worked hard to work up a full dimensional database model of our problem, and now it's time to start coding. Our previous projects have used hand-crafted queries constructed by string manipulation.\nIs there any best\/standard practice for interfacing between python and a complex database layout?\nI've briefly evaluated SQLAlchemy, SQLObject, and Django-ORM, but (I may easily be missing something) they seem tuned for tiny web-type (OLTP) transactions, where I'm doing high-volume analytical (OLAP) transactions.\nSome of my requirements, that may be somewhat different than usual:\n\nload large amounts of data relatively quickly\nupdate\/insert small amounts of data quickly and easily\nhandle large numbers of rows easily (300 entries per minute over 5 years)\nallow for modifications in the schema, for future requirements\n\nWriting these queries is easy, but writing the code to get the data all lined up is tedious, especially as the schema evolves. 
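A minimal sketch of the SQL Expression Language approach the answer above describes, generating SQL programmatically without mapping classes; the table, columns and in-memory SQLite URL are placeholders, and the 1.x-style select([...]) call is assumed:

from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        create_engine, select)

engine = create_engine("sqlite:///:memory:")     # stand-in for the warehouse DB
metadata = MetaData()

facts = Table("sales_facts", metadata,
              Column("id", Integer, primary_key=True),
              Column("region", String(20)),
              Column("amount", Integer))
metadata.create_all(engine)

conn = engine.connect()
# Bulk insert from plain Python data structures, no ORM objects involved.
conn.execute(facts.insert(), [{"region": "north", "amount": 10},
                              {"region": "south", "amount": 25}])
# The query is built programmatically, so it can follow an evolving schema.
rows = conn.execute(select([facts.c.region, facts.c.amount])
                    .where(facts.c.amount > 15)).fetchall()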
This seems like something that a computer might be good at?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":3376,"Q_Id":3782386,"Users Score":2,"Answer":"SQLAlchemy definitely. Compared to SQLAlchemy, all other ORMs look like child's toy. Especially the Django-ORM. What's Hibernate to Java, SQLAlchemy is to Python.","Q_Score":10,"Tags":"python,django-models,sqlalchemy,data-warehouse,olap","A_Id":3782432,"CreationDate":"2010-09-23T20:40:00.000","Title":"Python: interact with complex data warehouse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We've worked hard to work up a full dimensional database model of our problem, and now it's time to start coding. Our previous projects have used hand-crafted queries constructed by string manipulation.\nIs there any best\/standard practice for interfacing between python and a complex database layout?\nI've briefly evaluated SQLAlchemy, SQLObject, and Django-ORM, but (I may easily be missing something) they seem tuned for tiny web-type (OLTP) transactions, where I'm doing high-volume analytical (OLAP) transactions.\nSome of my requirements, that may be somewhat different than usual:\n\nload large amounts of data relatively quickly\nupdate\/insert small amounts of data quickly and easily\nhandle large numbers of rows easily (300 entries per minute over 5 years)\nallow for modifications in the schema, for future requirements\n\nWriting these queries is easy, but writing the code to get the data all lined up is tedious, especially as the schema evolves. This seems like something that a computer might be good at?","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":3376,"Q_Id":3782386,"Users Score":6,"Answer":"Don't get confused by your requirements. One size does not fit all.\n\nload large amounts of data relatively quickly\n\nWhy not use the databases's native loaders for this? Use Python to prepare files, but use database tools to load. You'll find that this is amazingly fast. \n\nupdate\/insert small amounts of data quickly and easily\n\nThat starts to bend the rules of a data warehouse. Unless you're talking about Master Data Management to update reporting attributes of a dimension.\nThat's what ORM's and web frameworks are for.\n\nhandle large numbers of rows easily (300 entries per minute over 5 years)\n\nAgain, that's why you use a pipeline of Python front-end processing, but the actual INSERT's are done by database tools. Not Python. \n\nalter schema (along with python interface) easily, for future requirements\n\nYou have almost no use for automating this. It's certainly your lowest priority task for \"programming\". You'll often do this manually in order to preserve data properly.\nBTW, \"hand-crafted queries constructed by string manipulation\" is probably the biggest mistake ever. 
These are hard for the RDBMS parser to handle -- they're slower than using queries that have bind variables inserted.","Q_Score":10,"Tags":"python,django-models,sqlalchemy,data-warehouse,olap","A_Id":3782509,"CreationDate":"2010-09-23T20:40:00.000","Title":"Python: interact with complex data warehouse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Howdie stackoverflow people!\nSo I've been doing some digging regarding these NoSQL databases, MongoDB, CouchDB etc. Though I am still not sure about real time-ish stuff therefore I thought i'd ask around to see if someone have any practical experience.\nLet's think about web stuff, let's say we've got a very dynamic super ajaxified webapp that asks for various types of data every 5-20 seconds, our backend is python or php or anything other than java really... in cases such as these obviously a MySQL or similar db would be under heavy pressure (with lots of users), would MongoDB \/ CouchDB run this without breaking a sweat and without the need to create some super ultra complex cluster\/caching etc solution?\nYes, that's basically my question, if you think that no.. then yes I know there are several types of solutions for this, nodeJS\/websockets\/antigravity\/worm-hole super tech, but I am just interested in these NoSQL things atm and more specifically if they can handle this type of thing.\nLet's say we have 5000 users at the same time, every 5, 10 or 20 seconds ajax requests that updates various interfaces.\nShoot ;]","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1738,"Q_Id":3798728,"Users Score":0,"Answer":"It depends heavily on the server running said NoSQL solution, amount of data etc... I have played around with Mongo a bit and it is very easy to setup multiple servers to run simultaneously and you would most likely be able to accomplish high concurrency by starting multiple instances on the same box and having them act like a cluster. Luckily Mongo, at least, handles all the specifics so servers can be killed and introduced without skipping a beat (depending on version). By default I believe the max connections is 1000 so starting 5 servers with said configuration would suffice (if your server can handle it obviously) but realistically you would most likely never be hitting 5000 users at the exact same time.\nI hope for your hardware's sake you would at least come up with a solution that can check to see if new data is available before a full-on fetch. Either via timestamps or Memcache etc...\nOverall I would tend to believe NoSQL would be much faster than traditional databases assuming you are fetching data and not running reports etc... and your datastore design is intelligent enough to compensate for the lack of complex joins.","Q_Score":2,"Tags":"php,python,ajax,mongodb,real-time","A_Id":3799207,"CreationDate":"2010-09-26T16:31:00.000","Title":"MongoDB for realtime ajax stuff?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Howdie stackoverflow people!\nSo I've been doing some digging regarding these NoSQL databases, MongoDB, CouchDB etc. 
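For the point above about bind variables versus string-built SQL, a minimal DB-API contrast using sqlite3 (the table and values are made up); the parameterized form avoids quoting problems and lets the database reuse the prepared statement:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")

name, age = "O'Brien", 42

# String manipulation, the discouraged form: it would break on the quote.
# conn.execute("INSERT INTO users VALUES ('%s', %d)" % (name, age))

# Bind variables, the recommended form:
conn.execute("INSERT INTO users VALUES (?, ?)", (name, age))
conn.commit()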
Though I am still not sure about real time-ish stuff therefore I thought i'd ask around to see if someone have any practical experience.\nLet's think about web stuff, let's say we've got a very dynamic super ajaxified webapp that asks for various types of data every 5-20 seconds, our backend is python or php or anything other than java really... in cases such as these obviously a MySQL or similar db would be under heavy pressure (with lots of users), would MongoDB \/ CouchDB run this without breaking a sweat and without the need to create some super ultra complex cluster\/caching etc solution?\nYes, that's basically my question, if you think that no.. then yes I know there are several types of solutions for this, nodeJS\/websockets\/antigravity\/worm-hole super tech, but I am just interested in these NoSQL things atm and more specifically if they can handle this type of thing.\nLet's say we have 5000 users at the same time, every 5, 10 or 20 seconds ajax requests that updates various interfaces.\nShoot ;]","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1738,"Q_Id":3798728,"Users Score":2,"Answer":"Let's say we have 5000 users at the\n same time, every 5, 10 or 20 seconds\n ajax requests that updates various\n interfaces.\n\nOK, so to get this right, you're talking about 250 to 1000 writes per second? Yeah, MongoDB can handle that.\nThe real key on performance is going to be whether or not these are queries, updates or inserts.\nFor queries, Mongo can probably handle this load. It's really going to be about data size to memory size ratios. If you have a server with 1GB of RAM and 150GB of data, then you're probably not going to get 250 queries \/ second (with any DB technology). But with reasonable hardware specs, Mongo can hit this speed on a single 64-bit server.\nIf you have 5,000 active users and you're constantly updating existing records then Mongo will be really fast (on par with updating memcached on a single machine). The reason here is simply that Mongo will likely keep the record in memory. So a user will send updates every 5 seconds and the in-memory object will be updated.\nIf you are constantly inserting new records, then the limitation is really going to be one of throughput. When you're writing lots of new data, you're also forcing the index to expand. So if you're planning to pump in Gigs of new data, then you risk saturating the disk throughput and you'll need to shard.\nSo based on your questions, it looks like you're mostly querying\/updating. You'll be writing new records, but not 1000 new records \/ second. If this is the case, then MongoDB is probably right for you. It will definitely get around a lot of caching concerns.","Q_Score":2,"Tags":"php,python,ajax,mongodb,real-time","A_Id":3801074,"CreationDate":"2010-09-26T16:31:00.000","Title":"MongoDB for realtime ajax stuff?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've heard of redis-cache but how exactly does it work? Is it used as a layer between django and my rdbms, by caching the rdbms queries somehow? \nOr is it supposed to be used directly as the database? Which I doubt, since that github page doesn't cover any login details, no setup.. 
just tells you to set some config property.","AnswerCount":5,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":75541,"Q_Id":3801379,"Users Score":61,"Answer":"Just because Redis stores things in-memory does not mean that it is meant to be a cache. I have seen people using it as a persistent store for data.\nThat it can be used as a cache is a hint that it is useful as a high-performance storage. If your Redis system goes down though you might loose data that was not been written back onto the disk again. There are some ways to mitigate such dangers, e.g. a hot-standby replica.\nIf your data is 'mission-critical', like if you run a bank or a shop, Redis might not be the best pick for you. But if you write a high-traffic game with persistent live data or some social-interaction stuff and manage the probability of data-loss to be quite acceptable, then Redis might be worth a look.\nAnyway, the point remains, yes, Redis can be used as a database.","Q_Score":107,"Tags":"python,django,redis","A_Id":7722260,"CreationDate":"2010-09-27T05:48:00.000","Title":"How can I use redis with Django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"FTS3\/FTS4 doesn't work in python by default (up to 2.7). I get the error: sqlite3.OperationalError: no such module: fts3\nor\nsqlite3.OperationalError: no such module: fts4\nHow can this be resolved?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6571,"Q_Id":3823659,"Users Score":0,"Answer":"What Naveen said but =>\nFor Windows installations:\nWhile running setup.py for for package installations... Python 2.7 searches for an installed Visual Studio 2008. You can trick Python to use Visual Studio by setting\n\nSET VS90COMNTOOLS=%VS100COMNTOOLS%\n\nbefore calling setup.py.","Q_Score":13,"Tags":"python,sqlite,full-text-search,fts3,fts4","A_Id":12372189,"CreationDate":"2010-09-29T16:22:00.000","Title":"How to setup FTS3\/FTS4 with python2.7 on Windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"FTS3\/FTS4 doesn't work in python by default (up to 2.7). I get the error: sqlite3.OperationalError: no such module: fts3\nor\nsqlite3.OperationalError: no such module: fts4\nHow can this be resolved?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":6571,"Q_Id":3823659,"Users Score":2,"Answer":"never mind.\ninstalling pysqlite from source was easy and sufficient.\npython setup.py build_static install fts3 is enabled by default when installing from source.","Q_Score":13,"Tags":"python,sqlite,full-text-search,fts3,fts4","A_Id":3826412,"CreationDate":"2010-09-29T16:22:00.000","Title":"How to setup FTS3\/FTS4 with python2.7 on Windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 100 mega bytes sqlite db file that I would like to load to memory before performing sql queries. 
Is it possible to do that in python?\nThanks","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":12928,"Q_Id":3826552,"Users Score":2,"Answer":"If you are using Linux, you can try tmpfs which is a memory-based file system.\nIt's very easy to use it:\n\nmount tmpfs to a directory.\ncopy sqlite db file to the directory.\nopen it as normal sqlite db file.\n\nRemember, anything in tmpfs will be lost after reboot. So, you may copy db file back to disk if it changed.","Q_Score":8,"Tags":"python,sql,memory,sqlite","A_Id":25521707,"CreationDate":"2010-09-29T23:10:00.000","Title":"In python, how can I load a sqlite db completely to memory before connecting to it?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working on developing this analytical tool to help interpret and analyze a database that is bundled within the package. It is very important for us to secure the database in a way that can only be accessed with our software. What is the best way of achieving it in Python? \nI am aware that there may not be a definitive solution, but deterrence is what really matters here.\nThank you very much.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4184,"Q_Id":3848658,"Users Score":3,"Answer":"This question comes up on the SQLite users mailing list about once a month.\nNo matter how much encryption etc you do, if the database is on the client machine then the key to decrypt will also be on the machine at some point. An attacker will be able to get that key since it is their machine.\nA better way of looking at this is in terms of money - how much would a bad guy need to spend in order to get the data. This will generally be a few hundred dollars at most. And all it takes is any one person to get the key and they can then publish the database for everyone.\nSo either go for a web service as mentioned by Donal or just spend a few minutes obfuscating the database. For example if you use APSW then you can write a VFS in a few lines that XORs the database content so regular SQLite will not open it, nor will a file viewer show the normal SQLite header. (There is example code in APSW showing how to do this.)\nConsequently anyone who does have the database content had to knowingly do so.","Q_Score":4,"Tags":"python,database,sqlite,encryption","A_Id":3850560,"CreationDate":"2010-10-03T04:55:00.000","Title":"Encrypting a Sqlite db file that will be bundled in a pyexe file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an existing sqlite3 db file, on which I need to make some extensive calculations. 
Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), so there should be no problem to load it into memory.\nIs there a Pythonic way to load the existing file into memory in order to speed up the calculations?","AnswerCount":10,"Available Count":2,"Score":-0.0199973338,"is_accepted":false,"ViewCount":46619,"Q_Id":3850022,"Users Score":-1,"Answer":"sqlite supports in-memory databases.\nIn python, you would use a :memory: database name for that.\nPerhaps you could open two databases (one from the file, an empty one in-memory), migrate everything from the file database into memory, then use the in-memory database further to do calculations.","Q_Score":72,"Tags":"python,performance,sqlite","A_Id":3850164,"CreationDate":"2010-10-03T13:55:00.000","Title":"How to load existing db file to memory in Python sqlite3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an existing sqlite3 db file, on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), so there should be no problem to load it into memory.\nIs there a Pythonic way to load the existing file into memory in order to speed up the calculations?","AnswerCount":10,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":46619,"Q_Id":3850022,"Users Score":0,"Answer":"With the solution of Cenk Alti, I always had a MemoryError with Python 3.7, when the process reached 500MB. Only with the use of the backup functionality of sqlite3 (mentioned by thinwybk), I was able to to load and save bigger SQLite databases. Also you can do the same with just 3 lines of code, both ways.","Q_Score":72,"Tags":"python,performance,sqlite","A_Id":57569063,"CreationDate":"2010-10-03T13:55:00.000","Title":"How to load existing db file to memory in Python sqlite3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Looking around for a noSQL database implementation that has an ORM syntax (pref. like Django's), lets me store and retrieve nested dictionary attributes but written entirely in Python to ease deployment and avoids Javascript syntax for map\/reduce. Even better if it has a context-aware (menus), python-based console, as well as being able to run as a separate daemon task. Is there such an initiative already (I can't find it) or should I start one?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1830,"Q_Id":3865283,"Users Score":2,"Answer":"I don't know about a noSQL solution, but sqlite+sqlalchemy's ORM works pretty well for me. 
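The two answers above describe copying a file database into a :memory: database; with Python 3.7 or later the standard sqlite3 backup API does the copy directly. A minimal sketch, with a placeholder file name:

import sqlite3

disk_conn = sqlite3.connect("existing.db")   # placeholder path to the file DB
mem_conn = sqlite3.connect(":memory:")
disk_conn.backup(mem_conn)                   # copy every page into memory
disk_conn.close()

# All further queries run against the in-memory copy.
count = mem_conn.execute("SELECT count(*) FROM sqlite_master").fetchone()[0]
print("copied %d schema objects into memory" % count)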
As long as it gives you the interface and features you need, I don't see a reason to care whether it uses sql internally.","Q_Score":4,"Tags":"python,mongodb,nosql","A_Id":3865523,"CreationDate":"2010-10-05T15:39:00.000","Title":"Pure Python implementation of MongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Python and SQLAlchemy to query a SQLite FTS3 (full-text) store and I would like to prevent my users from using the - as an operator. How should I escape the - so users can search for a term containing the - (enabled by changing the default tokenizer) instead of it signifying \"does not contain the term following the -\"?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1200,"Q_Id":3865733,"Users Score":1,"Answer":"From elsewhere on the internet it seems it may be possible to surround each search term with double quotes \"some-term\". Since we do not need the subtraction operation, my solution was to replace hyphens - with underscores _ when populating the search index and when performing searches.","Q_Score":2,"Tags":"python,sqlite,sqlalchemy,fts3","A_Id":3942449,"CreationDate":"2010-10-05T16:32:00.000","Title":"How do I escape the - character in SQLite FTS3 queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to restore the current working database to the data stored in a .sql file from within Django. Whats the best way to do this? Does django have an good way to do this or do I need to grab the connection string from the settings.py file and send command line mysql commands to do this?\nThanks for your help.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":182,"Q_Id":3866989,"Users Score":1,"Answer":"You can't import sql dumps through django; import it through mysql directly, if you run mysql locally you can find various graphical mysql clients that can help you with doing so; if you need to do it remotely, find out if your server has any web interfaces for that installed!","Q_Score":0,"Tags":"python,mysql,django","A_Id":3868544,"CreationDate":"2010-10-05T19:28:00.000","Title":"How do I replace the current working MySQL database with a .sql file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i just want to use an entity modify it to show something,but don't want to change to the db,\nbut after i use it ,and in some other place do the session.commit()\nit will add this entity to db,i don't want this happen,\nany one could help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":3881364,"Users Score":1,"Answer":"You can expunge it from session before modifying object, then this changes won't be accounted on next commits unless you add the object back to session. 
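A minimal, self-contained sketch of that detach-before-modify pattern; the model, the in-memory SQLite engine and the values are all hypothetical:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widgets"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Widget(name="original"))
session.commit()

w = session.query(Widget).get(1)
session.expunge(w)               # detach: the session stops tracking w
w.name = "display-only tweak"    # local change, never flushed
session.commit()                 # commits other pending work, not w

print(session.query(Widget).get(1).name)   # still prints "original"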
Just call session.expunge(obj).","Q_Score":0,"Tags":"python,sqlalchemy,entity","A_Id":3896280,"CreationDate":"2010-10-07T11:56:00.000","Title":"use sqlalchemy entity isolately","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying out Sphinx search in my Django project. All setup done & it works but need some clarification from someone who has actually used this setup.\nIn my Sphinx search while indexing, I have used 'name' as the field in my MySQL to be searchable & all other fields in sql_query to be as attributes (according to Sphinx lingo). \nSo when I search from my Model instance in Django, I get the search results alright but it does not have the 'name' field in the search results. I get all the other attributes. \nHowever, I get the 'id' of the search term. Technically, I could get the 'name' by again querying MySQL but I want to avoid this. Is there anything I am not doing here?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":395,"Q_Id":3897650,"Users Score":1,"Answer":"Here's a shot in the dark - \nTry to get the name of your index in sphinx.conf same as the table_name you are trying to index. This is a quirk which is missed by lot of people.","Q_Score":0,"Tags":"python,django,search,full-text-search,django-sphinx","A_Id":4121651,"CreationDate":"2010-10-09T20:02:00.000","Title":"Django Sphinx Text Search","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently busy making a Python ORM which gets all of its information from a RDBMS via introspection (I would go with XRecord if I was happy with it in other respects) \u2014 meaning, the end-user only tells which tables\/views to look at, and the ORM does everything else automatically (if it makes you actually write something and you're not looking for weird things and dangerous adventures, it's a bug).\nThe major part of that is detecting relationships, provided that the database has all relevant constraints in place and you have no naming conventions at all \u2014\u00a0I want to be able to have this ORM work with a database made by any crazy DBA which has his own views on what the columns and tables should be named like. And I'm stuck at many-to-many relationships.\nFirst, there can be compound keys. Then, there can be MTM relationships with three or more tables. Then, a MTM intermediary table might have its own data apart from keys \u2014\u00a0some data common to all tables it ties together.\nWhat I want is a method to programmatically detect that a table X is an intermediary table tying tables A and B, and that any non-key data it has must belong to both A and B (and if I change a common attribute from within A, it should affect the same attribute in B). Are there common algorithms to do that? Or at least to make guesses which are right in 80% of the cases (provided the DBA is sane)?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":241,"Q_Id":3901961,"Users Score":0,"Answer":"So far, I see the only one technique covering more than two tables in relation. A table X is assumed related to table Y, if and only if X is referenced to Y no more than one table away. 
That is:\n\"Zero tables away\" means X contains the foreign key to Y. No big deal, that's how we detect many-to-ones.\n\"One table away\" means there is a table Z which itself has a foreign key referencing table X (these are easy to find), and a foreign key referencing table Y.\nThis reduces the scope of traits to look for a lot (we don't have to care if the intermediary table has any other attributes), and it covers any number of tables tied together in a MTM relation.\nIf there are some interesting links or other methods, I'm willing to hear them.","Q_Score":2,"Tags":"python,orm,metaprogramming,introspection,relationships","A_Id":3902410,"CreationDate":"2010-10-10T19:59:00.000","Title":"What are methods of programmatically detecting many-to-many relationships in a RDMBS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently busy making a Python ORM which gets all of its information from a RDBMS via introspection (I would go with XRecord if I was happy with it in other respects) \u2014 meaning, the end-user only tells which tables\/views to look at, and the ORM does everything else automatically (if it makes you actually write something and you're not looking for weird things and dangerous adventures, it's a bug).\nThe major part of that is detecting relationships, provided that the database has all relevant constraints in place and you have no naming conventions at all \u2014\u00a0I want to be able to have this ORM work with a database made by any crazy DBA which has his own views on what the columns and tables should be named like. And I'm stuck at many-to-many relationships.\nFirst, there can be compound keys. Then, there can be MTM relationships with three or more tables. Then, a MTM intermediary table might have its own data apart from keys \u2014\u00a0some data common to all tables it ties together.\nWhat I want is a method to programmatically detect that a table X is an intermediary table tying tables A and B, and that any non-key data it has must belong to both A and B (and if I change a common attribute from within A, it should affect the same attribute in B). Are there common algorithms to do that? Or at least to make guesses which are right in 80% of the cases (provided the DBA is sane)?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":241,"Q_Id":3901961,"Users Score":1,"Answer":"If you have to ask, you shouldn't be doing this. I'm not saying that to be cruel, but Python already has several excellent ORMs that are well-tested and widely used. For example, SQLAlchemy supports the autoload=True attribute when defining tables that makes it read the table definition - including all the stuff you're asking about - directly from the database. Why re-invent the wheel when someone else has already done 99.9% of the work?\nMy answer is to pick a Python ORM (such as SQLAlchemy) and add any \"missing\" functionality to that instead of starting from scratch. If it turns out to be a good idea, release your changes back to the main project so that everyone else can benefit from them. 
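The answer above points at SQLAlchemy's built-in reflection rather than hand-rolled introspection; a rough sketch of the "link table" heuristic from the accepted answer, written with SQLAlchemy's Inspector, where the connection URL is a placeholder and any table whose foreign keys reach two or more distinct tables is flagged:

from sqlalchemy import create_engine
from sqlalchemy.engine.reflection import Inspector

engine = create_engine("sqlite:///legacy.db")    # placeholder URL
insp = Inspector.from_engine(engine)

for table in insp.get_table_names():
    referred = set(fk["referred_table"] for fk in insp.get_foreign_keys(table))
    if len(referred) >= 2:
        print("%s looks like a link table joining %s" % (table, sorted(referred)))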
If it doesn't work out like you hoped, at least you'll already be using a common ORM that many other programmers can help you with.","Q_Score":2,"Tags":"python,orm,metaprogramming,introspection,relationships","A_Id":3902041,"CreationDate":"2010-10-10T19:59:00.000","Title":"What are methods of programmatically detecting many-to-many relationships in a RDMBS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently busy making a Python ORM which gets all of its information from a RDBMS via introspection (I would go with XRecord if I was happy with it in other respects) \u2014 meaning, the end-user only tells which tables\/views to look at, and the ORM does everything else automatically (if it makes you actually write something and you're not looking for weird things and dangerous adventures, it's a bug).\nThe major part of that is detecting relationships, provided that the database has all relevant constraints in place and you have no naming conventions at all \u2014\u00a0I want to be able to have this ORM work with a database made by any crazy DBA which has his own views on what the columns and tables should be named like. And I'm stuck at many-to-many relationships.\nFirst, there can be compound keys. Then, there can be MTM relationships with three or more tables. Then, a MTM intermediary table might have its own data apart from keys \u2014\u00a0some data common to all tables it ties together.\nWhat I want is a method to programmatically detect that a table X is an intermediary table tying tables A and B, and that any non-key data it has must belong to both A and B (and if I change a common attribute from within A, it should affect the same attribute in B). Are there common algorithms to do that? Or at least to make guesses which are right in 80% of the cases (provided the DBA is sane)?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":3901961,"Users Score":0,"Answer":"Theoretically, any table with multiple foreign keys is in essence a many-to-many relation, which makes your question trivial. I suspect that what you need is a heuristic of when to use MTM patterns (rather than standard classes) in the object model. In that case, examine what are the limitations of the patterns you chose.\nFor example, you can model a simple MTM relationship (two tables, no attributes) by having lists as attributes on both types of objects. However, lists will not be enough if you have additional data on the relationship itself. So only invoke this pattern for tables with two columns, both with foreign keys.","Q_Score":2,"Tags":"python,orm,metaprogramming,introspection,relationships","A_Id":3902030,"CreationDate":"2010-10-10T19:59:00.000","Title":"What are methods of programmatically detecting many-to-many relationships in a RDMBS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Well, the question pretty much summarises it. My db activity is very update intensive, and I want to programmatically issue a Vacuum Analyze. However I get an error that says that the query cannot be executed within a transaction. 
Is there some other way to do it?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5030,"Q_Id":3931951,"Users Score":14,"Answer":"This is a flaw in the Python DB-API: it starts a transaction for you. It shouldn't do that; whether and when to start a transaction should be up to the programmer. Low-level, core APIs like this shouldn't babysit the developer and do things like starting transactions behind our backs. We're big boys--we can start transactions ourself, thanks.\nWith psycopg2, you can disable this unfortunate behavior with an API extension: run connection.autocommit = True. There's no standard API for this, unfortunately, so you have to depend on nonstandard extensions to issue commands that must be executed outside of a transaction.\nNo language is without its warts, and this is one of Python's. I've been bitten by this before too.","Q_Score":9,"Tags":"python,postgresql,sqlalchemy,psycopg2,vacuum","A_Id":3932055,"CreationDate":"2010-10-14T09:49:00.000","Title":"Is it possible to issue a \"VACUUM ANALYZE \" from psycopg2 or sqlalchemy for PostgreSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi so this is what I understand how Openid works:-\n\nthe user enters his openid url on the site say\"hii.com\"\nThe app does a redirect to the openid provider and either does the login or denies it and sends the response back to the site i.e\"hii.com\"\nIf authentication was succesful then the response object provided by the openid provider can contain other data too like email etc if \"hii.com\" had requested for it.\nI can save this data in the database.\n\nPlease correct me if I am wrong. However what I am not understanding here is the concept of stores. I see openid.store.filestore,nonce,sqlstore. Could someone please provide some clarity on it. What role does this store play here.\nI have gone through python openid docs but end up feeling clueless.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":488,"Q_Id":3937456,"Users Score":1,"Answer":"upd.: my previous answer was wrong\nThe store you are referring to is where your app stores the data during auth.\nStoring it in a shared memcached instance should be the best option (faster than db and reliable enough).","Q_Score":3,"Tags":"python,openid,store","A_Id":3937506,"CreationDate":"2010-10-14T20:51:00.000","Title":"what is the concept of store in OpenID","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"... vs declarative sqlalchemy ?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":4024,"Q_Id":3957938,"Users Score":1,"Answer":"The Elixir syntax is something I find useful when building a database for a given app from scratch and everything is all figured out beforehand.\nI have had my best luck with SQLAlchemy when using it on legacy databases (and on other similarly logistically immutable schemas). 
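A minimal sketch of the autocommit approach from the VACUUM ANALYZE answer above; the connection string and table name are placeholders, and the connection.autocommit attribute assumes a reasonably recent psycopg2:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me password=secret host=localhost")
conn.autocommit = True           # stop psycopg2 from opening a transaction
cur = conn.cursor()
cur.execute("VACUUM ANALYZE my_table")
cur.close()
conn.close()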
Particularly useful is the plugin SQLSoup, for read-only one-time extractions of data in preparation for migrating it elsewhere.\nYMMV but Elixir isn't really designed to adapt to older schemas -- and SQLAlchemy proper is overkill for most small- to mid-size projects (in my opinion of course).","Q_Score":7,"Tags":"python,sqlalchemy,python-elixir","A_Id":3975114,"CreationDate":"2010-10-18T09:36:00.000","Title":"What are the benefits of using Elixir","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc, I will import the data into a mySQL database on Linux.\nI intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment.\nThe db has IIRC well over half a million rows of data. My questions are:\n\nIs the number of records a cause for concern? (i.e. Will I hit some limits)?\nIs there a better file format for the transitory data (instead of CSV)?\n\nI chose CSv because it is quite simple and straightforward (and I am a Python newbie) - but \nI would like to hear from someone who may have done something similar before.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":8978,"Q_Id":3964378,"Users Score":5,"Answer":"Memory usage for csvfile.reader and csvfile.writer isn't proportional to the number of records, as long as you iterate correctly and don't try to load the whole file into memory. That's one reason the iterator protocol exists. Similarly, csvfile.writer writes directly to disk; it's not limited by available memory. You can process any number of records with these without memory limitations.\nFor simple data structures, CSV is fine. It's much easier to get fast, incremental access to CSV than more complicated formats like XML (tip: pulldom is painfully slow).","Q_Score":1,"Tags":"python,ms-access,csv,odbc","A_Id":3964635,"CreationDate":"2010-10-18T23:49:00.000","Title":"is there a limit to the (CSV) filesize that a Python script can read\/write?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc, I will import the data into a mySQL database on Linux.\nI intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment.\nThe db has IIRC well over half a million rows of data. My questions are:\n\nIs the number of records a cause for concern? (i.e. Will I hit some limits)?\nIs there a better file format for the transitory data (instead of CSV)?\n\nI chose CSv because it is quite simple and straightforward (and I am a Python newbie) - but \nI would like to hear from someone who may have done something similar before.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":8978,"Q_Id":3964378,"Users Score":1,"Answer":"I wouldn't bother using an intermediate format. 
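For the accepted answer above about csv streaming, a rough sketch of pushing rows through one at a time so memory use stays flat; the file names are placeholders and the binary file modes follow the Python 2 csv convention of the question's era:

import csv

with open("access_dump.csv", "rb") as src, open("cleaned.csv", "wb") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:                    # one row at a time, constant memory
        writer.writerow([col.strip() for col in row])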
Pulling from Access via ADO and inserting right into MySQL really shouldn't be an issue.","Q_Score":1,"Tags":"python,ms-access,csv,odbc","A_Id":3964404,"CreationDate":"2010-10-18T23:49:00.000","Title":"is there a limit to the (CSV) filesize that a Python script can read\/write?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc, I will import the data into a mySQL database on Linux.\nI intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment.\nThe db has IIRC well over half a million rows of data. My questions are:\n\nIs the number of records a cause for concern? (i.e. Will I hit some limits)?\nIs there a better file format for the transitory data (instead of CSV)?\n\nI chose CSv because it is quite simple and straightforward (and I am a Python newbie) - but \nI would like to hear from someone who may have done something similar before.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":8978,"Q_Id":3964378,"Users Score":0,"Answer":"The only limit should be operating system file size.\nThat said, make sure when you send the data to the new database, you're writing it a few records at a time; I've seen people do things where they try to load the entire file first, then write it.","Q_Score":1,"Tags":"python,ms-access,csv,odbc","A_Id":3964398,"CreationDate":"2010-10-18T23:49:00.000","Title":"is there a limit to the (CSV) filesize that a Python script can read\/write?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || \" \" || second_string, but it looks very ugly.","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2632,"Q_Id":3976313,"Users Score":0,"Answer":"There isn't one.","Q_Score":0,"Tags":"python,sqlite,string","A_Id":3976347,"CreationDate":"2010-10-20T09:16:00.000","Title":"SQLite equivalent of Python's \"'%s %s' % (first_string, second_string)\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || \" \" || second_string, but it looks very ugly.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":2632,"Q_Id":3976313,"Users Score":2,"Answer":"I can understand not liking first_string || ' ' || second_string, but that's the equivalent. Standard SQL (which SQLite speaks in this area) just isn't the world's prettiest string manipulation language. 
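A minimal illustration of the || operator the answer mentions, next to the Python-side formatting it goes on to recommend; the table and values are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first TEXT, last TEXT)")
conn.execute("INSERT INTO people VALUES (?, ?)", ("Ada", "Lovelace"))

# Concatenation inside SQLite:
full = conn.execute("SELECT first || ' ' || last FROM people").fetchone()[0]

# The same result built on the Python side instead:
first, last = conn.execute("SELECT first, last FROM people").fetchone()
full = '%s %s' % (first, last)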
You could try getting the results of the query back into some other language (e.g., Python which you appear to like) and doing the concatenation there; it's usually best to not do \"presentation\" in the database layer (and definitely not a good idea to use the result of concatenation as something to search against; that makes it impossible to optimize with indices!)","Q_Score":0,"Tags":"python,sqlite,string","A_Id":3976353,"CreationDate":"2010-10-20T09:16:00.000","Title":"SQLite equivalent of Python's \"'%s %s' % (first_string, second_string)\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been asked to encrypt various db fields within the db.\nProblem is that these fields need be decrypted after being read.\n\nI'm using Django and SQL Server 2005.\nAny good ideas?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":13590,"Q_Id":3979385,"Users Score":2,"Answer":"If you are storing things like passwords, you can do this:\n\nstore users' passwords as their SHA256 hashes\nget the user's password\nhash it\nList item\n\ncheck it against the stored password\nYou can create a SHA-256 hash in Python by using the hashlib module.\nHope this helps","Q_Score":17,"Tags":"python,sql,sql-server,django,encryption","A_Id":3979447,"CreationDate":"2010-10-20T15:12:00.000","Title":"A good way to encrypt database fields?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been asked to encrypt various db fields within the db.\nProblem is that these fields need be decrypted after being read.\n\nI'm using Django and SQL Server 2005.\nAny good ideas?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":13590,"Q_Id":3979385,"Users Score":6,"Answer":"Yeah. Tell whoever told you to get real. Makes no \/ little sense. If it is about the stored values - enterprise edition 2008 can store encrypted DB files.\nOtherwise, if you really need to (with all disadvantages) just encrypt them and store them as byte fields.","Q_Score":17,"Tags":"python,sql,sql-server,django,encryption","A_Id":3979446,"CreationDate":"2010-10-20T15:12:00.000","Title":"A good way to encrypt database fields?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'd like to build a \"feed\" for recent activity related to a specific section of my site. 
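Going back to the password-hashing suggestion in the encryption answer above, a minimal hashlib sketch of storing and checking a hash instead of the plaintext value. The salt handling is my addition (not part of the original answer), and for serious password storage a slow, salted scheme would normally be preferred over a bare SHA-256.

import hashlib

def hash_password(password, salt):
    # store the hex digest plus the salt in the DB, never the plaintext password
    return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

stored = hash_password("s3cret", "per-user-random-salt")          # at registration
assert stored == hash_password("s3cret", "per-user-random-salt")  # at login, compare hashes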
I haven't used memcache before, but I'm thinking of something like this:\n\nWhen a new piece of information is submitted to the site, assign a unique key to it and also add it to memcache.\nAdd this key to the end of an existing list in memcache, so it can later be referenced.\nWhen retrieving, first retrieve the list of keys from memcache\nFor each key retrieved, retrieve the individual piece of information\nString the pieces together and return them as the \"feed\"\n\nE.g., user comments: user writes, \"Nice idea\"\n\nAssign a unique key to \"Nice idea,\" let's say key \"1234\"\nInsert a key\/data pair into memcache, 1234 -> \"Nice Idea\"\nAppend \"1234\" to an existing list of keys: key_list -> {2341,41234,124,341,1234}\nNow when retrieving, first query the key list: {2341,41234,124,341,1234}\nFor each key in the key list, retrieve the data:\n2341 -> \"Yes\"\n41234 -> \"Good point\"\n124 -> \"That's funny\"\n341 -> \"I don't agree\"\n1234 -> \"Nice Idea\"\n\nIs this a good approach?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":351,"Q_Id":3999496,"Users Score":0,"Answer":"If the list of keys is bounded in size then it should be ok. memcache by default has a 1MB item size limit.\nSounds like memcache is the only storage for the data, is it a good idea?","Q_Score":0,"Tags":"python,memcached,feed","A_Id":4006612,"CreationDate":"2010-10-22T17:43:00.000","Title":"Best way to keep an activity log in memcached","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to implement a python script which writes and reads to a database to track changes within a 3d game (Minecraft) These changes are done by various clients and can be represented by player name, coordinates (x,y,z), and a description. I am storing a high volume of changes and would like to know what would be an easy and preferably fast way to store and retrieve these changes. What kinds of databases that would be suited to this job?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":4000072,"Users Score":0,"Answer":"Any kind. A NoSQL option like MongoDB might be especially interesting.","Q_Score":0,"Tags":"python,database,change-tracking","A_Id":4000101,"CreationDate":"2010-10-22T19:02:00.000","Title":"Suitable kind of database to track a high volume of changes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":1335,"Q_Id":4001314,"Users Score":2,"Answer":"I don't see why not. 
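Back on the memcached activity-feed idea a little further up, a rough sketch of the append-key-then-multi-get pattern using the python-memcached client. The server address, key names, and the unguarded read-modify-write of the key list are all simplifications of mine; in production the list update would need to be made atomic or bounded.

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def add_comment(comment_id, text):
    mc.set("comment:%s" % comment_id, text)
    key_list = mc.get("key_list") or []      # not atomic -- fine for a sketch, racy under load
    key_list.append(comment_id)
    mc.set("key_list", key_list)

def get_feed():
    key_list = mc.get("key_list") or []
    items = mc.get_multi(["comment:%s" % k for k in key_list])
    return [items.get("comment:%s" % k) for k in key_list]

add_comment(1234, "Nice idea")
print(get_feed())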
As a related real-world example, WordPress stores serialized PHP arrays as a single value in many instances.","Q_Score":1,"Tags":"python,mysql,json","A_Id":4001358,"CreationDate":"2010-10-22T22:05:00.000","Title":"Storing JSON in MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1335,"Q_Id":4001314,"Users Score":0,"Answer":"I think,It's beter serialize your XML.If you are using python language ,cPickle is good choice.","Q_Score":1,"Tags":"python,mysql,json","A_Id":4008102,"CreationDate":"2010-10-22T22:05:00.000","Title":"Storing JSON in MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1335,"Q_Id":4001314,"Users Score":5,"Answer":"If you need to query based on the values within the JSON, it would be better to store the values separately.\nIf you are just loading a set of configurations like you say you are doing, storing the JSON directly in the database works great and is a very easy solution.","Q_Score":1,"Tags":"python,mysql,json","A_Id":4001338,"CreationDate":"2010-10-22T22:05:00.000","Title":"Storing JSON in MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":1335,"Q_Id":4001314,"Users Score":2,"Answer":"No different than people storing XML snippets in a database (that doesn't have XML support). Don't see any harm in it, if it really doesn't need to be searched at the DB level. And the great thing about JSON is how parseable it is.","Q_Score":1,"Tags":"python,mysql,json","A_Id":4001334,"CreationDate":"2010-10-22T22:05:00.000","Title":"Storing JSON in MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"A new requirement has come down from the top: implement 'proprietary business tech' with the awesome, resilient Elixir database I have set up. I've tried a lot of different things, such as creating an implib from the provided interop DLL (which apparently doesn't work like COM dlls) which didn't work at all. 
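As a concrete sketch of the JSON-in-a-column approach discussed just above: serialize once on the way in, deserialize on the way out, and never query inside the value. The table and column names are invented and MySQLdb is assumed as the driver.

import json
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="game", passwd="secret", db="gamedb")
cur = conn.cursor()

config = {"difficulty": "hard", "map": "arena_2", "max_players": 8}

# write: store the serialized configuration as an opaque TEXT/BLOB value
cur.execute("INSERT INTO game_configs (name, config_json) VALUES (%s, %s)",
            ("default", json.dumps(config)))
conn.commit()

# read: fetch the string back and deserialize it in Python
cur.execute("SELECT config_json FROM game_configs WHERE name = %s", ("default",))
loaded = json.loads(cur.fetchone()[0])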
CPython doesn't like the MFC stuff either, so all attempts to create a Python lib have failed (using C anyway, not sure you can create a python library from .NET directly).\nThe only saving grace is the developer saw fit to provide VBA, .NET and MFC Interop C++ hooks into his library, so there are \"some\" choices, though they all ultimately lead back to the same framework. What would be the best method to: \nA) Keep my model definitions in one place, in one language (Python\/Elixir\/SQLAlchemy)\nB) Have this new .NET access the models without resorting to brittle, hard-coded SQL.\nAny and all suggestions are welcome.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":502,"Q_Id":4017164,"Users Score":0,"Answer":"After a day or so of deliberation, I'm attempting to load the new business module in IronPython. Although I don't really want to introduce to python interpreters into my environment, I think that this will be the glue I need to get this done efficiently.","Q_Score":0,"Tags":"sqlalchemy,python-elixir","A_Id":4025154,"CreationDate":"2010-10-25T17:23:00.000","Title":"Loading Elixir\/SQLAlchemy models in .NET?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Django: If I added new tables to database, how can I query them?\nDo I need to create the relevant models first? Or django creates it by itself?\nMore specifically, I installed another django app, it created several database tables in database, and now I want to get some specific data from them? What are the correct approaches? Thank you very much!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":4042286,"Users Score":0,"Answer":"Django doen't follow convention over configuration philosophy. you have to explicitly create the backing model for the table and in the meta tell it about the table name...","Q_Score":0,"Tags":"python,django,django-models,django-admin","A_Id":4042305,"CreationDate":"2010-10-28T11:11:00.000","Title":"Django: If I added new tables to database, how can I query them?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Django: If I added new tables to database, how can I query them?\nDo I need to create the relevant models first? Or django creates it by itself?\nMore specifically, I installed another django app, it created several database tables in database, and now I want to get some specific data from them? What are the correct approaches? 
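To make the "create the backing model and point its Meta at the table" advice above concrete, a minimal sketch of such a model; the field names and table name are placeholders, and managed = False is one common way to keep Django from trying to create or alter a table that another app already owns.

from django.db import models

class ExternalEntry(models.Model):
    title = models.CharField(max_length=200)
    created = models.DateTimeField()

    class Meta:
        db_table = "existing_table_name"   # the table the other app created
        managed = False                    # Django only reads/writes it, never creates it

# then query it like any other model:
# ExternalEntry.objects.filter(title__icontains="foo")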
Thank you very much!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":4042286,"Users Score":1,"Answer":"I suppose another django app has all model files needed to access those tables, you should just try importing those packages and use this app's models.","Q_Score":0,"Tags":"python,django,django-models,django-admin","A_Id":4042337,"CreationDate":"2010-10-28T11:11:00.000","Title":"Django: If I added new tables to database, how can I query them?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm having a problem with file uploading. I'm using FastCGI on Apache2 (unix) to run a WSGI-compliant application. File uploads, in the form of images, are begin saved in a MySQL database. However, larger images are being truncated at 65535 bytes. As far as I can tell, nothing should be limiting the size of the files and I'm not sure which one of the pieces in my solution would be causing the problem.\nIs it FastCGI; can it limit file upload sizes?\nIs it Python? The cgi.FieldStorage object gives me a file handle to the uploaded file which I then read: file.read(). Does this limit file sizes in any way?\nIs it MySQL? The type of the column for saving the image data is a longblob. I figured this could store a couple of GB worth of data. So a few MB shouldn't be a problem, right?\nIs it the flups WSGIServer? I can't find any information regarding this.\nMy file system can definitely handle huge files, so that's not a problem. Any ideas?\nUPDATE:\nIt is MySQL. I got python to output the number of bytes uploaded and it's greater than 65535. So I looked into max_allowed_packet for mysqld and set it to 128M. Overkill, but wanting to be sure for the moment. \nMy only problem now is getting python's MySQLdb to allow the transfer of more than 65535 bytes. Does anyone know how to do this? Might post as a separate question.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1279,"Q_Id":4047899,"Users Score":2,"Answer":"If the web server\/gateway layer were truncating incoming form submissions I'd expect an error from FieldStorage, since the truncation would not just interrupt the file upload but also the whole multipart\/form-data structure. Even if cgi.py tolerated this, it would be very unlikely to have truncated the multipart at just the right place to leave exactly 2**16-1 bytes of file upload.\nSo I would suspect MySQL. LONGBLOB should be fine up to 2**32-1, but 65535 would be the maximum length of a normal BLOB. Are you sure the types are what you think? Check with SHOW CREATE TABLE x. Which database layer are you using to get the data in?","Q_Score":3,"Tags":"python,mysql,file-upload,apache2,fastcgi","A_Id":4047955,"CreationDate":"2010-10-28T23:16:00.000","Title":"Does FastCGI or Apache2 limit upload sizes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"It seems as if MySQLdb is restricting the maximum transfer size for SQL statements. I have set the max_allowed_packet to 128M for mysqld. 
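A quick way to run the SHOW CREATE TABLE check suggested in the upload-truncation answer above, from Python; the connection details and table name are placeholders. A plain BLOB column tops out at 65535 bytes, which would explain truncation at exactly that size.

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="uploads")
cur = conn.cursor()
cur.execute("SHOW CREATE TABLE images")   # placeholder table name
ddl = cur.fetchone()[1]                   # second column holds the CREATE TABLE statement
print(ddl)
if "longblob" not in ddl.lower():
    print("column is not LONGBLOB -- a plain BLOB caps values at 65535 bytes")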
MySQL documentation says that this needs to be done for the client as well.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1923,"Q_Id":4050257,"Users Score":1,"Answer":"You need to put max_allowed_packet into the [client] section of my.cnf on the machine where the client runs. If you want to, you can specify a different file or group in mysqldb.connect.","Q_Score":3,"Tags":"python,mysql","A_Id":4051531,"CreationDate":"2010-10-29T08:36:00.000","Title":"How do I set max_allowed_packet or equivalent for MySQLdb in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When connected to a postgresql database using psycopg and I pull the network cable I get no errors. How can I detect this in code to notify the user?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":214,"Q_Id":4061635,"Users Score":0,"Answer":"You will definitely get an error the next time you try and execute a query, so I wouldn't worry if you can't alert the user at the exact instance they lose there network connection.","Q_Score":2,"Tags":"python,postgresql,psycopg","A_Id":4061641,"CreationDate":"2010-10-31T02:54:00.000","Title":"Python and psycopg detect network error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When connected to a postgresql database using psycopg and I pull the network cable I get no errors. How can I detect this in code to notify the user?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":214,"Q_Id":4061635,"Users Score":0,"Answer":"psycopg can't detect what happens with the network. For example, if you unplug your ethernet cable, replug it and execute a query everything will work OK. You should definitely get an exception when psycopg tries to send some SQL to the backend and there is no network connection but depending on the exact netwokr problem it can take some time. In the worst case you'll have to wait for a TCP timeout on the connection (several tens of seconds).","Q_Score":2,"Tags":"python,postgresql,psycopg","A_Id":4069833,"CreationDate":"2010-10-31T02:54:00.000","Title":"Python and psycopg detect network error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Redis database where we store the navigational information. These data must be persistent and should be fetched faster. I don't have more than 200 MB data in this data set.\nI face problem when writing admin modules for redis db and I really missing the sql schema and power of django style admin modules.\nNow I am thinking of using MySQL. The requirement is, I want the persistent database but the data can be loaded into the memory like redis so that I can do the SQL queries REALLY faster.\nIs it possible to use MySQL in persistent mode and instruct MySQL to use the memory for querying purpose? 
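For the client-side half of the max_allowed_packet advice above, MySQLdb can be pointed at an option file so it picks up the [client] settings; a sketch, with the file path and database name as assumptions.

# ~/.my.cnf on the client machine:
#   [client]
#   max_allowed_packet = 128M

import os
import MySQLdb

conn = MySQLdb.connect(
    read_default_file=os.path.expanduser("~/.my.cnf"),  # option file with the [client] section
    read_default_group="client",                        # which group to read from it
    db="mydb",
)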
What is the best suitable MySQL DB where I do not worry much on consistencies where our writes are very few.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1165,"Q_Id":4061828,"Users Score":1,"Answer":"I would create a read only slave to your mysql database and force its database engines to memory. You'd have to handle failures by re-initializing the read only database, but that can be scripted rather easily.\nThis way you still have your persistence in the regular mysql database and your read speed in the read only memory tables.","Q_Score":1,"Tags":"python,mysql,sqlalchemy,performance","A_Id":4061902,"CreationDate":"2010-10-31T04:18:00.000","Title":"fit mysql db in memory","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Redis database where we store the navigational information. These data must be persistent and should be fetched faster. I don't have more than 200 MB data in this data set.\nI face problem when writing admin modules for redis db and I really missing the sql schema and power of django style admin modules.\nNow I am thinking of using MySQL. The requirement is, I want the persistent database but the data can be loaded into the memory like redis so that I can do the SQL queries REALLY faster.\nIs it possible to use MySQL in persistent mode and instruct MySQL to use the memory for querying purpose? What is the best suitable MySQL DB where I do not worry much on consistencies where our writes are very few.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1165,"Q_Id":4061828,"Users Score":0,"Answer":"I would think you could have a persistent table, copy all of the data into a MEMORY engine table whenever the server starts, and have triggers on the memory db for INSERT UPDATE and DELETE write to the persistent table so it is hidden for the user. Correct me if I'm wrong though, it's just the approach I would first try.","Q_Score":1,"Tags":"python,mysql,sqlalchemy,performance","A_Id":4061848,"CreationDate":"2010-10-31T04:18:00.000","Title":"fit mysql db in memory","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.\nWhen I open the csv file with Ms Excel, a blank row appears below every row, but the file on notepad is fine(without any blanks).\nDoes anyone know why this is happenning and how I can fix it?\nEdit: I used the strip() function for all the attributes before writing a row.\nThanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":7209,"Q_Id":4122794,"Users Score":34,"Answer":"You're using open('file.csv', 'w')--try open('file.csv', 'wb'). 
\nThe Python csv module requires output files be opened in binary mode.","Q_Score":15,"Tags":"python,excel,csv","A_Id":4122980,"CreationDate":"2010-11-08T10:00:00.000","Title":"Csv blank rows problem with Excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.\nWhen I open the csv file with Ms Excel, a blank row appears below every row, but the file on notepad is fine(without any blanks).\nDoes anyone know why this is happenning and how I can fix it?\nEdit: I used the strip() function for all the attributes before writing a row.\nThanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7209,"Q_Id":4122794,"Users Score":0,"Answer":"the first that comes into my mind (just an idea) is that you might have used \"\\r\\n\" as row delimiter (which is shown as one linebrak in notepad) but excel expects to get only \"\\n\" or only \"\\r\" and so it interprets this as two line-breaks.","Q_Score":15,"Tags":"python,excel,csv","A_Id":4122816,"CreationDate":"2010-11-08T10:00:00.000","Title":"Csv blank rows problem with Excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple question. I'm doing some light crawling so new content arrives every few days. I've written a tokenizer and would like to use it for some text mining purposes. Specifically, I'm using Mallet's topic modeling tool and one of the pipe is to tokenize the text into tokens before further processing can be done. With the amount of text in my database, it takes a substantial amount of time tokenizing the text (I'm using regex here). \nAs such, is it a norm to store the tokenized text in the db so that tokenized data can be readily available and tokenizing can be skipped if I need them for other text mining purposes such as Topic modeling, POS tagging? What are the cons of this approach?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":894,"Q_Id":4122940,"Users Score":1,"Answer":"I store tokenized text in a MySQL database. While I don't always like the overhead of communication with the database, I've found that there are lots of processing tasks that I can ask the database to do for me (like search the dependency parse tree for complex syntactic patterns).","Q_Score":2,"Tags":"python,caching,postgresql,nlp,tokenize","A_Id":4151273,"CreationDate":"2010-11-08T10:17:00.000","Title":"Storing tokenized text in the db?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that a significant part of my (pure Python) code deals with tables. 
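A small sketch of the fix described in the accepted answer above: the 'wb' form is what it means on Python 2, and the newline="" form is the equivalent if the same code is ever run on Python 3 (that second part is my note, not from the answer).

import csv
import sys

rows = [("1", "alice"), ("2", "bob")]

if sys.version_info[0] == 2:
    # Python 2: open in binary mode so the csv module controls line endings itself
    f = open("out.csv", "wb")
else:
    # Python 3 equivalent: text mode, but with newline="" for the same reason
    f = open("out.csv", "w", newline="")

csv.writer(f).writerows(rows)
f.close()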
Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.\nI to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.\nHere's my thinking so far:\n\nPerformance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).\nWith SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).\nI won't need to debug my own implementation of tables, but debugging mistakes in SQL are hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact of my code reliability and debugging time.\nThe code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.\n\nAny corrections to the above, or anything else I should think about?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":903,"Q_Id":4136800,"Users Score":5,"Answer":"SQLite does not run in a separate process. So you don't actually have any extra overhead from IPC. But IPC overhead isn't that big, anyway, especially over e.g., UNIX sockets. If you need multiple writers (more than one process\/thread writing to the database simultaneously), the locking overhead is probably worse, and MySQL or PostgreSQL would perform better, especially if running on the same machine. The basic SQL supported by all three of these databases is the same, so benchmarking isn't that painful.\nYou generally don't have to do the same type of debugging on SQL statements as you do on your own implementation. SQLite works, and is fairly well debugged already. It is very unlikely that you'll ever have to debug \"OK, that row exists, why doesn't the database find it?\" and track down a bug in index updating. Debugging SQL is completely different than procedural code, and really only ever happens for pretty complicated queries.\nAs for debugging your code, you can fairly easily centralize your SQL calls and add tracing to log the queries you are running, the results you get back, etc. The Python SQLite interface may already have this (not sure, I normally use Perl). It'll probably be easiest to just make your existing Table class a wrapper around SQLite.\nI would strongly recommend not reinventing the wheel. SQLite will have far fewer bugs, and save you a bunch of time. 
(You may also want to look into Firefox's fairly recent switch to using SQLite to store history, etc., I think they got some pretty significant speedups from doing so.)\nAlso, SQLite's well-optimized C implementation is probably quite a bit faster than any pure Python implementation.","Q_Score":4,"Tags":"python,performance,sqlite","A_Id":4136841,"CreationDate":"2010-11-09T17:49:00.000","Title":"Pros and cons of using sqlite3 vs custom table implementation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.\nI to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.\nHere's my thinking so far:\n\nPerformance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).\nWith SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).\nI won't need to debug my own implementation of tables, but debugging mistakes in SQL are hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact of my code reliability and debugging time.\nThe code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.\n\nAny corrections to the above, or anything else I should think about?","AnswerCount":3,"Available Count":3,"Score":0.2605204458,"is_accepted":false,"ViewCount":903,"Q_Id":4136800,"Users Score":4,"Answer":"You could try to make a sqlite wrapper with the same interface as your class Table, so that you keep your code clean and you get the sqlite performences.","Q_Score":4,"Tags":"python,performance,sqlite","A_Id":4136876,"CreationDate":"2010-11-09T17:49:00.000","Title":"Pros and cons of using sqlite3 vs custom table implementation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that a significant part of my (pure Python) code deals with tables. 
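Along the lines of the "make a sqlite wrapper with the same interface as your class Table" suggestion above, a bare-bones sketch of what that wrapper might look like; the method names are a guess at the sort of interface the asker's Table class exposes, not their actual API, and the table/column names are interpolated only because they come from trusted code.

import sqlite3

class Table(object):
    """Thin wrapper that keeps the old call sites but stores rows in in-process SQLite."""

    def __init__(self, name, columns):
        self.name = name
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE %s (%s)" % (name, ", ".join(columns)))

    def insert(self, row):
        placeholders = ", ".join("?" for _ in row)
        self.db.execute("INSERT INTO %s VALUES (%s)" % (self.name, placeholders), row)

    def query(self, where_sql, params=()):
        cur = self.db.execute("SELECT * FROM %s WHERE %s" % (self.name, where_sql), params)
        return cur.fetchall()

t = Table("people", ["name", "age"])
t.insert(("alice", 30))
print(t.query("age > ?", (20,)))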
Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.\nI to wonder if it's a good idea to remove my class Table, and refactor the code to use a regular relational database that I will instantiate in-memory.\nHere's my thinking so far:\n\nPerformance of queries and indexing would improve but communication between Python code and the separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with sqlite which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definition and limited features of sqlite).\nWith SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).\nI won't need to debug my own implementation of tables, but debugging mistakes in SQL are hard since I can't put breakpoints or easily print out interim state. I don't know how to judge the overall impact of my code reliability and debugging time.\nThe code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with database might be uglier and more complex than the code that uses pure Python class Table. Again, I don't know which is better on balance.\n\nAny corrections to the above, or anything else I should think about?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":903,"Q_Id":4136800,"Users Score":0,"Answer":"If you're doing database work, use a database, if your not, then don't. Using tables, it sound's like you are. I'd recommend using an ORM to make it more pythonic. SQLAlchemy is the most flexible (though it's not strictly just an ORM).","Q_Score":4,"Tags":"python,performance,sqlite","A_Id":4136862,"CreationDate":"2010-11-09T17:49:00.000","Title":"Pros and cons of using sqlite3 vs custom table implementation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually.\nThe Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states: \n\nsh: \/etc\/mysql\/my.cnf: Permission denied\n\nalong with a Traceback that says setup.py couldn't find the file. \nNote that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is \/etc\/mysql\/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. 
I tried running the script as root and got the same error.\nAny suggestions?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":429,"Q_Id":4138504,"Users Score":0,"Answer":"Are you sure that file isn't hardcoded in some other portion of the build process? Why not just add it to you $PATH for the duration of the build?\nDoes the script need to write that file for some reason? Does the build script use su or sudo to attempt to become some other user? Are you absolutely sure about both the permissions and the fact that you ran the script as root?\nIt's a really weird thing if you still can't get to it. Are you using a chroot or a virtualenv?","Q_Score":3,"Tags":"python,mysql,permissions,configuration-files","A_Id":4139191,"CreationDate":"2010-11-09T20:52:00.000","Title":"Trouble installing MySQLdb for second version of Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually.\nThe Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states: \n\nsh: \/etc\/mysql\/my.cnf: Permission denied\n\nalong with a Traceback that says setup.py couldn't find the file. \nNote that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is \/etc\/mysql\/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error.\nAny suggestions?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":429,"Q_Id":4138504,"Users Score":2,"Answer":"As far as I'm aware, there is a very significant difference between \"mysql_config\" and \"my.cnf\".\n\n\"mysql_config\" is usually located in the \"bin\" folder of your MySQL install and when executed, spits out various filesystem location information about your install.\n\"my.cnf\" is a configuration script used by MySQL itself.\n\nIn short, when the script asks for \"mysql_config\", it should be taken to literally mean the executable file with a name of \"mysql_config\" and not the textual configuration file you're feeding it. MYSQLdb needs the \"mysql_config\" file so that it knows which libraries to use. That's it. It does not read your MySQL configuration directly.\nThe errors you are experiencing can be put down to;\n\nIt's trying to open the wrong file and running into permission trouble.\nEven after it has tried to open that file, it still can't find the \"mysql_config\" file.\n\nFrom here, you need to locate your MySQL installation's \"bin\" folder and check it contains \"mysql_config\". 
Then you can edit the folder path into the \"site.cnf\" file and you should be good to go.","Q_Score":3,"Tags":"python,mysql,permissions,configuration-files","A_Id":4139563,"CreationDate":"2010-11-09T20:52:00.000","Title":"Trouble installing MySQLdb for second version of Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'd like to develop a small\/medium-size cross-platform application (including GUI).\nMy background: mostly web applications with MVC architectures, both Python (Pylons + SqlAlchemy) and Java (know the language well, but don't like it that much). I also know some C#. So far, I have no GUI programming experience (neither Windows Forms, Swing nor QT).\nI plan to use SQLite for data storage: It seems to be a nice cross-platform solution and has some powerful features (e.g. full text search, which SQL Server Compact lacks).\nI have done some research and these are my favorite options:\n\n\n1) QT, Python (PyQT or PySide), and SQLAlchemy\npros:\n\nPython the language\nopen source is strong in the Python world (lots of libraries and users)\nSQLAlchemy: A fantastic way to interact with a DB and incredibly well documented!\n\ncons:\n\ncompilation, distribution and deployment more difficult?\nno QT experience\nQT Designer not as nice as the Visual Studio Winforms designer\n\n\n\n2) .NET\/Mono, Windows Forms, C#, (Fluent) NHibernate, System.Data.SQLite\npros:\n\nC# (I like it, especially compared to Java and would like to get more experience in it)\nThe Winforms GUI designer in Visual Studio seems really slick\nIntelliSense\nClickOnce Deployment(?)\nWindows Forms look and feel good on Windows\n\ncons:\n\n(Fluent) NHibernate far less documented than SQLAlchemy; also annoying: Fluent docs refer to NHibernate docs which refer to Hibernate (aargh!). But plain NHibernate + XML does not look very comfortable.\nWindows Forms will not look + behave native on Linux\/Mac OS (correct?)\nfewer open source libraries in the .NET world, fewer OSS users, less documentation in general\nno WinForms and NHibernate experience\n\n\n\n3) JVM, Java + Jython, Swing, SQLAlchemy\n(I'm emotionally biased against this one, but listed for completeness sake)\npros:\n\nJVM\/Swing work well as cross-platform basis\nJython\nSQLAlchemy\nlots of open source libraries\n\ncons:\n\nSwing seems ugly and difficult to layout\nlacks a good GUI designer\nGuessing that I won't be able to avoid Java for UI stuff\nNot sure how stable the Jython\/Java integration is\n\n(Options that I have ruled out... just to avoid discussion on these):\n- wxWidgets\/wxPython (now that QT is LGPLed)\n- GTK\/PyGTK\nThe look and feel of the final application is very important to me. The above technology stacks are very different (PyQT, .NET WinForms, JVM Swing) and require some time to get proficient, so:\nWhich alternative would you recommend and why?","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":3111,"Q_Id":4145350,"Users Score":5,"Answer":"I'm a Python guy and use PyQt myself, and I can wholly recommend it. Concerning your cons:\n\ncompilation, distribution and deployment more difficult?\n\nNo, not really. For many projects, a full setup.py for e.g. 
cx_Freeze can be less than 30 lines that rarely need to change (most import dependencies are detected automatically, only need to specify the few modules that are not recognized), and then python setup.py will build a standalone executable. Then you can distribute it just like e.g. a C++ .exe.\n\nno QT experience\n\nI didn't have notable GUI experience either when I started out with Qt (only a bit of fiddling with Tkinter), but I grew to love Qt. Most of the time, all widgets work seamlessly and do what they're supposed to do - and there's a lot of them for many purposes. You name it, there's probably a widget that does it, and doesn't annoy the user by being half-assed. All the nice things we've been spoiled with are there.\nQt is huge, but the PyQt documentation answer most question with reasonable search effort. And if all else fails and you know a bit of C++, you can also look at Qt resources.\n\nQT Designer not as nice as the Visual Studio Winforms designer \n\nI don't know the VS Winforms designer, but I must admit that the Qt Designer is lacking. I ended up making a sketch of the UI in the designer, generating the code, cleaning that up and taking care all remaining details by hand. It works out okay so far, but my projects are rather small.\n\nPS:\n\n(now that QT is LGPLed)\n\nPyQt is still GPL only. PySide is LGPL, yes, but it's not that mature, if that's a concern. The project website states that \"starting development on PySide should be pretty safe now\" though.","Q_Score":11,"Tags":"c#,java,python,user-interface,cross-platform","A_Id":4145581,"CreationDate":"2010-11-10T14:11:00.000","Title":"Python + QT, Windows Forms or Swing for a cross-platform application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm programming a web application using sqlalchemy. Everything was smooth during the first phase of development when the site was not in production. I could easily change the database schema by simply deleting the old sqlite database and creating a new one from scratch. \nNow the site is in production and I need to preserve the data, but I still want to keep my original development speed by easily converting the database to the new schema. \nSo let's say that I have model.py at revision 50 and model.py a revision 75, describing the schema of the database. Between those two schema most changes are trivial, for example a new column is declared with a default value and I just want to add this default value to old records. \nEventually a few changes may not be trivial and require some pre-computation. \nHow do (or would) you handle fast changing web applications with, say, one or two new version of the production code per day ?\nBy the way, the site is written in Pylons if this makes any difference.","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":32073,"Q_Id":4165452,"Users Score":16,"Answer":"What we do.\n\nUse \"major version\".\"minor version\" identification of your applications. Major version is the schema version number. The major number is no some random \"enough new functionality\" kind of thing. It's a formal declaration of compatibility with database schema.\nRelease 2.3 and 2.4 both use schema version 2.\nRelease 3.1 uses the version 3 schema.\nMake the schema version very, very visible. 
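A small sketch of what that visibility convention can look like in code, with all names invented: the release string carries the schema version as its major number, and the SQLite file name carries it as well, in the spirit of the advice that follows.

import sqlite3

APP_VERSION = "2.4"                                # "major.minor": major == schema version
SCHEMA_VERSION = int(APP_VERSION.split(".")[0])

# the schema version is baked into the file name, so a mismatch is impossible to miss
db_path = "myapp_schema%d.sqlite" % SCHEMA_VERSION
conn = sqlite3.connect(db_path)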
For SQLite, this means keep the schema version number in the database file name. For MySQL, use the database name. \nWrite migration scripts. 2to3.py, 3to4.py. These scripts work in two phases. (1) Query the old data into the new structure creating simple CSV or JSON files. (2) Load the new structure from the simple CSV or JSON files with no further processing. These extract files -- because they're in the proper structure, are fast to load and can easily be used as unit test fixtures. Also, you never have two databases open at the same time. This makes the scripts slightly simpler. Finally, the load files can be used to move the data to another database server.\n\nIt's very, very hard to \"automate\" schema migration. It's easy (and common) to have database surgery so profound that an automated script can't easily map data from old schema to new schema.","Q_Score":63,"Tags":"python,sqlalchemy,pylons,data-migration,migrate","A_Id":4165496,"CreationDate":"2010-11-12T14:08:00.000","Title":"How to efficiently manage frequent schema changes using sqlalchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can anyone help me install Apache with mod_wsgi to run Python for implementation of RESTful Web services. We're trying to get rid of our existing Java REST services with Apache Tomcat.\nThe installation platform is SUSE Linux Enterprise. Please provide a step by step installation procedure with required modules, as I tried it and everytime was missinhg one module or other either in Python installation or Apache installation.\nI followed the standard Installation steps for all 3, Apache, Python and mod_wsgi, but didn't work out for me.\nWould this work at all? Do you have any other suggestions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2568,"Q_Id":4167684,"Users Score":0,"Answer":"Check if mod_wsgi is loaded as a module into the httpd.conf\nAdd apache host that points to a python\/wsgi module which contains the 'def application' definition for your web-service. \nResolve any path issues that maybe arise from your import handling.\n\nIf this doesn't work, drop some error-dump here and we'll check.","Q_Score":0,"Tags":"python,apache,rest,mod-wsgi,mod-python","A_Id":4168054,"CreationDate":"2010-11-12T18:07:00.000","Title":"Install Apache with mod_wsgi to use Python for RESTful web services and Apache for web pages","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i want to create application in windows. i need to use databases which would be preferable best for pyqt application\nlike\nsqlalchemy\nmysql\netc.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":535,"Q_Id":4168020,"Users Score":0,"Answer":"i guess its totally upto you ..but as far as i am concerned i personlly use sqlite, becoz it is easy to use and amazingly simple syntax whereas for MYSQL u can use it for complex apps and has options for performance tuning. 
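A skeleton of the two-phase migration script described above (2to3.py style: query the old database into the new structure and dump it as JSON, then load the new database from that dump with no further processing). Every table, column, and file name here is a placeholder.

# 2to3.py -- migrate schema version 2 to schema version 3 in two phases
import json
import sqlite3

def extract(old_path="myapp_schema2.sqlite", dump_path="v2_to_v3.json"):
    # Phase 1: query the old data into the *new* structure and write it out as JSON
    old = sqlite3.connect(old_path)
    rows = old.execute("SELECT id, name, '' AS email FROM users").fetchall()  # new column gets a default
    old.close()
    with open(dump_path, "w") as f:
        json.dump(rows, f)

def load(new_path="myapp_schema3.sqlite", dump_path="v2_to_v3.json"):
    # Phase 2: load the new structure from the dump, nothing else
    new = sqlite3.connect(new_path)
    new.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT DEFAULT '')")
    with open(dump_path) as f:
        rows = json.load(f)
    new.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
    new.commit()
    new.close()

extract()
load()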
but in end its totally upto u and wt your app requires","Q_Score":0,"Tags":"python,database,pyqt","A_Id":4208750,"CreationDate":"2010-11-12T18:49:00.000","Title":"which databases can be used better for pyqt application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to create application in windows. i need to use databases which would be preferable best for pyqt application\nlike\nsqlalchemy\nmysql\netc.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":535,"Q_Id":4168020,"Users Score":1,"Answer":"SQlite is fine for a single user. \nIf you are going over a network to talk to a central database, then you need a database woith a decent Python lirary. \nTake a serious look at MySQL if you need\/want SQL. \nOtherwise, there is CouchDB in the Not SQL camp, which is great if you are storing documents, and can express searches as Map\/reduce functions. Poor for adhoc queries.","Q_Score":0,"Tags":"python,database,pyqt","A_Id":4294636,"CreationDate":"2010-11-12T18:49:00.000","Title":"which databases can be used better for pyqt application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to create application in windows. i need to use databases which would be preferable best for pyqt application\nlike\nsqlalchemy\nmysql\netc.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":535,"Q_Id":4168020,"Users Score":1,"Answer":"If you want a relational database I'd recommend you to use SQLAlchemy, as you then get a choice as well as an ORM. Bu default go with SQLite, as per other recommendations here.\nIf you don't need a relational database, take a look at ZODB. It's an awesome Python-only object-oriented database.","Q_Score":0,"Tags":"python,database,pyqt","A_Id":4512428,"CreationDate":"2010-11-12T18:49:00.000","Title":"which databases can be used better for pyqt application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have my own unit testing suite based on the unittest library. I would like to track the history of each test case being run. I would also like to identify after each run tests which flipped from PASS to FAIL or vice versa.\nI have very little knowledge about databases, but it seems that I could utilize sqlite3 for this task.\nAre there any existing solutions which integrate unittest and a database?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":280,"Q_Id":4170442,"Users Score":0,"Answer":"Technically, yes. The only thing that you need is some kind of scripting language or shell script that can talk to sqlite.\nYou should think of a database like a file in a file system where you don't have to care about the file format. You just say, here are tables of data, with columns. And each row of that is one record. 
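Going back to the question about tracking unittest history in sqlite3 a little further up, a rough sketch of the kind of table and flip-detection query involved; the schema, naming, and the idea of keying on a run_id are mine, not from the answer.

import sqlite3

db = sqlite3.connect("test_history.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS results
              (run_id INTEGER, test_name TEXT, outcome TEXT)""")

def record(run_id, test_name, outcome):          # call per test, e.g. from TestResult hooks
    db.execute("INSERT INTO results VALUES (?, ?, ?)", (run_id, test_name, outcome))
    db.commit()

def flips(prev_run, this_run):
    # tests whose outcome changed between the two runs (PASS->FAIL or FAIL->PASS)
    return db.execute("""SELECT a.test_name, a.outcome, b.outcome
                         FROM results a JOIN results b ON a.test_name = b.test_name
                         WHERE a.run_id = ? AND b.run_id = ? AND a.outcome != b.outcome""",
                      (prev_run, this_run)).fetchall()

record(1, "test_login", "PASS")
record(2, "test_login", "FAIL")
print(flips(1, 2))   # [('test_login', 'PASS', 'FAIL')]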
Much like in a Excel table.\nSo if you are familiar with shell scripts or calling command line tools, you can install sqlite and use the sqlitecommand to interact with the database.\nAlthough I think the first thing you should do is to learn basic SQL. There are a lot of SQL tutorials out there.","Q_Score":1,"Tags":"python,unit-testing,sqlite","A_Id":4170458,"CreationDate":"2010-11-13T01:33:00.000","Title":"Using sqlite3 to track unit test results","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use a MongoDB Database from a Google App Engine service is that possible? How do I install the PyMongo driver on Google App Engine? Thanks","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1355,"Q_Id":4178742,"Users Score":1,"Answer":"It's not possible because you don't have access to networks sockets in App Engine. As long as you cannot access the database via HTTP, it's impossible.","Q_Score":4,"Tags":"python,google-app-engine,mongodb,pymongo","A_Id":4179091,"CreationDate":"2010-11-14T17:42:00.000","Title":"is it possible to use PyMongo in Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"We're rewriting a website used by one of our clients. The user traffic on it is very low, less than 100 unique visitors a week. It's basically just a nice interface to their data in our databases. It allows them to query and filter on different sets of data of theirs.\nWe're rewriting the site in Python, re-using the same Oracle database that the data is currently on. The current version is written in an old, old version of Coldfusion. One of the things that Coldfusion does well though is displays tons of database records on a single page. It's capable of displaying hundreds of thousands of rows at once without crashing the browser. It uses a Java applet, and it looks like the contents of the rows are perhaps compressed and passed in through the HTML or something. There is a large block of data in the HTML but it's not displayed - it's just rendered by the Java applet.\nI've tried several JavaScript solutions but they all hinge on the fact that the data will be present in an HTML table or something along those lines. This causes browsers to freeze and run out of memory.\nDoes anyone know of any solutions to this situation? Our client loves the ability to scroll through all of this data without clicking a \"next page\" link.","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":3442,"Q_Id":4186384,"Users Score":1,"Answer":"Most people, in this case, would use a framework. The best documented and most popular framework in Python is Django. It has good database support (including Oracle), and you'll have the easiest time getting help using it since there's such an active Django community.\nYou can try some other frameworks, but if you're tied to Python I'd recommend Django.\nOf course, Jython (if it's an option), would make your job very easy. 
You could take the existing Java framework you have and just use Jython to build a frontend (and continue to use your Java applet and Java classes and Java server).\nThe memory problem is an interesting one; I'd be curious to see what you come up with.","Q_Score":7,"Tags":"python,html,oracle,coldfusion","A_Id":4186505,"CreationDate":"2010-11-15T16:18:00.000","Title":"How to display database query results of 100,000 rows or more with HTML?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am implementing a class that resembles a typical database table: \n\nhas named columns and unnamed rows\nhas a primary key by which I can refer to the rows\nsupports retrieval and assignment by primary key and column title\ncan be asked to add unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column\nremoval of a row is fast and is implemented as \"soft-delete\": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations\naddition of a column is fast\nrows are rarely added\ncolumns are rarely deleted\n\nI decided to implement the class directly rather than use a wrapper around sqlite. \nWhat would be a good data structure to use?\n\nJust as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways:\n\nAs lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list.\nAs dictionaries. Column titles are the keys of this dictionary.\n\nNot sure about the pros\/cons of the two.\n\nThe reasons I want to write my own code are:\n\nI need to track row deletions. That is, at any time I want to be able to report which rows where deleted and for what \"reason\" (the \"reason\" is passed to my delete method).\nI need some reporting during indexing (e.g., while an non-unique index is being built, I want to check certain conditions and report if they are violated)","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1361,"Q_Id":4188202,"Users Score":2,"Answer":"I would consider building a dictionary with keys that are tuples or lists. Eg: my_dict((\"col_2\", \"row_24\")) would get you this element. Starting from there, it would be pretty easy (if not extremely fast for very large databases) to write 'get_col' and 'get_row' methods, as well as 'get_row_slice' and 'get_col_slice' from the 2 preceding ones to gain access to your methods.\nUsing a whole dictionary like that will have 2 advantages. 
1) Getting a single element will be faster than your 2 proposed methods; 2) If you want to have different number of elements (or missing elements) in your columns, this will make it extremely easy and memory efficient.\nJust a thought :) I'll be curious to see what packages people will suggest!\nCheers","Q_Score":6,"Tags":"python,performance,data-structures,implementation","A_Id":4188260,"CreationDate":"2010-11-15T19:48:00.000","Title":"How to implement database-style table in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am implementing a class that resembles a typical database table: \n\nhas named columns and unnamed rows\nhas a primary key by which I can refer to the rows\nsupports retrieval and assignment by primary key and column title\ncan be asked to add unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column\nremoval of a row is fast and is implemented as \"soft-delete\": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations\naddition of a column is fast\nrows are rarely added\ncolumns are rarely deleted\n\nI decided to implement the class directly rather than use a wrapper around sqlite. \nWhat would be a good data structure to use?\n\nJust as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways:\n\nAs lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list.\nAs dictionaries. Column titles are the keys of this dictionary.\n\nNot sure about the pros\/cons of the two.\n\nThe reasons I want to write my own code are:\n\nI need to track row deletions. That is, at any time I want to be able to report which rows where deleted and for what \"reason\" (the \"reason\" is passed to my delete method).\nI need some reporting during indexing (e.g., while an non-unique index is being built, I want to check certain conditions and report if they are violated)","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1361,"Q_Id":4188202,"Users Score":0,"Answer":"You really should use SQLite.\nFor your first reason (tracking deletion reasons) you can easily implement this by having a second table that you \"move\" rows to on deletion. The reason can be tracked in additional column in that table or another table you can join. 
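A minimal sketch of that move-to-a-second-table idea with the standard sqlite3 module follows; the table layout, column names and file name are invented for illustration, not taken from the question:

    import sqlite3

    conn = sqlite3.connect('table.db')
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS rows (pk TEXT PRIMARY KEY, payload TEXT);
        CREATE TABLE IF NOT EXISTS deleted_rows (
            pk TEXT, payload TEXT, reason TEXT, deleted_at TEXT);
    """)

    def soft_delete(pk, reason):
        # Copy the row into the graveyard table together with the reason,
        # then remove it from the live table, inside one transaction.
        with conn:
            conn.execute(
                "INSERT INTO deleted_rows "
                "SELECT pk, payload, ?, datetime('now') FROM rows WHERE pk = ?",
                (reason, pk))
            conn.execute("DELETE FROM rows WHERE pk = ?", (pk,))

Retrieval code then only ever looks at the live table, and the deletion report the question asks for is a plain SELECT over deleted_rows.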
If a deletion reason isn't always required then you can even use triggers on your source table to copy rows about to be deleted, and\/or have a user defined function that can get the reason.\nThe indexing reason is somewhat covered by constraints etc but I can't directly address it without more details.","Q_Score":6,"Tags":"python,performance,data-structures,implementation","A_Id":4231416,"CreationDate":"2010-11-15T19:48:00.000","Title":"How to implement database-style table in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the mysql connector (https:\/\/launchpad.net\/myconnpy) with SQLAlchemy and, though the table is definitely UTF8, any string columns returned are just normal strings not unicode. The documentation doesn't list any specific parameters for UTF8\/unicode support for the mysql connector driver so I borrowed from the mysqldb driver. Here is my connect string:\nmysql+mysqlconnector:\/\/user:pass@myserver.com\/mydbname?charset=utf8&use_unicode=0\nI'd really prefer to keep using this all-python mysql driver. Any suggestions?","AnswerCount":2,"Available Count":1,"Score":-0.2913126125,"is_accepted":false,"ViewCount":1239,"Q_Id":4191370,"Users Score":-3,"Answer":"Sorry, i don't know about the connector, i use MySQLDB and it is working quite nicely. I work in UTF8 as well and i didn't have any problem.","Q_Score":1,"Tags":"python,mysql,unicode,sqlalchemy","A_Id":4192633,"CreationDate":"2010-11-16T05:06:00.000","Title":"MySql Connector (python) and SQLAlchemy Unicode problem","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What the difference is between flush() and commit() in SQLAlchemy?\nI've read the docs, but am none the wiser - they seem to assume a pre-understanding that I don't have.\nI'm particularly interested in their impact on memory usage. I'm loading some data into a database from a series of files (around 5 million rows in total) and my session is occasionally falling over - it's a large database and a machine with not much memory. \nI'm wondering if I'm using too many commit() and not enough flush() calls - but without really understanding what the difference is, it's hard to tell!","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":180674,"Q_Id":4201455,"Users Score":0,"Answer":"commit () records these changes in the database. flush () is always called as part of the commit () (1) call. When you use a Session object to query a database, the query returns results from both the database and the reddened parts of the unrecorded transaction it is performing.","Q_Score":569,"Tags":"python,sqlalchemy","A_Id":65843088,"CreationDate":"2010-11-17T04:20:00.000","Title":"SQLAlchemy: What's the difference between flush() and commit()?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using sqlite with python. I'm implementing the POP3 protocol. 
I have a table \n\nmsg_id text\ndate text\nfrom_sender text\nsubject text\nbody text\nhashkey text\n\nNow I need to check for duplicate messages by checking the message id of the message retrieved against the existing msg_id's in the table. I encrypted the msg_id using md5 and put it in the hashkey column. Whenever I retrieve mail, I hash the message id and check it with the table values. Heres what I do.\n\n\ndef check_duplicate(new):\n conn = sql.connect(\"mail\")\n c = conn.cursor()\n m = hashlib.md5()\n m.update(new)\n c.execute(\"select hashkey from mail\")\n for row in c:\n if m.hexdigest() == row:\n return 0\n else:\n continue\n\n return 1\n\nIt just refuses to work correctly. I tried printing the row value, it shows it in unicode, thats where the problem lies as it cannot compare properly. \nIs there a better way to do this, or to improve my method?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":481,"Q_Id":4208146,"Users Score":0,"Answer":"The main issue is that you're trying to compare a Python string (m.hexdigest()) with a tuple.\nAdditionally, another poster's suggestion that you use SQL for the comparison is probably good advice. Another SQL suggestion would be to fix your columns -- TEXT for everything probably isn't what you want; an index on your hashkey column is very likely a good thing.","Q_Score":0,"Tags":"python,sql,sqlite","A_Id":4208359,"CreationDate":"2010-11-17T19:08:00.000","Title":"Comparing sql values","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which is more expensive to do in terms of resources and efficiency, File read\/write operation or Database Read\/Write operation? \nI'm using MongoDB, with Python. I't be preforming about 100k requests on the db\/file per minute. Also, there's about 15000 documents in the database \/ file.\nWhich would be faster? thanks in advance.","AnswerCount":5,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":3529,"Q_Id":4210057,"Users Score":6,"Answer":"It depends.. if you need to read sequenced data, file might be faster, if you need to read random data, database has better chances to be optimized to your needs.\n(after all - database reads it's records from a file as well, but it has an internal structure and algorithms to enhance performance, it can use the memory in a smarter way, and do a lot in the background so the results will come faster)\nin an intensive case of random reading - I will go with the database option.","Q_Score":4,"Tags":"python,performance,mongodb","A_Id":4210090,"CreationDate":"2010-11-17T22:58:00.000","Title":"Is a file read faster than reading data from the database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which is more expensive to do in terms of resources and efficiency, File read\/write operation or Database Read\/Write operation? \nI'm using MongoDB, with Python. I't be preforming about 100k requests on the db\/file per minute. Also, there's about 15000 documents in the database \/ file.\nWhich would be faster? 
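Returning to the duplicate-message check above: following that answer (compare against the column value, and better yet let SQL and an index do the test), here is a rough reworking of check_duplicate, keeping the question's table name and its 0-for-duplicate return convention:

    import hashlib
    import sqlite3 as sql

    def check_duplicate(msg_id):
        conn = sql.connect("mail")
        try:
            conn.execute("CREATE INDEX IF NOT EXISTS idx_hashkey ON mail (hashkey)")
            # .encode() is needed on Python 3; the original 2.x code hashed the str directly.
            hashkey = hashlib.md5(msg_id.encode('utf-8')).hexdigest()
            cur = conn.execute("SELECT 1 FROM mail WHERE hashkey = ? LIMIT 1", (hashkey,))
            return 0 if cur.fetchone() else 1
        finally:
            conn.close()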
thanks in advance.","AnswerCount":5,"Available Count":5,"Score":0.0399786803,"is_accepted":false,"ViewCount":3529,"Q_Id":4210057,"Users Score":1,"Answer":"Reading from a database can be more efficient, because you can access records directly and make use of indexes etc. With normal flat files you basically have to read them sequentially. (Mainframes support direct access files, but these are sort of halfway between flat files and databases).\nIf you are in a multi-user environment, you must make sure that your data remain consistent even if multiple users try updates at the same time. With flat files, you have to lock the file for all but one user until she is ready with her update, and then lock for the next. Databases can do locking on row level.\nYou can make a file based system as efficient as a database, but that effort amounts to writing a database system yourself.","Q_Score":4,"Tags":"python,performance,mongodb","A_Id":49248435,"CreationDate":"2010-11-17T22:58:00.000","Title":"Is a file read faster than reading data from the database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which is more expensive to do in terms of resources and efficiency, File read\/write operation or Database Read\/Write operation? \nI'm using MongoDB, with Python. I't be preforming about 100k requests on the db\/file per minute. Also, there's about 15000 documents in the database \/ file.\nWhich would be faster? thanks in advance.","AnswerCount":5,"Available Count":5,"Score":0.1194272985,"is_accepted":false,"ViewCount":3529,"Q_Id":4210057,"Users Score":3,"Answer":"There are too many factors to offer a concrete answer, but here's a list for you to consider:\n\nDisk bandwidth\nDisk latency\nDisk cache\nNetwork bandwidth\nMongoDB cluster size\nVolume of MongoDB client activity (the disk only has one \"client\" unless your machine is busy with other workloads)","Q_Score":4,"Tags":"python,performance,mongodb","A_Id":4210106,"CreationDate":"2010-11-17T22:58:00.000","Title":"Is a file read faster than reading data from the database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which is more expensive to do in terms of resources and efficiency, File read\/write operation or Database Read\/Write operation? \nI'm using MongoDB, with Python. I't be preforming about 100k requests on the db\/file per minute. Also, there's about 15000 documents in the database \/ file.\nWhich would be faster? thanks in advance.","AnswerCount":5,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":3529,"Q_Id":4210057,"Users Score":0,"Answer":"If caching is not used sequential IO operations are faster with files by definition. Databases eventually use files, but they have more layers to pass before data hit the file. But if you want to query data using database is more efficient, because if you choose files you will have to implement it yourselves. 
For your task i recommend to research clustering for different databases, they can scale to your rate.","Q_Score":4,"Tags":"python,performance,mongodb","A_Id":4210113,"CreationDate":"2010-11-17T22:58:00.000","Title":"Is a file read faster than reading data from the database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which is more expensive to do in terms of resources and efficiency, File read\/write operation or Database Read\/Write operation? \nI'm using MongoDB, with Python. I't be preforming about 100k requests on the db\/file per minute. Also, there's about 15000 documents in the database \/ file.\nWhich would be faster? thanks in advance.","AnswerCount":5,"Available Count":5,"Score":0.1586485043,"is_accepted":false,"ViewCount":3529,"Q_Id":4210057,"Users Score":4,"Answer":"Try it and tell us the answer.","Q_Score":4,"Tags":"python,performance,mongodb","A_Id":4210368,"CreationDate":"2010-11-17T22:58:00.000","Title":"Is a file read faster than reading data from the database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a noob question.\nI have a record in a table that looks like '\\1abc'\nI then use this string as a regex replacement in re.sub(\"([0-9])\",thereplacement,\"2\")\nI'm a little confused with the backslashes. The string i got back was \"\\\\1abc\"","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":191,"Q_Id":4224400,"Users Score":2,"Answer":"Note that you can make \\ stop being an escape character by setting standard_conforming_strings to on.","Q_Score":0,"Tags":"python,postgresql","A_Id":4226375,"CreationDate":"2010-11-19T11:11:00.000","Title":"regarding backslash from postgresql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Short Question:\nIs there any nosql flat-file database available as sqlite?\nExplanation:\nFlat file database can be opened in different processes to read, and keep one process to write. I think its perfect for read cache if there's no strict consistent needed. Say 1-2 secs write to the file or even memory block and the readers get updated data after that.\nSo I almost choose to use sqlite, as my python server read cache. But there's still one problem. I don't like to rewrite sqls again in another place and construct another copy of my data tables in sqlite just as the same as I did in PostgreSql which used as back-end database.\nso is there any other choice?thanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28380,"Q_Id":4245438,"Users Score":0,"Answer":"Something trivial but workable, if you are looking storage backed up key value data structure use pickled dictionary. 
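A tiny sketch of that pickled-dictionary idea: the standard-library shelve module already gives you a disk-backed, pickled dictionary, so there is no load/save code to write yourself (the file name and keys here are arbitrary):

    import shelve

    cache = shelve.open('readcache.db')   # behaves like a persistent dict
    cache['user:42'] = {'name': 'Alice', 'visits': 7}
    cache.sync()                          # push pending writes to disk
    print(cache['user:42'])
    cache.close()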
Use cPickle for better performance if needed.","Q_Score":49,"Tags":"python,database,caching,sqlite,nosql","A_Id":15588028,"CreationDate":"2010-11-22T12:36:00.000","Title":"Is there any nosql flat file database just as sqlite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a Python module that writes Excel 2007+ files?\nI'm interested in writing a file longer than 65535 lines and only Excel 2007+ supports it.","AnswerCount":8,"Available Count":1,"Score":0.024994793,"is_accepted":false,"ViewCount":21620,"Q_Id":4257771,"Users Score":1,"Answer":"If you are on Windows and have Excel 2007+ installed, you should be able to use pywin32 and COM to write XLSX files using almost the same code as you would would to write XLS files ... just change the \"save as ....\" part at the end.\nProbably, you can also write XLSX files using Excel 2003 with the freely downloadable add-on kit but the number of rows per sheet would be limited to 64K.","Q_Score":14,"Tags":"python,excel,excel-2007,openpyxl","A_Id":4258896,"CreationDate":"2010-11-23T15:36:00.000","Title":"Python: Writing to Excel 2007+ files (.xlsx files)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've built a number of python driven sites that utilize mongodb as a database backend and am very happy with it's ObjectId system, however, I'd love to be able encode the ids in a shorter fashion without building a mapping collection or utilizing a url-shortener service.\nSuggestions? Success stories?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4165,"Q_Id":4261129,"Users Score":0,"Answer":"If you can generate auto-incrementing unique numbers, there's absolutely no need to use ObjectId for _id. Doing this in a distributed environment will most likely be more expensive than using ObjectId. That's your tradeoff.","Q_Score":14,"Tags":"python,mongodb","A_Id":8654689,"CreationDate":"2010-11-23T21:26:00.000","Title":"How can one shorten mongo ids for better use in URLs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've built a number of python driven sites that utilize mongodb as a database backend and am very happy with it's ObjectId system, however, I'd love to be able encode the ids in a shorter fashion without building a mapping collection or utilizing a url-shortener service.\nSuggestions? Success stories?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":4165,"Q_Id":4261129,"Users Score":1,"Answer":"If you are attempting to retain the original value then there really is not a good way. You could encode it, but the likeliness of it being smaller is minimal. 
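For what it's worth, one reversible encoding does shrink things a little: an ObjectId is 12 raw bytes, so URL-safe base64 turns the usual 24-character hex form into 16 characters. A sketch, assuming pymongo's bson package is installed:

    import base64
    from bson import ObjectId

    def shorten(oid):
        # 12 raw bytes -> 16 URL-safe characters (vs. 24 hex characters)
        return base64.urlsafe_b64encode(oid.binary).decode('ascii')

    def unshorten(token):
        return ObjectId(base64.urlsafe_b64decode(token.encode('ascii')))

    oid = ObjectId()
    print(shorten(oid), unshorten(shorten(oid)) == oid)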
You could hash it, but then it's not reversible.\nIf this is a REQUIREMENT, I'd probably recommend creating a lookup table or collection where a small incremental number references entries in a Mongo Collection.","Q_Score":14,"Tags":"python,mongodb","A_Id":4261319,"CreationDate":"2010-11-23T21:26:00.000","Title":"How can one shorten mongo ids for better use in URLs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When creating a virtual environment with no -site packages do I need to install mysql & the mysqldb adapter which is in my global site packages in order to use them in my virtual project environment?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":920,"Q_Id":4273729,"Users Score":5,"Answer":"You can also (on UNIX) symlink specific packages from the Python site-packages into your virtualenv's site-packages.","Q_Score":4,"Tags":"python,virtualenv","A_Id":4273823,"CreationDate":"2010-11-25T04:29:00.000","Title":"Python Virtualenv","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using SqlAlchemy in my Pylons application to access data and SqlAlchemy-migrate to maintain the database schema.\nIt works fine for managing the schema itself. However, I also want to manage seed data in a migrate-like way. E.g. when ProductCategory table is created it would make sense to seed it with categories data.\nLooks like SqlAlchemy-migrate does not support this directly. What would be a good approach to do this with Pylons+SqlAlchemy+SqlAlchemy-migrate?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2454,"Q_Id":4298886,"Users Score":2,"Answer":"Well what format is your seed data starting out in? The migrate calls are just python methods so you're free to open some csv, create SA object instances, loop, etc. I usually have my seed data as a series of sql insert statements and just loop over them executing a migate.execute(query) for each one. \nSo I'll first create the table, loop and run seed data, and then empty\/drop table on the downgrade method.","Q_Score":2,"Tags":"python,sqlalchemy,pylons,sqlalchemy-migrate","A_Id":4300116,"CreationDate":"2010-11-28T20:24:00.000","Title":"Managing seed data with SqlAlchemy and SqlAlchemy-migrate","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning to use SQLAlchemy connected to a SQL database for 12 standard relational tables (e.g. SQLite or PostgreSQL). But then I'd like to use Redis with Python for a couple of tables, particularly for Redis's fast set manipulation. I realise that Redis is NoSQL, but can I integrate this with SQLAlchemy for the benefit of the session and thread handling that SQLAlchemy has?\nIs there a Redis SA dialect? I couldn't find it, which probably means that I'm missing some basic point. 
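Going back to the seed-data answer above, a sketch of what such a migration script can look like in the upgrade/downgrade style sqlalchemy-migrate uses; the ProductCategory table and its rows are examples only:

    from sqlalchemy import Table, Column, Integer, String, MetaData

    meta = MetaData()
    product_category = Table(
        'product_category', meta,
        Column('id', Integer, primary_key=True),
        Column('name', String(50)))

    SEED = ['Books', 'Music', 'Software']   # example categories

    def upgrade(migrate_engine):
        meta.bind = migrate_engine
        product_category.create()
        for name in SEED:
            migrate_engine.execute(product_category.insert().values(name=name))

    def downgrade(migrate_engine):
        meta.bind = migrate_engine
        product_category.drop()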
Is there a better architecture I should look at to use two different types of database?","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":13868,"Q_Id":4324407,"Users Score":17,"Answer":"While it is possible to set up an ORM that puts data in redis, it isn't a particularly good idea. ORMs are designed to expose standard SQL features. Many things that are standard in SQL such as querying on arbitrary columns are not available in redis unless you do a lot of extra work. At the same time redis has features such as set manipulation that do not exist in standard SQL so will not be used by the ORM.\nYour best option is probably to write your code to interact directly with redis rather than trying to use an inappropriate abstraction - Generally you will find that the code to get data out of redis is quite a bit simpler than the SQL code that justifies using an ORM.","Q_Score":15,"Tags":"python,sqlalchemy,nosql,redis","A_Id":4331070,"CreationDate":"2010-12-01T12:36:00.000","Title":"How to integrate Redis with SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning to use SQLAlchemy connected to a SQL database for 12 standard relational tables (e.g. SQLite or PostgreSQL). But then I'd like to use Redis with Python for a couple of tables, particularly for Redis's fast set manipulation. I realise that Redis is NoSQL, but can I integrate this with SQLAlchemy for the benefit of the session and thread handling that SQLAlchemy has?\nIs there a Redis SA dialect? I couldn't find it, which probably means that I'm missing some basic point. Is there a better architecture I should look at to use two different types of database?","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":13868,"Q_Id":4324407,"Users Score":14,"Answer":"Redis is very good at what it does, storing key values and making simple atomic operations, but if you want to use it as a relational database you're really gonna SUFFER!, as I had... 
and here is my story...\nI've done something like that, making several objects to abstracting all the redis internals exposing primitives queries (I called filters in my code), get, set, updates, and a lot more methods that you can expect from a ORM and in fact if you are dealing only with localhost, you're not going to perceive any slowness in your application, you can use redis as a relational database but if in any time you try to move your database into another host, that will represent a lot of problems in terms of network transmission, I end up with a bunch of re-hacked classes using redis and his pipes, which it make my program like 900% faster, making it usable in the local network, anyway I'm starting to move my database library to postgres.\nThe lesson of this history is to never try to make a relational database with the key value model, works great at basic operations, but the price of not having the possibility to make relations in your server comes with a high cost.\nReturning to your question, I don't know any project to make an adapter to sqlalchemy for redis, and I think that nobody are going to be really interested in something like that, because of the nature of each project.","Q_Score":15,"Tags":"python,sqlalchemy,nosql,redis","A_Id":4332791,"CreationDate":"2010-12-01T12:36:00.000","Title":"How to integrate Redis with SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database full of data, including a date and time string, e.g. Tue, 21 Sep 2010 14:16:17 +0000\nWhat I would like to be able to do is extract various documents (records) from the database based on the time contained within the date string, Tue, 21 Sep 2010 14:16:17 +0000.\nFrom the above date string, how would I use python and regex to extract documents that have the time 15:00:00? I'm using MongoDB by the way, in conjunction with Python.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":563,"Q_Id":4325194,"Users Score":1,"Answer":"I agree with the other poster. Though this doesn't solve your immediate problem, if you have any control over the database, you should seriously consider creating a time\/column, with either a DATE or TIMESTAMP datatype. That would make your system much more robust, & completely avoid the problem of trying to parse dates from string (an inherently fragile technique).","Q_Score":3,"Tags":"python,regex,mongodb,datetime,database","A_Id":4325260,"CreationDate":"2010-12-01T14:10:00.000","Title":"Extracting Date and Time info from a string.","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am implementing a database model to store the 20+ fields of the iCal calendar format and am faced with tediously typing in all these into an SQLAlchemy model.py file. Is there a smarter approach? I am looking for a GUI or model designer that can create the model.py file for me. I would specify the column names and some attributes, e.g, type, length, etc.\nAt the minimum, I need this designer to output a model for one table. 
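On the earlier date-string question: if each string is parsed once on the way into the database, the time-of-day test becomes a plain comparison instead of a regex. A sketch using only the format shown in that question:

    from datetime import datetime, time

    FMT = '%a, %d %b %Y %H:%M:%S +0000'   # the exact layout from the question

    def has_time(datestr, wanted=time(15, 0, 0)):
        # Parse into a real datetime instead of pattern-matching the text.
        return datetime.strptime(datestr, FMT).time() == wanted

    print(has_time('Tue, 21 Sep 2010 14:16:17 +0000'))   # False
    print(has_time('Tue, 21 Sep 2010 15:00:00 +0000'))   # True

Storing the parsed value (or just its time component) in its own field then lets MongoDB index and filter it, which is what the answer recommends.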
Additional requirements, in decreasing order of priority:\n\nCreate multiple tables\nSupport basic relationships between the multiple tables (1:1, 1:n)\nSupport constraints on the columns.\n\nI am also open to other ways of achieving the goal, perhaps using a GUI to create the tables in the database and then reflecting them back into a model.\nI appreciate your feedback in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2698,"Q_Id":4330339,"Users Score":0,"Answer":"\"I would specify the column names and some attributes, e.g, type, length, etc.\"\nIsn't that the exact same thing as \n\"tediously typing in all these into an SQLAlchemy model.py file\"?\nIf those two things aren't identical, please explain how they're different.","Q_Score":5,"Tags":"python,model,sqlalchemy,data-modeling","A_Id":4330995,"CreationDate":"2010-12-01T23:52:00.000","Title":"Is there any database model designer that can output SQLAlchemy models?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just downloaded sqlite3.exe. It opens up as a command prompt. I created a table test & inserted a few entries in it. I used .backup test just in case. After I exit the program using .exit and reopened it I don't find the table listed under .tables nor can I run any query on it.\nI need to quickly run an open source python program that makes use of this table & although I have worked with MySQL, I have no clue about sqlite. I need the minimal basics of sqlite. Can someone guide me through this or at least tell me how to permanently store my tables.\nI have put this sqlite3.exe in Python folder assuming that python would then be able to read the sqlite files. Any ideas on this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2598,"Q_Id":4348658,"Users Score":0,"Answer":"Just execute sqlite3 foo.db? This will permanently store everything you do afterwards in this file. (No need for .backup.)","Q_Score":0,"Tags":"python,sqlite","A_Id":4348768,"CreationDate":"2010-12-03T18:30:00.000","Title":"How to create tables in sqlite 3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some (Excel 2000) workbooks. I want to extract the data in each worksheet to a separate file.\nI am running on Linux.\nIs there a library I can use to access (read) XLS files on Linux from Python?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2104,"Q_Id":4355435,"Users Score":0,"Answer":"The easiest way would be to run excel up under Wine or as a VM and do it from Windows. You can use Mark Hammond's COM bindings, which come bundled with ActiveState Python. 
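Since that question is specifically about Linux, the pure-Python xlrd route may be simpler than driving Excel; a small sketch that dumps every worksheet of a workbook into its own CSV file (file names are illustrative):

    import csv
    import xlrd

    book = xlrd.open_workbook('workbook.xls')
    for sheet in book.sheets():
        with open('%s.csv' % sheet.name, 'w') as out:
            writer = csv.writer(out)
            for r in range(sheet.nrows):
                writer.writerow([sheet.cell_value(r, c) for c in range(sheet.ncols)])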
Alternatively, you could export the data in CSV format and read it from that.","Q_Score":3,"Tags":"python,excel","A_Id":4355455,"CreationDate":"2010-12-04T19:41:00.000","Title":"Cross platform way to read Excel files in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?\nNote\nI am not asking whether RDBMS\/SQL is not relevant because that will only start flamewar.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":210,"Q_Id":4355909,"Users Score":0,"Answer":"The NoSQL effort has to do with creating a persistence layer that scales with modern applications using non-normalized data structures for fast reads & writes and data formats like JSON, the standard format used by ajax based systems. It is sometimes the case that transaction based relational databases do not scale well, but more often than not poor performance is directly related to poor data modeling, poor query creation and poor planning.\nNo persistence layer should have anything to do with your domain model. Using a data abstraction layer, you transform the data contained in your objects to the schema implemented in your data store. You would then use the same DAL to read data from your data store, transform and load it into your objects.\nYour data store could be XML files, an RDBMS like SQL Server or a NoSQL implementation like CouchDB. It doesn't matter.\nFWIW, I've built and inherited plenty of applications that used no model at all. For some, there's no need, but if you're using an object model it has to fit the needs of the application, not the data store and not the presentation layer.","Q_Score":0,"Tags":"python,mysql,ruby-on-rails,nosql","A_Id":4357332,"CreationDate":"2010-12-04T21:24:00.000","Title":"With the rise of NoSQL, Is it more common these days to have a webapp without any model?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?\nNote\nI am not asking whether RDBMS\/SQL is not relevant because that will only start flamewar.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":210,"Q_Id":4355909,"Users Score":0,"Answer":"SQL databases are still the order of the day. But it's becoming more common to use unstructured stores. 
NoSQL databases are well suited for some web apps, but not necessarily all of them.","Q_Score":0,"Tags":"python,mysql,ruby-on-rails,nosql","A_Id":4355924,"CreationDate":"2010-12-04T21:24:00.000","Title":"With the rise of NoSQL, Is it more common these days to have a webapp without any model?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?\nNote\nI am not asking whether RDBMS\/SQL is not relevant because that will only start flamewar.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":210,"Q_Id":4355909,"Users Score":4,"Answer":"I don't think \"NoSQL\" has anything to do with \"no model\".\nFor one, MVC originated in the Smalltalk world for desktop applications, long before the current web server architecture (or even the web itself) existed. Most apps I've written have used MVC (including the M), even those that didn't use a DBMS (R or otherwise).\nFor another, some kinds of \"NoSQL\" explicitly have a model. An object database might look, to the application code, almost just like the interface that your \"SQL RDBMS + ORM\" are trying to expose, but without all the weird quirks and explicit mapping and so on.\nFinally, you can obviously go the other way, and write SQL-based apps with no model. It may not be pretty, but I've seen it done.","Q_Score":0,"Tags":"python,mysql,ruby-on-rails,nosql","A_Id":4355976,"CreationDate":"2010-12-04T21:24:00.000","Title":"With the rise of NoSQL, Is it more common these days to have a webapp without any model?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When committing data that has originally come from a webpage, sometimes data has to be converted to a data type or format which is suitable for the back-end database. For instance, a date in 'dd\/mm\/yyyy' format needs to be converted to a Python date-object or 'yyyy-mm-dd' in order to be stored in a SQLite date column (SQLite will accept 'dd\/mm\/yyyy', but that can cause problems when data is retrieved).\nQuestion - at what point should the data be converted?\n a) As part of a generic web_page_save() method (immediately after data validation, but before a row.table_update() method is called).\n b) As part of row.table_update() (a data-object method called from web- or non-web-based applications, and includes construction of a field-value parameter list prior to executing the UPDATE command).\nIn other words, from a framework point-of-view, does the data-conversion belong to page-object processing or data-object processing?\nAny opinions would be appreciated.\nAlan","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":4360407,"Users Score":1,"Answer":"I could be wrong, but I think there is no definite answer to this question. It depends on \"language\" level your framework provides. 
For example, if another parts of the framework accept data in non-canonical form and then convert it to an internal canonical form, it this case it would worth to support some input date formats that are expected.\nI always prefer to build strict frameworks and convert data in front-ends.","Q_Score":0,"Tags":"python,sqlite","A_Id":4360475,"CreationDate":"2010-12-05T18:24:00.000","Title":"Framework design question","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When committing data that has originally come from a webpage, sometimes data has to be converted to a data type or format which is suitable for the back-end database. For instance, a date in 'dd\/mm\/yyyy' format needs to be converted to a Python date-object or 'yyyy-mm-dd' in order to be stored in a SQLite date column (SQLite will accept 'dd\/mm\/yyyy', but that can cause problems when data is retrieved).\nQuestion - at what point should the data be converted?\n a) As part of a generic web_page_save() method (immediately after data validation, but before a row.table_update() method is called).\n b) As part of row.table_update() (a data-object method called from web- or non-web-based applications, and includes construction of a field-value parameter list prior to executing the UPDATE command).\nIn other words, from a framework point-of-view, does the data-conversion belong to page-object processing or data-object processing?\nAny opinions would be appreciated.\nAlan","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":75,"Q_Id":4360407,"Users Score":2,"Answer":"I think it belongs in the validation. You want a date, but the web page inputs strings only, so the validator needs to check if the value van be converted to a date, and from that point on your application should process it like a date.","Q_Score":0,"Tags":"python,sqlite","A_Id":4360452,"CreationDate":"2010-12-05T18:24:00.000","Title":"Framework design question","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Which one of Ruby-PHP-Python is best suited for Cassandra\/Hadoop on 500M+ users? I know language itself is not a big concern but I like to know base on proven success, infrastructure and available utilities around those frameworks! thanks so much.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":339,"Q_Id":4398341,"Users Score":0,"Answer":"Because Cassandra is written in Java, a client also in Java would likely have the best stability and maturity for your application.\nAs far as choosing between those 3 dynamic languages, I'd say whatever you're most comfortable with is best. I don't know of any significant differences between client libraries in those languages.","Q_Score":0,"Tags":"php,python,ruby-on-rails,scalability,cassandra","A_Id":9921879,"CreationDate":"2010-12-09T12:46:00.000","Title":"Scability of Ruby-PHP-Python on Cassandra\/Hadoop on 500M+ users","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The question pretty much says it all. 
The database is in MySQL using phpMyAdmin. \nA little background: I'm writing the interface for a small non-profit organization. They need to be able to see which customers to ship to this month, which customers have recurring orders, etc. The current system is ancient, written in PHP 4, and I'm in charge of upgrading it. I spoke with the creator of the current system, and he agreed that it would be better to just write a new interface.\nI'm new to Python, SQL and PHP, so this is a big learning opportunity for me. I'm pretty excited. I do have a lot of programming experience though (C, Java, Objective-C), and I don't anticipate any problems picking up Python.\nSo here I am!\nThanks in advance for all your help.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":186,"Q_Id":4413840,"Users Score":0,"Answer":"What can I say? Just download the various software, dig in and ask questions here when you run into specific problems.","Q_Score":0,"Tags":"php,python,mysql,phpmyadmin","A_Id":4413898,"CreationDate":"2010-12-10T22:23:00.000","Title":"I have a MySQL database, I want to write an interface for it using Python. Help me get started, please!","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a Python logger script which writes to a CSV file in the following manner:\n\nOpen the file\nAppend data\nClose the file (I think this is necessary to save the changes, to be safe after every logging routine.)\n\nPROBLEM:\nThe file is very much accessible through Windows Explorer (I'm using XP). If the file is opened in Excel, access to it is locked by Excel. When the script tries to append data, obviously it fails, then it aborts altogether.\nOBJECTIVE:\nIs there a way to lock a file using Python so that any access to it remains exclusive to the script? Or perhaps my methodology is poor in the first place?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2562,"Q_Id":4427936,"Users Score":0,"Answer":"As far as I know, Windows does not support file locking. 
In other words, applications that don't know about your file being locked can't be prevented from reading a file.\nBut the remaining question is: how can Excel accomplish this?\nYou might want to try to write to a temporary file first (one that Excel does not know about) and replace the original file by it lateron.","Q_Score":1,"Tags":"python,logging,file-locking","A_Id":4427958,"CreationDate":"2010-12-13T10:36:00.000","Title":"Prevent a file from being opened","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.\nWhich path should be followed, any advice ?\nFor example:\nThere is a table of Customers, containing fields like Name, Address\nNow for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only.","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":159,"Q_Id":4428613,"Users Score":1,"Answer":"I agree with Mchi, there is no problem storing \"pickled\" data if you don't need to search or do relational type operations.\nDenormalisation is also an important tool that can scale up database performance when applied correctly.\nIt's probably a better idea to use JSON instead of pickles. It only uses a little more space, and makes it possible to use the database from languages other than Python","Q_Score":2,"Tags":"python,mysql","A_Id":4428933,"CreationDate":"2010-12-13T12:04:00.000","Title":"Is it a good practice to use pickled data instead of additional tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.\nWhich path should be followed, any advice ?\nFor example:\nThere is a table of Customers, containing fields like Name, Address\nNow for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only.","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":159,"Q_Id":4428613,"Users Score":2,"Answer":"Mixing SQL databases and pickling seems to ask for trouble. I'd go with either sticking all data in the SQL databases or using only pickling, in the form of the ZODB, which is a Python only OO database that is pretty damn awesome. 
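To make the JSON suggestion above concrete, a sketch of keeping the orders serialized in a single column; sqlite3 is used purely for brevity and the schema is invented:

    import json
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE customers (name TEXT, address TEXT, orders TEXT)')

    orders = [{'sku': 'A-1', 'qty': 2}, {'sku': 'B-9', 'qty': 1}]   # example data
    conn.execute('INSERT INTO customers VALUES (?, ?, ?)',
                 ('Acme', 'Somewhere 1', json.dumps(orders)))

    row = conn.execute('SELECT orders FROM customers').fetchone()
    print(json.loads(row[0]))   # back to a Python list, but not queryable by SQL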
\nMixing makes case sometimes, but is usually just more trouble than it's worth.","Q_Score":2,"Tags":"python,mysql","A_Id":4429509,"CreationDate":"2010-12-13T12:04:00.000","Title":"Is it a good practice to use pickled data instead of additional tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.\nWhich path should be followed, any advice ?\nFor example:\nThere is a table of Customers, containing fields like Name, Address\nNow for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only.","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":4428613,"Users Score":0,"Answer":"I agree with @Lennart Regebro. You should probably see whether you need a Relational DB or an OODB. If RDBMS is your choice, I would suggest you stick with more tables. IMHO, pickling may have issues with scalability. If thats what you want, you should look at ZODB. It is pretty good and supports caching etc for better performance","Q_Score":2,"Tags":"python,mysql","A_Id":4432349,"CreationDate":"2010-12-13T12:04:00.000","Title":"Is it a good practice to use pickled data instead of additional tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.\nWhich path should be followed, any advice ?\nFor example:\nThere is a table of Customers, containing fields like Name, Address\nNow for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only.","AnswerCount":5,"Available Count":4,"Score":0.1194272985,"is_accepted":false,"ViewCount":159,"Q_Id":4428613,"Users Score":3,"Answer":"Usually it's best to keep your data normalized (i.e. create more tables). Storing data 'pickled' as you say, is acceptable, when you don't need to perform relational operations on them.","Q_Score":2,"Tags":"python,mysql","A_Id":4428635,"CreationDate":"2010-12-13T12:04:00.000","Title":"Is it a good practice to use pickled data instead of additional tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to be able to plot a call graph of a stored procedure. 
I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and skip it maybe or mark it as such.)\nI would like the tool to generate a tree for me, given the server name, db name, stored proc name, a \"call tree\", which includes:\n\nParent stored procedure.\nEvery other stored procedure that is being called as a child of the caller.\nEvery table that is being modified (updated or deleted from) as a child of the stored proc which does it.\n\nHopefully it is clear what I am after; if not - please do ask. If there is not a tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions?\nEDIT: For the purposes of bounty Warning: SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :)\nEDIT2: I would be OK if all I am missing is graphics.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6375,"Q_Id":4445117,"Users Score":0,"Answer":"SQL Negotiator Pro has a free lite version at www.aphilen.com\nThe full version is the only product out there that will find all dependencies and not stop after finding the first 10 child dependencies. Other products fail when there is a circular reference and just hang, these guys have covered this off. Also a neat feature is the ability to add notes to the diagram so that it can be easily distributed.\nFull version is not cheap but has saved us plenty of hours usually required figuring out complex database procedures. apex also provide a neat tool","Q_Score":8,"Tags":"sql-server-2008,stored-procedures,python-2.6,call-graph","A_Id":18523367,"CreationDate":"2010-12-14T22:54:00.000","Title":"Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I create hangman game with silverlight ironpython and I use data in postgresql for random word but I don't know to access data in postgresql in silverlight.\nhow can or should it be done?\nThanks!!","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":950,"Q_Id":4470073,"Users Score":3,"Answer":"From Silverlight you cannot access a database directly (remember it's a web technology that actually runs locally on the client and the client cannot access your database directly over the internet). \nTo communicate with the server from Silverlight, you must create a separated WebService either with SOAP, WCF or RIA Services for example. \nThat Webservice will expose your data on the web. 
Call the WebService method to get your data from your Silverlight program.\nThis WebService layer will be your middle tiers that actually makes the bridge between your postgresql database and your Silverlight application.","Q_Score":0,"Tags":"silverlight,postgresql,silverlight-4.0,silverlight-3.0,ironpython","A_Id":4470466,"CreationDate":"2010-12-17T11:37:00.000","Title":"How to access PostgreSQL with Silverlight","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to implement a function that takes a lambda as the argument and queries the database. I use SQLAlchemy for ORM. Is there a way to pass the lambda, that my function receives, to SQLAlchemy to create a query?\nSincerely,\nRoman Prykhodchenko","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1681,"Q_Id":4470481,"Users Score":2,"Answer":"I guess you want to filter the data with the lambda, like a WHERE clause? Well, no, functions nor lambdas cannot be turned into a SQL query. Sure, you could just fetch all the data and filter it in Python, but that completely defeats the purpose of the database.\nYou'll need to recreate the logic you put into the lambda with SQLAlchemy.","Q_Score":2,"Tags":"python,sqlalchemy","A_Id":4470921,"CreationDate":"2010-12-17T12:33:00.000","Title":"Can I use lambda to create a query in SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The desire is to have the user provide information in an OpenOffice Writer or MS Word file that is inserted into part of a ReportLab generated PDF. I am comfortable with ReportLab; but, I don't have any experience with using Writer or Word data in this way. How would you automate the process of pulling in the Writer\/Word data? Is it possible to retain tables and graphs?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":147,"Q_Id":4478478,"Users Score":0,"Answer":"You can not embed such objects as is within a PDF, adobe specification does not support that. However you could always parse the data from the Office document and reproduce it as a table\/graph\/etc using reportlab in the output PDF. If you don't care about the data being an actual text you could always save it in the PDF as an image.","Q_Score":1,"Tags":"python,ms-word,reportlab,openoffice-writer","A_Id":4691989,"CreationDate":"2010-12-18T14:36:00.000","Title":"Is it possible to include OpenOffice Writer or MS Word data in a ReportLab generated PDF?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing an application for managers that might be used in a large organisation. The app is improved and extended step by step on a frequent (irregular) basis. The app will have SQL connections to several databases and has a complex GUI.\nWhat would you advise to deploy the app ?\nBased on my current (limited) knowledge of apps in lager organisations I prefer a setup where the app runs on a server and the user uses a thin client via the web. 
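Returning to the lambda-and-SQLAlchemy question above: one way to keep a lambda-style interface is to let the lambda operate on the Query object rather than on individual rows, so the filtering still runs as SQL. A self-contained sketch with an invented User model:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):                      # illustrative model only
        __tablename__ = 'users'
        id = Column(Integer, primary_key=True)
        name = Column(String)
        age = Column(Integer)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    def run_query(refine):
        # "refine" receives a Query and must return a Query.
        return refine(session.query(User)).all()

    adults = run_query(lambda q: q.filter(User.age >= 18))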
I prefer not to use a webbrowser because of (possible)limitations of the user GUI. The user experience should be as if the app was running on his own laptop\/pc\/tablet(?)\nWhat opensource solution would you advise ?\nThanks !","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":318,"Q_Id":4485404,"Users Score":1,"Answer":"If possible, make the application run without any installation procedure, and provide it on a network share (e.g. with a fixed UNC path). You didn't specify the client operating system: if it's Windows, create an MSI that sets up something in the start menu that will still make the application launch from the network share.\nWith that approach, updates will be as simple as replacing the files on the file server - yet it will always run on the client.","Q_Score":1,"Tags":"python,client-server,rich-internet-application","A_Id":4485440,"CreationDate":"2010-12-19T22:13:00.000","Title":"Deploy python application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Okay, so what I want to do is upload an excel sheet and display it on my website, in html. What are my options here ? I've found this xlrd module that allows you to read the data from spreadsheets, but I don't really need that right now.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1961,"Q_Id":4498678,"Users Score":4,"Answer":"Why don't you need xlrd? It sounds like exactly what you need. \nCreate a Django model with a FileField that holds the spreadsheet. Then your view uses xlrd to loop over the rows and columns and put them into an HTML table. Job done.\nPossible complications: multiple sheets in one Excel file; formulas; styles.","Q_Score":1,"Tags":"python,html,django,excel","A_Id":4499265,"CreationDate":"2010-12-21T11:20:00.000","Title":"Python\/Django excel to html","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a CSV file which is about 1GB big and contains about 50million rows of data, I am wondering is it better to keep it as a CSV file or store it as some form of a database. I don't know a great deal about MySQL to argue for why I should use it or another database framework over just keeping it as a CSV file. I am basically doing a Breadth-First Search with this dataset, so once I get the initial \"seed\" set the 50million I use this as the first values in my queue.\nThanks,","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1161,"Q_Id":4505170,"Users Score":0,"Answer":"How about some key-value storages like MongoDB","Q_Score":2,"Tags":"python,mysql,database,optimization,csv","A_Id":4505300,"CreationDate":"2010-12-22T00:20:00.000","Title":"50 million+ Rows of Data - CSV or MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file which is about 1GB big and contains about 50million rows of data, I am wondering is it better to keep it as a CSV file or store it as some form of a database. 
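To flesh out the FileField-plus-xlrd answer above, a compressed sketch of the Django side; the model, view, template and upload path names are invented, and render_to_response reflects the Django versions of that era:

    import xlrd
    from django.db import models
    from django.shortcuts import render_to_response

    class Spreadsheet(models.Model):
        upload = models.FileField(upload_to='spreadsheets')

    def spreadsheet_as_table(request, pk):
        path = Spreadsheet.objects.get(pk=pk).upload.path
        sheet = xlrd.open_workbook(path).sheet_by_index(0)
        rows = [[sheet.cell_value(r, c) for c in range(sheet.ncols)]
                for r in range(sheet.nrows)]
        # spreadsheet.html just loops over "rows" and emits <tr>/<td> cells.
        return render_to_response('spreadsheet.html', {'rows': rows})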
I don't know a great deal about MySQL to argue for why I should use it or another database framework over just keeping it as a CSV file. I am basically doing a Breadth-First Search with this dataset, so once I get the initial \"seed\" set the 50million I use this as the first values in my queue.\nThanks,","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1161,"Q_Id":4505170,"Users Score":3,"Answer":"I would say that there are a wide variety of benefits to using a database over a CSV for such large structured data so I would suggest that you learn enough to do so. However, based on your description you might want to check out non-server\/lighter weight databases. Such as SQLite, or something similar to JavaDB\/Derby... or depending on the structure of your data a non-relational (Nosql) database- obviously you will need one with some type of python support though.","Q_Score":2,"Tags":"python,mysql,database,optimization,csv","A_Id":4505218,"CreationDate":"2010-12-22T00:20:00.000","Title":"50 million+ Rows of Data - CSV or MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file which is about 1GB big and contains about 50million rows of data, I am wondering is it better to keep it as a CSV file or store it as some form of a database. I don't know a great deal about MySQL to argue for why I should use it or another database framework over just keeping it as a CSV file. I am basically doing a Breadth-First Search with this dataset, so once I get the initial \"seed\" set the 50million I use this as the first values in my queue.\nThanks,","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":1161,"Q_Id":4505170,"Users Score":1,"Answer":"Are you just going to slurp in everything all at once? If so, then CSV is probably the way to go. It's simple and works.\nIf you need to do lookups, then something that lets you index the data, like MySQL, would be better.","Q_Score":2,"Tags":"python,mysql,database,optimization,csv","A_Id":4505180,"CreationDate":"2010-12-22T00:20:00.000","Title":"50 million+ Rows of Data - CSV or MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to write a small web app to utilise a dataset I already have stored in a MongoDB collection. I've been writing more Python than other languages lately and would like to broaden my repertoire and write a Python web app. \nIt seems however that most if not all of the current popular Python web development frameworks favour MySQL and others with no mention given to MongoDB. \nI am aware that there are more than likely plugins written to allow Mongo be used with existing frameworks but so far have found little as to documentation that compares and contrasts them. \nI was wondering what in people's experience is the Python web development framework with the best MongoDB support?\nMany thanks in advance,\nPatrick","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":9196,"Q_Id":4534684,"Users Score":0,"Answer":"There is no stable support for mongodb using django framework. 
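[Editor's illustration for the SQLite suggestion above (50 million CSV rows): a small sketch, assuming a simple two-column CSV with no header row, of loading the data into an indexed SQLite table so per-node lookups during the breadth-first search don't rescan the whole file. Table and column names are hypothetical.]

import csv
import sqlite3

conn = sqlite3.connect('graph.db')
conn.execute("CREATE TABLE IF NOT EXISTS edges (src TEXT, dst TEXT)")

# csv.reader yields one row per line; executemany streams them in.
reader = csv.reader(open('data.csv', 'rb'))
conn.executemany("INSERT INTO edges (src, dst) VALUES (?, ?)", reader)
conn.commit()

# The index is what makes lookups cheap compared to re-reading the CSV.
conn.execute("CREATE INDEX IF NOT EXISTS idx_src ON edges (src)")
conn.commit()

neighbours = conn.execute(
    "SELECT dst FROM edges WHERE src = ?", ('seed',)).fetchall()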
I tried using mongoengine, but unlike models, provided for admin in django framework, there is no support for mongoengine.\nCorrect if I am wrong.","Q_Score":17,"Tags":"python,mongodb","A_Id":50201839,"CreationDate":"2010-12-26T17:14:00.000","Title":"Python Web Framework with best Mongo support","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm new to MySQL, and I have a question about the memory. \nI have a 200mb table(MyISAM, 2,000,000 rows), and I try to load all of it to the \nmemory. \nI use python(actually MySQLdb in python) with sql: SELECT * FROM table. \nHowever, from my linux \"top\" I saw this python process uses 50% of my memory(which is total 6GB) \nI'm curious about why it uses about 3GB memory only for a 200 mb table. \nThanks in advance!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4116,"Q_Id":4559402,"Users Score":0,"Answer":"In pretty much any scripting language, a variable will always take up more memory than its actual contents would suggest. An INT might be 32 or 64bits, suggesting it would require 4 or 8 bytes of memory, but it will take up 16 or 32bytes (pulling numbers out of my hat), because the language interpreter has to attach various metadata to that value along the way.\nThe database might only require 200megabytes of raw storage space, but once you factor in the metadata, it will definitely occupy much much more.","Q_Score":1,"Tags":"python,mysql","A_Id":4559691,"CreationDate":"2010-12-30T01:47:00.000","Title":"the Memory problem about MySQL \"SELECT *\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to MySQL, and I have a question about the memory. \nI have a 200mb table(MyISAM, 2,000,000 rows), and I try to load all of it to the \nmemory. \nI use python(actually MySQLdb in python) with sql: SELECT * FROM table. \nHowever, from my linux \"top\" I saw this python process uses 50% of my memory(which is total 6GB) \nI'm curious about why it uses about 3GB memory only for a 200 mb table. \nThanks in advance!","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":4116,"Q_Id":4559402,"Users Score":-1,"Answer":"This is almost certainly a bad design.\nWhat are you doing with all that data in memory at once? \nIf it's for one user, why not pare the size down so you can support multiple users?\nIf you're doing a calculation on the middle tier, is it possible to shift the work to the database server so you don't have to bring all the data into memory?\nYou know you can do this, but the larger questions are (1) why? and (2) what else could you do? We'd need more context to answer these.","Q_Score":1,"Tags":"python,mysql","A_Id":4559443,"CreationDate":"2010-12-30T01:47:00.000","Title":"the Memory problem about MySQL \"SELECT *\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I update multiple records in a queryset efficiently? \nDo I just loop over the queryset, edit , and call save() for each one of them? 
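[Editor's note on the "SELECT * uses 3GB" answers above: the answers explain the per-value overhead but don't show code. One common way to avoid holding all 2,000,000 rows in Python memory at once — not spelled out in the answers themselves — is MySQLdb's server-side cursor, which streams rows as you fetch them. Connection parameters and the process() handler below are placeholders.]

import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='mydb',
                       cursorclass=MySQLdb.cursors.SSCursor)
cur = conn.cursor()
cur.execute("SELECT * FROM mytable")

row = cur.fetchone()
while row is not None:
    process(row)           # hypothetical per-row handler
    row = cur.fetchone()   # rows are pulled from the server as needed

cur.close()
conn.close()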
Is it equivalent to psycopg2's executemany?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6395,"Q_Id":4600938,"Users Score":6,"Answer":"If you have to update each record with a different value, then of couse you have to iterate over each record. If you wish to do update them all with the same value, then just use the update method of the queryset.","Q_Score":1,"Tags":"python,django","A_Id":4601203,"CreationDate":"2011-01-05T04:53:00.000","Title":"Django: how can I update more than one record at once?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a project coming up that involves a desktop application (tournament scoring for an amateur competition) that probably 99+% of the time will be a single-user on a single machine, no network connectivity, etc. For that, sqlite will likely work beautifully. For those other few times when there are more than one person, with more than one computer, and some form of network... they would ideally need to be able to enter data (event registration and putting in scores) to a central database such as a MySQL or PostgreSQL server. I don't envision a need for synchronizing data between the local (sqlite) and remote databases, just a need to be able to switch via preferences or configuration file which kind of database the program should connect to the next time its started (along with the connection info for any remote database).\nI'm fairly new at this kind of programming, and this will likely take me a good while to get where I want it... but I'd prefer to avoid going down the wrong path early on (at least on major things like this). Given my limited understanding of things like ORMs it seems like this would be a near-ideal use for something like SQLAlchemy, no? Or would the 'batteries included' python db-api be generic enough for this kind of task?\nTIA,\nMonte","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":245,"Q_Id":4610698,"Users Score":1,"Answer":"Yes, SQLAlchemy will help you to be independent on what SQL database you use, and you get a nice ORM as well. Highly recommended.","Q_Score":2,"Tags":"python,database,sqlite,orm,sqlalchemy","A_Id":4612684,"CreationDate":"2011-01-06T00:31:00.000","Title":"creating a database-neutral app in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a project coming up that involves a desktop application (tournament scoring for an amateur competition) that probably 99+% of the time will be a single-user on a single machine, no network connectivity, etc. For that, sqlite will likely work beautifully. For those other few times when there are more than one person, with more than one computer, and some form of network... they would ideally need to be able to enter data (event registration and putting in scores) to a central database such as a MySQL or PostgreSQL server. 
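[Editor's illustration for the Django bulk-update answer above: the same value for every row goes through the queryset's update() in one SQL statement, while per-row values still require iteration. The Article model and its fields are hypothetical.]

from myapp.models import Article   # hypothetical app and model

# One UPDATE statement for all matching rows -- no per-object save() calls.
Article.objects.filter(published=False).update(status='draft')

# When each record needs a different value, iteration is unavoidable:
for article in Article.objects.filter(published=True):
    article.slug = article.title.lower().replace(' ', '-')
    article.save()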
I don't envision a need for synchronizing data between the local (sqlite) and remote databases, just a need to be able to switch via preferences or configuration file which kind of database the program should connect to the next time its started (along with the connection info for any remote database).\nI'm fairly new at this kind of programming, and this will likely take me a good while to get where I want it... but I'd prefer to avoid going down the wrong path early on (at least on major things like this). Given my limited understanding of things like ORMs it seems like this would be a near-ideal use for something like SQLAlchemy, no? Or would the 'batteries included' python db-api be generic enough for this kind of task?\nTIA,\nMonte","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":245,"Q_Id":4610698,"Users Score":-1,"Answer":"I don't see how those 2 use cases would use the same methods. Just create a wrapper module that conditionally imports either the sqlite or sqlalchemy modules or whatever else you need.","Q_Score":2,"Tags":"python,database,sqlite,orm,sqlalchemy","A_Id":4610735,"CreationDate":"2011-01-06T00:31:00.000","Title":"creating a database-neutral app in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a web app that uses stock data. The stock data can be stored in:\n\nFiles \nDB\n\nThe structure of the data is simple: there's a daily set and a weekly set. If files are used, then I can store a file per symbol\/set, such as GOOGLE_DAILY and GOOGLE_WEEKLY. Each set includes a simple list of (Date, open\/hight\/low\/close, volume, dividend) fields.\nBut how can I do it with DB? Should I use relational or other db? I thought about using 2 tables per each symbol, but that would generate thousands of tables, which doesn't feel right.\nThanks.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":4613251,"Users Score":3,"Answer":"You don't need a table per stock symbol, you just need one of the fields in the table to be the stock symbol. The table might be called StockPrices and its fields might be\n\nticker_symbol - the stock ticker symbol\ntime - the time of the stock quote\nprice - the price of the stock at that time\n\nAs long as ticker_symbol is an indexed field you can do powerful queries like SELECT time,price FROM StockPrices WHERE ticker_symbol='GOOG' ORDER BY time DESC and they will be very efficient. You can also store as many symbols as you like in this table.\nYou could add other tables for dividends, volume information and such. In all cases you probably have a composite key of ticker_symbol and time.","Q_Score":0,"Tags":"python,django,data-structures","A_Id":4613300,"CreationDate":"2011-01-06T09:02:00.000","Title":"Help needed with db structure","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Im setting a VM. \nBoth host and VM machine have Mysql. \nHow do keep the VM Mysql sync'd to to the host Mysql. \nHost is using MYsql 5.5 on XP.\nVM is Mysql 5.1 on Fedora 14. \n1) I could DUMP to \"shared,\" Restore. Not sure if this will work.\n2) I could network Mysql Host to Mysql VM. 
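[Editor's illustration for the StockPrices answer above, written out as SQL run through Python's DB API. SQLite is used only to keep the sketch self-contained; the composite primary key on (ticker_symbol, time) doubles as the index that makes the per-symbol query efficient.]

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE StockPrices (
        ticker_symbol TEXT NOT NULL,
        time          TEXT NOT NULL,
        price         REAL NOT NULL,
        PRIMARY KEY (ticker_symbol, time)
    )""")
conn.execute(
    "INSERT INTO StockPrices VALUES ('GOOG', '2011-01-06T16:00:00', 613.50)")
conn.commit()

rows = conn.execute(
    "SELECT time, price FROM StockPrices WHERE ticker_symbol = ? ORDER BY time DESC",
    ('GOOG',)).fetchall()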
Not how to do this \nHow would I do this with python 2.7? \nI dont want them in sync after set-up phase. But, maybe sync some tables or SP occasionly on-rewrites. After I build out Linux Env. I would like to be able to convert V2P and have a dual-boot system.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2319,"Q_Id":4619392,"Users Score":0,"Answer":"Do you want it synced in realtime?\nWhy not just connect the guest's mysql process to the host?","Q_Score":1,"Tags":"python,mysql,virtual-machine","A_Id":4621472,"CreationDate":"2011-01-06T20:15:00.000","Title":"How to Sync MySQL with python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im setting a VM. \nBoth host and VM machine have Mysql. \nHow do keep the VM Mysql sync'd to to the host Mysql. \nHost is using MYsql 5.5 on XP.\nVM is Mysql 5.1 on Fedora 14. \n1) I could DUMP to \"shared,\" Restore. Not sure if this will work.\n2) I could network Mysql Host to Mysql VM. Not how to do this \nHow would I do this with python 2.7? \nI dont want them in sync after set-up phase. But, maybe sync some tables or SP occasionly on-rewrites. After I build out Linux Env. I would like to be able to convert V2P and have a dual-boot system.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2319,"Q_Id":4619392,"Users Score":1,"Answer":"You can use mysqldump to make snapshots of the database, and to restore it to known states after tests. \nBut instead or going into the complication of synchronizing different database instances, it would be best to open the host machine's instance to local network access, and have the applications in the virtual machine access that as if it was a remote server. Overall performance should improve too. \nEven if you decide to run different databases for the host and the guest, run then both on the host's MySQL instance. Performance will be better, configuration management will be easier, and the apps in the guest will be tested against a realistic deployment environment.","Q_Score":1,"Tags":"python,mysql,virtual-machine","A_Id":4619503,"CreationDate":"2011-01-06T20:15:00.000","Title":"How to Sync MySQL with python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know of PyMySQLDb, is that pretty much the thinnest\/lightest way of accessing MySql?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2598,"Q_Id":4620340,"Users Score":0,"Answer":"MySQLDb is faster while SQLAlchemy makes code more user friendly -:)","Q_Score":2,"Tags":"python,mysql","A_Id":9090731,"CreationDate":"2011-01-06T21:56:00.000","Title":"What is the fastest\/most performant SQL driver for Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know of PyMySQLDb, is that pretty much the thinnest\/lightest way of accessing MySql?","AnswerCount":3,"Available Count":3,"Score":0.3215127375,"is_accepted":false,"ViewCount":2598,"Q_Id":4620340,"Users Score":5,"Answer":"The fastest is SQLAlchemy. 
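[Editor's illustration for the accepted MySQL host/VM answer above (run one instance on the host and let the guest treat it as a remote server): on the Python 2.7 side only the connection parameters change. The host-only network address and credentials are placeholders, and the host's MySQL must be configured to accept network connections.]

import MySQLdb

# Inside the VM: connect to the host machine's MySQL over the virtual network
# instead of a local socket.
conn = MySQLdb.connect(host='192.168.56.1',   # hypothetical host-only network address
                       port=3306,
                       user='app',
                       passwd='secret',
                       db='mydb')
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print cur.fetchone()[0]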
\n\"Say what!?\"\nWell, a nice ORM, and I like SQLAlchemy, you will get your code finished much faster. If your code then runs 0.2 seconds slower isn't really gonna make any noticeable difference. :)\nNow if you get performance problems, then you can look into improving the code. But choosing the access module after who in theory is \"fastest\" is premature optimization.","Q_Score":2,"Tags":"python,mysql","A_Id":4620669,"CreationDate":"2011-01-06T21:56:00.000","Title":"What is the fastest\/most performant SQL driver for Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know of PyMySQLDb, is that pretty much the thinnest\/lightest way of accessing MySql?","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":2598,"Q_Id":4620340,"Users Score":3,"Answer":"The lightest possible way is to use ctypes and directly call into the MySQL API, of course, without using any translation layers. Now, that's ugly and will make your life miserable unless you also write C, so yes, the MySQLDb extension is the standard and most performant way to use MySQL while still using the Python Database API. Almost anything else will be built on top of that or one of its predecessors.\nOf course, the connection layer is rarely where all of the database speed problems come from. That's mostly from misusing the API you have or building a bad database or queries.","Q_Score":2,"Tags":"python,mysql","A_Id":4620433,"CreationDate":"2011-01-06T21:56:00.000","Title":"What is the fastest\/most performant SQL driver for Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I work with Oracle Database and lastest Django but when i use the default user model is the query very slow\nwhat can i do?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":305,"Q_Id":4625835,"Users Score":2,"Answer":"The solution was to add an index.","Q_Score":2,"Tags":"python,django,oracle,django-models,model","A_Id":6583775,"CreationDate":"2011-01-07T13:18:00.000","Title":"how can i optimize a django oracle connection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I know that with an InnoDB table, transactions are autocommit, however I understand that to mean for a single statement? For example, I want to check if a user exists in a table, and then if it doesn't, create it. However there lies a race condition. I believe using a transaction prior to doing the select, will ensure that the table remains untouched until the subsequent insert, and the transaction is committed. How can you do this with MySQLdb and Python?","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":363,"Q_Id":4637886,"Users Score":4,"Answer":"There exists a SELECT ... FOR UPDATE that allows you to lock the rows from being read by another transaction but I believe the records have to exist in the first place. Then you can do as you say, and unlock it once you commit. 
\nIn your case I think the best approach is to simply set a unique constraint on the username and try to insert. If you get a key exception you can notify the user that the name was already taken.","Q_Score":2,"Tags":"python,mysql","A_Id":4656098,"CreationDate":"2011-01-09T05:47:00.000","Title":"How do you create a transaction that spans multiple statements in Python with MySQLdb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a django project with a long running (~3hour) management command\nin my production environment ( apache mod_wsgi ) this process fails with a broken pipe(32) at the end, when trying to update the database.\nthank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":571,"Q_Id":4644317,"Users Score":1,"Answer":"The broken pipe mostly mean that one socket in the canal of transmission has been closed without notifying the other one , in your case i think it mean that the database connection that you have establish was closed from the database part, so when you code try to use it, it raise the exception.\nUsually the database connection has a time out which \"usually\" you can configure by making it more bigger to solve this kind of problem , check your database documentation to see how.\nN.B: you don't give us much detail so i'm just trying to make assumption here.\nWell hope this can help.","Q_Score":1,"Tags":"python,django,apache,mod-wsgi","A_Id":4644443,"CreationDate":"2011-01-10T06:42:00.000","Title":"django long running process database connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I quickly hit the quota imposed by google. A final option is to use django-nonrel + djangoappengine, but I'm afraid that package is still in its infancy.\nIdeally, I'd like to create a read-only sqlite database that uses a blobstore as its data source. Is this possible?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":803,"Q_Id":4663071,"Users Score":2,"Answer":"I don't think you're likely to find anything like that...surely not over blobstore. Because if all your data is stored in a single blob, you'd have to read the entire database into memory for any operation, and you said you can't do that.\nUsing the datastore as your backend is more plausible, but not much. 
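[Editor's illustration for the MySQLdb answer above that recommends a unique constraint over SELECT-then-INSERT: insert unconditionally and treat the duplicate-key error as "name already taken". Table layout and connection details are assumptions.]

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='mydb')
cur = conn.cursor()
# Assumes the table was created with a unique constraint, e.g.
#   CREATE TABLE users (..., username VARCHAR(64), UNIQUE (username))
try:
    cur.execute("INSERT INTO users (username) VALUES (%s)", ('alice',))
    conn.commit()
except MySQLdb.IntegrityError:
    conn.rollback()
    # The name was already taken -- report it to the user instead of
    # racing a separate existence check.
    username_taken = True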
The big issue with providing a SQLite driver there would be implementing transaction semantics, and since that's the key thing GAE takes away from you for the sake of high availability, it's hard to imagine somebody going to much trouble to write such a thing.","Q_Score":2,"Tags":"python,google-app-engine,sqlite,relational-database,non-relational-database","A_Id":4663353,"CreationDate":"2011-01-11T21:55:00.000","Title":"A Read-Only Relational Database on Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I quickly hit the quota imposed by google. A final option is to use django-nonrel + djangoappengine, but I'm afraid that package is still in its infancy.\nIdeally, I'd like to create a read-only sqlite database that uses a blobstore as its data source. Is this possible?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":803,"Q_Id":4663071,"Users Score":2,"Answer":"django-nonrel does not magically provide an SQL database - so it's not really a solution to your problem.\nAccessing a blobstore blob like a file is possible, but the SQLite module requires a native C extension, which is not enabled on App Engine.","Q_Score":2,"Tags":"python,google-app-engine,sqlite,relational-database,non-relational-database","A_Id":4663631,"CreationDate":"2011-01-11T21:55:00.000","Title":"A Read-Only Relational Database on Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to use clr.AddReference to add sqlite3 functionality to a simple IronPython program I'm writing; but everytime I try to reference System.Data.SQLite I get this error:\n\nTraceback (most recent call last):\n File \"\", line 1, in \n IOError: System.IO.IOException: Could not add reference to assembly System.Data.SQLite\n at Microsoft.Scripting.Actions.Calls.MethodCandidate.Caller.Call(Object[] args, Boolean&shouldOptimize)\n at IronPython.Runtime.Types.BuiltinFunction.BuiltinFunctionCaller2.Call1(CallSite site, CodeContext context, TFuncType func, T0 arg0)\n at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2)\n at CallSite.Target(Closure , CallSite , CodeContext , Object , Object )\n at IronPython.Compiler.Ast.CallExpression.Invoke1Instruction.Run(InterpretedFrame frame)\n at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)\n at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)\n at IronPython.Runtime.FunctionCode.Call(CodeContext context)\n at IronPython.Runtime.Operations.PythonOps.QualifiedExec(CodeContext context, Object code, PythonDictionary globals, Object locals)\n at Microsoft.Scripting.Interpreter.ActionCallInstruction4.Run(InterpretedFrame frame)\n at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)\n\nI've been testing out the imports and references in 
the interpreter mainly, and these are the lines I test:\n\nimport sys\n import clr\n sys.path.append(\"C:\/Program Files (x86)\/SQLite.NET\/bin\")\n clr.AddReference(\"System.Data.SQLite\") \n\nThe error happens after the clr.AddReference line is entered. How would I add System.Data.SQLite properly?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1695,"Q_Id":4682960,"Users Score":1,"Answer":"My first guess is that you're trying to load the x86 (32-bit) System.Data.SQLite.dll in a x64 (64-bit) process, or vice versa. System.Data.SQLite.dll contains the native sqlite3 library, which must be compiled for x86 or x64, so there is a version of System.Data.SQLite.dll for each CPU.\nIf you're using the console, ipy.exe is always 32-bit (even on 64-bit platforms) while ipy64.exe is AnyCPU, so it matches the current platform. If you're hosting IronPython, and the host app is AnyCPU, you need to load the right copy of System.Data.SQLite.dll for the machine you're running on (or just force the host app x86).","Q_Score":1,"Tags":"ado.net,ironpython,system.data.sqlite","A_Id":4696478,"CreationDate":"2011-01-13T17:11:00.000","Title":"Adding System.Data.SQLite reference in IronPython","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I read somewhere that to save data to a SQLite3 database in Python, the method commit of the connection object should be called. Yet I have never needed to do this. Why?","AnswerCount":5,"Available Count":1,"Score":0.1194272985,"is_accepted":false,"ViewCount":20808,"Q_Id":4699605,"Users Score":3,"Answer":"Python sqlite3 issues a BEGIN statement automatically before \"INSERT\" or \"UPDATE\". After that it automatically commits on any other command or db.close()","Q_Score":18,"Tags":"python,transactions,sqlite,autocommit","A_Id":15967816,"CreationDate":"2011-01-15T12:36:00.000","Title":"Why doesn\u2019t SQLite3 require a commit() call to save data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a situation where I'm contemplating using subversion\/svn as the repository\/version control system for a project. I'm trying to figure out if it's possible, (and if so, how) to be able to have the subversion system, on a post commit hook\/process to to write the user\/file\/time (and maybe msg) to either an external file (csv) or to a mysql db. \nOnce I can figure out how to invoke the post commit hook to write the output to a file, I can then modify my issue tracker\/project app to then implement a basic workflow process based on the user role, as well as the success\/failure of the repository files.\nShort sample\/pointers would be helpful.\nMy test env, is running subversion\/svnserve on centos5. 
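[Editor's illustration for the sqlite3 commit() answer above: with the default isolation level an implicit BEGIN is opened for the INSERT and commit() is what makes it durable, while isolation_level=None puts the connection into autocommit mode — one situation in which code can appear to work without ever calling commit().]

import sqlite3

conn = sqlite3.connect('example.db')          # default isolation level
conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")      # an implicit BEGIN is issued here
conn.commit()                                 # the row is persisted at this point

auto = sqlite3.connect('example.db', isolation_level=None)   # autocommit mode
auto.execute("INSERT INTO t VALUES (2)")      # written immediately, no commit() needed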
The scripting languages in use are Php\/Python.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1005,"Q_Id":4701902,"Users Score":0,"Answer":"I would say that's possible, but you are going to need a bit of work to retrieve the username, date and commit message.\nSubversion invokes the post-commit hook with the repo path and the number of revision which was just committed as arguments.\nIn order to retrieve the information you're looking for, you will need to use an executable by the name of svnlook, which is bundled with Subversion.\nSee repo\\hooks\\post-commit.tmpl for a rather clear explanation about how to use it\nAlso, take a look at svnlook help, it's not difficult to use.","Q_Score":1,"Tags":"php,python,svn,hook,svn-hooks","A_Id":4701984,"CreationDate":"2011-01-15T20:14:00.000","Title":"subversion post commit hooks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a situation where I'm contemplating using subversion\/svn as the repository\/version control system for a project. I'm trying to figure out if it's possible, (and if so, how) to be able to have the subversion system, on a post commit hook\/process to to write the user\/file\/time (and maybe msg) to either an external file (csv) or to a mysql db. \nOnce I can figure out how to invoke the post commit hook to write the output to a file, I can then modify my issue tracker\/project app to then implement a basic workflow process based on the user role, as well as the success\/failure of the repository files.\nShort sample\/pointers would be helpful.\nMy test env, is running subversion\/svnserve on centos5. The scripting languages in use are Php\/Python.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1005,"Q_Id":4701902,"Users Score":0,"Answer":"Indeed it is very possible, in your repository root there should be a folder named hooks, inside which should be a file named post-commit (if not, create one), add whatever bash code you put there and it will execute after every commit.\nNote, there are 2 variables that are passed into the script $1 is the repository, and $2 is the revision number (i think), you can use those two variables to execute some svn commands\/queries, and pull out the required data, and do with it whatever your heart desires.","Q_Score":1,"Tags":"php,python,svn,hook,svn-hooks","A_Id":4701973,"CreationDate":"2011-01-15T20:14:00.000","Title":"subversion post commit hooks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"According to the Bigtable original article, a column key of a Bigtable is named using \"family:qualifier\" syntax where column family names must be printable but qualifiers may be arbitrary strings. In the application I am working on, I would like to specify the qualifiers using Chinese words (or phrase). Is it possible to do this in Google App Engine? Is there a Bigtable API other than provided datastore API? 
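[Editor's illustration combining the two post-commit-hook answers above: Subversion invokes hooks/post-commit with the repository path and revision, and svnlook extracts the author, date, message and changed paths. The CSV path is a placeholder; a MySQL INSERT could replace the writerow() call.]

#!/usr/bin/env python
# repo/hooks/post-commit (must be executable); Subversion passes two arguments.
import csv
import subprocess
import sys

repo, rev = sys.argv[1], sys.argv[2]

def svnlook(subcommand):
    proc = subprocess.Popen(['svnlook', subcommand, '-r', rev, repo],
                            stdout=subprocess.PIPE)
    return proc.communicate()[0].strip()

author  = svnlook('author')
date    = svnlook('date')
message = svnlook('log')
changed = svnlook('changed')        # one "U   path/to/file" line per change

writer = csv.writer(open('/var/log/svn-commits.csv', 'ab'))  # placeholder path
for line in changed.splitlines():
    status, path = line.split(None, 1)
    writer.writerow([rev, author, date, status, path, message])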
It seems Google is tightly protecting its platform for good reasons.\nThanks in advance.\nMarvin","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":134,"Q_Id":4712143,"Users Score":2,"Answer":"The Datastore is the only interface to the underlying storage on App Engine. You should be able to use any valid UTF-8 string as a kind name, key name, or property name, however.","Q_Score":1,"Tags":"python,google-app-engine","A_Id":4718951,"CreationDate":"2011-01-17T10:32:00.000","Title":"Is there an API of Google App Engine provided to better configure the Bigtable besides Datastore?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to try Mongodb w\/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL.","AnswerCount":6,"Available Count":5,"Score":0.0333209931,"is_accepted":false,"ViewCount":2780,"Q_Id":4718580,"Users Score":1,"Answer":"I've used mongoengine with django but you need to create a file like mongo_models.py for example. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates whats stored in Mongo. Django forms are designed to plug into any data back end ( with a bit of craft )\nBEWARE: If you have very well defined and structured data that can be described in documents or models then don't use Mongo. Its not designed for that and something like PostGreSQL will work much better.\n\nI use PostGreSQL for relational or well structured data because its good for that. Small memory footprint and good response.\nI use Redis to cache or operate in memory queues\/lists because its very good for that. great performance providing you have the memory to cope with it.\nI use Mongo to store large JSON documents and to perform Map and reduce on them ( if needed ) because its very good for that. Be sure to use indexing on certain columns if you can to speed up lookups.\n\nDon't circle to fill a square hole. It won't fill it.\nI've seen too many posts where someone wanted to swap a relational DB for Mongo because Mongo is a buzz word. Don't get me wrong, Mongo is really great... when you use it appropriately. I love using Mongo appropriately","Q_Score":2,"Tags":"python,django,mongodb,mongoengine","A_Id":10204815,"CreationDate":"2011-01-17T22:22:00.000","Title":"Converting Django project from MySQL to Mongo, any major pitfalls?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to try Mongodb w\/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. 
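[Editor's illustration for the mongoengine answer above (a mongo_models.py plus a matching Django form whose save() writes to MongoDB): a minimal sketch in which the BlogPost document, its fields and the database name are all hypothetical.]

from django import forms
import mongoengine

mongoengine.connect('mydb')               # hypothetical database name

class BlogPost(mongoengine.Document):     # would live in mongo_models.py
    title = mongoengine.StringField(required=True)
    body = mongoengine.StringField()

class BlogPostForm(forms.Form):           # a matching form with its own save()
    title = forms.CharField(max_length=200)
    body = forms.CharField(widget=forms.Textarea, required=False)

    def save(self):
        # Insert the Mongo document from the cleaned form data.
        post = BlogPost(title=self.cleaned_data['title'],
                        body=self.cleaned_data['body'])
        post.save()
        return post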
My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL.","AnswerCount":6,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":2780,"Q_Id":4718580,"Users Score":9,"Answer":"There's no reason why you can't use one of the standard RDBMSs for all the standard Django apps, and then Mongo for your app. You'll just have to replace all the standard ways of processing things from the Django ORM with doing it the Mongo way.\nSo you can keep urls.py and its neat pattern matching, views will still get parameters, and templates can still take objects. \nYou'll lose querysets because I suspect they are too closely tied to the RDBMS models - but they are just lazily evaluated lists really. Just ignore the Django docs on writing models.py and code up your database business logic in a Mongo paradigm.\nOh, and you won't have the Django Admin interface for easy access to your data.","Q_Score":2,"Tags":"python,django,mongodb,mongoengine","A_Id":4718924,"CreationDate":"2011-01-17T22:22:00.000","Title":"Converting Django project from MySQL to Mongo, any major pitfalls?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to try Mongodb w\/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL.","AnswerCount":6,"Available Count":5,"Score":-0.0333209931,"is_accepted":false,"ViewCount":2780,"Q_Id":4718580,"Users Score":-1,"Answer":"Primary pitfall (for me): no JOINs!","Q_Score":2,"Tags":"python,django,mongodb,mongoengine","A_Id":4719398,"CreationDate":"2011-01-17T22:22:00.000","Title":"Converting Django project from MySQL to Mongo, any major pitfalls?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to try Mongodb w\/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL.","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":2780,"Q_Id":4718580,"Users Score":0,"Answer":"Upfront, it won't work for any existing Django app that ships it's models. 
There's no backend for storing Django's Model data in mongodb or other NoSQL storages at the moment and, database backends aside, models themselves are somewhat of a moot point, because once you get in to using someones app (django.contrib apps included) that ships model-template-view triads, whenever you require a slightly different model for your purposes you either have to edit the application code (plain wrong), dynamically edit the contents of imported Python modules at runtime (magical), fork the application source altogether (cumbersome) or provide additional settings (good, but it's a rare encounter, with django.contrib.auth probably being the only widely known example of an application that allows you to dynamically specify which model it will use, as is the case with user profile models through the AUTH_PROFILE_MODULE setting).\nThis might sound bad, but what it really means is that you'll have to deploy SQL and NoSQL databases in parallel and go from an app-to-app basis--like Spacedman suggested--and if mongodb is the best fit for a certain app, hell, just roll your own custom app.\nThere's a lot of fine Djangonauts with NoSQL storages on their minds. If you followed the streams from the past Djangocon presentations, every year there's been important discussions about how Django should leverage NoSQL storages. I'm pretty sure, in this year or the next, someone will refactor the apps and models API to pave the path to a clean design that can finally unify all the different flavors of NoSQL storages as part of the Django core.","Q_Score":2,"Tags":"python,django,mongodb,mongoengine","A_Id":4719167,"CreationDate":"2011-01-17T22:22:00.000","Title":"Converting Django project from MySQL to Mongo, any major pitfalls?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to try Mongodb w\/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL.","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":2780,"Q_Id":4718580,"Users Score":0,"Answer":"I have recently tried this (although without Mongoengine). There are a huge number of pitfalls, IMHO:\n\nNo admin interface. \nNo Auth django.contrib.auth relies on the DB interface. \nMany things rely on django.contrib.auth.User. For example, the RequestContext class. 
This is a huge hindrance.\nNo Registration (Relies on the DB interface and django.contrib.auth) \n\nBasically, search through the django interface for references to django.contrib.auth and you'll see how many things will be broken.\nThat said, it's possible that MongoEngine provides some support to replace\/augment django.contrib.auth with something better, but there are so many things that depend on it that it's hard to say how you'd monkey patch something that much.","Q_Score":2,"Tags":"python,django,mongodb,mongoengine","A_Id":4728500,"CreationDate":"2011-01-17T22:22:00.000","Title":"Converting Django project from MySQL to Mongo, any major pitfalls?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a .sql file containing thousands of individual insert statements. It takes forever to do them all. I am trying to figure out a way to do this more efficiently. In python the sqlite3 library can't do things like \".read\" or \".import\" but executescript is too slow for that many inserts. \nI installed the sqlite3.exe shell in hopes of using \".read\" or \".import\" but I can't quite figure out how to use it. Running it through django in eclipse doesn't work because it expects the database to be at the root of my C drive which seems silly. Running it through the command line doesn't work because it can't find my database file (unless I'm doing something wrong)\nAny tips?\nThanks!","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1859,"Q_Id":4719836,"Users Score":1,"Answer":"Use a parameterized query \nand\nUse a transaction.","Q_Score":6,"Tags":"python,sql,django,sqlite","A_Id":4724461,"CreationDate":"2011-01-18T02:01:00.000","Title":"Python and sqlite3 - adding thousands of rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a .sql file containing thousands of individual insert statements. It takes forever to do them all. I am trying to figure out a way to do this more efficiently. In python the sqlite3 library can't do things like \".read\" or \".import\" but executescript is too slow for that many inserts. \nI installed the sqlite3.exe shell in hopes of using \".read\" or \".import\" but I can't quite figure out how to use it. Running it through django in eclipse doesn't work because it expects the database to be at the root of my C drive which seems silly. Running it through the command line doesn't work because it can't find my database file (unless I'm doing something wrong)\nAny tips?\nThanks!","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1859,"Q_Id":4719836,"Users Score":2,"Answer":"In addition to running the queries in bulk inside a single transaction, also try VACUUM and ANALYZEing the database file. 
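[Editor's illustration for the "parameterized query + single transaction" advice above on bulk sqlite3 inserts: one prepared statement reused for every row, all committed once. The table layout and generated rows are placeholders for the data parsed out of the .sql file; a neighbouring answer also suggests running VACUUM/ANALYZE on the finished file.]

import sqlite3

rows = [(i, 'name-%d' % i) for i in xrange(100000)]    # stand-in for the parsed data

conn = sqlite3.connect('mydata.db')
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
# executemany reuses one parameterized statement for every row, and everything
# runs inside a single transaction that ends at commit() -- far faster than
# committing each INSERT individually.
cur.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)
conn.commit()
conn.close()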
It helped a similar problem of mine.","Q_Score":6,"Tags":"python,sql,django,sqlite","A_Id":13787939,"CreationDate":"2011-01-18T02:01:00.000","Title":"Python and sqlite3 - adding thousands of rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"At my organization, PostgreSQL databases are created with a 20-connection limit as a matter of policy. This tends to interact poorly when multiple applications are in play that use connection pools, since many of those open up their full suite of connections and hold them idle.\nAs soon as there are more than a couple of applications in contact with the DB, we run out of connections, as you'd expect.\nPooling behaviour is a new thing here; until now we've managed pooled connections by serializing access to them through a web-based DB gateway (?!) or by not pooling anything at all. As a consequence, I'm having to explain (literally, 5 trouble tickets from one person over the course of the project) over and over again how the pooling works.\nWhat I want is one of the following:\n\nA solid, inarguable rationale for increasing the number of available connections to the database in order to play nice with pools.\nIf so, what's a safe limit? Is there any reason to keep the limit to 20?\nA reason why I'm wrong and we should cut the size of the pools down or eliminate them altogether.\n\nFor what it's worth, here are the components in play. If it's relevant how one of these is configured, please weigh in:\nDB: PostgreSQL 8.2. No, we won't be upgrading it as part of this.\nWeb server: Python 2.7, Pylons 1.0, SQLAlchemy 0.6.5, psycopg2 \n\nThis is complicated by the fact that some aspects of the system access data using SQLAlchemy ORM using a manually configured engine, while others access data using a different engine factory (Still sqlalchemy) written by one of my associates that wraps the connection in an object that matches an old PHP API.\n\nTask runner: Python 2.7, celery 2.1.4, SQLAlchemy 0.6.5, psycopg2","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":303,"Q_Id":4729361,"Users Score":2,"Answer":"I think it's reasonable to require one connection per concurrent activity, and it's reasonable to assume that concurrent HTTP requests are concurrently executed.\nNow, the number of concurrent HTTP requests you want to process should scale with a) the load on your server, and b) the number of CPUs you have available. If all goes well, each request will consume CPU time somewhere (in the web server, in the application server, or in the database server), meaning that you couldn't process more requests concurrently than you have CPUs. In practice, it's not that all goes well: some requests will wait for IO at some point, and not consume any CPU. So it's ok to process some more requests concurrently than you have CPUs.\nStill, assuming that you have, say, 4 CPUs, allowing 20 concurrent requests is already quite some load. I'd rather throttle HTTP requests than increasing the number of requests that can be processed concurrently. 
If you find that a single request needs more than one connection, you have a flaw in your application.\nSo my recommendation is to cope with the limit, and make sure that there are not too many idle connections (compared to the number of requests that you are actually processing concurrently).","Q_Score":3,"Tags":"python,database,sqlalchemy,pylons,connection-pooling","A_Id":4729629,"CreationDate":"2011-01-18T21:40:00.000","Title":"How can I determine what my database's connection limits should be?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Question is rather conceptual, then direct.\nWhat's the best solution to keep two different calendars synchronised? I can run a cron job for example every minute, I can keep additional information in database. How to avoid events conflicts?\nAs far I was thinking about these two solutions. First one is keeping a database which gathers information from both calendars and each time compares if something new appeared in any of them. Inside this database we can judge, which events should be added, edited or removed and then send those information back to both calendars.\nSecond one is keepien two databases for both calendars and collecting information separately. Then, after those databases are compared, we can say, where did the changes occure and send information from database A to calendar B or from database B to calendar A. I'm afraid this solution leads to more conflicts when changes were made to both databases. \nWhat do you think of these? To be more accurate, I mean two google calendars and script written in python using gdata. Any idea of more simple solution?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":516,"Q_Id":4737852,"Users Score":0,"Answer":"Most calendars, including the Google calendar, has ways to import and synchronize data. You can use these ways. Just import the gdata information (perhaps you need to make it into ics first, I don't know) into the Google calendar.","Q_Score":0,"Tags":"python,synchronization,calendar","A_Id":4738228,"CreationDate":"2011-01-19T16:27:00.000","Title":"Two calendars synchronization","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can connect to a Oracle 10g release 2 server using instant client. Using pyodbc and cx_Oracle.\nUsing either module, I can execute a select query without any problems, but when I try to update a table, my program crashes.\nFor example,\nSELECT * FROM table WHERE col1 = 'value'; works fine.\nUPDATE table SET col2 = 'value' WHERE col1 = 'val'; does not work\nIs this a known limitation with instant client, or is there a problem with my installation?\nThanks in advance for your help.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":587,"Q_Id":4748962,"Users Score":1,"Answer":"Use the instant client with SQL*Plus and see if you can run the update. 
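[Editor's note on the connection-pool answer above: it advises coping with the 20-connection limit and keeping idle connections down, without showing how. One concrete knob, assuming the SQLAlchemy 0.6 / psycopg2 stack the question describes, is the engine's pool sizing; the URL and numbers below are illustrative only.]

from sqlalchemy import create_engine

engine = create_engine(
    'postgresql+psycopg2://app:secret@dbhost/mydb',
    pool_size=3,        # connections each process keeps open
    max_overflow=2,     # short-lived extras allowed under burst load
    pool_recycle=1800,  # recycle connections older than 30 minutes
)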
If there's a problem, SQL*Plus is production quality, so won't crash and it should give you a reasonable error message.","Q_Score":2,"Tags":"python,oracle,pyodbc,cx-oracle,instantclient","A_Id":4753975,"CreationDate":"2011-01-20T15:26:00.000","Title":"Oracle instant client can't execute sql update","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can connect to a Oracle 10g release 2 server using instant client. Using pyodbc and cx_Oracle.\nUsing either module, I can execute a select query without any problems, but when I try to update a table, my program crashes.\nFor example,\nSELECT * FROM table WHERE col1 = 'value'; works fine.\nUPDATE table SET col2 = 'value' WHERE col1 = 'val'; does not work\nIs this a known limitation with instant client, or is there a problem with my installation?\nThanks in advance for your help.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":587,"Q_Id":4748962,"Users Score":0,"Answer":"Sounds more like your user you are connecting with doesn't have those privileges on that table. Do you get an ORA error indicating insufficient permissions when performing the update?","Q_Score":2,"Tags":"python,oracle,pyodbc,cx-oracle,instantclient","A_Id":4749022,"CreationDate":"2011-01-20T15:26:00.000","Title":"Oracle instant client can't execute sql update","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a massive data set of customer information (100s of millions of records, 50+ tables).\nI am writing a python (twisted) app that I would like to interact with the dataset, performing table manipulation. What I really need is an abstraction of 'table', so I can add\/remove\/alter columns\/tables without having to resort to only creating SQL.\nIs there an ORM that will not add significant overhead to my application, considering the size of the dataset?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":858,"Q_Id":4764476,"Users Score":0,"Answer":"I thought that ORM solutions had to do with DQL (Data Query Language), not DDL (Data Definition Language). You don't use ORM to add, alter, or remove columns at runtime. You'd have to be able to add, alter, or remove object attributes and their types at the same time.\nORM is about dynamically generating SQL and developer's lift, not what you're alluding to.","Q_Score":1,"Tags":"python,orm","A_Id":4764551,"CreationDate":"2011-01-21T22:26:00.000","Title":"Python ORM for massive data set","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use sqlalchemy on Cygwin with a MSSQL backend but I cannot seem to get any of the MSSQL Python DB APIs installed on Cygwin. Is there one that is known to work?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":682,"Q_Id":4770083,"Users Score":0,"Answer":"FreeTDS + unixodbc + pyodbc stack will work on Unix-like systems and should therefore work just as well in Cygwin. You should use version 8.0 of TDS protocol. 
This can be configured in connection string.","Q_Score":2,"Tags":"python,sql-server,cygwin,sqlalchemy","A_Id":5013126,"CreationDate":"2011-01-22T19:34:00.000","Title":"Which Python (sqlalchemy) mssql DB API works in Cygwin?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A website I am making revolves around a search utility, and a want to have something on the homepage that lists the top 10 (or something) most searched queries of the day.\nWhat would be the easiest \/ most efficient way of doing this?\nShould I use a sql database, or just a text file containing the top 10 queries and a cronjob erasing the data every day?\nAlso, how would I avoid the problem of two users searching for something at the same and it only recording one of them, i.e multithreading?\nThe back-end of the site is all written in python","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":98,"Q_Id":4778058,"Users Score":2,"Answer":"Put the queries in a table, with one row per distinct query, and a column to count. Insert if the query doesn't exist already, or otherwise increment the query row counter. \nPut a cron job together than empties the table at 12 midnight. Use transactions to prevent two different requests from colliding.","Q_Score":0,"Tags":"python,sql,multithreading","A_Id":4778081,"CreationDate":"2011-01-24T02:23:00.000","Title":"How to make a \"top queries\" page","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In some project I implement user-requested mapping (at runtime) of two tables which are connected by a 1-to-n relation (one table has a ForeignKey field).\nFrom what I get from the documentation, the usual way is to add a orm.relation to the mapped properties with a mapped_collection as collection_class on the non-foreignkey table with a backref, so that in the end both table orm objects have each other mapped on an attribute (one has a collection through the collection_class of the orm.relation used on it, the other has an attribute placed on it by the backref).\nI am in a situation where I sometimes do just want the ForeignKey-side to have a mapped attribute to the other table (that one, that is created by the backref), depending on what the user decides (he might just want to have that side mapped).\nNow I'm wondering whether I can simply use an orm.relation on the ForeignKey table aswell, so I'd probably end up with an orm.relation on the non-foreignkey table as before with a mapped_collection but no backref, and another orm.relation on the foreignkey table replacing that automagic backref (making two orm.relations on both tables mapping each other from both sides).\nWill that get me into trouble? Is the result equivalent (to just one orm.relation on the non-foreignkey table with a backref)? 
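[Editor's illustration for the FreeTDS + unixODBC + pyodbc answer above, showing a DSN-less connection string with the recommended TDS protocol version. The driver name must match the FreeTDS entry in odbcinst.ini, and the server, credentials and database are placeholders; SQLAlchemy would then point at this through its mssql+pyodbc dialect.]

import pyodbc

conn = pyodbc.connect(
    'DRIVER={FreeTDS};'
    'SERVER=sqlserver.example.com;PORT=1433;'
    'DATABASE=mydb;UID=app;PWD=secret;'
    'TDS_Version=8.0;'              # the protocol version the answer recommends
)
cursor = conn.cursor()
cursor.execute('SELECT @@VERSION')
print cursor.fetchone()[0]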
Is there another way how I could map just on the ForeignKey-side without having to map the dictionary on the non-ForeignKey table aswell with that backref?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":664,"Q_Id":4782344,"Users Score":1,"Answer":"I found the answer myself by now:\nIf you use an orm.relation from each side and no backrefs, you have to use back_populates or if you mess around at one side, it won't be properly updated in the mapping on the other side.\nTherefore, an orm.relation from each side instead of an automated backref IS possible but you have to use back_populates accordingly.","Q_Score":0,"Tags":"python,database,sqlalchemy,relation","A_Id":5594860,"CreationDate":"2011-01-24T13:10:00.000","Title":"SQLAlchemy - difference between mapped orm.relation with backref or two orm.relation from both sides","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a network scheduling like program in Python 2.6+ in which I have a complex queue requirement: Queue should store packets, should retrieve by timestamp or by packet ID in O(1), should be able to retrieve all the packets below a certain threshold, sort packet by priorities etc. It should insert and delete with reasonable complexity as well.\nNow I have two choices:\n\nCombine a few data structures and synchronize them properly to fulfill my requirement.\nUse some in-memory database so that I can perform all sorts of operations easily.\n\nAny suggestions please?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":4802900,"Users Score":0,"Answer":"A database is just some indexes and fancy algorithms wrapped around a single data structure -- a table. You don't have a lot of control about what happens under the hood.\nI'd try using the built-in Python datastructures.","Q_Score":0,"Tags":"python,data-structures","A_Id":4803269,"CreationDate":"2011-01-26T09:20:00.000","Title":"Need advice on customized datastructure vs using in-memory DB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to python and its workings.\nI have an excel spreadsheet which was got using some VBA's.\nNow I want to invoke Python to do some of the jobs...\nMy question then is: How can I use python script instead of VBA in an excel spreadsheet?\nAn example of such will be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2789,"Q_Id":4829509,"Users Score":0,"Answer":"I've always done the manipulation of Excel spreadsheets and Word documents with standalone scripts which use COM objects to manipulate the documents. I've never come across a good use case for putting Python into a spreadsheet in place of VBA.","Q_Score":2,"Tags":"python,excel","A_Id":4872985,"CreationDate":"2011-01-28T14:49:00.000","Title":"Use of python script instead of VBA in Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I use the Werkzeug framework without any ORM like SQLAlchemy? 
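To illustrate the back_populates answer above with a minimal declarative sketch (the table, class and column names are invented): each side declares its own orm.relation and names the attribute it should stay in sync with on the other side, instead of relying on an automatic backref.

```python
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # the non-ForeignKey side: its own relation, no automatic backref
    children = relation('Child', back_populates='parent')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))
    # the ForeignKey side: its own relation, kept in sync via back_populates
    parent = relation('Parent', back_populates='children')
```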
In my case, it's a lot of effort to rewrite all the tables and columns in SQLAlchemy from existing tables & data.\nHow do I query the database and make an object from the database output?\nIn my case now, I use Oracle with cx_Oracle. If you have a solution for MySQL, too, please mention it.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":445,"Q_Id":4838528,"Users Score":0,"Answer":"Is it a problem to use normal DB API, issue regular SQL queries, etc? cx_Oracle even has connection pooling biolt in to help you manage connections.","Q_Score":1,"Tags":"python,orm,werkzeug","A_Id":4838669,"CreationDate":"2011-01-29T18:09:00.000","Title":"Werkzeug without ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have several occasions where I want to collect data when in the field. This is in situations where I do not always have access to my postgres database.\nTo keep things in sync, it would be excellent if I could use psycopg2 functions offline to generate queries that can be held back and once I am able to connect to the database; process everything that is held back.\nOne thing I am currently struggling with is that the psycopg2 cursor requires a connection to be constructed.\nMy question is:\nIs there a way to use a cursor to do things like mogrify without an active connection object? Or with a connection object that is not connected to a database? I would then like to write the mogrify results temporarily to file so they can be processed later.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3099,"Q_Id":4879804,"Users Score":0,"Answer":"It seems like it would be easier and more versatile to store the data to be inserted later in another structure. Perhaps a csv file. Then when you connect you can run through that table, but you can also easily do other things with that CSV if necessary.","Q_Score":12,"Tags":"python,psycopg2,offline-mode","A_Id":4880978,"CreationDate":"2011-02-02T21:00:00.000","Title":"Use psycopg2 to construct queries without connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After setting up a django site and running on the dev server, I have finally gotten around to figuring out deploying it in a production environment using the recommended mod_wsgi\/apache22. I am currently limited to deploying this on a Windows XP machine.\nMy problem is that several django views I have written use the python subprocess module to run programs on the filesystem. I keep getting errors when running the subprocess.Popen I have seen several SO questions that have asked about this, and the accepted answer is to use WSGIDaemonProcess to handle the problem (due to permissions of the apache user, I believe).\nThe only problem with this is that WSGIDaemonProcess is not available for mod_wsgi on Windows. Is there any way that I can use mod_wsgi\/apache\/windows\/subprocess together?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2617,"Q_Id":4882605,"Users Score":1,"Answer":"I ran into a couple of issues trying to use subprocess under this configuration. 
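For the Werkzeug-without-an-ORM question above, a rough sketch of the plain DB-API route the answer suggests with cx_Oracle: execute the SQL yourself and wrap each row into a dict keyed by column name. The connection details and query are placeholders, and the same pattern works with MySQLdb.

```python
import cx_Oracle

conn = cx_Oracle.connect("appuser", "secret", "dbhost/orcl")   # placeholder

def query(sql, params=None):
    cur = conn.cursor()
    try:
        if params:
            cur.execute(sql, params)
        else:
            cur.execute(sql)
        columns = [col[0].lower() for col in cur.description]
        # turn each row tuple into a dict keyed by column name
        return [dict(zip(columns, row)) for row in cur]
    finally:
        cur.close()

users = query("SELECT id, name FROM users WHERE status = :status",
              {"status": "active"})
```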
Since I am not sure what specifically you had trouble with I can share a couple of things that were not easy for me to solve but in hindsight seem pretty trivial.\n\nI was receiving permissions related errors when trying to execute an application. I searched quite a bit but was having a hard time finding Windows specific answers. This one was obvious: I changed the user under which Apache runs to a user with higher permissions. (Note, there are security implications with that so you want to be sure you understand what you are getting in to).\nDjango (depending on your configuration) may store strings as Unicode. I had a command line application I was trying to run with some parameters from my view which was crashing despite having the correct arguments passed in. After a couple hours of frustration I did a type(args) which returned rather than my expected string. A quick conversion resolved that issue.","Q_Score":6,"Tags":"python,django,apache,subprocess,mod-wsgi","A_Id":8750220,"CreationDate":"2011-02-03T04:07:00.000","Title":"Django + Apache + Windows WSGIDaemonProcess Alternative","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm storing MySQL DateTimes in UTC, and let the user select their time zone, storing that information.\nHowever, I want to to some queries that uses group by a date. Is it better to store that datetime information in UTC (and do the calculation every time) or is it better to save it in the timezone given? Since time zones for users can change, I wonder.\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":123,"Q_Id":4928220,"Users Score":1,"Answer":"It's almost always better to save the time information in UTC, and convert it to local time when needed for presentation and display.\nOtherwise, you will go stark raving mad trying to manipulate and compare dates and times in your system because you will have to convert each time to UTC time for comparison and manipulation.","Q_Score":0,"Tags":"python,mysql,timezone","A_Id":4928246,"CreationDate":"2011-02-08T00:10:00.000","Title":"How to handle time zones in a CMS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm storing MySQL DateTimes in UTC, and let the user select their time zone, storing that information.\nHowever, I want to to some queries that uses group by a date. Is it better to store that datetime information in UTC (and do the calculation every time) or is it better to save it in the timezone given? Since time zones for users can change, I wonder.\nThanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":123,"Q_Id":4928220,"Users Score":3,"Answer":"Generally always store in UTC and convert for display, it's the only sane way to do time differences etc. 
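A small sketch of the store-UTC, convert-on-display pattern recommended in the time zone answers above, assuming pytz is available and that the zone name is whatever you stored for the user.

```python
from datetime import datetime
import pytz

def for_display(utc_dt, tz_name):
    """Convert a naive UTC datetime (as stored in MySQL) to the user's zone."""
    return pytz.utc.localize(utc_dt).astimezone(pytz.timezone(tz_name))

created_at = datetime.utcnow()          # what goes into the DATETIME column
# ... later, when rendering for a user whose stored zone is e.g. 'Europe/Madrid'
print for_display(created_at, 'Europe/Madrid')
```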
Or when somebody next year decides to change the summer time dates.","Q_Score":0,"Tags":"python,mysql,timezone","A_Id":4928244,"CreationDate":"2011-02-08T00:10:00.000","Title":"How to handle time zones in a CMS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having problem when I do a query to mongodb using pymongo.\nI do not know how to avoid getting the _id for each record.\nI am doing something like this,\nresult = db.meta.find(filters, [\n 'model',\n 'fields.parent',\n 'fields.status',\n 'fields.slug',\n 'fields.firm',\n 'fields.properties'])\nI do not want to iterate the cursor elements only to delete a field.\nThanks,\nJoaquin","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":803,"Q_Id":4937817,"Users Score":0,"Answer":"Does make any sense. The object id is core part of each document. Convert the BSON\/JSON document to a native datastructure (depending on your implementation language) and remove _id on this level. Apart from that it does not make much sense what you are trying to accomplish.","Q_Score":1,"Tags":"python,mongodb,pymongo","A_Id":4941686,"CreationDate":"2011-02-08T20:08:00.000","Title":"PYMongo: Keep returning _id in every record after quering, How can I exclude this record?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the middle of a project involving trying to grab numerous pieces of information out of 70GB worth of xml documents and loading it into a relational database (in this case postgres) I am currently using python scripts and psycopg2 to do this inserts and whatnot. I have found that as the number of rows in the some of the tables increase. (The largest of which is at around 5 million rows) The speed of the script (inserts) has slowed to a crawl. What was once taking a couple of minutes now takes about an hour.\nWhat can I do to speed this up? Was I wrong in using python and psycopg2 for this task? Is there anything I can do to the database that may speed up this process. I get the feeling I am going about this in entirely the wrong way.","AnswerCount":7,"Available Count":2,"Score":0.057080742,"is_accepted":false,"ViewCount":4138,"Q_Id":4968837,"Users Score":2,"Answer":"Considering the process was fairly efficient before and only now when the dataset grew up it slowed down my guess is it's the indexes. You may try dropping indexes on the table before the import and recreating them after it's done. That should speed things up.","Q_Score":4,"Tags":"python,database-design,postgresql,psycopg2","A_Id":4969077,"CreationDate":"2011-02-11T12:11:00.000","Title":"Postgres Performance Tips Loading in billions of rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the middle of a project involving trying to grab numerous pieces of information out of 70GB worth of xml documents and loading it into a relational database (in this case postgres) I am currently using python scripts and psycopg2 to do this inserts and whatnot. 
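Following the pymongo answer above (strip _id at the application level after fetching), a minimal sketch; db and filters here are as in the question.

```python
# 'db' and 'filters' are as in the question above.
fields = ['model', 'fields.parent', 'fields.status',
          'fields.slug', 'fields.firm', 'fields.properties']

docs = []
for doc in db.meta.find(filters, fields):
    doc.pop('_id', None)      # drop the id once the document is a plain dict
    docs.append(doc)
```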
I have found that as the number of rows in the some of the tables increase. (The largest of which is at around 5 million rows) The speed of the script (inserts) has slowed to a crawl. What was once taking a couple of minutes now takes about an hour.\nWhat can I do to speed this up? Was I wrong in using python and psycopg2 for this task? Is there anything I can do to the database that may speed up this process. I get the feeling I am going about this in entirely the wrong way.","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4138,"Q_Id":4968837,"Users Score":0,"Answer":"I'd look at the rollback logs. They've got to be getting pretty big if you're doing this in one transaction.\nIf that's the case, perhaps you can try committing a smaller transaction batch size. Chunk it into smaller blocks of records (1K, 10K, 100K, etc.) and see if that helps.","Q_Score":4,"Tags":"python,database-design,postgresql,psycopg2","A_Id":4968869,"CreationDate":"2011-02-11T12:11:00.000","Title":"Postgres Performance Tips Loading in billions of rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find the best solution (perfomance\/easy code) for the following situation:\nConsidering a database system with two tables, A (production table) and A'(cache table):\n\nFuture rows are added first into A' table in order to not disturb the production one.\nWhen a timer says go (at midnight, for example) rows from A' are incorporated to A.\nDealing with duplicates, inexistent rows, etc have to be considerated. \n\nI've been reading some about Materialized Views, Triggers, etc. The problem is that I should not introduce so much noise in the production table because is the reference table for a server (a PowerDNS server in fact).\nSo, what do you guys make of it? Should I better use triggers, MV, or programatically outside of the database?? (I'm using python, BTW)\nThanks in advance for helping me.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":611,"Q_Id":4973316,"Users Score":1,"Answer":"The \"best\" solution according to the criteria you've laid out so far would just be to insert into the production table.\n...unless there's actually something extremely relevant you're not telling us","Q_Score":0,"Tags":"python,postgresql,triggers,materialized-views","A_Id":4973738,"CreationDate":"2011-02-11T19:46:00.000","Title":"Materialize data from cache table to production table [PostgreSQL]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When exactly the database transaction is being commited? Is it for example at the end of every response generation?\nTo explain the question: I need to develop a bit more sophisticated application where I have to control DB transactions less or more manually. Especialy I have to be able to design a set of forms with some complex logics behind the forms (some kind of 'wizard') but the database operations must not be commited until the last form and the confirmation.\nOf course I could put everything to the session without making any DB change but it's not a solution, the changes are quite complex and realy have to be performed. 
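A sketch of the smaller-transaction-batches suggestion above for the bulk load: commit every N rows instead of holding one giant transaction. The table, columns and the parse_xml_records() generator are placeholders for whatever your loader actually produces.

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=loader")    # placeholder DSN
cur = conn.cursor()

BATCH = 10000
batch = []
for name, tag_count in parse_xml_records():           # hypothetical generator
    batch.append((name, tag_count))
    if len(batch) >= BATCH:
        cur.executemany("INSERT INTO tags (name, tag_count) VALUES (%s, %s)", batch)
        conn.commit()        # commit per chunk rather than one giant transaction
        batch = []
if batch:
    cur.executemany("INSERT INTO tags (name, tag_count) VALUES (%s, %s)", batch)
    conn.commit()
```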
So the only way is to keep it uncommited.\nNow back to the question: if I undertand how is it working in web2py it will be easier for me to decide if thats a good framework for me. I am a java and php programmer, I know python but I don't know web2py yet ...\nIf you know any web page when it's explained I also wppreciate.\nTHanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1224,"Q_Id":4979392,"Users Score":1,"Answer":"you can call db.commit() and db.rollback() pretty much everywhere. If you do not and the action does not raise an exception, it commits before returning a response to the client. If it raises an exception and it is not explicitly caught, it rollsback.","Q_Score":2,"Tags":"python,web2py","A_Id":5443158,"CreationDate":"2011-02-12T17:14:00.000","Title":"web2py and DB transactions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using TG2.1 on WinXP.\nPython ver is 2.6.\nTrying to use sqlautocode (0.5.2) for working with my existing MySQL schema.\nSQLAlchemy ver is 0.6.6\n\n\nimport sqlautocode # works OK\n\n\nWhile trying to reflect the schema ----\n\n\nsqlautocode mysql:\\\\username:pswd@hostname:3306\\schema_name -o tables.py\n\n\nSyntaxError: invalid syntax\nis raised.\nCan someone please point out what's going wrong, & how to handle the same?\nThanks,\nVineet.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":659,"Q_Id":4994838,"Users Score":1,"Answer":"Hey, I got it right somehow.\nThe problem seems to be version mismatch between SA 0.6 & sqlautocode 0.6\nSeems that they don't work in tandom.\nSo I removed those & installed SA 0.5\nNow it's working.\nThanks,\nVineet Deodhar.","Q_Score":2,"Tags":"python,web-applications,turbogears2","A_Id":5003413,"CreationDate":"2011-02-14T16:55:00.000","Title":"sqlautocode for mysql giving syntax error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This relates to primary key constraint in SQLAlchemy & sqlautocode.\nI have SA 0.5.1 & sqlautocode 0.6b1\nI have a MySQL table without primary key.\nsqlautocode spits traceback that \"could not assemble any primary key columns\".\nCan I rectify this with a patch sothat it will reflect tables w\/o primary key?\nThanks,\nVineet Deodhar","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":321,"Q_Id":5003475,"Users Score":0,"Answer":"We've succeeded in faking sqa if the there's combination of columns on the underlying table that uniquely identify it.\nIf this is your own table and you're not live, add a primary key integer column or something.\nWe've even been able to map an existing legacy table in a database with a) no pk and b) no proxy for a primary key in the other columns. It was Oracle not MySQL but we were able to hack sqa to see Oracle's rowid as a pk, though this is only safe for insert and query...update is not possible since it can't uniquely identify which row it should be updating. 
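To make the web2py answer above concrete, a rough sketch of explicit transaction control inside a controller action; the table and field names are invented, and db/request are the usual objects web2py injects into the controller environment.

```python
# inside a web2py controller; 'db' and 'request' are provided by the framework,
# and the 'booking' / 'booking_line' tables are invented for this sketch
def save_step():
    try:
        booking_id = db.booking.insert(customer=request.vars.customer)
        db.booking_line.insert(booking=booking_id, product=request.vars.product)
        db.commit()        # make this step permanent now
    except Exception:
        db.rollback()      # undo everything done so far in this request
        raise
    return dict(ok=True)
```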
But these are ugly hacks so if you can help it, don't go down that road.","Q_Score":0,"Tags":"python,web-applications,turbogears,turbogears2","A_Id":5292555,"CreationDate":"2011-02-15T12:11:00.000","Title":"sqlautocode : primary key required in tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This relates to primary key constraint in SQLAlchemy & sqlautocode.\nI have SA 0.5.1 & sqlautocode 0.6b1\nI have a MySQL table without primary key.\nsqlautocode spits traceback that \"could not assemble any primary key columns\".\nCan I rectify this with a patch sothat it will reflect tables w\/o primary key?\nThanks,\nVineet Deodhar","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":321,"Q_Id":5003475,"Users Score":0,"Answer":"If the problem is that sqlautocode will not generate your class code because it cannot determine the PKs of the table, then you would probably be able to change that code to fit your needs (even if it means generating SQLA code that doesn't have PKs). Eventually, if you're using the ORM side of SQLA, you're going to need fields defined as PKs, even if the database doesn't explicitly label them as such.","Q_Score":0,"Tags":"python,web-applications,turbogears,turbogears2","A_Id":5292729,"CreationDate":"2011-02-15T12:11:00.000","Title":"sqlautocode : primary key required in tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This relates to primary key constraint in SQLAlchemy & sqlautocode.\nI have SA 0.5.1 & sqlautocode 0.6b1\nI have a MySQL table without primary key.\nsqlautocode spits traceback that \"could not assemble any primary key columns\".\nCan I rectify this with a patch sothat it will reflect tables w\/o primary key?\nThanks,\nVineet Deodhar","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":321,"Q_Id":5003475,"Users Score":0,"Answer":"I don't think so. How an ORM is suposed to persist an object to the database without any way to uniquely identify records? \nHowever, most ORMs accept a primary_key argument so you can indicate the key if it is not explicitly defined in the database.","Q_Score":0,"Tags":"python,web-applications,turbogears,turbogears2","A_Id":5003573,"CreationDate":"2011-02-15T12:11:00.000","Title":"sqlautocode : primary key required in tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The python unit testing framework called nosetest has a plugin for sqlalchemy, however there is no documentation for it that I can find. I'd like to know how it works, and if possible, see a code example.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":340,"Q_Id":5009112,"Users Score":0,"Answer":"It is my understanding that this plugin is only meant for unit testing SQLAlchemy itself and not as a general tool. Perhaps that is why there are no examples or documentation? 
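As a sketch of the "tell the ORM which column to treat as the key" suggestion above: SQLAlchemy lets you override a reflected column and mark it primary_key=True even though the database table declares no primary key. The URL, table and column names here are placeholders.

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import mapper

engine = create_engine('mysql://user:secret@localhost/mydb')   # placeholder URL
metadata = MetaData()

# Reflect the existing table, but tell SQLAlchemy to treat 'code' as the
# primary key even though the database itself declares none.
legacy_table = Table('legacy_table', metadata,
                     Column('code', Integer, primary_key=True),
                     autoload=True, autoload_with=engine)

class Legacy(object):
    pass

mapper(Legacy, legacy_table)
```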
Posting to the SQLAlchemy mailing list is likely to give you a better answer \"straight from the horse's mouth\".","Q_Score":3,"Tags":"python,sqlalchemy,nosetests","A_Id":10268378,"CreationDate":"2011-02-15T20:26:00.000","Title":"How does the nosetests sqlalchemy plugin work?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a system which generates reports in XLS using Spreadsheet_Excel_Writer for smaller files and in case of huge files we just export them as CSVs. \nWe now want to export excel sheets which are multicolor etc. as a part of report generation, which in excel could be done through a few macros. \nIs there any good exporter which generates the excel sheets with macros?(Spreadsheet_Excel_Writer cant do this) If it exists for PHP it would be amazing but if it exists for any other language, its fine we could interface it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":515,"Q_Id":5028536,"Users Score":0,"Answer":"It's your \"excel sheets with macros\" that is going to cause you all end of problems. If you're on a Windows platform, with Excel installed, then PHP's COM extension should allow you to do this. Otherwise, I'm nor aware of any PHP library which allows you to create macros... not even PHPExcel. I suspect the same will apply with most languages, other than perhaps those running with .Net (and possibly Mono).\nHowever, do you really need macros to play with colour? Can't you do this more simply with styles, and perhaps conditional formatting?\nPS. What's your definition of \"huge\"?","Q_Score":0,"Tags":"java,php,python,macros,xls","A_Id":5028703,"CreationDate":"2011-02-17T11:49:00.000","Title":"Good xls exporter to generate excel sheets automatically with a few macros from any programming language?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically I'm looking for an equivalent of DataMapper.auto_upgrade! from the Ruby world.\nIn other words:\n\nchange the model\nrun some magic -> current db schema is investigated and changed to reflect the model\nprofit\n\nOf course, there are cases when it's impossible for such alteration to be non-desctructive, eg. when you deleted some attribute. But I don't mean such case. I'm looking for a general solution which doesn't get in the way when rapidly prototyping and changing the schema.\nTIA","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":220,"Q_Id":5036118,"Users Score":0,"Answer":"Sqlalchemy-migrate (http:\/\/packages.python.org\/sqlalchemy-migrate\/) is intended to help do these types of operations.","Q_Score":0,"Tags":"python,orm,sqlalchemy","A_Id":5037471,"CreationDate":"2011-02-17T23:44:00.000","Title":"Can SQLAlchemy do a non-destructive alter of the db comparing the current model with db schema?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have recently converted my workspace file format for my application to sqlite. 
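On the colour-formatting point above: if Excel is installed on Windows, the same COM route is available from Python via pywin32, which avoids macros entirely for simple fills. A rough sketch with a placeholder output path:

```python
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Add()
sheet = wb.Worksheets(1)

sheet.Cells(1, 1).Value = "Status"
sheet.Cells(2, 1).Value = "OK"
sheet.Cells(2, 1).Interior.ColorIndex = 4     # green fill
sheet.Cells(3, 1).Value = "Failed"
sheet.Cells(3, 1).Interior.ColorIndex = 3     # red fill

wb.SaveAs(r"C:\reports\status.xls")           # placeholder path
excel.Quit()
```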
In order to ensure robust operation on NFS I've used a common update policy, I do all modifications to a copy stored in a temp location on the local harddisk. Only when saving do I modify the original file (potentially on NFS) by copying over the original file with the temp file. I only open the orginal file to keep an exclusive lock on it so it someone else tries to open they will be warned that someone else is using it.\nThe problem is this: When I go to save my temp file back over the original file I must release the lock on the orginal file, this provides a window for someone else to get in and take the original, albeit a small window.\nI can think of a few ways around this: \n(1) being to simply dump the contents of the temp in to the orginal by using sql, i.e. drop tables on original, vacumm original, select from temp and insert into orginal. I don't like doing sql operations on a sqlite file stored on NFS though. This scares me with corruptions issues. Am I right to think like this?\n(2) Use various extra files to act as a guard to prevent other from coming in while copying the temp over the original. Using files as a mutex is problematic at best. I also don't like the idea of having extra files hanging around if the application crashes.\nI'm wondering if anyone has any different solutions for this. Again to copy the temp file over the original file while ensuring other application don't sneak in and grab the original file while doing so?\nI'm using python2.5, sqlalchemy 0.6.6 and sqlite 3.6.20\nThanks,\nDean","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2749,"Q_Id":5043327,"Users Score":2,"Answer":"SQLite NFS issues are due to broken caching and locking. If your process is the only one accessing the file on NFS then you'll be ok.\nThe SQLite backup API was designed to solve exactly your problem. You can either backup directly to the NFS database or to another local temp file and then copy that. The backup API deals with all the locking and concurrency issues.\nYou can use APSW to get access to the backup API or the most recent version of pysqlite. (Disclosure: I am the APSW author.)","Q_Score":3,"Tags":"python,sqlite,sqlalchemy,nfs","A_Id":5095693,"CreationDate":"2011-02-18T15:44:00.000","Title":"How to ensure a safe file sync with sqlite and NFS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Consider this test case:\nimport sqlite3\n\ncon1 = sqlite3.connect('test.sqlite')\ncon1.isolation_level = None\ncon2 = sqlite3.connect('test.sqlite')\ncon2.isolation_level = None\ncur1 = con1.cursor()\ncur2 = con2.cursor()\ncur1.execute('CREATE TABLE foo (bar INTEGER, baz STRING)')\ncon1.isolation_level = 'IMMEDIATE'\ncur1.execute('INSERT INTO foo VALUES (1, \"a\")')\ncur1.execute('INSERT INTO foo VALUES (2, \"b\")')\nprint cur2.execute('SELECT * FROM foo').fetchall()\ncon1.commit()\nprint cur2.execute('SELECT * FROM foo').fetchall()\ncon1.rollback()\nprint cur2.execute('SELECT * FROM foo').fetchall()\nFrom my knowledge I was expecting to see this as a result:\n[]\n[(1, u'a'), (2, u'b')]\n[]\nBut here it's resulting in this:\n[]\n[(1, u'a'), (2, u'b')]\n[(1, u'a'), (2, u'b')]\nSo the call to rollback() method in the first connection didn't reverted the previously commited changes. Why? 
Shouldn't it roll back them?\nThank you in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":488,"Q_Id":5051151,"Users Score":3,"Answer":"You can't both commit and rollback the same transaction. con1.commit() ends your transaction on that cursor. The next con1.rollback() is either being silently ignored or is rolling back an empty transaction.","Q_Score":0,"Tags":"python,sqlite,rollback","A_Id":5051345,"CreationDate":"2011-02-19T13:59:00.000","Title":"Python sqlite3 module not rolling back transactions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a multiplayer gaming server that uses Django for the webserver (HTML frontend, user authentication, games available, leaderboard, etc.) and Twisted to handle connections between the players and the games and to interface with the games themselves. The gameserver, the webserver, and the database may run on different machines.\nWhat is the \"best\" way to architect the shared database, in a manner that supports changes to the database schema going forward. Should I try incorporating Django's ORM in the Twisted framework and used deferreds to make it non-blocking? Should I be stuck creating and maintaining two separate databases schemas \/ interfaces, one in Django's model and the other using twisted.enterprise.row?\nSimilarly, with user authentication, should I utilize twisted's user authentication functionality, or try to include Django modules into the gameserver to handle user authentication on the game side?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2454,"Q_Id":5051408,"Users Score":2,"Answer":"I would just avoid the Django ORM, it's not all that and it would be a pain to access outside of a Django context (witness the work that was required to make Django support multiple databases). Twisted database access always requires threads (even with twisted.adbapi), and threads give you access to any ORM you choose. SQLalchemy would be a good choice.","Q_Score":9,"Tags":"python,database,django,twisted","A_Id":5051832,"CreationDate":"2011-02-19T14:42:00.000","Title":"Sharing a database between Twisted and Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am developing a multiplayer gaming server that uses Django for the webserver (HTML frontend, user authentication, games available, leaderboard, etc.) and Twisted to handle connections between the players and the games and to interface with the games themselves. The gameserver, the webserver, and the database may run on different machines.\nWhat is the \"best\" way to architect the shared database, in a manner that supports changes to the database schema going forward. Should I try incorporating Django's ORM in the Twisted framework and used deferreds to make it non-blocking? 
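Restating the sqlite3 commit/rollback answer above in code: a transaction ends at commit(), so a later rollback() has nothing left to undo; roll back before committing if you want the inserts reversed. A minimal sketch:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('CREATE TABLE foo (bar INTEGER, baz TEXT)')

cur.execute('INSERT INTO foo VALUES (?, ?)', (1, 'a'))
cur.execute('INSERT INTO foo VALUES (?, ?)', (2, 'b'))
con.rollback()        # the transaction is still open, so both INSERTs are undone
print cur.execute('SELECT * FROM foo').fetchall()    # -> []

cur.execute('INSERT INTO foo VALUES (?, ?)', (3, 'c'))
con.commit()          # transaction over; a rollback() here would be a no-op
```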
Should I be stuck creating and maintaining two separate databases schemas \/ interfaces, one in Django's model and the other using twisted.enterprise.row?\nSimilarly, with user authentication, should I utilize twisted's user authentication functionality, or try to include Django modules into the gameserver to handle user authentication on the game side?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2454,"Q_Id":5051408,"Users Score":10,"Answer":"First of all I'd identify why you need both Django and Twisted. Assuming you are comfortable with Twisted using twisted.web and auth will easily be sufficient and you'll be able to reuse your database layer for both the frontend and backend apps.\nAlternatively you could look at it the other way, what is Twisted doing better as a game server? Are you hoping to support more players (more simultaneous connections) or something else? Consider that if you must use threaded within twisted to do blocking database access that you are most likely not going to be able to efficently\/reliably support hundreds of simultaneous threads. Remember python has a Global Interpreter Lock so threads are not necessarily the best way to scale. \nYou should also consider why you are looking to use a SQL Database and an ORM. Does your game have data that is really best suited to being stored in an relational database? Perhaps it's worth examining something like MongoDB or another key-value or object database for storing game state. Many of these NoSQL stores have both blocking drivers for use in Django and non-blocking drivers for use in Twisted (txmongo for example). \nThat said, if you're dead set on using both Django and Twisted there are a few techniques for embedding blocking DB access into a non-blocking Twisted server.\n\nadbapi (uses twisted thread pool)\nDirect use of the twisted thread pool using reactor.deferToThread\nThe Storm ORM has a branch providing Twisted support (it handles deferToThread calls internally)\nSAsync is a library that tries to make SQLAlchemy work in an Async way\nHave twisted interact via RPC with a process that manages the blocking DB\n\nSo you should be able to manage the Django ORM objects yourself by importing them in twisted and being very careful making calls to reactor.deferToThread. There are many possible issues when working with these objects within twisted in that some ORM objects can issue SQL when accessing\/setting a property, etc. \nI realize this isn't necessarily the answer you were expecting but perhaps more detail about what you're hoping to accomplish and why you are choosing these specific technologies will allow folks to get you better answers.","Q_Score":9,"Tags":"python,database,django,twisted","A_Id":5051760,"CreationDate":"2011-02-19T14:42:00.000","Title":"Sharing a database between Twisted and Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to MySQL for Python (MySQLdb package) in Windows so that I can use it in the Django web frame.\nI have just installed MySQL Community Server 5.5.9 and I have managed to run it and test it using the testing procedures suggested in the MySQL 5.5 Reference Manual. 
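As a concrete sketch of the adbapi option listed in the answer above (Twisted's thread-pool wrapper around a blocking DB-API module such as MySQLdb); the connection parameters and query are placeholders.

```python
from twisted.enterprise import adbapi
from twisted.internet import reactor

# ConnectionPool runs the blocking MySQLdb calls in a thread pool for you.
dbpool = adbapi.ConnectionPool("MySQLdb", db="game", user="gameuser",
                               passwd="secret", host="localhost")

def show(rows):
    for row in rows:
        print row

d = dbpool.runQuery("SELECT id, name FROM players WHERE score > %s", (1000,))
d.addCallback(show)
d.addErrback(lambda failure: failure.printTraceback())
d.addBoth(lambda _: reactor.stop())

reactor.run()
```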
However, I discovered that I still don't have the MySQL AB folder, the subsequent MySQL Server 5.5 folder and regkey in the HKEY_LOCAL_MACHINE, which is needed to build the MySQLdb package.\nFrom the MySQL 5.5 Reference Manual, it says that:\n\nThe MySQL Installation Wizard creates one Windows registry key in a typical install situation, located in HKEY_LOCAL_MACHINE\\SOFTWARE\\MySQL AB.\n\nHowever, I do have the Start Menu short cut and all the program files installed. I have used the msi installation and installed without problems. Should I be getting the MySQL AB folder? Does anyone know what has happened and how I should get the MySQL AB\/MySQL Server 5.5 folder and the regkey?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":854,"Q_Id":5059883,"Users Score":2,"Answer":"I found that the key was actually generated under HKEY_CURRENT_USER instead of HKEY_LOCAL_MACHINE. Thanks.","Q_Score":5,"Tags":"python,mysql,django","A_Id":5412380,"CreationDate":"2011-02-20T20:43:00.000","Title":"MySQL AB, MySQL Server 5.5 Folder in HKEY_LOCAL_MACHINE not present","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating a small website with Django, and I need to calculate statistics with data taken from several tables in the database.\nFor example (nothing to do with my actual models), for a given user, let's say I want all birthday parties he has attended, and people he spoke with in said parties. For this, I would need a wide query, accessing several tables.\nNow, from the object-oriented perspective, it would be great if the User class implemented a method that returned that information. From a database model perspective, I don't like at all the idea of adding functionality to a \"row instance\" that needs to query other tables. I would like to keep all properties and methods in the Model classes relevant to that single row, so as to avoid scattering the business logic all over the place.\nHow should I go about implementing database-wide queries that, from an object-oriented standpoint, belong to a single object? Should I have an external kinda God-object that knows how to collect and organize this information? Or is there a better, more elegant solution?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":110,"Q_Id":5063658,"Users Score":1,"Answer":"I recommend extending Django's Model-Template-View approach with a controller. I usually have a controller.py within my apps which is the only interface to the data sources. So in your above case I'd have something like get_all_parties_and_people_for_user(user).\nThis is especially useful when your \"data taken from several tables in the database\" becomes \"data taken from several tables in SEVERAL databases\" or even \"data taken from various sources, e.g. 
databases, cache backends, external apis, etc.\".","Q_Score":3,"Tags":"python,database,django,django-models,coding-style","A_Id":5064564,"CreationDate":"2011-02-21T08:17:00.000","Title":"Correct way of implementing database-wide functionality","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating a small website with Django, and I need to calculate statistics with data taken from several tables in the database.\nFor example (nothing to do with my actual models), for a given user, let's say I want all birthday parties he has attended, and people he spoke with in said parties. For this, I would need a wide query, accessing several tables.\nNow, from the object-oriented perspective, it would be great if the User class implemented a method that returned that information. From a database model perspective, I don't like at all the idea of adding functionality to a \"row instance\" that needs to query other tables. I would like to keep all properties and methods in the Model classes relevant to that single row, so as to avoid scattering the business logic all over the place.\nHow should I go about implementing database-wide queries that, from an object-oriented standpoint, belong to a single object? Should I have an external kinda God-object that knows how to collect and organize this information? Or is there a better, more elegant solution?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":5063658,"Users Score":0,"Answer":"User.get_attended_birthday_parties() or Event.get_attended_parties(user) work fine: it's an interface that makes sense when you use it. Creating an additional \"all-purpose\" object will not make your code cleaner or easier to maintain.","Q_Score":3,"Tags":"python,database,django,django-models,coding-style","A_Id":5065280,"CreationDate":"2011-02-21T08:17:00.000","Title":"Correct way of implementing database-wide functionality","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I am trying to take a large number of xml files (None are that big in particular and I can split them up as I see fit.) In all there is about 70GB worth of data. For the sake of reference the loading script is written in python and uses psycopg2 to interface with a postgres table. \nAnyway, what I am trying to do is to deal with data that works something like this. The relation count being the number of time the two tags are seen together and the tag count being the number of time the tag has been seen. I have all the tags already its just getting the times they appear and the times they appear together of the xml that has become a problem.\n Tag Table | Relations Table \n TagID TagName TagCount | tag1 tag2 relationCount \n 1 Dogs 20 | 1 2 5 \n 2 Beagles 10 | 1 3 2 \n 3 Birds 11 | 2 3 7 \nThe problem I am encountering is getting the data to load in a reasonable amount of time. I have been iterating over the update methods as I count how often the tags come up in the xml files.\nI suppose I am asking if anyone has any ideas. Should I create some sort of buffer to hold the update info and try to use cur.executeall() periodically and\/or should I restructure the database somehow. 
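A rough sketch of the controller.py idea from the answer above; the Party and Conversation models and their fields are entirely hypothetical and only show the shape of a single data-access entry point the views would call.

```python
# myapp/controller.py -- the single entry point views use for cross-model queries
from myapp.models import Party, Conversation          # hypothetical models

def get_all_parties_and_people_for_user(user):
    parties = Party.objects.filter(attendees=user)
    conversations = Conversation.objects.filter(party__in=parties,
                                                participants=user)
    people = set()
    for conv in conversations:
        people.update(p for p in conv.participants.all() if p != user)
    return {'parties': parties, 'people': people}
```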
Anyway, any and all thoughts on this issue are appreciated.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":126,"Q_Id":5066569,"Users Score":3,"Answer":"If I understand this \"...I have been iterating over the update methods\" it sounds like you are updating the database rows as you go? If this is so, consider writing some code that passes the XML, accumulates the totals you are tracking, outputs them to a file, and then loads that file with COPY.\nIf you are updating existing data, try something like this:\n1) Pass the XML file(s) to generate all new totals from the new data\n2) COPY that into a working table - a table that you clear out before and after every batch\n3) Issue an INSERT from the working table to the real tables for all rows that cannot be found, inserting zeros for all values\n4) Issue an UPDATE from the working table to the real tables to increment the counters.\n5) Truncate the working table.","Q_Score":0,"Tags":"python,database-design,postgresql,psycopg2","A_Id":5066699,"CreationDate":"2011-02-21T13:30:00.000","Title":"Efficiently creating a database to analyze relationships between information","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python 2.7 and trying to get a Django project running on a MySQL backend.\nI have downloaded mysqldb and followed the guide here:http:\/\/cd34.com\/blog\/programming\/python\/mysql-python-and-snow-leopard\/\nYet when I go to run the django project the following traceback occurs:\n\nTraceback (most recent call last):\n File \"\/Users\/andyarmstrong\/Documents\/workspace\/BroadbandMapper\/src\/BroadbandMapper\/manage.py\", line 11, in \n execute_manager(settings)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/__init__.py\", line 438, in execute_manager\n utility.execute()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/__init__.py\", line 379, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/base.py\", line 191, in run_from_argv\n self.execute(*args, **options.__dict__)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/base.py\", line 209, in execute\n translation.activate('en-us')\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/__init__.py\", line 66, in activate\n return real_activate(language)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/functional.py\", line 55, in _curried\n return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/__init__.py\", line 36, in delayed_loader\n return getattr(trans, real_name)(*args, **kwargs)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/trans_real.py\", line 193, in activate\n _active[currentThread()] = translation(language)\n File 
\"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/trans_real.py\", line 176, in translation\n default_translation = _fetch(settings.LANGUAGE_CODE)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/trans_real.py\", line 159, in _fetch\n app = import_module(appname)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/importlib.py\", line 35, in import_module\n __import__(name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/contrib\/admin\/__init__.py\", line 1, in \n from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/contrib\/admin\/helpers.py\", line 1, in \n from django import forms\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/forms\/__init__.py\", line 17, in \n from models import *\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/forms\/models.py\", line 6, in \n from django.db import connections\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/__init__.py\", line 77, in \n connection = connections[DEFAULT_DB_ALIAS]\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/utils.py\", line 92, in __getitem__\n backend = load_backend(db['ENGINE'])\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/utils.py\", line 33, in load_backend\n return import_module('.base', backend_name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/importlib.py\", line 35, in import_module\n __import__(name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/backends\/mysql\/base.py\", line 14, in \n raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\ndjango.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(\/Users\/andyarmstrong\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp\/_mysql.so, 2): Library not loaded: libmysqlclient.16.dylib\n Referenced from: \/Users\/andyarmstrong\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp\/_mysql.so\n Reason: image not found\n\nI have tried the following also:http:\/\/whereofwecannotspeak.wordpress.com\/2007\/11\/02\/mysqldb-python-module-quirk-in-os-x\/\nadding a link between the mysql lib directory and somewhere else...\nHelp!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1535,"Q_Id":5072066,"Users Score":2,"Answer":"I eventually managed to solve the problem by Installing python 2.7 with Mac Ports and installing mysqldb using Mac Ports - was pretty simple after that.","Q_Score":2,"Tags":"python,mysql,django,macos","A_Id":5072940,"CreationDate":"2011-02-21T22:35:00.000","Title":"Python mysqldb on Mac OSX 10.6 not working","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using Python 2.7 and trying to get a Django project 
running on a MySQL backend.\nI have downloaded mysqldb and followed the guide here:http:\/\/cd34.com\/blog\/programming\/python\/mysql-python-and-snow-leopard\/\nYet when I go to run the django project the following traceback occurs:\n\nTraceback (most recent call last):\n File \"\/Users\/andyarmstrong\/Documents\/workspace\/BroadbandMapper\/src\/BroadbandMapper\/manage.py\", line 11, in \n execute_manager(settings)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/__init__.py\", line 438, in execute_manager\n utility.execute()\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/__init__.py\", line 379, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/base.py\", line 191, in run_from_argv\n self.execute(*args, **options.__dict__)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/base.py\", line 209, in execute\n translation.activate('en-us')\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/__init__.py\", line 66, in activate\n return real_activate(language)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/functional.py\", line 55, in _curried\n return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/__init__.py\", line 36, in delayed_loader\n return getattr(trans, real_name)(*args, **kwargs)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/trans_real.py\", line 193, in activate\n _active[currentThread()] = translation(language)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/trans_real.py\", line 176, in translation\n default_translation = _fetch(settings.LANGUAGE_CODE)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/translation\/trans_real.py\", line 159, in _fetch\n app = import_module(appname)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/importlib.py\", line 35, in import_module\n __import__(name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/contrib\/admin\/__init__.py\", line 1, in \n from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/contrib\/admin\/helpers.py\", line 1, in \n from django import forms\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/forms\/__init__.py\", line 17, in \n from models import *\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/forms\/models.py\", line 6, in \n from django.db import connections\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/__init__.py\", line 77, in \n connection = connections[DEFAULT_DB_ALIAS]\n File 
\"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/utils.py\", line 92, in __getitem__\n backend = load_backend(db['ENGINE'])\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/utils.py\", line 33, in load_backend\n return import_module('.base', backend_name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/importlib.py\", line 35, in import_module\n __import__(name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/backends\/mysql\/base.py\", line 14, in \n raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\ndjango.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(\/Users\/andyarmstrong\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp\/_mysql.so, 2): Library not loaded: libmysqlclient.16.dylib\n Referenced from: \/Users\/andyarmstrong\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp\/_mysql.so\n Reason: image not found\n\nI have tried the following also:http:\/\/whereofwecannotspeak.wordpress.com\/2007\/11\/02\/mysqldb-python-module-quirk-in-os-x\/\nadding a link between the mysql lib directory and somewhere else...\nHelp!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1535,"Q_Id":5072066,"Users Score":0,"Answer":"you needed to add the MySQL client libraries to the LD_LIBRARY_PATH.","Q_Score":2,"Tags":"python,mysql,django,macos","A_Id":5305496,"CreationDate":"2011-02-21T22:35:00.000","Title":"Python mysqldb on Mac OSX 10.6 not working","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm building my first app with GAE to allow users to run elections, and I create an Election entity for each election. \nTo avoid storing too much data, I'd like to automatically delete an Election entity after a certain period of time -- say three months after the end of the election. Is it possible to do this automatically in GAE? Or do I need to do this manually?\nIf it matters, I'm using the Python interface.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2483,"Q_Id":5079885,"Users Score":5,"Answer":"Assuming you have a DateProperty on the entities indicating when the election ended, you can have a cron job search for any older than 3 months every night and delete them.","Q_Score":3,"Tags":"python,google-app-engine,google-cloud-datastore","A_Id":5079939,"CreationDate":"2011-02-22T15:09:00.000","Title":"Automatic deletion or expiration of GAE datastore entities","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I know that if I figure this one out or if somebody shows me, it'll be a forehead slapper. Before posting any questions, I try for at least three hours and quite a bit of searching. There are several hints that are close, but nothing I have adopted\/tried seems to work.\nI am taking a byte[] from Java and passing that via JSON (with Gson) to a python JSON using Flask. 
This byte[] is stored in a python object as an integer list when received, but now I need to send it to MySQLdb and store it as a blob. The data contents is binary file data.\nHow do I convert a python integer list [1,2,-3,-143....] to something that I can store in MySQL? I have tried bytearray() and array.array(), but those choke when I access the list directly from the object and try and convert to a string to store thorugh MySQLdb.\nAny links or hints are greatly appreciated.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":7408,"Q_Id":5088671,"Users Score":0,"Answer":"I found ''.join(map(lambda x: chr(x % 256), data)) to be painfully slow (~4 minutes) for my data on python 2.7.9, where a small change to str(bytearray(map(lambda x: chr(x % 256), data))) only took about 10 seconds.","Q_Score":1,"Tags":"java,python,mysql,binary,byte","A_Id":31187500,"CreationDate":"2011-02-23T08:47:00.000","Title":"Convert Java byte array to Python byte array","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed MySqldb through .exe(precompiled). Its is stored in site-packages. But now i don't know how to test, that it is accessable or not. And major problem how to import in my application like import MySqldb. Help me i am very new techie in python i just want to work with my existing Mysql. Thanks in advance...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":165,"Q_Id":5090870,"Users Score":3,"Answer":"Just open your CMD\/Console, type python, press Enter, type import MySQLdb and then press Enter again.\nIf no error is shown, you're ok!","Q_Score":0,"Tags":"python,mysql","A_Id":5090944,"CreationDate":"2011-02-23T12:23:00.000","Title":"how to import mysqldb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to retain the flexibility of switching between MySQL and PostgreSQL without the awkwardness of using an ORM - SQL is a fantastic language and i would like to retain it's power without the additional overhead of an ORM.\nSo...is there a best practice for abstraction the database layer of a Python application to provide the stated flexibility.\nThanks community!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":965,"Q_Id":5090901,"Users Score":1,"Answer":"Have a look at SQLAlchemy. You can use it to execute literal SQL on several RDBMS, including MySQL and PostgreSQL. 
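Relating to the byte-conversion answer above: the Java byte[] arrives as a list of signed integers (-128..127), and mapping each value modulo 256 yields unsigned bytes that MySQLdb can store as a BLOB. A sketch, assuming a table `files` with a `content` BLOB column (hypothetical names and credentials).

```python
import MySQLdb


def to_blob(signed_ints):
    """Convert a list of Java-style signed bytes into a raw byte string."""
    return str(bytearray(b % 256 for b in signed_ints))


conn = MySQLdb.connect(host='localhost', user='user', passwd='pw', db='test')
cur = conn.cursor()

data = [1, 2, -3, -143, 127]
cur.execute("INSERT INTO files (content) VALUES (%s)",
            (MySQLdb.Binary(to_blob(data)),))
conn.commit()
```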
It wraps the DB-API adapters with a common interface, so they will behave as similarly as possible.\nSQLAlchemy also offers programmatic generation of SQL, with or without the included ORM, which you may find very useful.","Q_Score":1,"Tags":"python,mysql,database,postgresql,abstraction","A_Id":5090938,"CreationDate":"2011-02-23T12:25:00.000","Title":"Starting new project: database abstraction in Python, best practice for retaining option of MySQL or PostgreSQL without ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a library that lets me run SQL-like queries on python \"object databases\". With object database I mean a fairly complex structure of python objects and lists in memory. Basically this would be a \"reverse ORM\" - instead of providing an object oriented interface to a relational database, it would provide a SQL-ish interface to an object database.\nC#'s LINQ is very close. Python's list comprehensions are very nice, but the syntax gets hairy when doing complex things (sorting, joining, etc.). Also, I can't (easily) create queries dynamically with list comprehensions.\nThe actual syntax could either be string based, or use a object-oriented DSL (a la from(mylist).select(...)). Bonus points if the library would provide some kind of indices to speed up search.\nDoes this exist or do I have to invent it?","AnswerCount":7,"Available Count":1,"Score":0.057080742,"is_accepted":false,"ViewCount":9831,"Q_Id":5126776,"Users Score":2,"Answer":"One major difference between what SQL does and what you can do in idiomatic python, in SQL, you tell the evaluator what information you are looking for, and it works out the most efficient way of retrieving that based on the structure of the data it holds. In python, you can only tell the interpreter how you want the data, there's no equivalent to a query planner.\nThat said, there are a few extra tools above and beyond list comprehensions that help alot. \nFirst, use a structure that closely resembles the declarative nature of SQL. Many of them are builtins. map, filter, reduce, zip, all, any, sorted, as well as the contents of the operator, functools and itertools packages, all offer a fairly concise way of expressing data queries.","Q_Score":17,"Tags":"python,object-oriented-database","A_Id":5127794,"CreationDate":"2011-02-26T12:03:00.000","Title":"Query language for python objects","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have three tables, 1-Users, 2-Softwares, 3-UserSoftwares.\nif suppose, Users table having 6 user records(say U1,U2,...,U6) and Softwares table having 4 different softwares(say S1,S2,S3,S4) and UserSoftwares stores the references if a user requested for given software only.\nFor example: UserSoftwares(5 records) have only two columns(userid, softwareid) which references others. and the data is:\nU1 S1\nU2 S2\nU2 S3\nU3 S3\nU4 S1\nNow I m expecting following results:(if current login user is U2):\n\nS1 Disable\nS2 Enable\nS3 Enable\nS4 Disable\nHere, 1st column is softwareid or name and 2nd column is status which having only two values(Enable\/Disable) based on UserSoftwares table(model). 
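To make the SQLAlchemy-without-ORM suggestion concrete: the engine layer lets you keep writing literal SQL, and only the connection URL changes when switching between MySQL and PostgreSQL. A minimal sketch using `text()` with named bind parameters (table and column names are invented).

```python
from sqlalchemy import create_engine, text

# Swap this URL for 'mysql://user:pw@localhost/mydb' without touching any query.
engine = create_engine('postgresql://user:pw@localhost/mydb')

query = text("SELECT id, name FROM customers WHERE country = :country")

conn = engine.connect()
try:
    for customer_id, name in conn.execute(query, {'country': 'NL'}):
        print(customer_id, name)
finally:
    conn.close()
```

Named bind parameters (`:country`) are translated into whatever placeholder style the underlying DB-API driver expects, which is most of the portability win here.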
Note status is not a field of any model(table).\n\"My Logic is: \n1. loop through each software in softwares model\n2. find softwareid with current login userid (U2) in UserSoftwares model: \n if it found then set status='Enable'\n if not found then set status='Disable'\n3. add this status property to software object.\n4. repeat this procedure for all softwares. \n\"\nWhat should be the query in python google app engine to achieve above result?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":873,"Q_Id":5142192,"Users Score":0,"Answer":"If your are looking for join - there is no joins in GAE. BTW, there is pretty easy to make 2 simple queries (Softwares and UserSoftware), and calculate all additional data manually","Q_Score":1,"Tags":"python,google-app-engine,model","A_Id":5143851,"CreationDate":"2011-02-28T12:48:00.000","Title":"Querying on multiple tables using google apps engine (Python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I had a postgresql query where I need to take column defined as character from table and then pass this value to the function where it only accepts integer.So in this case, how can i solve the problem??Can anyone help??","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":5148790,"Users Score":0,"Answer":"ord(val) will give you the integer value of a character. int(val) will cast a value into an integer.","Q_Score":0,"Tags":"python,casting","A_Id":5148795,"CreationDate":"2011-02-28T23:23:00.000","Title":"how to convert value of column defined as character into integer in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As part of artifacts delivery, our developers give the data and structure scripts in .sql files. I usually \"double click\" on these files to open in \"Microsoft SQL Server Management Studio\". Management studio will prompt me for entering database server and user\/pwd. I enter them manually and click on Execute button to execute these scripts.\nThese scripts contain structure and data sql commands. Each script may contain more than one data command (like select, insert, update, etc). Structure and data scripts are provided in separate .sql files.\nThese scripts also contain stored procedures and functions, etc. They also contain comments \/ description.\nI want to automate execution of these scripts through python. I looked at pyodbc and pymssql but they dont look like solve my issue.\nThrough pyodbc, i need to read each .sql file and read the sql commands and execute them one by one. As the files may have comments \/ description \/ SPs \/ etc, reading the files will be little difficult.\nCan anyone give suggestion on how to automate this?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":3938,"Q_Id":5174269,"Users Score":4,"Answer":"You could just run them using sqlcmd. 
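Relating to the "no joins in GAE" answer above: the join can be replaced by two datastore queries plus an in-memory membership test. A rough sketch, with model and property names loosely mirroring the question's Softwares/UserSoftwares tables (the exact definitions are assumptions).

```python
from google.appengine.ext import db


class Software(db.Model):                      # the question's "Softwares"
    name = db.StringProperty()


class UserSoftware(db.Model):                  # the question's "UserSoftwares"
    user = db.ReferenceProperty(collection_name='user_links')
    software = db.ReferenceProperty(Software, collection_name='software_links')


def software_status_for(user_key):
    """Return [(software, 'Enable'/'Disable'), ...] for the given user key."""
    # Query 1: which software keys the user has requested (keys only, no fetch).
    owned = set(UserSoftware.software.get_value_for_datastore(link)
                for link in UserSoftware.all().filter('user =', user_key))
    # Query 2: all software, with the status computed in memory.
    return [(s, 'Enable' if s.key() in owned else 'Disable')
            for s in Software.all()]
```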
Sqlcmd is a command line utility that will let you run .sql scripts from the command line, which I'm sure you can kick off through python.","Q_Score":1,"Tags":"python,sql,sql-server","A_Id":5174307,"CreationDate":"2011-03-02T22:22:00.000","Title":"Execute .sql files that are used to run in SQL Management Studio in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using SQLAlchemy 0.6.6 against a Postgres 8.3 DB on Windows 7 an PY 2.6. I am leaving the defaults for configuring pooling when I create my engine, which is pool_size=5, max_overflow=10.\nFor some reason, the connections keep piling up and I intermittently get \"Too many clients\" from PG. I am positive that connections are being closed in a finally block as this application is only accessed via WSGI (CherryPy) and uses a connection\/request pattern. I am also logging when connections are being closed just to make sure.\nI've tried to see what's going on by adding echo_pool=true during my engine creation, but nothing is being logged. I can see SQL statement rolling through the console when I set echo=True, but nothing for pooling.\nAnyway, this is driving me crazy because my co-worker who is on a Mac doesn't have any of these issues (I know, get a Mac), so I'm trying to see if this is the result of a bug or something. Google has yielded nothing so I'm hoping to get some help here.\nThanks,\ncc","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1198,"Q_Id":5185438,"Users Score":0,"Answer":"Turns out there was ScopedSession being used outside the normal application usage and the close wasn't in a finally.","Q_Score":1,"Tags":"python,postgresql,sqlalchemy,connection-pooling,cherrypy","A_Id":5195465,"CreationDate":"2011-03-03T19:21:00.000","Title":"SQLAlchemy Connection Pooling Problems - Postgres on Windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have about 30 MB of textual data that is core to the algorithms I use in my web application.\nOn the one hand, the data is part of the algorithm and changes to the data can cause an entire algorithm to fail. This is why I keep the data in text files in my source control, and all changes are auto-tested (pre-commit). I currently have a good level of control. Distributing the data along with the source as we spawn more web instances is a non-issue because it tags along with the source. I currently have these problems:\n\nI often develop special tools to manipulate the files, replicating database access tool functionality\nI would like to give non-developers controlled web-access to this data.\n\nOn the other hand, it is data, and it \"belongs\" in a database. I wish I could place it in a database, but then I would have these problems:\n\nHow do I sync this database to the source? A release contains both code and data.\nHow do I ship it with the data as I spawn a new instance of the web server?\nHow do I manage pre-commit testing of the data? 
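Relating to the sqlcmd suggestion above: one way to kick it off from Python is `subprocess`, passing each script with `-i` and aborting on the first error with `-b`. Server name, credentials and paths below are placeholders.

```python
import glob
import subprocess

SERVER = r'myserver\SQLEXPRESS'
DATABASE = 'mydb'


def run_sql_file(path):
    cmd = ['sqlcmd', '-S', SERVER, '-d', DATABASE,
           '-U', 'username', '-P', 'password',
           '-i', path, '-b']            # -b: abort the batch on the first error
    return subprocess.call(cmd)


for script in sorted(glob.glob(r'artifacts\*.sql')):
    if run_sql_file(script) != 0:
        raise RuntimeError('sqlcmd failed for %s' % script)
```

Since sqlcmd itself understands GO batch separators, comments and stored-procedure definitions, the .sql files never need to be parsed in Python.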
\n\nThings I have considered thus-far:\n\nSqlite (does not solve the non-developer access)\nBuilding an elaborate pre-production database, which data-users will edit to create \"patches\" to the \"real\" database, which developers will accept, test and commit. Sounds very complex.\nI have not fully designed this yet and I sure hope I'm reinventing the wheel here and some SO user will show me the error of my ways...\n\nBTW: I have a \"regular\" database as well, with things that are not algorithmic-data.\nBTW2: I added the Python tag because I currently use Python, Django, Apache and Nginx, Linux (and some lame developers use Windows). \nThanks in advance!\nUPDATE\nSome examples of the data (the algorithms are natural language processing stuff):\n\nWorld Cities and their alternate names\nNames of currencies\nCoordinates of hotels\n\nThe list goes on and on, but Imagine trying to parse the sentence Romantic hotel for 2 in Rome arriving in Italy next monday if someone changes the coordinates that teach me that Rome is in Italy, or if someone adds `romantic' as an alternate name for Las Vegas (yes, the example is lame, but I hope you get the drift).","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":127,"Q_Id":5210318,"Users Score":1,"Answer":"Okay, here's an idea:\n\nShip all the data as is done now.\nHave the installation script install it in the appropriate databases.\nLet users modify this database and give them a button \"restore to original\" that simply reinstalls from the text file.\n\nAlternatively, this route may be easier, esp. when upgrading an installation:\n\nShip all the data as is done now.\nLet users modify the data and store the modified versions in the appropriate database.\nLet the code look in the database, falling back to the text files if the appropriate data is not found. 
Don't let the code modify the text files in any way.\n\nIn either case, you can keep your current testing code; you just need to add tests that make sure the database properly overrides text files.","Q_Score":4,"Tags":"python","A_Id":5210383,"CreationDate":"2011-03-06T11:56:00.000","Title":"What's the best way to handle source-like data files in a web application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm not sure if this is an issue specific to sqlite databases but after adding some properties I executed syncdb successfully but still the the columns were not added to the database and when I try the access the model in admin I get no such column error.\nWhy is this happening and how do I overcome this issue?\nDetails: Django 1.3, Python 2.6, OSX 10.6, PyCharm.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":5639,"Q_Id":5211340,"Users Score":3,"Answer":"As always, syncdb does not migrate the existing schema.","Q_Score":4,"Tags":"python,django,sqlite","A_Id":5211417,"CreationDate":"2011-03-06T15:26:00.000","Title":"Django manage.py syncdb doing nothing when used with sqlite3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to modify the guestbook example webapp to reduce the amount of database writes.\nWhat I am trying to achieve is to load all the guestbook entries into memcache which I have done. \nHowever I want to be able to directly update the memcache with new guestbook entries and then write all changes to the database as a batch put.() every 30 seconds.\nHas anyone got an example of how I could achieve the above? it would really help me!\nThanks :)","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1124,"Q_Id":5221977,"Users Score":6,"Answer":"This is a recipe for lost data. I have a hard time believing that a guest book is causing enough write activity to be an issue. Also, the bookkeeping involved in this would be tricky, since memcache isn't searchable.","Q_Score":2,"Tags":"python,google-app-engine,caching,memcached,google-cloud-datastore","A_Id":5222081,"CreationDate":"2011-03-07T16:12:00.000","Title":"Limit amount of writes to database using memcache","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using sqlalchemy with reflection, a couple of partial indices in my DB make it dump warnings like this:\nSAWarning: Predicate of partial index i_some_index ignored during reflection\ninto my logs and keep cluttering. It does not hinder my application behavior. I would like to keep these warnings while developing, but not at production level. Does anyone know how to turn this off?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":14820,"Q_Id":5225780,"Users Score":12,"Answer":"the warning means you did a table or metadata reflection, and it's reading in postgresql indexes that have some complex condition which the SQLAlchemy reflection code doesn't know what to do with. 
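Relating to the earlier answer about shipping the algorithm data as text files and letting the database override them: the "look in the database, fall back to the shipped text file" option might look roughly like the sketch below. The table name, file layout and keys are invented for illustration.

```python
import os

DATA_DIR = os.path.join(os.path.dirname(__file__), 'algorithm_data')


def load_from_file(name):
    """Read the version-controlled copy of a data set shipped with the code."""
    with open(os.path.join(DATA_DIR, name + '.txt')) as f:
        return [line.rstrip('\n') for line in f]


def load_algorithm_data(name, cursor):
    """Prefer a user-edited copy in the database, else the shipped text file."""
    cursor.execute("SELECT content FROM algorithm_data WHERE name = %s", (name,))
    row = cursor.fetchone()
    if row is not None:
        return row[0].splitlines()
    return load_from_file(name)
```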
This is a harmless warning, as whether or not indexes are reflected doesn't affect the operation of the application, unless you wanted to re-emit CREATE statements for those tables\/indexes on another database.","Q_Score":35,"Tags":"python,postgresql,sqlalchemy","A_Id":5331129,"CreationDate":"2011-03-07T22:00:00.000","Title":"Turn off a warning in sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a client machine running suse linux and python 2.4.2. I'm not allowed to dowload anything from the net including any external libraries. So, is there any way I can connect to a database (oracle) using only the default libraries?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":360,"Q_Id":5228728,"Users Score":1,"Answer":"No. There is nothing in the standard library for connecting to database servers.","Q_Score":0,"Tags":"python,database,oracle","A_Id":5228737,"CreationDate":"2011-03-08T05:32:00.000","Title":"Python: Connecting to db without any external libraries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have a Django web application and I need to add a payment module to it.\nBasically a user will prepay for a certain amount of service and this will slowly reduce over as the user uses the service. I'm wondering what is the best practice to facilitate this? I can process payments using Satchmo, but then just storing the USD value in a database and having my code interacting with that value directly seems kind of risky. Sure I can do that but I am wondering if there is a well tested solution to this problem out there already?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":453,"Q_Id":5236855,"Users Score":1,"Answer":"My language agnostic recommendation would be to make sure that the database that communicates with the web app is read only; at least for the table(s) that deal with these account balances. So, you process payments, and manage the reduction of account balances in a database that is not accessible to anyone other than you (i.e. not connected to the internet, or this web app). You can periodically take snapshots of that database and update the one that interacts with the webapp, so your users have a read copy of their balance. This way, even if a user is able to modify the data to increase their balance by a million bucks, you know that you have their true balance in a separate location. Basically, you'd never have to trust the data on the webapp side - it would be purely informational for your users.","Q_Score":3,"Tags":"python,database,django,web-applications","A_Id":5236907,"CreationDate":"2011-03-08T18:45:00.000","Title":"Securely storing account balances in a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I have a Django web application and I need to add a payment module to it.\nBasically a user will prepay for a certain amount of service and this will slowly reduce over as the user uses the service. 
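To silence that specific partial-index warning in production while keeping it visible in development, the standard `warnings` machinery can filter on SQLAlchemy's warning class. A sketch; the `PRODUCTION` flag stands in for however the deployment distinguishes environments.

```python
import warnings

from sqlalchemy import exc as sa_exc

PRODUCTION = True  # hypothetical flag; set from config/environment as appropriate

if PRODUCTION:
    warnings.filterwarnings(
        'ignore',
        message='Predicate of partial index',  # matched against the warning text
        category=sa_exc.SAWarning)
```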
I'm wondering what is the best practice to facilitate this? I can process payments using Satchmo, but then just storing the USD value in a database and having my code interacting with that value directly seems kind of risky. Sure I can do that but I am wondering if there is a well tested solution to this problem out there already?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":453,"Q_Id":5236855,"Users Score":6,"Answer":"I don't know about a \"well-tested solution\" as you put it, but I would strongly caution against just storing a dollar value in the database and increasing or decreasing that dollar value. Instead, I would advise storing transactions that can be audited if anything goes wrong. Calculate the amount available from the credit and debit transactions in the user account rather than storing it directly.\nFor extra safety, you would want to ensure that your application cannot delete any transaction records. If you cannot deny write permissions on the relevant tables for some reason, try replicating the transactions to a second database (that the application does not touch) as they are created.","Q_Score":3,"Tags":"python,database,django,web-applications","A_Id":5236901,"CreationDate":"2011-03-08T18:45:00.000","Title":"Securely storing account balances in a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way in cx_Oracle to capture the stdout output from an oracle stored procedure? These show up when using Oracle's SQL Developer or SQL Plus, but there does not seem to be a way to fetch it using the database drivers.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2934,"Q_Id":5244517,"Users Score":4,"Answer":"You can retrieve dbms_output with DBMS_OUTPUT.GET_LINE(buffer, status). Status is 0 on success and 1 when there's no more data.\nYou can also use get_lines(lines, numlines). numlines is input-output. You set it to the max number of lines and it is set to the actual number on output. You can call this in a loop and exit when the returned numlines is less than your input. lines is an output array.","Q_Score":3,"Tags":"python,oracle10g,cx-oracle","A_Id":5247755,"CreationDate":"2011-03-09T10:36:00.000","Title":"Capturing stdout output from stored procedures with cx_Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I \u00b4m trying to serialize an array in python to insert it on a database of MySQL... I try with pickle.dump() method but it returns byte... what can I use?? thanks!!\n(I \u00b4m working in python 3)","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1312,"Q_Id":5259329,"Users Score":1,"Answer":"Pickle is a binary serialization, that's why you get a byte string.\nPros:\n\nmore compact\ncan express most of Python's objects.\n\nCon's:\n\nbytes can be harder to handle\nPython only.\n\nJSON is more universal, so you're not tied to reading data with Python. It's also mostly ASCII, so it's easier to handle. the con is that it's limited to numbers, strings, arrays and dicts. 
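Relating to the cx_Oracle answer about retrieving dbms_output: a sketch of the GET_LINE loop it describes. The connection string and procedure name are placeholders.

```python
import cx_Oracle

conn = cx_Oracle.connect('user/password@tnsname')
cur = conn.cursor()

cur.callproc('dbms_output.enable')        # default buffer size
cur.callproc('my_package.my_procedure')   # hypothetical procedure whose output we want

line = cur.var(cx_Oracle.STRING)
status = cur.var(cx_Oracle.NUMBER)
while True:
    cur.callproc('dbms_output.get_line', (line, status))
    if status.getvalue() != 0:            # 1 means no more lines
        break
    print(line.getvalue())
```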
usually enough, but even datetimes have to be converted to string representation before encoding.","Q_Score":3,"Tags":"python,mysql","A_Id":5259400,"CreationDate":"2011-03-10T12:01:00.000","Title":"Serizalize an array in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to use a Python ORM with a MS-Access database (in Windows).\nMy first searches are not really succesfull : \n\nSQLAlchemy : no MS Access support in the two last versions.\nDAL from Web2Py : no Access (??)\nStorm : no MS Access\nsqlobject: no MS Access\ndejavu : seems OK for MS Access but\nis the project alive ?\n\nAny ideas or informations are welcome ...","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1062,"Q_Id":5262387,"Users Score":1,"Answer":"Web2py recently updated their DAL making it much easier to add support for new db engines. I don't believe there is currently native Jet (MS Access) support, but the existing SQL Server support could probably be modified without much effort to provide MS Access support. The latest version of the web2py DAL is a single stand-alone .py file, so it's not a \"heavy\" package.\nFor what it's worth, I've successfully used the web2py DAL as a stand-alone module with SQL Server after initially trying and giving up on SQLAlchemy. In fairness to SQLAlchemy, I had used the web2py DAL as part of the framework and was already comfortable with it.","Q_Score":1,"Tags":"python,ms-access,orm","A_Id":5262564,"CreationDate":"2011-03-10T16:08:00.000","Title":"Python ORM for MS-Access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is opening\/closing db cursor costly operation? What is the best practice, to use a different cursor or to reuse the same cursor between different sql executions? Does it matter if a transaction consists of executions performed on same or different cursors belonging to same connection? \nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":659,"Q_Id":5275236,"Users Score":1,"Answer":"This will depend a lot on your database as well as your chose python implementation - have you tried profiling a few short test operations?","Q_Score":2,"Tags":"python,database,transactions,cursor","A_Id":5275401,"CreationDate":"2011-03-11T15:58:00.000","Title":"db cursor - transaction in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a very large dataset - millions of records - that I want to store in Python. I might be running on 32-bit machines so I want to keep the dataset down in the hundreds-of-MB range and not ballooning much larger than that.\nThese records - represent a M:M relationship - two IDs (foo and bar) and some simple metadata like timestamps (baz).\nSome foo have too nearly all bar in them, and some bar have nearly all foo. But there are many bar that have almost no foos and many foos that have almost no bar.\nIf this were a relational database, a M:M relationship would be modelled as a table with a compound key. 
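Relating to the pickle-versus-JSON answer above: if the array only holds numbers, strings, lists and dicts, JSON keeps the stored value human-readable and portable, and the resulting string can go into an ordinary TEXT column through whatever driver is in use. The column name in the commented line is hypothetical.

```python
import json

data = [1, 2.5, 'three', [4, 5]]

serialized = json.dumps(data)        # plain text, safe for a TEXT/VARCHAR column
# cursor.execute("INSERT INTO t (payload) VALUES (%s)", (serialized,))

restored = json.loads(serialized)    # back to Python lists/numbers/strings
assert restored == [1, 2.5, 'three', [4, 5]]
```

Note that tuples and datetimes have no JSON representation and come back as lists/strings, so they need converting before encoding.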
You can of course search on either component key individually comfortably.\nIf you store the rows in a hashtable, however, you need to maintain three hashtables as the compound key is hashed and you can't search on the component keys with it.\nIf you have some kind of sorted index, you can abuse lexical sorting to iterate the first key in the compound key, and need a second index for the other key; but its less obvious to me what actual data-structure in the standard Python collections this equates to.\nI am considering a dict of foo where each value is automatically moved from tuple (a single row) to list (of row tuples) to dict depending on some thresholds, and another dict of bar where each is a single foo, or a list of foo.\nAre there more efficient - speedwise and spacewise - ways of doing this? Any kind of numpy for indices or something?\n\n(I want to store them in Python because I am having performance problems with databases - both SQL and NoSQL varieties. You end up being IPC memcpy and serialisation-bound. That is another story; however the key point is that I want to move the data into the application rather than get recommendations to move it out of the application ;) )","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":240,"Q_Id":5302816,"Users Score":2,"Answer":"What you describe sounds like a sparse matrix, where the foos are along one axis and the bars along the other one. Each non-empty cell represents a relationship between one foo and one bar, and contains the \"simple metadata\" you describe.\nThere are efficient sparse matrix packages for Python (scipy.sparse, PySparse) you should look at. I found these two just by Googling \"python sparse matrix\".\nAs to using a database, you claim that you've had performance problems. I'd like to suggest that you may not have chosen an optimal representation, but without more details on what your access patterns look like, and what database schema you used, it's awfully hard for anybody to contribute useful help. You might consider editing your post to provide more information.","Q_Score":1,"Tags":"python,data-structures","A_Id":5303400,"CreationDate":"2011-03-14T18:34:00.000","Title":"Efficient large dicts of dicts to represent M:M relationships in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I put my database file (which is a .sdb) into a directory and try to access it from that directory, I receive an error. The error reads \"unable to open database file\". For example, let's say my .sdb file is in the \"data\" directory and I use the command \"con = lite.connect('data\\noktalar.sdb')\", this error occurs. Why is that so?\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":324,"Q_Id":5321699,"Users Score":1,"Answer":"Where is your python process running from? Try to point to the absolute path of the file. 
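To make the sparse-matrix suggestion concrete: map each foo and bar to an integer index, store the metadata (say, a timestamp) as the cell value, and convert to CSR/CSC for fast row- and column-wise lookups. A sketch with invented sizes and values.

```python
import numpy as np
from scipy import sparse

n_foo, n_bar = 100000, 50000
m = sparse.dok_matrix((n_foo, n_bar), dtype=np.int64)

# one cell per (foo, bar) relationship; the value is e.g. a unix timestamp
m[12, 7] = 1299792458
m[12, 901] = 1299792999
m[54321, 7] = 1299800000

by_foo = m.tocsr()   # efficient "all bars for a given foo"
by_bar = m.tocsc()   # efficient "all foos for a given bar"

print(by_foo[12].nonzero()[1])     # bar indices related to foo 12
print(by_bar[:, 7].nonzero()[0])   # foo indices related to bar 7
```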
And when pointing to path use raw string r'c:\\\\mypath\\data\\notktalar.sub'","Q_Score":3,"Tags":"python,sqlite","A_Id":5321757,"CreationDate":"2011-03-16T06:14:00.000","Title":"Python Database Error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the Python version of Google App Engine and Datastore. What is a good way to load a table that will contain lookup data?\nBy look up data I mean that after the initial load no rows will need to be inserted, deleted, or updated\nBlowing away all rows and reloading the table is not acceptable if it destroys referential integrity with other rows referring to it.\nHere is an example of a couple kinds that I am using that I want to load lookup data into\nclass Badge(db.Model):\n name = db.StringProperty()\n level = db.IntegerProperty()\n\nclass Achievement(db.Model):\n name = db.StringProperty()\n level = db.IntegerProperty()\n badge = db.ReferenceProperty(reference_class=Badge)\n\nHere is an example of a kind not holding look up data but referring to it\nclass CamperAchievement(db.Model):\n camper = db.ReferenceProperty(reference_class=Camper)\n achievement = db.ReferenceProperty(reference_class=Achievement)\n session = db.ReferenceProperty(reference_class=Session)\n passed = db.BooleanProperty(default=True)\nI'm looking to find out two things:\nWhat should the code to load the data look like?\nWhat should trigger the loading code to execute?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":319,"Q_Id":5328112,"Users Score":2,"Answer":"If it's really created once and never changes within the lifetime of a deployment, and it's relatively small (a few megs or less), store it with your app as data files. Have the app load the data into memory initially, and cache it there.","Q_Score":2,"Tags":"python,google-app-engine,google-cloud-datastore","A_Id":5331814,"CreationDate":"2011-03-16T16:05:00.000","Title":"Need Pattern for lookup tables in Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm designing a python application which works with a database. I'm planning to use sqlite.\nThere are 15000 objects, and each object has a few attributes. every day I need to add some data for each object.(Maybe create a column with the date as its name).\nHowever, I would like to easily delete the data which is too old but it is very hard to delete columns using sqlite(and it might be slow because I need to copy the required columns and then delete the old table)\nIs there a better way to organize this data other than creating a column for every date? Or should I use something other than sqlite?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":5335330,"Users Score":0,"Answer":"If your database is pretty much a collection of almost-homogenic data, you could as well go for a simpler key-value database. If the main action you perform on the data is scanning through everything, it would perform significantly better.\nPython library has bindings for popular ones as \"anydbm\". There is also a dict-imitating proxy over anydbm in shelve. 
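Relating to the accepted answer about shipping the GAE lookup data with the app and caching it in memory: a rough sketch of a module-level cache loaded on first use. The file name and CSV format are chosen purely for illustration.

```python
import csv
import os

_BADGES = None   # module-level cache, shared by all requests in this instance


def get_badges():
    global _BADGES
    if _BADGES is None:
        path = os.path.join(os.path.dirname(__file__), 'badges.csv')
        with open(path) as f:
            _BADGES = [{'name': name, 'level': int(level)}
                       for name, level in csv.reader(f)]
    return _BADGES
```

Since the data never changes within a deployment, nothing ever needs to be written to the datastore for it, and referential-integrity concerns disappear.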
You could pickle your objects with the attributes using any serializer you want (simplejson, yaml, pickle)","Q_Score":0,"Tags":"python,sqlite,data-modeling","A_Id":5339473,"CreationDate":"2011-03-17T05:42:00.000","Title":"Please help me design a database schema for this:","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm designing a python application which works with a database. I'm planning to use sqlite.\nThere are 15000 objects, and each object has a few attributes. every day I need to add some data for each object.(Maybe create a column with the date as its name).\nHowever, I would like to easily delete the data which is too old but it is very hard to delete columns using sqlite(and it might be slow because I need to copy the required columns and then delete the old table)\nIs there a better way to organize this data other than creating a column for every date? Or should I use something other than sqlite?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":5335330,"Users Score":0,"Answer":"For that size of a db, I would use something else. I've used sqlite once for a media library with about 10k objects and it was slow, like 5 minutes to query it all and display, searches were :\/, switching to postgres made life so much easier. This is just on the performance issue only.\nIt also might be better to create an index that contains the date and the data\/column you want to add and a pk reference to the object it belongs and use that for your deletions instead of altering the table all the time. This can be done in sqlite if you give the pk an int type and save the pk of the object to it, instead of using a Foreign Key like you would with mysql\/postgres.","Q_Score":0,"Tags":"python,sqlite,data-modeling","A_Id":5335386,"CreationDate":"2011-03-17T05:42:00.000","Title":"Please help me design a database schema for this:","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is it possible to Insert a python tuple in a postgresql database","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":4015,"Q_Id":5342359,"Users Score":1,"Answer":"Really we need more information. What data is inside the tuple? Is it just integers? Just strings? 
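Relating to the schema-design answers above: rather than a column per date, a long, narrow table with one row per (object, date) makes expiring old data a cheap DELETE instead of an ALTER TABLE. A sketch with sqlite3; table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect('measurements.db')
conn.execute("""CREATE TABLE IF NOT EXISTS measurements (
                    object_id INTEGER NOT NULL,
                    day       TEXT    NOT NULL,   -- ISO date, e.g. '2011-03-17'
                    value     REAL,
                    PRIMARY KEY (object_id, day))""")

# daily insert for each object
conn.execute("INSERT INTO measurements (object_id, day, value) VALUES (?, ?, ?)",
             (42, '2011-03-17', 3.14))

# expiring old data is a single statement, no schema change required
conn.execute("DELETE FROM measurements WHERE day < ?", ('2011-01-01',))
conn.commit()
```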
Is it megabytes of images?\nIf you had a Python tuple like (4,6,2,\"Hello\",7) you could insert the string '(4,6,2,\"Hello\",7)' into a Postgres database, but that's probably not the answer you're looking for.\nYou really need to figure out what data you're really trying to store before you can figure out how\/where to store it.\n\nEDIT: So the short answer is \"no\", you cannot store an arbitrary Python tuple in a postgres database, but there's probably some way to take whatever is inside the tuple and store it somewhere useful.","Q_Score":0,"Tags":"python,database,postgresql","A_Id":5342409,"CreationDate":"2011-03-17T16:47:00.000","Title":"is it possible to Insert a python tuple in a postgresql database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is it possible to Insert a python tuple in a postgresql database","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":4015,"Q_Id":5342359,"Users Score":1,"Answer":"This question does not make any sense. You can insert using SQL whatever is supported by your database model. If you need a fancy mapper: look at an ORM like SQLAlchemy.","Q_Score":0,"Tags":"python,database,postgresql","A_Id":5342419,"CreationDate":"2011-03-17T16:47:00.000","Title":"is it possible to Insert a python tuple in a postgresql database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attemping to install OpsCenter for Cassandra, and using the the standard REHL image. I can't figure out how to get this to work. Another version of EPEL perhaps?\nyum install opscenter....\nError: Package: python26-rrdtool-1.2.27-1.i386 (opscenter)\n Requires: librrd.so.2","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":410,"Q_Id":5344641,"Users Score":0,"Answer":"Try installing rrdtool via yum, that should contain librrd.so.2 and correct your issue.","Q_Score":2,"Tags":"python,linux,centos,cassandra,yum","A_Id":5344716,"CreationDate":"2011-03-17T20:10:00.000","Title":"Amazon Linux AMI EC2 - librrd.so.2 dependency issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Webserver running in Python. He is getting some Data from some Apps and need to store these in MongoDB. My MongoDB is sharded. \nNow i want that my Webserver know how much Shards MongoDB has. At the moment he reads this from a cfg file. There is an Statement in MongoDb named printshardingstatus where u can see all shards. So i tried to call this statement from my Pythonserver. But it seems that it is not possible.I dont find such a function in the Pymongo API. 
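Relating to the tuple-in-PostgreSQL answers above: if the tuple is a flat sequence of simple, same-typed values, one option is a PostgreSQL array column (JSON text is another). With psycopg2 (an assumption; the question names no driver) a Python list adapts to an array automatically.

```python
import psycopg2

# assumes a table like: CREATE TABLE items (id serial PRIMARY KEY, data integer[])
conn = psycopg2.connect('dbname=test user=me')
cur = conn.cursor()

values = (4, 6, 2, 7)
cur.execute("INSERT INTO items (data) VALUES (%s)", (list(values),))  # list -> ARRAY
conn.commit()

cur.execute("SELECT data FROM items")
print(tuple(cur.fetchone()[0]))   # comes back as a list; wrap in tuple() if needed
```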
\nSo my question is, is there an chance to run an MongoDB Statement in Python, so that it is directly passed and executed in MongoDB ?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1576,"Q_Id":5350599,"Users Score":0,"Answer":"You can simply get config databasr and\n execute find() on shards collection\n just like normal collection.","Q_Score":0,"Tags":"python,mongodb,pymongo","A_Id":5377084,"CreationDate":"2011-03-18T10:21:00.000","Title":"Execute MongoDb Statements in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a server with Apache2 + mod_python + Django for development and would like to know how to use Mercurial to manage application development.\nMy idea is to make the folder where the Mercurial stores the project be the same folder to deploy Django.\nThank you for your attention!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":627,"Q_Id":5397528,"Users Score":0,"Answer":"I thought about this, good idea for development.\nUse mercurial in common way. And of course you need deploy mercurial server before.\nIf you update your django project, it will be compiled on the fly.\n\n\nMy workflow:\n\nSet up mercurial server or use bitbucket\nInit repo locally\nPush repo to central repo\nOn server pull repo in some target dir\nEdit smth locally and push to central repo\nPull repo on server and everything is fine","Q_Score":0,"Tags":"python,django,mercurial,apache2,mod-python","A_Id":5397870,"CreationDate":"2011-03-22T20:37:00.000","Title":"How to use Mercurial to deploy Django applications?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have my database in msacess 2000 .mdb format which I downloaded from the net and now I want to access that database from my program which is a python script.\nCan I call tables from my programs??\nit would be very grateful if anyone of you please suggest me what to do","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5861,"Q_Id":5402463,"Users Score":0,"Answer":"Create an ODBC DSN wit hthis MDB. Python can access ODBC data sources.","Q_Score":2,"Tags":"python,ms-access","A_Id":5402549,"CreationDate":"2011-03-23T08:16:00.000","Title":"How do I access a .mdb file from python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the function open_workbook() to open an excel file. But I cannot find any function to close the file later in the xlrd module. 
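Completing the answer about reading the shard list: the shards live in the `config` database's `shards` collection, which the sharding-status command simply reads, so with pymongo it is an ordinary `find()` against the mongos router. The sketch uses the `Connection` class of that era (newer pymongo calls it `MongoClient`); host and port are placeholders.

```python
import pymongo

conn = pymongo.Connection('localhost', 27017)   # connect to a mongos instance

shards = list(conn['config']['shards'].find())
print(len(shards), 'shards configured')
for shard in shards:
    print(shard['_id'], shard['host'])
```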
Is there a way to close the xls file using xlrd?\nOr is not required at all?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":22147,"Q_Id":5403781,"Users Score":6,"Answer":"The open_workbook calls the release_resources ( which closes the mmaped file ) before returning.","Q_Score":23,"Tags":"python,xlrd","A_Id":5404018,"CreationDate":"2011-03-23T10:24:00.000","Title":"Is there a way to close a workbook using xlrd","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scenario\n\nEntity1 (id,itmname)\nEntity2 (id,itmname,price)\nEntity3 (id,itmname,profit)\nprofit and price are both IntegerProperty\n\nI want to count all the item with price more then 500 and profit more then 10.\nI know its join operation and is not supported by google. I tried my best to find out the way other then executing queries separately and performing count but I didn't get anything. \nThe reason for not executing queries separately is query execution time. In each query I am getting more then 50000 records as result so it takes nearly 20 seconds in fetching records from first query.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":372,"Q_Id":5415342,"Users Score":0,"Answer":"The standard solution to this problem is denormalization. Try storing a copy of price and profit in Entity1 and then you can answer your question with a single, simple query on Entity1.","Q_Score":1,"Tags":"python,google-app-engine","A_Id":5415555,"CreationDate":"2011-03-24T05:58:00.000","Title":"Optimizing join query performance in google app engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have to re-design an existing application which uses Pylons (Python) on the backend and GWT on the frontend.\nIn the course of this re-design I can also change the backend system.\nI tried to read up on the advantages and disadvantages of various backend systems (Java, Python, etc) but I would be thankful for some feedback from the community.\nExisting application:\nThe existing application was developed with GWT 1.5 (runs now on 2.1) and is a multi-host-page setup.\nThe Pylons MVC framework defines a set of controllers\/host pages in which GWT widgets are embedded (\"classical website\"). \nData is stored in a MySQL database and accessed by the backend with SQLAlchemy\/Elixir. Server\/client communication is done with RequestBuilder (JSON).\nThe application is not a typical business like application with complex CRUD functionality (transactions, locking, etc) or sophisticated permission system (tough a simple ACL is required).\nThe application is used for visualization (charts, tables) of scientific data. The client interface is primarily used to display data in read-only mode. 
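Relating to the denormalization answer above: because the datastore allows inequality filters on only one property per query, one way to act on that advice is to also precompute a boolean flag at write time and count on it with a plain equality filter. Model and property names below are illustrative, not the questioner's exact schema.

```python
from google.appengine.ext import db


class Item(db.Model):                       # plays the role of Entity1
    itmname = db.StringProperty()
    price = db.IntegerProperty()            # copied from Entity2 on write
    profit = db.IntegerProperty()           # copied from Entity3 on write
    qualifies = db.BooleanProperty()        # price > 500 and profit > 10


def save_item(item, price, profit):
    item.price = price
    item.profit = profit
    item.qualifies = price > 500 and profit > 10
    item.put()


count = Item.all().filter('qualifies =', True).count(100000)
```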
There might be some CRUD functionality but it's not the main aspect of the app.\nOnly a subset of the scientific data is going to be transfered to the client interface but this subset is generated out of large datasets.\nThe existing backend uses numpy\/scipy to read data from db\/files, create matrices and filter them.\nThe numbers of users accessing or using the app is relatively small, but the burden on the backend for each user\/request is pretty high because it has to read and filter large datasets. \nRequirements for the new system:\nI want to move away from the multi-host-page setup to the MVP architecture (one single host page).\nSo the backend only serves one host page and acts as data source for AJAX calls.\nData will be still stored in a relational database (PostgreSQL instead of MySQL).\nThere will be a simple ACL (defines who can see what kind of data) and maybe some CRUD functionality (but it's not a priority).\nThe size of the datasets is going to increase, so the burden on the backend is probably going to be higher. There won't be many concurrent requests but the few ones have to be handled by the backend quickly. Hardware (RAM and CPU) for the backend server is not an issue. \nPossible backend solutions:\nPython (SQLAlchemy, Pylons or Django):\nAdvantages: \n\nRapid prototyping. \nRe-Use of parts of the existing application \nNumpy\/Scipy for handling large datasets.\n\nDisadvantages:\n\nWeakly typed language -> debugging can be painful\nServer\/Client communication (JSON parsing or using 3rd party libraries). \nPython GIL -> scaling with concurrent requests ?\nServer language (python) <> client language (java)\n\n\nJava (Hibernate\/JPA, Spring, etc)\nAdvantages: \n\nOne language for both client and server (Java)\n\"Easier\" to debug. \nServer\/Client communication (RequestFactory, RPC) easer to implement.\nPerformance, multi-threading, etc\nObject graph can be transfered (RequestFactory).\nCRUD \"easy\" to implement \nMultitear architecture (features)\n\nDisadvantages: \n\nMultitear architecture (complexity,requires a lot of configuration)\nHandling of arrays\/matrices (not sure if there is a pendant to numpy\/scipy in java).\nNot all features of the Java web application layers\/frameworks used (overkill?). \n\nI didn't mention any other backend systems (RoR, etc) because I think these two systems are the most viable ones for my use case. \nTo be honest I am not new to Java but relatively new to Java web application frameworks. I know my way around Pylons though in the new setup not much of the Pylons features (MVC, templates) will be used because it probably only serves as AJAX backend.\nIf I go with a Java backend I have to decide whether to do a RESTful service (and clearly separate client from server) or use RequestFactory (tighter coupling). There is no specific requirement for \"RESTfulness\". In case of a Python backend I would probably go with a RESTful backend (as I have to take care of client\/server communication anyways). \nAlthough mainly scientific data is going to be displayed (not part of any Domain Object Graph) also related metadata is going to be displayed on the client (this would favor RequestFactory).\nIn case of python I can re-use code which was used for loading and filtering of the scientific data.\nIn case of Java I would have to re-implement this part. \nBoth backend-systems have its advantages and disadvantages. 
\nI would be thankful for any further feedback.\nMaybe somebody has experience with both backend and\/or with that use case.\nthanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1559,"Q_Id":5417372,"Users Score":1,"Answer":"We had the same dilemma in the past. \nI was involved in designing and building a system that had a GWT frontend and Java (Spring, Hibernate) backend. Some of our other (related) systems were built in Python and Ruby, so the expertise was there, and a question just like yours came up.\nWe decided on Java mainly so we could use a single language for the entire stack. Since the same people worked on both the client and server side, working in a single language reduced the need to context-switch when moving from client to server code (e.g. when debugging). In hindsight I feel that we were proven right and that that was a good decision.\nWe used RPC, which as you mentioned yourself definitely eased the implementation of c\/s communication. I can't say that I liked it much though. REST + JSON feels more right, and at the very least creates better decoupling between server and client. I guess you'll have to decide based on whether you expect you might need to re-implement either client or server independently in the future. If that's unlikely, I'd go with the KISS principle and thus with RPC which keeps it simple in this specific case.\nRegarding the disadvantages for Java that you mention, I tend to agree on the principle (I prefer RoR myself), but not on the details. The multitier and configuration architecture isn't really a problem IMO - Spring and Hibernate are simple enough nowadays. IMO the advantage of using Java across client and server in this project trumps the relative ease of using python, plus you'll be introducing complexities in the interface (i.e. by doing REST vs the native RPC).\nI can't comment on Numpy\/Scipy and any Java alternatives. I've no experience there.","Q_Score":4,"Tags":"java,python,gwt,architecture,web-frameworks","A_Id":5421810,"CreationDate":"2011-03-24T09:56:00.000","Title":"Feedback on different backends for GWT","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is there a way to reduce the I\/O's associated with either mysql or a python script? I am thinking of using EC2 and the costs seem okay except I can't really predict my I\/O usage and I am worried it might blindside me with costs. \nI basically develop a python script to parse data and upload it into mysql. Once its in mysql, I do some fairly heavy analytic on it(creating new columns, tables..basically alot of math and financial based analysis on a large dataset). So is there any design best practices to avoid heavy I\/O's? I think memcached stores a everything in memory and accesses it from there, is there a way to get mysql or other scripts to do the same?\nI am running the scripts fine right now on another host with 2 gigs of ram, but the ec2 instance I was looking at had about 8 gigs so I was wondering if I could use the extra memory to save me some money.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":202,"Q_Id":5425289,"Users Score":0,"Answer":"You didn't really specify whether it was writes or reads. 
My guess is that you can do it all in a mysql instance in a ramdisc (tmpfs under Linux).\nOperations such as ALTER TABLE and copying big data around end up creating a lot of IO requests because they move a lot of data. This is not the same as if you've just got a lot of random (or more predictable queries).\nIf it's a batch operation, maybe you can do it entirely in a tmpfs instance. \nIt is possible to run more than one mysql instance on the machine, it's pretty easy to start up an instance on a tmpfs - just use mysql_install_db with datadir in a tmpfs, then run mysqld with appropriate params. Stick that in some shell scripts and you'll get it to start up. As it's in a ramfs, it won't need to use much memory for its buffers - just set them fairly small.","Q_Score":2,"Tags":"python,mysql,amazon-ec2,mysql-management","A_Id":5426527,"CreationDate":"2011-03-24T20:44:00.000","Title":"reducing I\/O on application and database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I using MySQLdb for access to mysql database from python. I need to know if connection with database is still alive... are there any attribute or method in order to do this???\nthanks!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":146,"Q_Id":5430652,"Users Score":0,"Answer":"To be honest, I haven't used mysqldb in python in a very long time.\nThat being said, I would suggest using an execute(\"now()\") (or \"select 1\", any other \"dummy\" SQL command) and handle any errors.\nedit: That should also probably be part of a class you're using. Don't fill your entire project with .execute(\"now()\") on every other line. ;)","Q_Score":0,"Tags":"python-3.x,mysql-python","A_Id":5430722,"CreationDate":"2011-03-25T09:31:00.000","Title":"Verify the connection with MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"save_or_update has been removed in 0.6. Are there alternatives to use them in 0.6 and above?\nI noticed the existence of the method _save_or_update_state for session objects, but there are no docs on this method.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4986,"Q_Id":5442825,"Users Score":1,"Answer":"Session.merge() works fine for both new and existing object. But you have to remember, that merge() returns object bound to the session as opposed to add() (and save_or_update() in old versions) which puts object passed as argument into the session. This behavior is required to insure there is a single object for each identity in the session.","Q_Score":2,"Tags":"python,sql,sqlalchemy","A_Id":5469880,"CreationDate":"2011-03-26T14:05:00.000","Title":"save_or_update using SQLalchemy 0.6","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"save_or_update has been removed in 0.6. 
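Relating to the "issue a trivial statement and handle errors" answer about checking a MySQLdb connection: a small sketch of that idea; the reconnection strategy will depend on the application.

```python
import MySQLdb


def is_alive(conn):
    """Return True if the MySQL connection still answers a trivial query."""
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        cur.fetchone()
        return True
    except (MySQLdb.OperationalError, MySQLdb.InterfaceError):
        return False


conn = MySQLdb.connect(host='localhost', user='user', passwd='pw', db='test')
if not is_alive(conn):
    conn = MySQLdb.connect(host='localhost', user='user', passwd='pw', db='test')
```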
Are there alternatives to use them in 0.6 and above?\nI noticed the existence of the method _save_or_update_state for session objects, but there are no docs on this method.","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":4986,"Q_Id":5442825,"Users Score":-1,"Answer":"session.merge() will not work if you have your db setup as a master-slave, where you typically want to query from the slave, but write to the master. I have such a setup, and ended up re-querying from the master just before the writing, then using a session.add() if the data is indeed not there on the master.","Q_Score":2,"Tags":"python,sql,sqlalchemy","A_Id":11861997,"CreationDate":"2011-03-26T14:05:00.000","Title":"save_or_update using SQLalchemy 0.6","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm actually working in a search engine project. We are working with python + mongoDb.\nI have a pymongo cursor after excecuting a find() command to the mongo db. The pymongo cursor has around 20k results.\nI have noticed that the iteration over the pymongo cursor is really slow compared with a normal iteration over for example a list of the same size.\nI did a little benchmark:\n\niteration over a list of 20k strings: 0.001492 seconds\niteration over a pymongo cursor with 20k results: 1.445343 seconds\n\nThe difference is really a lot. Maybe not a problem with this amounts of results, but if I have millions of results the time would be unacceptable.\nHas anyone got an idea of why pymongo cursors are too slow to iterate?\nAny idea of how can I iterate the cursor in less time?\nSome extra info:\n\nPython v2.6\nPyMongo v1.9\nMongoDB v1.6 32 bits","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":14362,"Q_Id":5480340,"Users Score":1,"Answer":"the default cursor size is 4MB, and the maximum it can go to is 16MB. you can try to increase your cursor size until that limit is reached and see if you get an improvement, but it also depends on what your network can handle.","Q_Score":10,"Tags":"python,mongodb,performance,iteration,database-cursor","A_Id":7828897,"CreationDate":"2011-03-29T23:52:00.000","Title":"Python + MongoDB - Cursor iteration too slow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm actually working in a search engine project. We are working with python + mongoDb.\nI have a pymongo cursor after excecuting a find() command to the mongo db. The pymongo cursor has around 20k results.\nI have noticed that the iteration over the pymongo cursor is really slow compared with a normal iteration over for example a list of the same size.\nI did a little benchmark:\n\niteration over a list of 20k strings: 0.001492 seconds\niteration over a pymongo cursor with 20k results: 1.445343 seconds\n\nThe difference is really a lot. 
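For the save_or_update question above, a minimal sketch of the merge()/add() distinction described in the answers; Item and session are hypothetical names:

detached = Item(id=1, name="widget")
persistent = session.merge(detached)    # merge() returns the session-bound instance...
assert persistent in session            # ...which may not be the object you passed in
session.add(Item(name="brand-new"))     # add() attaches the passed object itself
session.commit()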
Maybe not a problem with this amounts of results, but if I have millions of results the time would be unacceptable.\nHas anyone got an idea of why pymongo cursors are too slow to iterate?\nAny idea of how can I iterate the cursor in less time?\nSome extra info:\n\nPython v2.6\nPyMongo v1.9\nMongoDB v1.6 32 bits","AnswerCount":4,"Available Count":2,"Score":-1.0,"is_accepted":false,"ViewCount":14362,"Q_Id":5480340,"Users Score":-4,"Answer":"You don't provide any information about the overall document sizes. Fetch such an amount of document requires both network traffic and IO on the database server.\nThe performance is sustained \"bad\" even in \"hot\" state with warm caches? You can use \"mongosniff\" in order to inspect the \"wire\" activity and system tools like \"iostat\" to monitor the disk activity on the server. In addition \"mongostat\" gives a bunch of valuable information\".","Q_Score":10,"Tags":"python,mongodb,performance,iteration,database-cursor","A_Id":5480531,"CreationDate":"2011-03-29T23:52:00.000","Title":"Python + MongoDB - Cursor iteration too slow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm in the planning phase of an Android app which synchronizes to a web app. The web side will be written in Python with probably Django or Pyramid while the Android app will be straightforward java. My goal is to have the Android app work while there is no data connection, excluding the social\/web aspects of the application.\nThis will be a run-of-the-mill app so I want to stick to something that can be installed easily through one click in the market and not require a separate download like CloudDB for Android.\nI haven't found any databases that support this functionality so I will write it myself. One caveat with writing the sync logic is there will be some shared data between users that multiple users will be able to write to. This is a solo project so I thought I'd through this up here to see if I'm totally off-base.\n\nThe app will process local saves to the local sqlite database and then send messages to a service which will attempt to synchronize these changes to the remote database. \nThe sync service will alternate between checking for messages for the local app, i.e. changes to shared data by other users, and writing the local changes to the remote server. \nAll data will have a timestamp for tracking changes\nWhen writing from the app to the server, if the server has newer information, the user will be warned about the conflict and prompted to overwrite what the server has or abandon the local changes. If the server has not been updated since the app last read the data, process the update.\nWhen data comes from the server to the app, if the server has newer data overwrite the local data otherwise discard it as it will be handled in the next go around by the app updating the server.\n\nHere's some questions:\n1) Does this sound like overkill? Is there an easier way to handle this?\n2) Where should this processing take place? On the client or the server? I'm thinking the advantage of the client is less processing on the server but if it's on the server, this makes it easier to implement other clients.\n3) How should I handle the updates from the server? Incremental polling or comet\/websocket? 
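For the pymongo cursor question above, a hedged sketch of the batch-size tuning mentioned in the answer (availability of batch_size() depends on the driver version; collection and handle() are hypothetical):

cursor = collection.find({"indexed_field": "value"})
cursor.batch_size(1000)   # larger batches mean fewer network round trips while iterating
for doc in cursor:
    handle(doc)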
One thing to keep in mind is that I would prefer to go with a minimal installation on Webfaction to begin with as this is the startup.\nOnce these problems are tackled I do plan on contributing the solution to the geek community.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1618,"Q_Id":5544689,"Users Score":1,"Answer":"1) Looks like this is pretty good way to manage your local & remote changes + support offline work. I don't think this is overkill\n2) I think, you should cache user's changes locally with local timestamp until synchronizing is finished. Then server should manage all processing: track current version, commit and rollback update attempts. Less processing on client = better for you! (Easier to support and implement)\n3) I'd choose polling if I want to support offline-mode, because in offline you can't keep your socket open and you will have to reopen it every time when Internet connection is restored. \nPS: Looks like this is VEEERYY OLD question... LOL","Q_Score":4,"Tags":"python,android","A_Id":11871778,"CreationDate":"2011-04-04T21:42:00.000","Title":"Android app database syncing with remote database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"when I launch my application with apache2+modwsgi\nI catch \n\nException Type: ImportError\nException Value: DLL load failed: The specified module could not be found.\n\nin line\n\nfrom lxml import etree\n\nwith Django dev server all works fine\nVisual C++ Redistributable 2008 installed\nDependency walker told that msvcrt90.dll is missed\nbut there is same situation with cx_Oracle, but cx_Oracle's dll loads correct\nany ideas?\nwindows 2003 server 64bit and windows XP sp3 32bit\npython 2.7 32 bit\ncx_Oracle 5.0.4 32bit\nUPD:\ndownload libxml2-2.7.7 and libxslt-1.1.26\ntried to build with setup.py build --compiler mingw32 \n\nBuilding lxml version 2.3.\nBuilding with Cython 0.14.1.\nERROR: 'xslt-config' is not recognized as an internal or external command,\noperable program or batch file.\n\n** make sure the development packages of libxml2 and libxslt are installed **\n\nUsing build configuration of libxslt\nrunning build\nrunning build_py\nrunning build_ext\nskipping 'src\/lxml\\lxml.etree.c' Cython extension (up-to-date)\nbuilding 'lxml.etree' extension\nC:\\MinGW\\bin\\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\\Python27\\include -IC:\\Python27\\PC -c src\/lxml\\lxml.etree.c -o build\\temp.win32-2.7\\Release\\src\\lxml\\lxml.et\nree.o -w\nwriting build\\temp.win32-2.7\\Release\\src\\lxml\\etree.def\nC:\\MinGW\\bin\\gcc.exe -mno-cygwin -shared -s build\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o build\\temp.win32-2.7\\Release\\src\\lxml\\etree.def -LC:\\Python27\\lib\ns -LC:\\Python27\\PCbuild -llibxslt -llibexslt -llibxml2 -liconv -lzlib -lWS2_32 -lpython27 -lmsvcr90 -o build\\lib.win32-2.7\\lxml\\etree.pyd\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0xd11): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0xd24): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x1ee92): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x1eed6): undefined reference to 
`_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x2159e): undefined reference to `_imp__xmlMalloc'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x2e741): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x2e784): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x3f157): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x3f19a): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x3f4ac): undefined reference to `_imp__xmlFree'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0x3f4ef): more undefined references to `_imp__xmlFree' follow\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0xb1ad5): undefined reference to `xsltLibxsltVersion'\nbuild\\temp.win32-2.7\\Release\\src\\lxml\\lxml.etree.o:lxml.etree.c:(.text+0xb1b9a): undefined reference to `xsltDocDefaultLoader'\ncollect2: ld returned 1 exit status\nerror: command 'gcc' failed with exit status 1\n\nUPD2:\nI understand why import cx_Oracle works fine: cx_Oracle.pyd contains \"MSVCRT.dll\" dependence etree.pyd doesn't have it","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1054,"Q_Id":5552162,"Users Score":2,"Answer":"It is indeed because of 'msvcrt90.dll'. From somewhere in micro patch revisions of Python 2.6 they stopped building in automatic dependencies on the DLL for extension modules and relied on Python executable doing it. When embedded in other systems however you are then dependent on that executable linking to DLL and in the case of Apache it doesn't. The change in Python has therefore broken many systems which embed Python on Windows and the only solution is for every extension module to have their own dependencies on required DLLs which many don't. The psycopg2 extension was badly affected by this and they have change their builds to add the dependency back in themselves now. You might go searching about the problem as it occurred for psycopg2. One of the solutions was to rebuild extensions with MinGW compiler on Windows instead.","Q_Score":2,"Tags":"python,apache2,mingw,lxml,cx-oracle","A_Id":5559988,"CreationDate":"2011-04-05T12:55:00.000","Title":"problem with soaplib (lxml) with apache2 + mod_wsgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently working on a proof of concept application using Python 3.2 via SQLAlchemy with a MS SQL Server back end. Thus far, I'm hitting a brick wall looking for ways to actually do the connection. 
Most discussions point to using pyODBC, however it does not support Python 3.x yet.\nDoes anyone have any connection examples for MS SQL and SQLAlchemy, under Python 3.2?\nThis is under Windows 7 64bit also.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":867,"Q_Id":5559645,"Users Score":0,"Answer":"At this moment none of the known Python drivers to connect to Sql Server had a compatible python 3000 version.\n\nPyODBC\nmxODBC\npymssql\nzxjdbc \nAdoDBAPI","Q_Score":0,"Tags":"python,sql-server,sqlalchemy","A_Id":5559890,"CreationDate":"2011-04-05T23:10:00.000","Title":"SQLAlchemy 3.2 and MS SQL Connectivity","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a python server which concurrently handles transactions on a number of databases, each storing performance data about a different application. Concurrency is accomplished via the Multiprocessing module, so each transaction thread starts in a new process, and shared-memory data protection schemes are not viable.\n I am using sqlite as my DBMS, and have opted to set up each application's DB in its own file. Unfortunately, this introduces a race condition on DB creation; If two process attempt to create a DB for the same new application at the same time, both will create the file where the DB is to be stored. My research leads me to believe that one cannot lock a file before it is created; Is there some other mechanism I can use to ensure that the file is not created and then written to concurrently?\nThanks in advance,\nDavid","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":327,"Q_Id":5559660,"Users Score":0,"Answer":"You could capture the error when trying to create the file in your code and in your exception handler, check if the file exists and use the existing file instead of creating it.","Q_Score":0,"Tags":"python,sqlite","A_Id":5559724,"CreationDate":"2011-04-05T23:13:00.000","Title":"Prevent a file from being created in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a python server which concurrently handles transactions on a number of databases, each storing performance data about a different application. Concurrency is accomplished via the Multiprocessing module, so each transaction thread starts in a new process, and shared-memory data protection schemes are not viable.\n I am using sqlite as my DBMS, and have opted to set up each application's DB in its own file. Unfortunately, this introduces a race condition on DB creation; If two process attempt to create a DB for the same new application at the same time, both will create the file where the DB is to be stored. My research leads me to believe that one cannot lock a file before it is created; Is there some other mechanism I can use to ensure that the file is not created and then written to concurrently?\nThanks in advance,\nDavid","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":327,"Q_Id":5559660,"Users Score":0,"Answer":"You didn't mention the platform, but on linux open(), or os.open() in python, takes a flags parameter which you can use. 
The O_CREAT flag creates a file if it does not exist, and the O_EXCL flag gives you an error if the file already exists. You'll also be needing O_RDONLY, O_WRONLY or O_RDWR for specifying the access mode. You can find these constants in the os module.\nFor example: fd = os.open(filename, os.O_RDWR | os.O_CREAT | os.O_EXCL)","Q_Score":0,"Tags":"python,sqlite","A_Id":5559768,"CreationDate":"2011-04-05T23:13:00.000","Title":"Prevent a file from being created in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any idea on how I could run a bunch of .sql files that contains lots of functions from within sqlalchemy, after I create the schema ? I've tried using DDL(), engine.text().execute(), engine.execute(). None of them work, they are either failing because improper escape or some other weird errors. I am using sqlalchemy 0.6.6","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1729,"Q_Id":5563437,"Users Score":1,"Answer":"You can't do that. You must parse the file and split it into individual SQL commands, and then execute each one separately in a transaction.","Q_Score":2,"Tags":"python,sqlalchemy","A_Id":5564716,"CreationDate":"2011-04-06T08:25:00.000","Title":"run .sql files from within sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to generate data in Excel.\nI generated .CSV file.\nSo up to that point it's easy.\nBut generating graph is quite hard in Excel...\nI am wondering, is python able to generate data AND graph in excel?\nIf there are examples or code snippets, feel free to post it :)\nOr a workaround can be use python to generate graph in graphical format like .jpg, etc or .pdf file is also ok..as long as workaround doesn't need dependency such as the need to install boost library.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":48351,"Q_Id":5568319,"Users Score":2,"Answer":"I suggest you to try gnuplot while drawing graph from data files.","Q_Score":6,"Tags":"python,excel,charts,export-to-excel","A_Id":5568485,"CreationDate":"2011-04-06T14:47:00.000","Title":"use python to generate graph in excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program which makes use of MySQL database.\nI am getting following error.\nIt would be very grateful if some one help me out a solution. 
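For the sqlite file-creation race above, a minimal sketch completing the O_CREAT | O_EXCL approach from the answer:

import os, errno

def create_db_file_exclusively(path):
    try:
        fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_EXCL)
        os.close(fd)
        return True        # this process created the file and may initialise the DB
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False   # another process won the race; just open the existing file
        raise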
\n\nTraceback (most recent call last):\nFile \"version2_1.py\", line 105, in \nrefine(wr,w)#function for replacement\nFile \"version2_1.py\", line 49, in refine\nwrds=db_connect.database(word)\nFile \"\/home\/anusha\/db_connect.py\", line 6, in database\ndb = MySQLdb.connect(\"localhost\",\"root\",\"localhost\",\"anusha\" )\nFile \"\/usr\/lib\/pymodules\/python2.6\/MySQLdb\/_init_.py\", line 81, in Connect\nreturn Connection(*args, **kwargs)\nFile \"\/usr\/lib\/pymodules\/python2.6\/MySQLdb\/connections.py\", line 170, in __init__\nsuper(Connection, self).__init__(*args, **kwargs2)\n_mysql_exceptions.OperationalError: (1045, \"Access denied for user 'root'@'localhost' (using password: YES)\")","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3356,"Q_Id":5606665,"Users Score":0,"Answer":"Looks like you have an incorrect username\/password for MySQL. Try creating a user in MySQL and use that to connect.","Q_Score":1,"Tags":"python,mysql,mysql-error-1045","A_Id":5606690,"CreationDate":"2011-04-09T17:37:00.000","Title":"Error when trying to execute a Python program that uses MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"from the interpreter i can issue >>> from MySQLdb just fine. so, I'm assuming the module did actually load. My source looks as follows: \n\nfrom Tkinter import *\n from MySQLdb import *\n \"\"\"\n Inventory control for Affordable Towing \nFunctions:\n connection() - Controls database connection\n delete() - Remove item from database\n edit() - Edit item's attributes in database\n lookup() - Lookup an item\n new() - Add a new item to database\n receive() - Increase quantity of item in database\n remove() - Decrease quantity of item in database\n report() - Display inventory activity\n transfer() - Remove item from one location, receive item in another \n\"\"\"\n def control():\n ....dbInfo = { 'username':'livetaor_atowtw', 'password':'spam', \\\n ....'server':'eggs.com', 'base':'livetaor_towing', 'table':'inventory' }\n ....def testConnection():\n ........sql = MySQLdb.connect(user=dbInfo[username], passwd=dbInfo[password], \\\n ........host=dbInfo[server], db=dbInfo[base])\n ........MySQLdb.mysql_info(sql) \n....testConnection() \ncontrol() \n\nthis gives me: \n\nbrad@brads-debian:~\/python\/towing\/inventory$ python inventory.py\n Traceback (most recent call last):\n ..File \"inventory.py\", line 53, in \n ....control()\n ..File \"inventory.py\", line 26, in control\n ....testConnection()\n ..File \"inventory.py\", line 22, in testConnection\n ....sql = MySQLdb.connect(user=dbInfo[username], passwd=dbInfo[password], \\\n NameError: global name 'MySQLdb' is not defined \n\n1) where am I going wrong?\n2) any other gotcha's that you folks see?\n3) any advice on how to check for a valid connection to the database, (not just the server)?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":7873,"Q_Id":5609322,"Users Score":1,"Answer":"from MySQLdb import * and import MySQLdb do very different things.","Q_Score":0,"Tags":"python,mysql,programming-languages,network-programming","A_Id":5609341,"CreationDate":"2011-04-10T02:12:00.000","Title":"python2.6 with MySQLdb, NameError 'MySQLdb' not defined","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python 
Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there some module to allow for easy DB provider configuration via connection string, similar to PHP's PDO where I can nicely say \"psql:\/\/\" or \"mysql:\/\/\" or, in this python project, am I just going to have to code some factory classes that use MySQLdb, psycopg2, etc?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":413,"Q_Id":5617246,"Users Score":0,"Answer":"There's something not quite as nice in logilab.database, but which works quite well (http:\/\/www.logilab.org\/project\/logilab-database). Supports sqlite, mysql, postgresql and some versions of mssql, and some abstraction mechanisms on the SQL understood by the different backend engines.","Q_Score":0,"Tags":"python","A_Id":5617901,"CreationDate":"2011-04-11T05:49:00.000","Title":"python and DB connection abstraction?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python hangs on \nlxml.etree.XMLSchema(tree)\nwhen I use it on apache server + mod_wsgi (Windows)\nWhen I use Django dev server - all works fine\nif you know about other nice XML validation solution against XSD, tell me pls\nUpdate:\nI'm using soaplib, which uses lxml\n\nlogger.debug(\"building schema...\")\nself.schema = etree.XMLSchema(etree.parse(f))\n\nlogger.debug(\"schema %r built, cleaning up...\" % self.schema)\n\nI see \"building schema...\" in apache logs, but I don't see \"schema %r built, cleaning up...\"\nUpdate 2:\nI built lxml 2.3 with MSVS 2010 visual C++; afterwards it crashes on this line self.schema = etree.XMLSchema(etree.parse(f)) with Unhandled exception at 0x7c919af2 in httpd.exe: 0xC0000005: Access violation writing location 0x00000010.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1123,"Q_Id":5617599,"Users Score":1,"Answer":"I had a similar problem on a Linux system. 
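For the connection-string abstraction question above, one widely used alternative to hand-rolled factory classes (and to the logilab.database suggestion) is SQLAlchemy's create_engine(), which picks the driver from the URL scheme; credentials here are placeholders:

from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pw@localhost/mydb")   # or "mysql://...", "sqlite:///file.db"
conn = engine.connect()
rows = conn.execute("SELECT 1").fetchall()
conn.close()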
Try installing a more recent version of libxml2 and reinstalling lxml, at least that's what did it for me.","Q_Score":5,"Tags":"python,apache,mod-wsgi,lxml,xml-validation","A_Id":6176299,"CreationDate":"2011-04-11T06:34:00.000","Title":"Python hangs on lxml.etree.XMLSchema(tree) with apache + mod_wsgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Python hangs on \nlxml.etree.XMLSchema(tree)\nwhen I use it on apache server + mod_wsgi (Windows)\nWhen I use Django dev server - all works fine\nif you know about other nice XML validation solution against XSD, tell me pls\nUpdate:\nI'm using soaplib, which uses lxml\n\nlogger.debug(\"building schema...\")\nself.schema = etree.XMLSchema(etree.parse(f))\n\nlogger.debug(\"schema %r built, cleaning up...\" % self.schema)\n\nI see \"building schema...\" in apache logs, but I don't see \"schema %r built, cleaning up...\"\nUpdate 2:\nI built lxml 2.3 with MSVS 2010 visual C++; afterwards it crashes on this line self.schema = etree.XMLSchema(etree.parse(f)) with Unhandled exception at 0x7c919af2 in httpd.exe: 0xC0000005: Access violation writing location 0x00000010.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1123,"Q_Id":5617599,"Users Score":2,"Answer":"I had the same problem (lxml 2.2.6, mod_wsgi 3.2). A work around for this is to pass a file or filename to the constructor: XMLSchema(file=).","Q_Score":5,"Tags":"python,apache,mod-wsgi,lxml,xml-validation","A_Id":6685198,"CreationDate":"2011-04-11T06:34:00.000","Title":"Python hangs on lxml.etree.XMLSchema(tree) with apache + mod_wsgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a sentence like the cat sat on the mat stored as a single sql field. I want to periodically search for keywords which are not not in a stop list, in this case cat sat mat What's the best way to store them in an SQL table for quick searching?\nAs far as I can see it I see the following options\n\nUp to [n] additional columns per row, one for each word.\nStore all of the interesting words in a single, comma separated field.\nA new table, linked to the first with either of the above options. \nDo nothing and search for a match each time I have a new word to search on. \n\nWhich is best practice and which is fastest for searching for word matches? I'm using sqlite in python if that makes a difference.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":290,"Q_Id":5627140,"Users Score":1,"Answer":"I do something similar with SQLite too. In my experience it's not as fast as other db's in this type of situation so it pays to make your schema as simple as possible.\n\nUp to [n] additional columns per row, one for each word.\nStore all of the interesting words in a single, comma separated field.\nA new table, linked to the first with either of the above options.\nDo nothing and search for a match each time I have a new word to search on.\n\nOf your 4 options, 2) and 4) may be too slow if you're looking to scale and matching using LIKE. Matching using full text is faster though, so that's worth looking into. 
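For the lxml XMLSchema hang above, a minimal sketch of the file= workaround from the second answer; the .xsd and .xml paths are hypothetical:

from lxml import etree

schema = etree.XMLSchema(file="service.xsd")   # pass a file/filename instead of a parsed tree
doc = etree.parse("request.xml")
if not schema.validate(doc):
    print(schema.error_log)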
1) looks to be bad database design, what if there's more words than columns ? And if there's less, it's just wasted space. 3) is best IMO, if you make the words the primary key in their own table the searching speed should be acceptably fast.","Q_Score":1,"Tags":"python,sql,sqlite","A_Id":5627582,"CreationDate":"2011-04-11T20:26:00.000","Title":"Storing interesting words from a sentence","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a sentence like the cat sat on the mat stored as a single sql field. I want to periodically search for keywords which are not not in a stop list, in this case cat sat mat What's the best way to store them in an SQL table for quick searching?\nAs far as I can see it I see the following options\n\nUp to [n] additional columns per row, one for each word.\nStore all of the interesting words in a single, comma separated field.\nA new table, linked to the first with either of the above options. \nDo nothing and search for a match each time I have a new word to search on. \n\nWhich is best practice and which is fastest for searching for word matches? I'm using sqlite in python if that makes a difference.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":290,"Q_Id":5627140,"Users Score":1,"Answer":"I would suggest giving your sentences a key, likely IDENTITY. I would then create a second table linking to your sentence table, with a row for each interesting word. \nIf you'd like to search for say, words starting with ca- if you stored these words in a comma delimited you'd have to wildcard the start and end, whereas if they are each in a separate row you can bypass the beginning wildcard.\nAlso, assuming you find a match, in a comma separated list you'd have to parse out which word is actually a hit. With the second table you simply return the word itself. Not to mention the fact that storing multiple values in one field a major no-no in a relational database.","Q_Score":1,"Tags":"python,sql,sqlite","A_Id":5627243,"CreationDate":"2011-04-11T20:26:00.000","Title":"Storing interesting words from a sentence","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Beginner question- what is the difference between sqlite and sqlalchemy?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":24446,"Q_Id":5632677,"Users Score":66,"Answer":"They're apples and oranges.\nSqlite is a database storage engine, which can be better compared with things such as MySQL, PostgreSQL, Oracle, MSSQL, etc. It is used to store and retrieve structured data from files.\nSQLAlchemy is a Python library that provides an object relational mapper (ORM). It does what it suggests: it maps your databases (tables, etc.) to Python objects, so that you can more easily and natively interact with them. 
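For the keyword-storage question above, a minimal sqlite sketch of the accepted two-table layout (one row per sentence, one row per interesting word):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sentence (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE keyword  (word TEXT, sentence_id INTEGER REFERENCES sentence(id));
    CREATE INDEX idx_keyword_word ON keyword(word);
""")
# a prefix search needs no leading wildcard, so the index on word can be used
hits = conn.execute(
    "SELECT DISTINCT sentence_id FROM keyword WHERE word LIKE 'ca%'").fetchall()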
SQLAlchemy can be used with sqlite, MySQL, PostgreSQL, etc.\nSo, an ORM provides a set of tools that let you interact with your database models consistently across database engines.","Q_Score":37,"Tags":"python,sqlite,sqlalchemy","A_Id":5632745,"CreationDate":"2011-04-12T08:54:00.000","Title":"What is the difference between sqlite3 and sqlalchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a really large excel file and i need to delete about 20,000 rows, contingent on meeting a simple condition and excel won't let me delete such a complex range when using a filter. The condition is:\nIf the first column contains the value, X, then I need to be able to delete the entire row.\nI'm trying to automate this using python and xlwt, but am not quite sure where to start. Seeking some code snippits to get me started...\nGrateful for any help that's out there!","AnswerCount":6,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51378,"Q_Id":5635054,"Users Score":12,"Answer":"Don't delete. Just copy what you need.\n\nread the original file\nopen a new file\niterate over rows of the original file (if the first column of the row does not contain the value X, add this row to the new file)\nclose both files\nrename the new file into the original file","Q_Score":5,"Tags":"python,excel,xlwt","A_Id":5635203,"CreationDate":"2011-04-12T12:19:00.000","Title":"Python to delete a row in excel spreadsheet","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After much study and investigation, I've decided to do my Python development with pyQT4 using Eric5 as the editor. However, I've run into a brick wall with trying to get MySQL to work. It appears that there's an issue with the QMySQL driver. From the discussions that I've seen so far, the only fix is to install the pyQT SDK and then recompile the MySQL driver. A painful process that I really don't want to have to go through. I would actually prefer to use MS SQL but I'm not finding any drivers for pyQT with MSSQL support.\nSo, my question is: What is the best approach for using pyQT with either mySQL or MSSQL, that actually works?\nWhile waiting for an answer, I might just tinker with SQLAlchemy and mySQL.Connector to see if it will co-exist with pyQT.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":6327,"Q_Id":5642537,"Users Score":2,"Answer":"yes that will work, I do the same thing. I like a programming API, like what SQLAlchemy provides over the Raw SQL version of Qt's QtSql module. It works fine and nice, just populate a subclassed QAbstractTableModel with data from your sqlalchemy queries, like you would with data from any other python object. This though means you're handling caching and database queries, losing the niceness of QtSqlTableModel. 
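For the Excel row-deletion question above, a hedged sketch of the copy-don't-delete approach from the accepted answer, assuming xlrd is available for reading alongside xlwt; file names are placeholders:

import xlrd, xlwt

book = xlrd.open_workbook("big.xls")
sheet = book.sheet_by_index(0)
out_book = xlwt.Workbook()
out_sheet = out_book.add_sheet("filtered")
out_row = 0
for r in range(sheet.nrows):
    row = sheet.row_values(r)
    if row and row[0] == "X":      # the condition from the question: drop these rows
        continue
    for c, value in enumerate(row):
        out_sheet.write(out_row, c, value)
    out_row += 1
out_book.save("big_filtered.xls")  # then rename over the original if desired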
But shouldn't be too bad.","Q_Score":1,"Tags":"python,mysql,sql-server,pyqt","A_Id":5643057,"CreationDate":"2011-04-12T22:50:00.000","Title":"pyQT and MySQL or MSSQL Connectivity","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am very new to python and Django, was actually thrown in to finish off some coding for my company since our coder left for overseas.\nWhen I run python manage.py syncdb I receive the following error\npsycopg2.OperationalError: FATAL: password authentication failed for user \"winepad\"\nI'm not sure why I am being prompted for user \"winepad\" as I've created no such user by that name, I am running the sync from a folder named winepad. In my pg_hba.conf file all I have is a postgres account which I altered with a new password.\nAny help would be greatly appreciated as the instructions I left are causing me some issues.\nThank you in advance","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3164,"Q_Id":5643201,"Users Score":1,"Answer":"Check your settings.py file. The most likely reason for this issue is that the username for the database is set to \"winepad\". Change that to the appropriate value and rerun python manage.py syncdb That should fix the issue.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":5643247,"CreationDate":"2011-04-13T00:37:00.000","Title":"python manage.py syncdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have just begun learning Python. Eventually I will learn Django, as my goal is to able to do web development (video sharing\/social networking). At which point should I begin learning MySQL? Do I need to know it before I even begin Django? If so, how much should I look to know before diving into Django? Thank you.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1826,"Q_Id":5643400,"Users Score":0,"Answer":"Django uses its own ORM, so I guess it's not completely necessary to learn MySQL first, but I suspect it would help a fair bit to know what's going on behind the scenes, and it will help you think in the correct way to formulate your queries.\nI would start learning MySQL (or any other SQL), after you've got a pretty good grip on Python, but probably before you start learning Django, or perhaps alongside. You won't need a thorough understanding of SQL. At least, not to get started.\n\nErr... ORM\/Object Relational Mapper, it hides\/abstracts the complexities of SQL and lets you access your data through the simple objects\/models you define in Python. For example, you might have a \"Person\" model with Name, Age, etc. That Name and Age could be stored and retrieved from the database transparently just be accessing the object, without having to write any SQL. (Just a simple .save() and .get())","Q_Score":1,"Tags":"python,mysql,django,new-operator","A_Id":5643494,"CreationDate":"2011-04-13T01:16:00.000","Title":"Beginning MySQL\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have just begun learning Python. 
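For the manage.py syncdb question above, a minimal sketch of the settings.py block the answer points at (Django 1.2+ style; every value is a placeholder to be replaced with the account you actually created):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'winepad_db',
        'USER': 'postgres',        # not 'winepad' unless that role really exists
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '',
    }
}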
Eventually I will learn Django, as my goal is to able to do web development (video sharing\/social networking). At which point should I begin learning MySQL? Do I need to know it before I even begin Django? If so, how much should I look to know before diving into Django? Thank you.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1826,"Q_Id":5643400,"Users Score":0,"Answer":"As Django documents somehow Recommends, It is better to learning PostgreSQL.\nPostgreSQL is working pretty with Django, I never had any problem with Django\/PostgreSQL.\nI all know is sometimes i have weird error when working with MySQL.","Q_Score":1,"Tags":"python,mysql,django,new-operator","A_Id":5654701,"CreationDate":"2011-04-13T01:16:00.000","Title":"Beginning MySQL\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to Python and having some rudimentary problems getting MySQLdb up and running. I'm hoping somebody out there can help me.\nWhen I first tried to install the module using setup.py, the setup terminated because it was unable to find mysql_config. This is because I didn't realize the module expected MySQL to be installed on the local machine. I am only trying to connect to a remote MySQL server.\nMy question is twofold:\n1) How should I use MySQLdb on a machine that doesn't have MySQL installed, only to connect to a remote server?\n2) How can I roll back what appears to be a corrupt installation of MySQLdb? Whenever I try to import MySQLdb from a script, I get the error \"no module named _mysql\", which according to the documentation, indicates a faulty install.\nBTW: I am on a Mac running Snow Leopard\/Python 2.6.1\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":5644374,"Users Score":0,"Answer":"Install the MySQL client libraries.\nInstall the MySQL client library development files, and build again.","Q_Score":0,"Tags":"python,mysql,python-module,mysql-python,setup.py","A_Id":5644390,"CreationDate":"2011-04-13T04:23:00.000","Title":"Help with MySQLdb module: corrupted installation and connecting to remote servers","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am facing a problem where I am trying to add data from a python script to mysql database with InnonDB engine, it works fine with myisam engine of the mysql database. But the problem with the myisam engine is that it doesn't support foreign keys so I'll have to add extra code each place where I want to insert\/delete records in database. \nDoes anyone know why InnonDB doesn't work with python scripts and possible solutions for this problem ??","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1182,"Q_Id":5654107,"Users Score":6,"Answer":"InnoDB is transactional. 
You need to call connection.commit() after inserts\/deletes\/updates.\nEdit: you can call connection.autocommit(True) to turn on autocommit.","Q_Score":3,"Tags":"python,mysql,innodb,myisam","A_Id":5654733,"CreationDate":"2011-04-13T18:55:00.000","Title":"Problem in insertion from python script in mysql database with innondb engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a query set of approximately 1500 records from a Django ORM query. I have used the select_related() and only() methods to make sure the query is tight. I have also used connection.queries to make sure there is only this one query. That is, I have made sure no extra queries are getting called on each iteration. \nWhen I run the query cut and paste from connection.queries it runs in 0.02 seconds. However, it takes seven seconds to iterate over those records and do nothing with them (pass).\nWhat can I do to speed this up? What causes this slowness?","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":4372,"Q_Id":5656238,"Users Score":3,"Answer":"1500 records is far from being a large dataset, and seven seconds is really too much. There is probably some problem in your models, you can easily check it by getting (as Brandon says) the values() query, and then create explicitly the 1500 object by iterating the dictionary. Just convert the ValuesQuerySet into a list before the construction to factor out the db connection.","Q_Score":7,"Tags":"python,django","A_Id":5656734,"CreationDate":"2011-04-13T22:03:00.000","Title":"How do I speed up iteration of large datasets in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a query set of approximately 1500 records from a Django ORM query. I have used the select_related() and only() methods to make sure the query is tight. I have also used connection.queries to make sure there is only this one query. That is, I have made sure no extra queries are getting called on each iteration. \nWhen I run the query cut and paste from connection.queries it runs in 0.02 seconds. However, it takes seven seconds to iterate over those records and do nothing with them (pass).\nWhat can I do to speed this up? What causes this slowness?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":4372,"Q_Id":5656238,"Users Score":1,"Answer":"Does your model's Meta declaration tell it to \"order by\" a field that is stored off in some other related table? If so, your attempt to iterate might be triggering 1,500 queries as Django runs off and grabs that field for each item, and then sorts them. Showing us your code would help us unravel the problem!","Q_Score":7,"Tags":"python,django","A_Id":5657066,"CreationDate":"2011-04-13T22:03:00.000","Title":"How do I speed up iteration of large datasets in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a group of related companies that share items they own with one-another. 
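For the InnoDB insert question above, a minimal sketch of the commit()/autocommit() point from the accepted answer; credentials and table are placeholders:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="mydb")
cur = conn.cursor()
cur.execute("INSERT INTO child (parent_id, name) VALUES (%s, %s)", (1, "foo"))
conn.commit()            # InnoDB is transactional: without this the insert is rolled back
# alternatively, right after connecting: conn.autocommit(True)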
Each item has a company that owns it and a company that has possession of it. Obviously, the company that owns the item can also have possession of it. Also, companies sometimes permanently transfer ownership of items instead of just lending it, so I have to allow for that as well.\nI'm trying to decide how to model ownership and possession of the items. I have a Company table and an Item table.\nHere are the options as I see them:\n\nInventory table with entries for each Item - Company relationship. Has a company field pointing to a Company and has Boolean fields is_owner and has_possession.\nInventory table with entries for each Item. Has an owner_company field and a possessing_company field that each point to a Company.\nTwo separate tables: ItemOwner and ItemHolder**.\n\nSo far I'm leaning towards option three, but the tables are so similar it feels like duplication. Option two would have only one row per item (cleaner than option one in this regard), but having two fields on one table that both reference the Company table doesn't smell right (and it's messy to draw in an ER diagram!).\nDatabase design is not my specialty (I've mostly used non-relational databases), so I don't know what the best practice would be in this situation. Additionally, I'm brand new to Python and Django, so there might be an obvious idiom or pattern I'm missing out on.\nWhat is the best way to model this without Company and Item being polluted by knowledge of ownership and possession? Or am I missing the point by wanting to keep my models so segregated? What is the Pythonic way?\nUpdate\nI've realized I'm focusing too much on database design. Would it be wise to just write good OO code and let Django's ORM do it's thing?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":198,"Q_Id":5656345,"Users Score":0,"Answer":"Option #1 is probably the cleanest choice. An Item has only one owner company and is possessed by only one possessing company. \nPut two FK to Company in Item, and remember to explicitly define the related_name of the two inverses to be different each other. \nAs you want to avoid touching the Item model, either add the FKs from outside, like in field.contribute_to_class(), or put a new model with a one-to-one rel to Item, plus the foreign keys. \nThe second method is easier to implement but the first will be more natural to use once implemented.","Q_Score":2,"Tags":"django,design-patterns,database-design,django-models,python","A_Id":5656695,"CreationDate":"2011-04-13T22:17:00.000","Title":"How to model lending items between a group of companies","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on a realtime data website that has a data-mining backend side to it. I am highly experienced in both Python and C++\/C#, and wondering which one would be preferable for the backend development.\nI am strongly leaning towards Python for its available libraries and ease of use. But am I wrong? If so, why?\nAs I side question, would you recommend using SQLAlchemy? Are there any drawback to it (performance is crucial) compared to _mysql or MySQLdb? 
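For the ownership/possession modelling question above, a minimal Django sketch of option #1 from the answer (two foreign keys to Company, each with its own related_name so the reverse accessors do not clash):

from django.db import models

class Company(models.Model):
    name = models.CharField(max_length=100)

class Item(models.Model):
    name = models.CharField(max_length=100)
    owner = models.ForeignKey(Company, related_name="owned_items")
    holder = models.ForeignKey(Company, related_name="held_items")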
\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1302,"Q_Id":5658529,"Users Score":1,"Answer":"We do backend development based on Zope, Python and other Python-related stuff since almost 15 years. Python gives you great flexibility and all-batteries included (likely true for C#, not sure about C++).\nIf you do RDBMS development with Python: SQLAlchemy is the way to go. It provides a huge functionality and saved my a** over the last years a couple of times...Sqlalchemy can be complex and complicated but the advantages is that you can hide a complex database schema behind an OO facade..very handy like any ORM in general. \n_mysql vs MySQLdb...I only know of the python-mysql package.","Q_Score":1,"Tags":"c#,c++,python,backend","A_Id":5658631,"CreationDate":"2011-04-14T04:27:00.000","Title":"Designing a Website Backend - Python or C++\/C#?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to insert a query that contains \u00e9 - or \\xe9 (INSERT INTO tbl1 (text) VALUES (\"fianc\u00e9\")) into a MySQL table in Python using the _mysql module.\nMy query is in unicode, and when I call _mysql.connect(...).query(query) I get a UnicodeEncodeError: 'ascii' codec can't encode character u'\\xe9' in position X\n: ordinal not in range(128).\nObviously the call to query causes a conversion of the unicode string to ASCII somehow, but the question is why? My DB is in utf8 and the connection is opened with the flags use_unicode=True and charset='utf8'. Is unicode simply not supported with _mysql or MySQLdb? Am I missing something else? \nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":522,"Q_Id":5658737,"Users Score":0,"Answer":"I know this doesn't directly answer your question, but why aren't you using prepared statements? That will do two things: probably fix your problem, and almost certainly fix the SQLi bug you've almost certainly got.\nIf you won't do that, are you absolutely certain your string itself is unicode? If you're just naively using strings in python 2.7, it probably is being forced into an ASCII string.","Q_Score":0,"Tags":"python,mysql,unicode","A_Id":5658972,"CreationDate":"2011-04-14T04:55:00.000","Title":"Python MySQL Unicode Error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. What is the standard practice for getting and closing cursors? In particular, how long should my cursors last? Should I get a new cursor for each transaction?\nI believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?","AnswerCount":5,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":99138,"Q_Id":5669878,"Users Score":-6,"Answer":"I suggest to do it like php and mysql. 
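For the unicode INSERT question above, a minimal sketch of the parameterized-query suggestion, assuming MySQLdb with a utf8 connection; credentials are placeholders:

# -*- coding: utf-8 -*-
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pw",
                       db="mydb", charset="utf8", use_unicode=True)
cur = conn.cursor()
cur.execute("INSERT INTO tbl1 (text) VALUES (%s)", (u"fiancé",))   # driver handles encoding/escaping
conn.commit()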
Start i at the beginning of your code before printing of the first data. So if you get a connect error you can display a 50x(Don't remember what internal error is) error message. And keep it open for the whole session and close it when you know you wont need it anymore.","Q_Score":95,"Tags":"python,mysql,mysql-python","A_Id":5670056,"CreationDate":"2011-04-14T21:23:00.000","Title":"When to close cursors using MySQLdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am now working on a big backend system for a real-time and history tracking web service.\nI am highly experienced in Python and intend to use it with sqlalchemy (MySQL) to develop the backend.\nI don't have any major experience developing robust and sustainable backend systems and I was wondering if you guys could point me out to some documentation \/ books about backend design patterns? I basically need to feed data to a database by querying different services (over HTML \/ SOAP \/ JSON) at realtime, and to keep history of that data. \nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4070,"Q_Id":5670639,"Users Score":0,"Answer":"Use Apache, Django and Piston.\nUse REST as the protocol.\nWrite as little code as possible. \nDjango models, forms, and admin interface.\nPiston wrapppers for your resources.","Q_Score":0,"Tags":"python,backend","A_Id":5671966,"CreationDate":"2011-04-14T22:55:00.000","Title":"Python Backend Design Patterns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to generate compound charts (e.g: Bar+line) from my database using python.\nHow can i do this ?\nThanks in Advance","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":277,"Q_Id":5693151,"Users Score":1,"Answer":"Pretty easy to do with pygooglechart - \nYou can basically follow the bar chart examples that ship with the software and then use the add_data_line method to make the lines on top of the bar chart","Q_Score":0,"Tags":"python,charts","A_Id":6272840,"CreationDate":"2011-04-17T11:09:00.000","Title":"Compoud charts with python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running django site with MySQL as DB back-end.\nFinally i've got 3 millions rows in django_session table. Most of them are expired, thus i want to remove them.\nBut if i manually run delete from django_session where expire_date < \"2011-04-18\" whole site seems to be hanged - it cannot be accessed via browser.\nWhy such kind of blocking is possible? How to avoid it?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":513,"Q_Id":5703308,"Users Score":1,"Answer":"I am not MySQL expert, but I guess MySQL locks the table for the deleting and this might be MySQL transaction\/backend related. When deleting is in progress MySQL blocks the access to the table from other connections. MyISAM and InnoDB backend behavior might differ. 
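For the cursor-closing question above, one common pattern (a judgment call, not taken from the answer shown) is short-lived cursors closed as soon as their statements are done, followed by a commit on the connection:

import MySQLdb
from contextlib import closing

conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="mydb")  # placeholders
with closing(conn.cursor()) as cur:
    cur.execute("UPDATE account SET balance = balance - %s WHERE id = %s", (10, 1))
    cur.execute("UPDATE account SET balance = balance + %s WHERE id = %s", (10, 2))
conn.commit()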
I suggest you study MySQL manual related to this: the problem is not limited to Django domain, but generally how to delete MySQL rows without blocking access to the table. \nFor the future reference I suggest you set-up a session cleaner task which will clear the sessions, let's say once in a day, from cron so that you don't end up with such huge table.","Q_Score":1,"Tags":"python,mysql,django","A_Id":5703375,"CreationDate":"2011-04-18T13:03:00.000","Title":"MySQL&django hangs on huge session delete","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm running django site with MySQL as DB back-end.\nFinally i've got 3 millions rows in django_session table. Most of them are expired, thus i want to remove them.\nBut if i manually run delete from django_session where expire_date < \"2011-04-18\" whole site seems to be hanged - it cannot be accessed via browser.\nWhy such kind of blocking is possible? How to avoid it?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":513,"Q_Id":5703308,"Users Score":5,"Answer":"If your table is MyISAM, DELETE operations lock the table and it is not accessible by the concurrent queries.\nIf there are many records to delete, the table is locked for too long.\nSplit your DELETE statement into several shorter batches.","Q_Score":1,"Tags":"python,mysql,django","A_Id":5703378,"CreationDate":"2011-04-18T13:03:00.000","Title":"MySQL&django hangs on huge session delete","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have had a virtualenv for Trunk up and running for a while, but now I am trying to branch, and get things setup on another virtualenv for my 'refactor' branch.\nEverything looks to be setup correctly, but when I try to run any manage.py commands, I get this error:\n_mysql_exceptions.OperationalError: (1045, \"Access denied for user 'brian'@'localhost' (using password: NO)\")\nI just don't understand why it's not attempting to use the password I have set in my django settings file. Is there some addition mysql setup I could have overlooked? Does this issue ring any bells for anyone?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1450,"Q_Id":5726440,"Users Score":1,"Answer":"I found the problem I was having. \nDjango was importing a different settings.py file.\nI had another django project inside my django product like myproject\/myproject\/. \nInstead of importing myproject\/settings.py, it was importing myproject\/myproject\/settings.py\nI assume that Aptana Studio created that project there. 
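For the django_session delete above, a hedged sketch of the batched DELETE the accepted answer recommends, so the table is never locked for long; credentials are placeholders:

import time
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="mydjango")
cur = conn.cursor()
while True:
    deleted = cur.execute(
        "DELETE FROM django_session WHERE expire_date < %s LIMIT 10000", ("2011-04-18",))
    conn.commit()
    if deleted == 0:
        break
    time.sleep(1)   # give concurrent queries a chance between batches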
If you use eclipse you are also likely to have this problem.","Q_Score":1,"Tags":"python,mysql,django,mysql-error-1045","A_Id":8742834,"CreationDate":"2011-04-20T06:39:00.000","Title":"Can't access MySQL database in Django VirtualEnv on localhost","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In Python, is there a way to get notified that a specific table in a MySQL database has changed?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":30571,"Q_Id":5771925,"Users Score":1,"Answer":"Not possible with standard SQL functionality.","Q_Score":10,"Tags":"python,mysql","A_Id":5771943,"CreationDate":"2011-04-24T17:03:00.000","Title":"python: how to get notifications for mysql database changes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Python, is there a way to get notified that a specific table in a MySQL database has changed?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":30571,"Q_Id":5771925,"Users Score":10,"Answer":"It's theoretically possible but I wouldn't recommend it:\nEssentially you have a trigger on the the table the calls a UDF which communicates with your Python app in some way.\nPitfalls include what happens if there's an error? \nWhat if it blocks? Anything that happens inside a trigger should ideally be near-instant.\nWhat if it's inside a transaction that gets rolled back?\nI'm sure there are many other problems that I haven't thought of as well.\nA better way if possible is to have your data access layer notify the rest of your app. If you're looking for when a program outside your control modifies the database, then you may be out of luck.\nAnother way that's less ideal but imo better than calling an another program from within a trigger is to set some kind of \"LastModified\" table that gets updated by triggers with triggers. Then in your app just check whether that datetime is greater than when you last checked.","Q_Score":10,"Tags":"python,mysql","A_Id":5771988,"CreationDate":"2011-04-24T17:03:00.000","Title":"python: how to get notifications for mysql database changes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a database based django application and I have installed apache, python and django using macport on a snow leopard machine. I ran into issues installing MySQL with macport. But I was able to successfully install a standalone MySQL server (from MySQL.com). Is it possible to remove the MysQL package installed along with py26-MySQL?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":182,"Q_Id":5782875,"Users Score":2,"Answer":"To use py26-mysql you don't need the entire server distribution for MySQL. You do need the client libs, at the very least. 
If you remove the server, you need to make sure you re-install the base libraries needed by the Python module to function.","Q_Score":0,"Tags":"python,mysql,macports","A_Id":5782960,"CreationDate":"2011-04-25T20:26:00.000","Title":"Is it possible to install py26-mysql without installing mysql5 package?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have some database structure; as most of it is irrelevant for us, i'll describe just some relevant pieces. Let's lake Item object as example:\nitems_table = Table(\"invtypes\", gdata_meta,\n Column(\"typeID\", Integer, primary_key = True),\n Column(\"typeName\", String, index=True),\n Column(\"marketGroupID\", Integer, ForeignKey(\"invmarketgroups.marketGroupID\")),\n Column(\"groupID\", Integer, ForeignKey(\"invgroups.groupID\"), index=True))\n\nmapper(Item, items_table,\n properties = {\"group\" : relation(Group, backref = \"items\"),\n \"_Item__attributes\" : relation(Attribute, collection_class = attribute_mapped_collection('name')),\n \"effects\" : relation(Effect, collection_class = attribute_mapped_collection('name')),\n \"metaGroup\" : relation(MetaType,\n primaryjoin = metatypes_table.c.typeID == items_table.c.typeID,\n uselist = False),\n \"ID\" : synonym(\"typeID\"),\n \"name\" : synonym(\"typeName\")})\nI want to achieve some performance improvements in the sqlalchemy\/database layer, and have couple of ideas:\n1) Requesting the same item twice:\nitem = session.query(Item).get(11184)\nitem = None (reference to item is lost, object is garbage collected)\nitem = session.query(Item).get(11184)\nEach request generates and issues SQL query. To avoid it, i use 2 custom maps for an item object:\nitemMapId = {}\nitemMapName = {}\n\n@cachedQuery(1, \"lookfor\")\ndef getItem(lookfor, eager=None):\n if isinstance(lookfor, (int, float)):\n id = int(lookfor)\n if eager is None and id in itemMapId:\n item = itemMapId[id]\n else:\n item = session.query(Item).options(*processEager(eager)).get(id)\n itemMapId[item.ID] = item\n itemMapName[item.name] = item\n elif isinstance(lookfor, basestring):\n if eager is None and lookfor in itemMapName:\n item = itemMapName[lookfor]\n else:\n # Items have unique names, so we can fetch just first result w\/o ensuring its uniqueness\n item = session.query(Item).options(*processEager(eager)).filter(Item.name == lookfor).first()\n itemMapId[item.ID] = item\n itemMapName[item.name] = item\n return item\nI believe sqlalchemy does similar object tracking, at least by primary key (item.ID). If it does, i can wipe both maps (although wiping name map will require minor modifications to application which uses these queries) to not duplicate functionality and use stock methods. Actual question is: if there's such functionality in sqlalchemy, how to access it?\n2) Eager loading of relationships often helps to save alot of requests to database. Say, i'll definitely need following set of item=Item() properties:\nitem.group (Group object, according to groupID of our item)\nitem.group.items (fetch all items from items list of our group)\nitem.group.items.metaGroup (metaGroup object\/relation for every item in the list)\nIf i have some item ID and no item is loaded yet, i can request it from the database, eagerly loading everything i need: sqlalchemy will join group, its items and corresponding metaGroups within single query. 
If i'd access them with default lazy loading, sqlalchemy would need to issue 1 query to grab an item + 1 to get group + 1*#items for all items in the list + 1*#items to get metaGroup of each item, which is wasteful.\n2.1) But what if i already have Item object fetched, and some of the properties which i want to load are already loaded? As far as i understand, when i re-fetch some object from the database - its already loaded relations do not become unloaded, am i correct?\n2.2) If i have Item object fetched, and want to access its group, i can just getGroup using item.groupID, applying any eager statements i'll need (\"items\" and \"items.metaGroup\"). It should properly load group and its requested relations w\/o touching item stuff. Will sqlalchemy properly map this fetched group to item.group, so that when i access item.group it won't fetch anything from the underlying database?\n2.3) If i have following things fetched from the database: original item, item.group and some portion of the items from the item.group.items list some of which may have metaGroup loaded, what would be best strategy for completing data structure to the same as eager list above: re-fetch group with (\"items\", \"items.metaGroup\") eager load, or check each item from items list individually, and if item or its metaGroup is not loaded - load them? It seems to depend on the situation, because if everything has already been loaded some time ago - issuing such heavy query is pointless. Does sqlalchemy provide a way to track if some object relation is loaded, with the ability to look deeper than just one level?\nAs an illustration to 2.3 - i can fetch group with ID 83, eagerly fetching \"items\" and \"items.metaGroup\". Is there a way to determine from an item (which has groupID of an 83), does it have \"group\", \"group.items\" and \"group.items.metaGroup\" loaded or not, using sqlalchemy tools (in this case all of them should be loaded)?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3854,"Q_Id":5795492,"Users Score":7,"Answer":"To force loading lazy attributes just access them. This the simplest way and it works fine for relations, but is not as efficient for Columns (you will get separate SQL query for each column in the same table). You can get a list of all unloaded properties (both relations and columns) from sqlalchemy.orm.attributes.instance_state(obj).unloaded.\nYou don't use deferred columns in your example, but I'll describe them here for completeness. The typical scenario for handling deferred columns is the following:\n\nDecorate selected columns with deferred(). Combine them into one or several groups by using group parameter to deferred().\nUse undefer() and undefer_group() options in query when desired.\nAccessing deferred column put in group will load all columns in this group.\n\nUnfortunately this doesn't work reverse: you can combine columns into groups without deferring loading of them by default with column_property(Column(\u2026), group=\u2026), but defer() option won't affect them (it works for Columns only, not column properties, at least in 0.6.7).\nTo force loading deferred column properties session.refresh(obj, attribute_names=\u2026) suggested by Nathan Villaescusa is probably the best solution. The only disadvantage I see is that it expires attributes first so you have to insure there is not loaded attributes among passed as attribute_names argument (e.g. by using intersection with state.unloaded).\nUpdate\n1) SQLAlchemy does track loaded objects. 
That's how ORM works: there must be the only object in the session for each identity. Its internal cache is weak by default (use weak_identity_map=False to change this), so the object is expunged from the cache as soon as there in no reference to it in your code. SQLAlchemy won't do SQL request for query.get(pk) when object is already in the session. But this works for get() method only, so query.filter_by(id=pk).first() will do SQL request and refresh object in the session with loaded data.\n2) Eager loading of relations will lead to fewer requests, but it's not always faster. You have to check this for your database and data.\n2.1) Refetching data from database won't unload objects bound via relations.\n2.2) item.group is loaded using query.get() method, so there won't lead to SQL request if object is already in the session.\n2.3) Yes, it depends on situation. For most cases it's the best is to hope SQLAlchemy will use the right strategy :). For already loaded relation you can check if related objects' relations are loaded via state.unloaded and so recursively to any depth. But when relation is not loaded yet you can't get know whether related objects and their relations are already loaded: even when relation is not yet loaded the related object[s] might be already in the session (just imagine you request first item, load its group and then request other item that has the same group). For your particular example I see no problem to just check state.unloaded recursively.","Q_Score":7,"Tags":"python,sqlalchemy,eager-loading","A_Id":5819858,"CreationDate":"2011-04-26T19:35:00.000","Title":"Completing object with its relations and avoiding unnecessary queries in sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We're currently in the process of implementing a CRM-like solution internally for a professional firm. Due to the nature of the information stored, and the varying values and keys for the information we decided to use a document storage database, as it suited the purposes perfectly (In this case we chose MongoDB).\nAs part of this CRM solution we wish to store relationships and associations between entities, examples include storing conflicts of interest information, shareholders, trustees etc. Linking all these entities together in the most effective way we determined a central model of \"relationship\" was necessary. All relationships should have history information attached to them ( commencement and termination dates), as well as varying meta data; for example a shareholder relationship would also contain number of shares held.\nAs traditional RDBMS solutions didn't suit our former needs, using them in our current situation is not viable. What I'm trying to determine is whether using a graph database is more pertinent in our case, or if in fact just using mongo's built-in relational information is appropriate.\nThe relationship information is going to be used quite heavily throughout the system. 
An example of some of the informational queries we wish to perform are:\n\nGet all 'key contact' people of companies who are 'clients' of 'xyz limited'\nGet all other 'shareholders' of companies where 'john' is a shareholder\nGet all 'Key contact' people of entities who are 'clients' of 'abc limited' and are clients of 'trust us bank limited'\n\nGiven this \"tree\" structure of relationships, is using a graph database (such as Neo4j) more appropriate?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":2546,"Q_Id":5817182,"Users Score":1,"Answer":"stay with mongodb. Two reasons - 1. its better to stay in the same domain if you can to reduce complexity and 2. mongodb is excellent for querying and requires less work than redis, for example.","Q_Score":16,"Tags":"python,django,mongodb,redis,neo4j","A_Id":5821550,"CreationDate":"2011-04-28T10:28:00.000","Title":"Using MongoDB as our master database, should I use a separate graph database to implement relationships between entities?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We're currently in the process of implementing a CRM-like solution internally for a professional firm. Due to the nature of the information stored, and the varying values and keys for the information we decided to use a document storage database, as it suited the purposes perfectly (In this case we chose MongoDB).\nAs part of this CRM solution we wish to store relationships and associations between entities, examples include storing conflicts of interest information, shareholders, trustees etc. Linking all these entities together in the most effective way we determined a central model of \"relationship\" was necessary. All relationships should have history information attached to them ( commencement and termination dates), as well as varying meta data; for example a shareholder relationship would also contain number of shares held.\nAs traditional RDBMS solutions didn't suit our former needs, using them in our current situation is not viable. What I'm trying to determine is whether using a graph database is more pertinent in our case, or if in fact just using mongo's built-in relational information is appropriate.\nThe relationship information is going to be used quite heavily throughout the system. An example of some of the informational queries we wish to perform are:\n\nGet all 'key contact' people of companies who are 'clients' of 'xyz limited'\nGet all other 'shareholders' of companies where 'john' is a shareholder\nGet all 'Key contact' people of entities who are 'clients' of 'abc limited' and are clients of 'trust us bank limited'\n\nGiven this \"tree\" structure of relationships, is using a graph database (such as Neo4j) more appropriate?","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":2546,"Q_Id":5817182,"Users Score":6,"Answer":"The documents in MongoDB very much resemble nodes in Neo4j, minus the relationships. They both hold key-value properties. If you've already made the choice to go with MongoDB, then you can use Neo4j to store the relationships and then bridge the stores in your application. If you're choosing new technology, you can go with Neo4j for everything, as the nodes can hold property data just as well as documents can. \nAs for the relationship part, Neo4j is a great fit. 
You have a graph, not unrelated documents. Using a graph database makes perfect sense here, and the sample queries have graph written all over them. \nHonestly though, the best way to find out what works for you is to do a PoC - low cost, high value.\nDisclaimer: I work for Neo Technology.","Q_Score":16,"Tags":"python,django,mongodb,redis,neo4j","A_Id":5836158,"CreationDate":"2011-04-28T10:28:00.000","Title":"Using MongoDB as our master database, should I use a separate graph database to implement relationships between entities?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built a previous program that took client info and stored it in a folder of txt files (impractical much) but now I want to upgrade the program to be more efficient and put the info into a database of some sort...\nHow can I take the info from the text files and add them to the new database without having to manually do each one. I know this is vague but I need more so the method\/logic instead of the exact code, Also if I don't use SQL what is another method for making a db (Not using another commercial Db)\nbtw the txt files are in simple format (name,city,age) all on separate lines for easy iteration","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5085,"Q_Id":5823236,"Users Score":0,"Answer":"The main reason for DB to have a SQL is to make it separate and generic from the application that you are developing.\nTo have your own DB built you need to have a storage mechanism could be files on the hard disk, with search options so that you can access data immediately with keywords that you are interested in. on top of this you have to have a layer that initiates queues, reads them and translates to the lower file read and write functions. you need to have this queue layer because lets say you have 100 applications and all are trying to read and write from the same file at the same time and you can imagine what can happen to the file . there will be access denied , somebody using it, data corrupted etc etc.. so you need to put all these in queue and let this queue layer translate things for you.\nto start with start from different ways of reading\/writing\/sorting of data into the file, and a queue layer. From there you can build applications. \nThe queue layer here is similar to the client that is trying to push the data into the communication port in most of the available databases.","Q_Score":2,"Tags":"python,database","A_Id":15442076,"CreationDate":"2011-04-28T18:21:00.000","Title":"If I want to build a custom database, how could I?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to save my in-memory sqlite database to hard disk?\nIf it is possible, some python code would be awesome.\nThanks in advance.\nEDIT:\nI succeeded this task by using apsw . It works like a charm. Thanks for your contribution.","AnswerCount":7,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":13227,"Q_Id":5831548,"Users Score":6,"Answer":"Yes. 
When you create the connection to the database, replace :memory: with the path where you want to save the DB.\nsqlite uses caches for file based DBs, so this shouldn't be (much) slower.","Q_Score":19,"Tags":"python,sqlite","A_Id":5832180,"CreationDate":"2011-04-29T11:43:00.000","Title":"python save in memory sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to save my in-memory sqlite database to hard disk?\nIf it is possible, some python code would be awesome.\nThanks in advance.\nEDIT:\nI succeeded this task by using apsw . It works like a charm. Thanks for your contribution.","AnswerCount":7,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":13227,"Q_Id":5831548,"Users Score":13,"Answer":"(Disclosure: I am the APSW author)\nThe only safe way to make a binary copy of a database is to use the backup API that is part of SQLite and is exposed by APSW. This does the right thing with ordering, locking and concurrency.\nTo make a SQL (text) copy of the a database then use the APSW shell which includes a .dump implementation that is very complete. You can use cursor.execute() to turn the SQL back into a database.\nOn recent platforms you are unlikely to see much of a difference between a memory database and a disk one (assuming you turned journaling off for the disk) as the operating system maintains a file system cache. Older operating systems like Windows XP did have a default configuration of only using 10MB of memory for file cache no matter how much RAM you have.","Q_Score":19,"Tags":"python,sqlite","A_Id":5925061,"CreationDate":"2011-04-29T11:43:00.000","Title":"python save in memory sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to save my in-memory sqlite database to hard disk?\nIf it is possible, some python code would be awesome.\nThanks in advance.\nEDIT:\nI succeeded this task by using apsw . It works like a charm. Thanks for your contribution.","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":13227,"Q_Id":5831548,"Users Score":1,"Answer":"Open a disk based database and just copy everything from one to the other.","Q_Score":19,"Tags":"python,sqlite","A_Id":5831644,"CreationDate":"2011-04-29T11:43:00.000","Title":"python save in memory sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In MySQL, I have two different databases -- let's call them A and B.\nDatabase A resides on server server1, while database B resides on server server2.\nBoth servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).\nIn such a case, is it possible to perform a join between a table that is in database A, to a table that is in database B?\nIf so, how do I go about it, programatically, in python? 
(I am using python's MySQLDB to separately interact with each one of the databases).","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":32061,"Q_Id":5832787,"Users Score":4,"Answer":"It is very simple - select data from one server, select data from another server and aggregate using Python. If you would like to have SQL query with JOIN - put result from both servers into separate tables in local SQLite database and write SELECT with JOIN.","Q_Score":29,"Tags":"python,mysql","A_Id":5832825,"CreationDate":"2011-04-29T13:36:00.000","Title":"MySQL -- Joins Between Databases On Different Servers Using Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In MySQL, I have two different databases -- let's call them A and B.\nDatabase A resides on server server1, while database B resides on server server2.\nBoth servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).\nIn such a case, is it possible to perform a join between a table that is in database A, to a table that is in database B?\nIf so, how do I go about it, programatically, in python? (I am using python's MySQLDB to separately interact with each one of the databases).","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":32061,"Q_Id":5832787,"Users Score":3,"Answer":"No. It is not possible to do the join as you would like. But you may be able to sort something out by replicating one of the servers to the other for the individual database. \nOne data set is under the control of one copy of MySQL and the other dataset is under the control of the other copy of MySQL. The query can only be processed by one of the (MySQL) servers.\nIf you create a copy of the second database on the first server or vice versa (the one that gets the fewest updates is best) you can set up replication to keep the copy up to date. You will then be able to run the query as you want.","Q_Score":29,"Tags":"python,mysql","A_Id":5832954,"CreationDate":"2011-04-29T13:36:00.000","Title":"MySQL -- Joins Between Databases On Different Servers Using Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here's what I want to do.\nDevelop a Django project on a development server with a development database. Run the south migrations as necessary when I change the model.\nSave the SQL from each migration, and apply those to the production server when I'm ready to deploy.\nIs such a thing possible with South? (I'd also be curious what others do to get your development database changes on production when working with Django)","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":10745,"Q_Id":5833418,"Users Score":50,"Answer":"You can at least inspect the sql generated by doing manage.py migrate --db-dry-run --verbosity=2. This will not do anything to the database and will show all the sql. 
I would still make a backup though, better safe than sorry.","Q_Score":28,"Tags":"python,database,migration,django-south","A_Id":5897509,"CreationDate":"2011-04-29T14:32:00.000","Title":"Django - South - Is There a way to view the SQL it runs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Here's what I want to do.\nDevelop a Django project on a development server with a development database. Run the south migrations as necessary when I change the model.\nSave the SQL from each migration, and apply those to the production server when I'm ready to deploy.\nIs such a thing possible with South? (I'd also be curious what others do to get your development database changes on production when working with Django)","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":10745,"Q_Id":5833418,"Users Score":2,"Answer":"I'd either do what Lutger suggested (and maybe write a log parser to strip out just the SQL), or I'd run my migration against a test database with logging enabled on the test DB.\nOf course, if you can run it against the test database, you're just a few steps away from validating the migration. If it passes, run it again against production.","Q_Score":28,"Tags":"python,database,migration,django-south","A_Id":5932967,"CreationDate":"2011-04-29T14:32:00.000","Title":"Django - South - Is There a way to view the SQL it runs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In my server process, it looks like this:\nMain backend processes:\n\nProcesses Huge list of files and , record them inside MySQL.\nOn every 500 files done, it writes \"Progress Report\" to a separate file \/var\/run\/progress.log like this \"200\/5000 files done\"\nIt is multi-processed with 4 children, each made sure to run on a separate file.\n\nWeb server process:\n\nRead the output of \/var\/run\/progress.log every 10 seconds via Ajax and report to progress bar.\n\nWhen processing a very large list of files (e.g. over 3 GB archive), the processes lock up after about 2 hours of processing.\nI can't find what is going on. 
Does that mean that \/var\/run\/progress.log caused an I\/O deadlock?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":773,"Q_Id":5848184,"Users Score":0,"Answer":"Quick advice: make sure (like, super sure) that you do close your file.\nALWAYS use a try-except-finally block for this.\nRemember that the contents of a finally block will ALWAYS be executed, which will save you a lot of headaches :)","Q_Score":1,"Tags":"python,linux,performance,io,deadlock","A_Id":12211059,"CreationDate":"2011-05-01T11:55:00.000","Title":"If I open and read the file which is periodically written, can I\/O deadlock occur?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm working with Tornado and MongoDB and I would like to send a confirmation email to the user when he creates an account in my application.\nFor the moment, I use a simple XHTML page with a form and I send information to my MongoDB database using Tornado. I would like to have an intermediate step which sends an email to the user before inserting the data into the database.\nI would like to know how I could send this email and insert the user account only after the user receives the email and confirms his registration.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3734,"Q_Id":5862238,"Users Score":6,"Answer":"I wonder why you would handle registration like that. The usual way to handle registration is:\n\nWrite the user info to the database, but with an 'inactive' label attached to the user.\nSend an email to the user.\nIf the user confirms the registration, then switch the user to 'active'.\n\nIf you don't want to write to the database, you can write to a cache (like memcache, redis), then when the user confirms the registration, you can get the user info from the cache and write it to the database.","Q_Score":5,"Tags":"python,email,tornado","A_Id":7483440,"CreationDate":"2011-05-02T20:41:00.000","Title":"How can I send a user registration confirmation email using Tornado and MongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"The identity map and unit of work patterns are part of the reasons sqlalchemy is much more attractive than django.db. However, I am not sure how the identity map would work, or if it works when an application is configured as wsgi and the orm is accessed directly through api calls, instead of a shared service. I would imagine that apache would create a new thread with its own python instance for each request. Each instance would therefore have its own instance of the sqlalchemy classes and not be able to make use of the identity map. Is this correct?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2298,"Q_Id":5869514,"Users Score":0,"Answer":"So this all depends on how you set up your sqlalchemy connection. Normally what you do is give each wsgi request its own threadlocal session. This session will know about all of its goings-on: items added\/changed\/etc. However, each thread is not aware of the others. 
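As a rough sketch of that pattern (the engine URL and the request handler below are made up for the example), SQLAlchemy's scoped_session gives each thread its own session, and therefore its own identity map, while the engine and mappings are shared:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    # Shared objects, created once at startup; the URL here is hypothetical.
    engine = create_engine("mysql://user:password@localhost/mydb")
    Session = scoped_session(sessionmaker(bind=engine))

    def handle_request():
        session = Session()  # thread-local session for this request
        try:
            # Repeated .get() calls for the same primary key within this
            # session are served from its identity map, not the database.
            session.commit()
        finally:
            Session.remove()  # discard the thread-local session at request end
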
In this way the loading\/preconfiguring of the models and mappings is shared during startup time, however each request can operate independent of the others.","Q_Score":6,"Tags":"python,sqlalchemy,identity-map","A_Id":5869588,"CreationDate":"2011-05-03T12:35:00.000","Title":"sqlalchemy identity map question","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a data model called Game.\nIn the Game model, I have two properties called player1 and player2 which are their names.\nI want to find a player in gamebut I don't know how to buil the query because gql does not support OR clause and then I can't use select * from Game where player1 = 'tom' or player2 = 'tom' statement.\nSo, how can I solve this question?\nDo I have to modify my data model?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":631,"Q_Id":5875881,"Users Score":0,"Answer":"Note that there is no gain of performance in using Drew's schema, because queries in list properties must check for equality against all the elements of the list.","Q_Score":3,"Tags":"python,google-app-engine","A_Id":10265451,"CreationDate":"2011-05-03T21:24:00.000","Title":"Google app engine gql query two properties with same string","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can i use Berkeley DB python classes in mobile phone directly , i mean Do DB python classes and methods are ready to be used in any common mobile phone like Nokia,Samsong (windows mobile)..etc.\nIf a phone system supports python language, does that mean that it is easy and straightforward to use Berkeley DB on it...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":159,"Q_Id":5888854,"Users Score":1,"Answer":"Berkeley DB is a library that needs to be available. What you may have is Python bindings to Berkeley DB. If the library is not present, having Python will not help.\nLook for SQLite, which may be present (it is for iPhone) as it has SQL support and its library size is smaller than Berkeley DB, which makes it better suited for mobile OSes.","Q_Score":0,"Tags":"python,database,mobile,windows-mobile,berkeley-db","A_Id":5888966,"CreationDate":"2011-05-04T19:30:00.000","Title":"Can use Berkeley DB in mobile phone","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.\nThis will be a web app that'll take input from the user and go through a black box and output the results as an access db. 
The web app will be built on a linux server.\nI have a few related questions:\n\nIs there a reliable library or module that can be used?\nWhat has your experience been using Access and python?\nAny tips, tricks, or must avoids I need to know about?\n\nThanks :)","AnswerCount":8,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":3427,"Q_Id":5891359,"Users Score":2,"Answer":"The various answers to the duplicate question suggest that your \"primary goal\" of creating an MS Access database on a linux server is not attainable.\nOf course, such a goal is of itself not worthwhile at all. If you tell us what the users\/consumers of the Access db are expected to do with it, maybe we can help you. Possibilities: (1) create a script and a (set of) file(s) which the user downloads and runs to create an Access DB (2) if it's just for casual user examination\/manipulation, an Excel file may do.","Q_Score":7,"Tags":"python,linux,ms-access","A_Id":5925032,"CreationDate":"2011-05-05T00:26:00.000","Title":"Building an MS Access database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.\nThis will be a web app that'll take input from the user and go through a black box and output the results as an access db. The web app will be built on a linux server.\nI have a few related questions:\n\nIs there a reliable library or module that can be used?\nWhat has your experience been using Access and python?\nAny tips, tricks, or must avoids I need to know about?\n\nThanks :)","AnswerCount":8,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3427,"Q_Id":5891359,"Users Score":0,"Answer":"Could you create a self-extracting file to send to the Windows user who has Microsoft Access installed?\n\nInclude a blank .mdb file.\ndynamically build xml documents with tables, schema\nand data\nInclude an import executable that will take\nall of the xml docs and import into\nthe Access .mdb file.\n\nIt's an extra step for the user, but you get to rely on their existing drivers, software and desktop.","Q_Score":7,"Tags":"python,linux,ms-access","A_Id":5954299,"CreationDate":"2011-05-05T00:26:00.000","Title":"Building an MS Access database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.\nThis will be a web app that'll take input from the user and go through a black box and output the results as an access db. 
The web app will be built on a linux server.\nI have a few related questions:\n\nIs there a reliable library or module that can be used?\nWhat has your experience been using Access and python?\nAny tips, tricks, or must avoids I need to know about?\n\nThanks :)","AnswerCount":8,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":3427,"Q_Id":5891359,"Users Score":2,"Answer":"If you know this well enough:\n\nPython, it's database modules, and ODBC configuration\n\nthen you should know how to do this:\n\nopen a database, read some data, insert it in to a different database\n\nIf so, then you are very close to your required solution. The trick is, you can open an MDB file as an ODBC datasource. Now: I'm not sure if you can \"CREATE TABLES\" with ODBC in an MDB file, so let me propose this recipe:\n\nCreate an MDB file with name \"TARGET.MDB\" -- with the necessary tables, forms, reports, etc. (Put some dummy data in and test that it is what the customer would want.)\nSet up an ODBC datasource to the file \"TARGET.MDB\". Test to make sure you can read\/write.\nRemove all the dummy data -- but leave the table defs intact. Rename the file \"TEMPLATE.MDB\".\nWhen you need to generate a new MDB file: with Python copy TEMPLATE.MDB to TARGET.MDB.\nOpen the datasource to write to TARGET.MDB. Create\/copy required records.\nClose the datasource, rename TARGET.MDB to TODAYS_REPORT.MDB... or whatever makes sense for this particular data export.\n\nWould that work for you?\nIt would almost certainly be easier to do that all on Windows as the support for ODBC will be most widely available. However, I think in principle you could do this on Linux, provided you find the right ODBC components to access MDB via ODBC.","Q_Score":7,"Tags":"python,linux,ms-access","A_Id":5964496,"CreationDate":"2011-05-05T00:26:00.000","Title":"Building an MS Access database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.\nThis will be a web app that'll take input from the user and go through a black box and output the results as an access db. The web app will be built on a linux server.\nI have a few related questions:\n\nIs there a reliable library or module that can be used?\nWhat has your experience been using Access and python?\nAny tips, tricks, or must avoids I need to know about?\n\nThanks :)","AnswerCount":8,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3427,"Q_Id":5891359,"Users Score":0,"Answer":"Well, looks to me like you need a copy of vmware server on the linux box running windows, a web service in the vm to write to access, and communications to it from the main linux box. You aren't going to find a means of creating an access db on Linux. 
Calling it a requirement isn't going to make it technically possible.","Q_Score":7,"Tags":"python,linux,ms-access","A_Id":5972450,"CreationDate":"2011-05-05T00:26:00.000","Title":"Building an MS Access database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with the BeautifulSoup python library.\nI used the urllib2 library to download the HTML code from a page, and then I have parsed it with BeautifulSoup.\nI want to save some of the HTML content into a MySql table, but I'm having some problems with the encoding. The MySql table is encoded with 'utf-8' charset.\nSome examples:\nWhen I download the HTML code and parse it with BeautifulSoup I have something like:\n\"Ver las \\xc3\\xbaltimas noticias. Ent\\xc3\\xa9rate de las noticias de \\xc3\\xbaltima hora con la mejor cobertura con fotos y videos\"\nThe correct text would be:\n\"Ver las \u00faltimas noticias. Ent\u00e9rate de las noticias de \u00faltima hora con la mejor cobertura con fotos y videos\"\nI have tried to encode and decode that text with multiple charsets, but when I insert it into MySql I have somethig like:\n\"Ver las \u00c3\u00baltimas noticias y todos los titulares de hoy en Yahoo! Noticias Argentina. Ent\u00c3\u00a9rate de las noticias de \u00c3\u00baltima hora con la mejor cobertura con fotos y videos\"\nI'm having problems with the encoding, but I don't know how to solve them.\nAny suggestion?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":693,"Q_Id":5902914,"Users Score":2,"Answer":"BeautifulSoup returns all data as unicode strings. First triple check that the unicode strings are ccorrect. If not then there is some issue with the encoding of the input data.","Q_Score":2,"Tags":"python,mysql,encoding,urllib2,beautifulsoup","A_Id":5903100,"CreationDate":"2011-05-05T19:12:00.000","Title":"Wrong encoding with Python BeautifulSoup + MySql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using PyCrypto to store some files inside a SQLITE database.\nI'm using 4 fields :\nthe name of the file,\nthe length of the file (in bytes)\nthe SHA512 hash of the file\nthe encrypted file (with AES and then base64 to ASCII).\nI need all the fields to show some info about the file without decrypting it.\nThe question is : is it secure to store the data like this ?\nFor example, the first characters of a ZIP file, or executable file are always the same, and if you already know the hash and the length of the file ... is it possible to decrypt the file, maybe partially ?\nIf it's not secure, how can I store some information about the file to index the files without decrypting them ? (information like length, hash, name, tags, etc)\n(I use python, but you can give examples in any language)","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":484,"Q_Id":5919819,"Users Score":3,"Answer":"Data encrypted with AES has the same length as the plain data (give or take some block padding), so giving original length away doesn't harm security. 
SHA512 is a strong cryptographic hash designed to provide minimal information about the original content, so I don't see a problem here either.\nTherefore, I think your scheme is quite safe. Any information \"exposed\" by it is negligible. Key management will probably be a much bigger concern anyway.","Q_Score":3,"Tags":"python,database,security,encryption","A_Id":5919875,"CreationDate":"2011-05-07T07:56:00.000","Title":"Storing encrypted files inside a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using PyCrypto to store some files inside a SQLITE database.\nI'm using 4 fields :\nthe name of the file,\nthe length of the file (in bytes)\nthe SHA512 hash of the file\nthe encrypted file (with AES and then base64 to ASCII).\nI need all the fields to show some info about the file without decrypting it.\nThe question is : is it secure to store the data like this ?\nFor example, the first characters of a ZIP file, or executable file are always the same, and if you already know the hash and the length of the file ... is it possible to decrypt the file, maybe partially ?\nIf it's not secure, how can I store some information about the file to index the files without decrypting them ? (information like length, hash, name, tags, etc)\n(I use python, but you can give examples in any language)","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":484,"Q_Id":5919819,"Users Score":1,"Answer":"To avoid any problems concerning the first few bytes being the same, you should use AES in Block Cipher mode with a random IV. This ensures that even if the first block (length depends on the key size) of two encrypted files is exactly the same, the cipher text will be different.\nIf you do that, I see no problem with your approach.","Q_Score":3,"Tags":"python,database,security,encryption","A_Id":5920346,"CreationDate":"2011-05-07T07:56:00.000","Title":"Storing encrypted files inside a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using PyCrypto to store some files inside a SQLITE database.\nI'm using 4 fields :\nthe name of the file,\nthe length of the file (in bytes)\nthe SHA512 hash of the file\nthe encrypted file (with AES and then base64 to ASCII).\nI need all the fields to show some info about the file without decrypting it.\nThe question is : is it secure to store the data like this ?\nFor example, the first characters of a ZIP file, or executable file are always the same, and if you already know the hash and the length of the file ... is it possible to decrypt the file, maybe partially ?\nIf it's not secure, how can I store some information about the file to index the files without decrypting them ? (information like length, hash, name, tags, etc)\n(I use python, but you can give examples in any language)","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":484,"Q_Id":5919819,"Users Score":0,"Answer":"You really need to think about what attacks you want to protect against, and the resources of the possible attackers.\nIn general, storing some data encrypted is only useful if it satisfies your exact requirements. 
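As a small sketch of the random-IV advice above (PyCrypto, AES in CBC mode; the zero-byte padding is a simplification and key handling is deliberately left out), each file gets a fresh IV so identical file headers never produce identical ciphertext, while the security of the whole scheme still rests entirely on keeping the key away from the stored data:

    import os
    import base64
    from Crypto.Cipher import AES

    def encrypt_blob(data, key):
        iv = os.urandom(16)  # fresh random IV for every file
        # Naive zero-byte padding up to the 16-byte AES block size;
        # a real implementation should use a reversible padding scheme.
        padded = data + b"\0" * (-len(data) % 16)
        cipher = AES.new(key, AES.MODE_CBC, iv)
        # Store the IV with the ciphertext so the file can be decrypted later.
        return base64.b64encode(iv + cipher.encrypt(padded))
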
In particular, if there is a way an attacker could compromise the key at the same time as the data, then the encryption is effectively useless.","Q_Score":3,"Tags":"python,database,security,encryption","A_Id":5933351,"CreationDate":"2011-05-07T07:56:00.000","Title":"Storing encrypted files inside a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have large text files upon which all kinds of operations need to be performed, mostly involving row by row validations. The data are generally of a sales \/ transaction nature, and thus tend to contain a huge amount of redundant information across rows, such as customer names. Iterating and manipulating this data has become such a common task that I'm writing a library in C that I hope to make available as a Python module. \nIn one test, I found that out of 1.3 million column values, only ~300,000 were unique. Memory overhead is a concern, as our Python based web application could be handling simultaneous requests for large data sets. \nMy first attempt was to read in the file and insert each column value into a binary search tree. If the value has never been seen before, memory is allocated to store the string, otherwise a pointer to the existing storage for that value is returned. This works well for data sets of ~100,000 rows. Much larger and everything grinds to a halt, and memory consumption skyrockets. I assume the overhead of all those node pointers in the tree isn't helping, and using strcmp for the binary search becomes very painful.\nThis unsatisfactory performance leads me to believe I should invest in using a hash table instead. This, however, raises another point -- I have no idea ahead of time how many records there are. It could be 10, or ten million. How do I strike the right balance of time \/ space to prevent resizing my hash table again and again?\nWhat are the best data structure candidates in a situation like this?\nThank you for your time.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":501,"Q_Id":5931151,"Users Score":1,"Answer":"Hash table resizing isn't a concern unless you have a requirement that each insert into the table should take the same amount of time. As long as you always expand the hash table size by a constant factor (e.g. always increasing the size by 50%), the computational cost of adding an extra element is amortized O(1). This means that n insertion operations (when n is large) will take an amount of time that is proportionate to n - however, the actual time per insertion may vary wildly (in practice, one of the insertions will be very slow while the others will be very fast, but the average of all operations is small). The reason for this is that when you insert an extra element that forces the table to expand from e.g. 1000000 to 1500000 elements, that insert will take a lot of time, but now you've bought yourself 500000 extremely fast future inserts before you need to resize again. 
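To make the idea concrete, here is a tiny Python sketch of the interning scheme the question describes, with a plain dict standing in for the hash table (it already does the constant-factor resizing internally); the sample values are invented:

    def make_interner():
        seen = {}
        def intern_value(value):
            # Return the stored object if this value was seen before,
            # otherwise store it and return it - one copy per distinct value.
            return seen.setdefault(value, value)
        return intern_value

    intern_value = make_interner()
    rows = ["ACME Corp", "Globex", "ACME Corp"]  # hypothetical column values
    deduped = [intern_value(v) for v in rows]
    assert deduped[0] is deduped[2]  # duplicates share a single object
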
In short, I'd definitely go for a hash table.","Q_Score":3,"Tags":"python,c,data-structures,file-io","A_Id":5931175,"CreationDate":"2011-05-08T23:41:00.000","Title":"BST or Hash Table?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when I try to install python-mysql today, I got a number of compilation error or complaining \/Developer\/SDKs\/MacOSX10.4u.sdk not found, like the following:\n\nrunning build\nrunning build_py\ncopying MySQLdb\/release.py -> build\/lib.macosx-10.3-i386-2.6\/MySQLdb\n running build_ext\nbuilding '_mysql' extension\nCompiling with an SDK that doesn't seem to exist: \/Developer\/SDKs\/MacOSX10.4u.sdk\nPlease check your Xcode installation\n\nHowever, I already installed latest xcode 4.0, which does include latest GCC and SDK.\nI tried to find out where the 10.4u.sdk is specified, but could not find it in the system environment, program source and setuptools source.\nI tried to export \n\nexport SDK=\/Developer\/SDKs\/MacOSX10.5.sdk\nexport SDKROOT=\/Developer\/SDKs\/MacOSX10.5.sdk\n\nbut still has no luck.\nso anyone has any idea where this is specified in Mac Snow Leopard pls?\nthx","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":358,"Q_Id":5935910,"Users Score":0,"Answer":"Check your environment for CFLAGS or LDFLAGS. Both of these can include the -isysroot argument that influences the SDK selection. The other place to start at is to look at the output of python2.6-config --cflags --ldflags since (I believe) that this influences the Makefile generation. Make sure to run easy_install with --verbose and see if it yields any additional insight.","Q_Score":0,"Tags":"python,mysql,macos,osx-snow-leopard,compilation","A_Id":5936425,"CreationDate":"2011-05-09T10:58:00.000","Title":"mac snow leopard setuptools stick to MacOSX10.4u.sdk when trying to install python-mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The Facts:\n\nI am working on a NoteBook with Intel Core 2 Duo 2,26 GHz and 4 Gigabyte of Ram.\nIt has a Apache Server and a MySQL Server running.\nMy Server (I did lshw | less) shows a 64 Bit CPU with 2,65 GHz and 4 Gigabyte Ram, too. It has the XAMPP-Package running on it.\nThe Database structures (tables, indices, ...) are identical and so is the Python script I am running.\n\nThe Problem:\nWhile the script runs in approximately 30 seconds on my macbook it took the script 11 minutes on the server!\nWhat are the points you would check first for a bottleneck?\nThe Solution:\nThere were two indices missing on one of the machines. I added them and voil\u00e1: Everything was super! The `EXPLAIN' keyword of MySQL was worth a mint. =)","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":101,"Q_Id":5944433,"Users Score":2,"Answer":"What kind of server? If you're renting a VPS or similar you're contending with other users for CPU time.\nWhat platform is running on both? 
Tell us more about your situation!","Q_Score":2,"Tags":"python,mysql,runtime","A_Id":5944478,"CreationDate":"2011-05-10T02:07:00.000","Title":"How do I find why a python scripts runs in significantly different running times on different machines?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The Facts:\n\nI am working on a NoteBook with Intel Core 2 Duo 2,26 GHz and 4 Gigabyte of Ram.\nIt has a Apache Server and a MySQL Server running.\nMy Server (I did lshw | less) shows a 64 Bit CPU with 2,65 GHz and 4 Gigabyte Ram, too. It has the XAMPP-Package running on it.\nThe Database structures (tables, indices, ...) are identical and so is the Python script I am running.\n\nThe Problem:\nWhile the script runs in approximately 30 seconds on my macbook it took the script 11 minutes on the server!\nWhat are the points you would check first for a bottleneck?\nThe Solution:\nThere were two indices missing on one of the machines. I added them and voil\u00e1: Everything was super! The `EXPLAIN' keyword of MySQL was worth a mint. =)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":5944433,"Users Score":0,"Answer":"I would check that the databases in question are of similar scope. You say they're the same structure, but are they sized similarly? If your test case only has 100 entries when production has 100000000, that's one huge potential area for performance problems.","Q_Score":2,"Tags":"python,mysql,runtime","A_Id":5956131,"CreationDate":"2011-05-10T02:07:00.000","Title":"How do I find why a python scripts runs in significantly different running times on different machines?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. 
As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"It's possible you're hitting the python GIL but more likely you're doing something wrong.\nFor instance, putting 2M rows in a single batch would be Doing It Wrong.","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":5950881,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"Try running multiple clients in multiple processes, NOT threads.\nThen experiment with different insert sizes. \n1M inserts in 3 mins is about 5500 inserts\/sec, which is pretty good for a single local client. On a multi-core machine you should be able to get several times this amount provided that you use multiple clients, probably inserting small batches of rows, or individual rows.","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":5956519,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. 
bUT i gor a result like 7min...twice the first result. As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"You might consider Redis. Its single-node throughput is supposed to be faster. It's different from Cassandra though, so whether or not it's an appropriate option would depend on your use case.","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":6078703,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"The time taken doubled because you inserted twice as much data. Is it possible that you are I\/O bound?","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":8491215,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using this javascript library (http:\/\/valums.com\/ajax-upload\/) to upload file to a tornado web server, but I don't know how to get the file content. The javascript library is uploading using XHR, so I assume I have to read the raw post data to get the file content. But I don't know how to do it with Tornado. 
Their documentation doesn't help with this, as usual :(\nIn php they have something like this:\n$input = fopen(\"php:\/\/input\", \"r\");\nso what's the equivalence in tornado?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1836,"Q_Id":5983032,"Users Score":2,"Answer":"I got the answer.\nI need to use self.request.body to get the raw post data.\nI also need to pass in the correct _xsrf token, otherwise tornado will fire a 403 exception.\nSo that's about it.","Q_Score":3,"Tags":"python,file-upload,tornado,ajax-upload","A_Id":5989216,"CreationDate":"2011-05-12T19:00:00.000","Title":"asynchronous file upload with ajaxupload to a tornado web server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL?","AnswerCount":8,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":35088,"Q_Id":5987371,"Users Score":-2,"Answer":"Since this question is now over 2 years old I guess this is more for future references :)\nWhat I like to do is max('', mightBeNoneVar) or max(0, mightBeNoneVar) (depending on the context).\nMore elaborate example:\nprint max('', col1).ljust(width1) + ' ==> '+ max('', col2).ljust(width2)","Q_Score":15,"Tags":"python","A_Id":16633853,"CreationDate":"2011-05-13T04:54:00.000","Title":"Python equivalent for MySQL's IFNULL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL?","AnswerCount":8,"Available Count":2,"Score":0.024994793,"is_accepted":false,"ViewCount":35088,"Q_Id":5987371,"Users Score":1,"Answer":"nvl(v1,v2) will return v1 if not null otherwise it returns v2.\nnvl = lambda a,b: a or b","Q_Score":15,"Tags":"python","A_Id":50119942,"CreationDate":"2011-05-13T04:54:00.000","Title":"Python equivalent for MySQL's IFNULL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been running Davical on a CentOS 5 box for a while now with no problems.\nYesterday however, I installed Trac bug-tracker which eventually forced me to run a full update via Yum which updated a whole heap of packages.\nI cant seem to work out exactly what the issue is and time spent googling didn't seem to bring about much in the way of ideas.\nHas anyone had the same problem or could anyone indicate a way to better identify whats going on?\nMany Thanks!\nFull Error readout :\n\n[Wed May 11 17:52:53 2011] [error] davical: LOG: always: Query: QF: SQL error \"58P01\" - ERROR: could not load library \"\/usr\/lib\/pgsql\/plpgsql.so\": \/usr\/lib\/pgsql\/plpgsql.so: undefined symbol: PinPortal\"\n\nChecking to see if file exists\n[@shogun ~]# tree -a \/usr\/lib\/pgsql\/ | grep \"plpgsql\"\n|-- plpgsql.so\nVersion of pg installed\n[@shogun ~]# pg_config | grep \"VERSION\"\nVERSION = PostgreSQL 8.1.23\n[@shogun 
postgresql-8.3.8]# yum list installed | grep 'post'\npostgresql.i386 8.1.23-1.el5_6.1 installed\npostgresql-devel.i386 8.1.23-1.el5_6.1 installed\npostgresql-libs.i386 8.1.23-1.el5_6.1 installed\npostgresql-python.i386 8.1.23-1.el5_6.1 installed\npostgresql-server.i386 8.1.23-1.el5_6.1 installed","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5259,"Q_Id":5994297,"Users Score":8,"Answer":"I have had this problem before, although with 8.4 instead of 8.1, but the issue is the same, I believe.\nA recent minor upgrade of all supported maintenance branches of PostgreSQL introduced the function PinPortal in the server, and made PL\/pgSQL use it. So if you use a plpgsql.so from the newer version with a server from the older version, you will get this error. In your case, the change happened between 8.1.21 and 8.1.22. And even if all your installed packages show the newer version, you need to restart the server to make sure you actually use the newer version.\nThe problem is, as soon as you install the newer PL\/pgSQL, it will get used by the next session that is started, but the newer server binary won't get used until you restart the server. So if your upgrade process doesn't restart the server immediately, you will invariably get these errors as soon as something tries to use PL\/pgSQL. If this actually turns out to be the problem, you might want to review why your server wasn't restarted.","Q_Score":6,"Tags":"python,postgresql,centos,trac","A_Id":5998204,"CreationDate":"2011-05-13T15:38:00.000","Title":"Could not load library \"\/usr\/lib\/pgsql\/plpgsql.so\" & undefined symbol: PinPortal","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the recommonded way to interact between python and MySQL? Currently I am using MySQLdb and I heared from Oursql. But I asked myself, if there is a more appropriate way to manage this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":234,"Q_Id":6002147,"Users Score":0,"Answer":"I personally use pymysql, but have heard a lot of people use MySQLdb. Both are very similar in the way they behave, and could easily be interchangeable. Personally, (working as a python\/MySQL QA) I've yet to hear of \/ let alone work with OurSQL. \nWith that said, it honestly depends what you want to accomplish. 
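To illustrate how interchangeable the two drivers are, here is a minimal sketch that runs unchanged with either one -- host, credentials, table and column names are placeholders:

import pymysql            # or: import MySQLdb as pymysql

conn = pymysql.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SELECT id, name FROM users WHERE id = %s", (1,))
print(cur.fetchone())
cur.close()
conn.close()

Both drivers implement DB-API 2.0, which is why swapping one for the other is usually just a change of import.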
Python has a lot of connectors, and a tons of bells and whistles to complete almost anything; (and) As such, it is important to note that it is important to always look at how popular the component is, as well as how frequently it gets updated.","Q_Score":3,"Tags":"python,mysql,interaction","A_Id":40519201,"CreationDate":"2011-05-14T13:32:00.000","Title":"Is there a recommended way for interaction between python and MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have setup an Apache server with mod_wsgi, python_sql, mysql and django.\nEverything works fine, except the fact that if I make some code changes, they do not reflect immidiately, though I thing that everything is compiled on the fly when it comes to python\/mod_wsgi.\nI have to shut down the server and come back again to see the changes.\nCan someone point me to how hot-deployment can be achieved with the above setup??\nThanks,\nNeeraj","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":768,"Q_Id":6006666,"Users Score":3,"Answer":"Just touching the wsgi file allways worked for me.","Q_Score":4,"Tags":"python,django,apache2,mod-wsgi,hotdeploy","A_Id":6007285,"CreationDate":"2011-05-15T05:26:00.000","Title":"Hot deployment using mod_wsgi,python and django on Apache","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to insert rows into PG one of the fields is date and time with time stamp, this is the time of incident, so I can not use --> current_timestamp function of Postgres at the time of insertion, so how can I then insert the time and date which I collected before into pg row in the same format as it would have been created by current_timestamp at that point in time.","AnswerCount":7,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":190399,"Q_Id":6018214,"Users Score":-4,"Answer":"Just use \n\nnow()\n\nor \n\nCURRENT_TIMESTAMP\n\nI prefer the latter as I like not having additional parenthesis but thats just personal preference.","Q_Score":64,"Tags":"python,postgresql,datetime","A_Id":18624640,"CreationDate":"2011-05-16T13:34:00.000","Title":"How to insert current_timestamp into Postgres via python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run a python script using python 2.6.4. The hosting company has 2.4 installed so I compiled my own 2.6.4 on a similar server and then moved the files over into ~\/opt\/python. 
that part seems to be working fine.\nanyhow, when I run the script below, I am getting ImportError: No module named _sqlite3 and I'm not sure what to do to fix this.\nMost online threads mention that sqlite \/ sqlite3 is included in python 2.6 - so I'm not sure why this isn't working.\n\n-jailshell-3.2$ .\/pyDropboxValues.py \n\nTraceback (most recent call last):\n File \".\/pyDropboxValues.py\", line 21, in \n import sqlite3\n File \"\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/__init__.py\", line 24, in \n from dbapi2 import *\n File \"\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dbapi2.py\", line 27, in \n from _sqlite3 import *\nImportError: No module named _sqlite3\n\nI think I have everything set up right as far as the directory structure.\n\n-jailshell-3.2$ find `pwd` -type d\n\n\/home\/myAccount\/opt\n\/home\/myAccount\/opt\/bin\n\/home\/myAccount\/opt\/include\n\/home\/myAccount\/opt\/include\/python2.6\n\/home\/myAccount\/opt\/lib\n\/home\/myAccount\/opt\/lib\/python2.6\n\/home\/myAccount\/opt\/lib\/python2.6\/distutils\n\/home\/myAccount\/opt\/lib\/python2.6\/distutils\/command\n\/home\/myAccount\/opt\/lib\/python2.6\/distutils\/tests\n\/home\/myAccount\/opt\/lib\/python2.6\/compiler\n\/home\/myAccount\/opt\/lib\/python2.6\/test\n\/home\/myAccount\/opt\/lib\/python2.6\/test\/decimaltestdata\n\/home\/myAccount\/opt\/lib\/python2.6\/config\n\/home\/myAccount\/opt\/lib\/python2.6\/json\n\/home\/myAccount\/opt\/lib\/python2.6\/json\/tests\n\/home\/myAccount\/opt\/lib\/python2.6\/email\n\/home\/myAccount\/opt\/lib\/python2.6\/email\/test\n\/home\/myAccount\/opt\/lib\/python2.6\/email\/test\/data\n\/home\/myAccount\/opt\/lib\/python2.6\/email\/mime\n\/home\/myAccount\/opt\/lib\/python2.6\/lib2to3\n\/home\/myAccount\/opt\/lib\/python2.6\/lib2to3\/pgen2\n\/home\/myAccount\/opt\/lib\/python2.6\/lib2to3\/fixes\n\/home\/myAccount\/opt\/lib\/python2.6\/lib2to3\/tests\n\/home\/myAccount\/opt\/lib\/python2.6\/xml\n\/home\/myAccount\/opt\/lib\/python2.6\/xml\/parsers\n\/home\/myAccount\/opt\/lib\/python2.6\/xml\/sax\n\/home\/myAccount\/opt\/lib\/python2.6\/xml\/etree\n\/home\/myAccount\/opt\/lib\/python2.6\/xml\/dom\n\/home\/myAccount\/opt\/lib\/python2.6\/site-packages\n\/home\/myAccount\/opt\/lib\/python2.6\/logging\n\/home\/myAccount\/opt\/lib\/python2.6\/lib-dynload\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/test\n\/home\/myAccount\/opt\/lib\/python2.6\/encodings\n\/home\/myAccount\/opt\/lib\/python2.6\/wsgiref\n\/home\/myAccount\/opt\/lib\/python2.6\/multiprocessing\n\/home\/myAccount\/opt\/lib\/python2.6\/multiprocessing\/dummy\n\/home\/myAccount\/opt\/lib\/python2.6\/curses\n\/home\/myAccount\/opt\/lib\/python2.6\/bsddb\n\/home\/myAccount\/opt\/lib\/python2.6\/bsddb\/test\n\/home\/myAccount\/opt\/lib\/python2.6\/idlelib\n\/home\/myAccount\/opt\/lib\/python2.6\/idlelib\/Icons\n\/home\/myAccount\/opt\/lib\/python2.6\/tmp\n\/home\/myAccount\/opt\/lib\/python2.6\/lib-old\n\/home\/myAccount\/opt\/lib\/python2.6\/lib-tk\n\/home\/myAccount\/opt\/lib\/python2.6\/hotshot\n\/home\/myAccount\/opt\/lib\/python2.6\/plat-linux2\n\/home\/myAccount\/opt\/lib\/python2.6\/ctypes\n\/home\/myAccount\/opt\/lib\/python2.6\/ctypes\/test\n\/home\/myAccount\/opt\/lib\/python2.6\/ctypes\/macholib\n\/home\/myAccount\/opt\/share\n\/home\/myAccount\/opt\/share\/man\n\/home\/myAccount\/opt\/share\/man\/man1\n\n\nAnd finally the contents of the sqlite3 directory:\n\n-jailshell-3.2$ find 
`pwd`\n\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/__init__.pyo\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dump.pyc\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/__init__.pyc\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dbapi2.pyo\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dbapi2.pyc\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dbapi2.py\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dump.pyo\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/__init__.py\n\/home\/myAccount\/opt\/lib\/python2.6\/sqlite3\/dump.py\n\n\nI feel like I need to add something into the sqlite3 directory - maybe sqlite3.so? But I don't know where to get that.\nWhat am I doing wrong here? Please remember that I'm using a shared host so that means installing \/ compiling on another server and then copying the files over. Thanks! :)\nUpdate\nJust wanted to confirm that the answer from @samplebias did work out very well. I needed to have the dev package installed on the machine I was compiling from to get it to add in sqlite3.so and related files. Also found the link in the answer very helpful. Thanks @samplebias !","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1962,"Q_Id":6026485,"Users Score":0,"Answer":"In general, the first thing to do is to ask your host. I seems a bit odd that SQLite is not installed (or installed properly). So they'll likely fix it quite fast if you ask them.","Q_Score":2,"Tags":"python,linux,unix,sqlite","A_Id":6026507,"CreationDate":"2011-05-17T05:10:00.000","Title":"How can I get sqlite working on a shared hosting server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to have some references in my table and a bunch of \"deferrable initially deferred\" modifiers, but I can't find a way to make this work in the default generated Django code.\nIs it safe to create the table manually and still use Django models?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":112,"Q_Id":6053426,"Users Score":2,"Answer":"Yes. \nI don't see why not, but that would be most unconventional and breaking convention usually leads to complications down the track.\nDescribe the problem you think it will solve and perhaps someone can offer a more conventional solution.","Q_Score":2,"Tags":"python,sql,django,postgresql","A_Id":6053509,"CreationDate":"2011-05-19T03:30:00.000","Title":"Is it safe to write your own table creation SQL for use with Django, when the generated tables are not enough?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am creating a software with user + password. After autentification, the user can access some semi public services, but also encrypt some files that only the user can access.\nThe user must be stored as is, without modification, if possible. 
After auth, the user and the password are kept in memory as long as the software is running (i don't know if that's okay either).\nThe question is how should i store this user + password combination in a potentially unsecure database?\nI don't really understand what should i expose.\nLet's say I create an enhanced key like this:\n\nsalt = random 32 characters string (is it okay?)\nkey = hash(usr password + salt)\nfor 1 to 65000 do\n key = hash(key + usr password + salt)\n\nShould I store the [plaintext user], [the enhanced key] and [the salt] in the database ?\nAlso, what should I use to encrypt (with AES or Blowfish) some files using a new password everytime ?\nShould I generate a new salt and create a new enhanced key using (the password stored in memory of the program + the salt) ?\nAnd in this case, if i store the encrypted file in the database, i should probably only store the salt.\nThe database is the same as where i store the user + password combination.\nThe file can only be decrypted if someone can generate the key, but he doesn't know the password. Right ?\nI use Python with PyCrypto, but it's not really important, a general example is just fine.\nI have read a few similar questions, but they are not very explicit.\nThank you very very much!","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3307,"Q_Id":6058019,"Users Score":2,"Answer":"If you use a different salt for each user, you must store it somewhere (ideally in a different place). If you use the same salt for every user, you can hardcode it in your app, but it can be considered less secure. \nIf you don't keep the salt, you will not be able to match a given password against the one in your database.\nThe aim of the salt is to make bruteforce or dictionnary attacks a lot harder. That is why it is more secure if store separately, to avoid someone having both hash passwords and corresponding salts.","Q_Score":10,"Tags":"python,security,passwords","A_Id":6058858,"CreationDate":"2011-05-19T11:37:00.000","Title":"Storing user and password in a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The sqlite docs says that using the pragma default_cache_size is deprecated. I looked, but I couldn't see any explanation for why. Is there a reason for this? I'm working on an embedded python program, and we open and close connections a lot. Is the only alternative to use the pragma cache_size on every database connection?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":890,"Q_Id":6062999,"Users Score":2,"Answer":"As Firefox is massively using SQLite I wouldn't be surprised if this request came from their camp to prevent any kind of 3rd party interference (e.g. 
\"trashing\" with large\/small\/invalid\/obscure values) by this kind of pragma propagating through all database connections\nHence, my strong belief is that there is no alternative and that you really need to set cache_size for each database connection","Q_Score":5,"Tags":"python,sqlite","A_Id":6175144,"CreationDate":"2011-05-19T18:10:00.000","Title":"Alternative to deprecated sqlite pragma \"default_cache_size\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a website where people post comments, pictures, and other content. I want to add a feature that users can like\/unlike these items.\nI use a database to store all the content.\nThere are a few approaches I am looking at:\nMethod 1:\n\nAdd a 'like_count' column to the table, and increment it whenever someone likes an item\nAdd a 'user_likes' table to keep a track that everything the user has liked.\n\nPros: Simple to implement, minimal queries required.\nCons: The item needs to be refreshed with each change in like count. I have a whole list of items cached, which will break.\nMethod 2:\n\nCreate a new table 'like_summary' and store the total likes of each item in that table\nAdd a 'user_likes' table to keep a track that everything the user has liked.\nCache the like_summary data in memcache, and only flush it if the value changes\n\nPros: Less load on the main items table, it can be cached without worrying.\nCons: Too many hits on memcache (a page shows 20 items, which needs to be loaded from memcache), might be slow\nAny suggestions?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":104,"Q_Id":6067919,"Users Score":1,"Answer":"You will actually only need the user_likes table. The like_count is calculated from that table. You will only need to store that if you need to gain performance, but since you're using memcached, It may be a good idea to not store the aggregated value in the database, but store it only in memcached.","Q_Score":0,"Tags":"python,architecture","A_Id":6067968,"CreationDate":"2011-05-20T05:46:00.000","Title":"What would be a good strategy to implement functionality similar to facebook 'likes'?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a website where people post comments, pictures, and other content. I want to add a feature that users can like\/unlike these items.\nI use a database to store all the content.\nThere are a few approaches I am looking at:\nMethod 1:\n\nAdd a 'like_count' column to the table, and increment it whenever someone likes an item\nAdd a 'user_likes' table to keep a track that everything the user has liked.\n\nPros: Simple to implement, minimal queries required.\nCons: The item needs to be refreshed with each change in like count. 
I have a whole list of items cached, which will break.\nMethod 2:\n\nCreate a new table 'like_summary' and store the total likes of each item in that table\nAdd a 'user_likes' table to keep a track that everything the user has liked.\nCache the like_summary data in memcache, and only flush it if the value changes\n\nPros: Less load on the main items table, it can be cached without worrying.\nCons: Too many hits on memcache (a page shows 20 items, which needs to be loaded from memcache), might be slow\nAny suggestions?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":104,"Q_Id":6067919,"Users Score":1,"Answer":"One relation table that does a many-to-many mapping between user and item should do the trick.","Q_Score":0,"Tags":"python,architecture","A_Id":6067953,"CreationDate":"2011-05-20T05:46:00.000","Title":"What would be a good strategy to implement functionality similar to facebook 'likes'?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm very new to Python and I have Python 3.2 installed on a Win 7-32 workstation. Trying to connect to MSSQLServer 2005 Server using adodbapi-2.4.2.2, the latest update to that package.\nThe code\/connection string looks like this:\nconn = adodbapi.connect('Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=XXX;Data Source=123.456.789');\nFrom adodbapi I continually get the error (this is entire error message from Wing IDE shell):\nTraceback (most recent call last):\n File \"D:\\Program Files\\Wing IDE 4.0\\src\\debug\\tserver_sandbox.py\", line 2, in \n if name == 'main':\n File \"D:\\Python32\\Lib\\site-packages\\adodbapi\\adodbapi.py\", line 298, in connect\n raise InterfaceError #Probably COM Error\nadodbapi.adodbapi.InterfaceError:\nI can trace through the code and see the exception as it happens.\nI also tried using conn strings with OLEDB provider and integrated Windows security, with same results. \nAll of these connection strings work fine from a UDL file on my workstation, and from SSMS, but fail with the same error in adodbapi.\nHow do I fix this?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":8199,"Q_Id":6086341,"Users Score":2,"Answer":"I had the same problem, and I tracked it down to a failure to load win32com.pyd, because of some system DLLs that was not in the \"dll load path\", such as msvcp100.dll\nI solved the problem by copying a lot of these dll's (probably too many) into C:\\WinPython-64bit-3.3.3.2\\python-3.3.3.amd64\\Lib\\site-packages\\win32","Q_Score":6,"Tags":"python,database,sql-server-2005,adodbapi","A_Id":21480454,"CreationDate":"2011-05-22T06:03:00.000","Title":"Connecting to SQLServer 2005 with adodbapi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My company has decided to implement a datamart using [Greenplum] and I have the task of figuring out how to go on about it. A ballpark figure of the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB.\nI would like to know :\n1) Is the Greenplum DB the same as vanilla [PostgresSQL]? 
(I've worked on Postgres AS 8.3)\n2) Are there any (free) tools available for this task (extract and import)\n3) I have some knowledge of Python. Is it feasible, even easy to do this in a resonable amount of time?\nI have no idea how to do this. Any advice, tips and suggestions will be hugely welcome.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2294,"Q_Id":6110384,"Users Score":0,"Answer":"Many of Greenplum's utilities are written in python and the current DBMS distribution comes with python 2.6.2 installed, including the pygresql module which you can use to work inside the GPDB.\nFor data transfer into greenplum, I've written python scripts that connect to the source (Oracle) DB using cx_Oracle and then dumping that output either to flat files or named pipes. gpfdist can read from either sort of source and load the data into the system.","Q_Score":0,"Tags":"python,postgresql,db2,datamart,greenplum","A_Id":7550497,"CreationDate":"2011-05-24T12:28:00.000","Title":"Transferring data from a DB2 DB to a greenplum DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My company has decided to implement a datamart using [Greenplum] and I have the task of figuring out how to go on about it. A ballpark figure of the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB.\nI would like to know :\n1) Is the Greenplum DB the same as vanilla [PostgresSQL]? (I've worked on Postgres AS 8.3)\n2) Are there any (free) tools available for this task (extract and import)\n3) I have some knowledge of Python. Is it feasible, even easy to do this in a resonable amount of time?\nI have no idea how to do this. Any advice, tips and suggestions will be hugely welcome.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2294,"Q_Id":6110384,"Users Score":0,"Answer":"Generally, it is really slow if you use SQL insert or merge to import big bulk data.\nThe recommended way is to use the external tables you define to use file-based, web-based or gpfdist protocol hosted files.\nAnd also greenplum has a utility named gpload, which can be used to define your transferring jobs, like source, output, mode(inert, update or merge).","Q_Score":0,"Tags":"python,postgresql,db2,datamart,greenplum","A_Id":23668974,"CreationDate":"2011-05-24T12:28:00.000","Title":"Transferring data from a DB2 DB to a greenplum DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a lot of objects which form a network by keeping references to other objects. All objects (nodes) have a dict which is their properties.\nNow I'm looking for a fast way to store these objects (in a file?) and reload all of them into memory later (I don't need random access). The data is about 300MB in memory which takes 40s to load from my SQL format, but I now want to cache it to have faster access.\nWhich method would you suggest?\n(my pickle attempt failed due to recursion errors despite trying to mess around with getstate :( maybe there is something fast anyway? 
:))","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":6128458,"Users Score":0,"Answer":"Perhaps you could set up some layer of indirection where the objects are actually held within, say, another dictionary, and an object referencing another object will store the key of the object being referenced and then access the object through the dictionary. If the object for the stored key is not in the dictionary, it will be loaded into the dictionary from your SQL database, and when it doesn't seem to be needed anymore, the object can be removed from the dictionary\/memory (possibly with an update to its state in the database before the version in memory is removed).\nThis way you don't have to load all the data from your database at once, and can keep a number of the objects cached in memory for quicker access to those. The downside would be the additional overhead required for each access to the main dict.","Q_Score":2,"Tags":"python,persistent-storage","A_Id":6130718,"CreationDate":"2011-05-25T17:31:00.000","Title":"Store and load a large number linked objects in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a written a very small web-based survey using cgi with python(This is my first web app. ).The questions are extracted from a MySQL database table and the results are supposed to be saved in the same database. I have created the database along with its table locally. My app works fine on my local computer(localhost). To create db,table and other transaction with the MySQL i had to do import MySQLdb in my code.\nNow I want to upload everything on my personal hosting. As far as I know my hosting supports Python,CGI and has MySQL database. And I know that I have to change some parameters in the connection string in my code, so I can connect to the database, but I have two problems:\n\nI remember that I installed MySQLdb as an extra to my Python, and in my code i am using it, how would I know that my hosting's python interpretor has this installed, or do I even need it, do I have to use another library?\nHow do I upload my database onto my hosting?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":513,"Q_Id":6139777,"Users Score":0,"Answer":"You can write a simple script like\nimport MySQLdb and catch any errors\nto see if the required package is\ninstalled. If this fails you can ask\nthe hosting provider to install your\npackage, typically via a ticket\nThe hosting providers typically also provide URL's to connect to the MySQL tables they provision for you, and some tools like phpmyadmin to load database dumps into the hosted MySQL instance","Q_Score":0,"Tags":"python,mysql,database-connection,cpanel","A_Id":6139936,"CreationDate":"2011-05-26T13:59:00.000","Title":"Uploading a mysql database to a webserver supporting python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a real-time service so my real-time database need to storage in memcached before fetch to DB (Avoid to read\/write mysql db too much). I want to fetch data to mysql DB when some events occur , etc : before data expire or have least-recently-used (LRU) data. 
What is solution for my problem ? My system used memcached , mysql ,django and python-memcache \nThank","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":329,"Q_Id":6143748,"Users Score":1,"Answer":"Memcached is not a persistent store, so if you need your data to be durable at all then you will need to store them in a persistent store immediately.\nSo you need to put them somewhere - possibly a MySQL table - as soon as the data arrive, and make sure they are fsync'd to disc. Storing them in memcached as well only speeds up access, so it is a nice to have.\nMemcached can discard any objects at any time for any reason, not just when they expire.","Q_Score":1,"Tags":"python,mysql,django,memcached","A_Id":6146042,"CreationDate":"2011-05-26T19:06:00.000","Title":"Can auto transfer data from memcached to mysql DB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can someone please point me in the right direction of how I can connect to MS SQL Server with Python? What I want to do is read a text file, extract some values and then insert the values from the text file into a table in my Sql Server database. I am using Python 3.1.3, and it seems some of the modules I have come across in research online are not included in the library. Am I missing something? Is there a good 3rd party module I should know about. Any help would be greatly appreciated.I am using Windows. thanks","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":8530,"Q_Id":6154069,"Users Score":1,"Answer":"I found a module called CEODBC that I was able to use with Python 3 after doing some research. It looks like they will also be releasing a Python3 compatible version of PYODBC soon. Thanks for all your help.","Q_Score":3,"Tags":"python,sql,python-3.x,database-connection,python-module","A_Id":6193973,"CreationDate":"2011-05-27T14:59:00.000","Title":"Connecting to Sql Server with Python 3 in Windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to write up a python program that communicates with a My SQL database to write in data... I have done the code however it does not enter all the data as it says there are duplicates... is there a way to just inclue them?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":6162827,"Users Score":0,"Answer":"You should provide more information like you SQL and database schema. It sounds like you are trying to insert items with the same primary key. 
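A rough sketch of the second option this answer goes on to suggest -- leaving the primary key out of the INSERT so that MySQL assigns it and otherwise-identical data rows can coexist; the table and column names are invented:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
# no id column listed: assumes the table declares id INT AUTO_INCREMENT PRIMARY KEY
cur.execute("INSERT INTO readings (value, recorded_at) VALUES (%s, %s)",
            (42, "2011-05-28 16:14:00"))
conn.commit()
cur.close()
conn.close()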
If you remove the primary key you should be able to insert the data, or change the insert statement to not insert the field which is the primary key.","Q_Score":0,"Tags":"python,sql","A_Id":6162872,"CreationDate":"2011-05-28T16:14:00.000","Title":"Is there a way to write into python code a command to include duplicate entries into a My SQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am planning to develop a web-based application which could crawl wikipedia for finding relations and store it in a database. By relations, I mean searching for a name say,'Bill Gates' and find his page, download it and pull out the various information from the page and store it in a database. Information may include his date of birth, his company and a few other things. But I need to know if there is any way to find these unique data from the page, so that I could store them in a database. Any specific books or algorithms would be greatly appreciated. Also mentioning of good opensource libraries would be helpful. \nThank You","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":3092,"Q_Id":6171764,"Users Score":2,"Answer":"You mention Python and Open Source, so I would investigate the NLTK (Natural Language Toolkit). Text mining and natural language processing is one of those things that you can do a lot with a dumb algorithm (eg. Pattern matching), but if you want to go a step further and do something more sophisticated - ie. Trying to extract information that is stored in a flexible manner or trying to find information that might be interesting but is not known a priori, then natural language processing should be investigated.\nNLTK is intended for teaching, so it is a toolkit. This approach suits Python very well. There are a couple of books for it as well. The O'Reilly book is also published online with an open license. See NLTK.org","Q_Score":2,"Tags":"python,pattern-matching,data-mining,wikipedia,text-mining","A_Id":6171789,"CreationDate":"2011-05-30T02:24:00.000","Title":"Mining Wikipedia for mapping relations for text mining","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, \"john \"\",\" doe\". I have no control over the data I receive so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right but of course mysql, excel, libreoffice, and etc see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss as I have 17 million records to import.\nI have windows os and linux so whatever solution you can think of please let me know.","AnswerCount":6,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":3332,"Q_Id":6172123,"Users Score":7,"Answer":"This may not be a usable answer but someone needs to say it. You shouldn't have to do this. 
CSV is a file format with an expected data encoding. If someone is supplying you a CSV file then it should be delimited and escaped properly, otherwise its a corrupted file and you should reject it. Make the supplier re-export the file properly from whatever data store it was exported from.\nIf you asked someone to send you JPG and they send what was a proper JPG file with every 5th byte omitted or junk bytes inserted you wouldnt accept that and say \"oh, ill reconstruct it for you\".","Q_Score":2,"Tags":"php,python,mysql,csv","A_Id":6172230,"CreationDate":"2011-05-30T03:54:00.000","Title":"What is an easy way to clean an unparsable csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, \"john \"\",\" doe\". I have no control over the data I receive so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right but of course mysql, excel, libreoffice, and etc see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss as I have 17 million records to import.\nI have windows os and linux so whatever solution you can think of please let me know.","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":3332,"Q_Id":6172123,"Users Score":0,"Answer":"First of all - find all kinds of mistake. And then just replace them with empty strings. Just do it! If you need this corrupted data - only you can recover it.","Q_Score":2,"Tags":"php,python,mysql,csv","A_Id":6172324,"CreationDate":"2011-05-30T03:54:00.000","Title":"What is an easy way to clean an unparsable csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, \"john \"\",\" doe\". I have no control over the data I receive so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right but of course mysql, excel, libreoffice, and etc see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss as I have 17 million records to import.\nI have windows os and linux so whatever solution you can think of please let me know.","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":3332,"Q_Id":6172123,"Users Score":0,"Answer":"MySQL import has many parameters including escape characters. Given the example, I think the quotes are escaped by putting a quote in the front. 
So an import with esaped by '\"' would work.","Q_Score":2,"Tags":"php,python,mysql,csv","A_Id":6172154,"CreationDate":"2011-05-30T03:54:00.000","Title":"What is an easy way to clean an unparsable csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, \"john \"\",\" doe\". I have no control over the data I receive so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right but of course mysql, excel, libreoffice, and etc see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss as I have 17 million records to import.\nI have windows os and linux so whatever solution you can think of please let me know.","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":3332,"Q_Id":6172123,"Users Score":0,"Answer":"That's a really tough issue. I don't know of any real way to solve it, but maybe you could try splitting on \",\", cleaning up the items in the resulting array (unicorns :) ) and then re-joining the row?","Q_Score":2,"Tags":"php,python,mysql,csv","A_Id":6172145,"CreationDate":"2011-05-30T03:54:00.000","Title":"What is an easy way to clean an unparsable csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, \"john \"\",\" doe\". I have no control over the data I receive so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right but of course mysql, excel, libreoffice, and etc see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss as I have 17 million records to import.\nI have windows os and linux so whatever solution you can think of please let me know.","AnswerCount":6,"Available Count":5,"Score":0.0333209931,"is_accepted":false,"ViewCount":3332,"Q_Id":6172123,"Users Score":1,"Answer":"You don't say if you have control over the creation of the CSV file. 
I am assuming you do, as if not, the CVS file is corrupt and cannot be recovered without human intervention, or some very clever algorithms to \"guess\" the correct delimiters vs the user entered ones.\nConvert user entered tabs (assuming there are some) to spaces and then export the data using TABS separator.\nIf the above is not possible, you need to implement an ESC sequence to ensure that user entered data is not treated as a delimiter.","Q_Score":2,"Tags":"php,python,mysql,csv","A_Id":6172224,"CreationDate":"2011-05-30T03:54:00.000","Title":"What is an easy way to clean an unparsable csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing the server for a Javascript app that has a syncing feature. Files and directories being created and modified by the client need to be synced to the server (the same changes made on the client need to be made on the server, including deletes).\nSince every file is on the server, I'm debating the need for a MySQL database entry corresponding to each file. The following information needs to be kept on each file\/directory for every user:\n\nWhether it was deleted or not (since deletes need to be synced to other clients)\nThe timestamp of when every file was last modified (so I know whether the file needs updating by the client or not)\n\nI could keep both of those pieces of information in files (e.g. .deleted file and .modified file in every user's directory containing file paths + timestamps in the latter) or in the database.\nHowever, I also have to fit under an 80mb memory constraint. Between file storage and \ndatabase storage, which would be more memory-efficient for this purpose?\nEdit: Files have to be stored on the filesystem (not in a database), and users have a quota for the storage space they can use.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":216,"Q_Id":6180732,"Users Score":0,"Answer":"In my opinion, the only real way to be sure is to build a test system and compare the space requirements. It shouldn't take that long to generate some random data programatically. One might think the file system would be more efficient, but databases can and might compress the data or deduplicate it, or whatever. Don't forget that a database would also make it easier to implement new features, perhaps access control.","Q_Score":1,"Tags":"python,django,memory","A_Id":6181167,"CreationDate":"2011-05-30T21:04:00.000","Title":"Memory usage of file versus database for simple data storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a job scheduler (a library) that queries a db every 5 minutes and, based on time, triggers events which have expired and rerun on failure.\nIt should be in Python or PHP.\nI researched and came up with Advanced Python Scheduler but it is not appropriate because it only schedules the jobs in its job store. Instead, I want that it takes jobs from a database.\nI also found Taskforest, which exactly fits my needs except it is a text-file based scheduler meaning the jobs have to be added to the text-file either through the scheduler or manually, which I don't want to do. 
\nCould anyone suggest me something useful?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":710,"Q_Id":6184491,"Users Score":1,"Answer":"Here's a possible solution\n- a script, either in php or python performing your database tasks\n- a scheduler : Cron for linux, or the windows task scheduler ; where you set the frequency of your jobs.\nI'm using this solution for multiple projects.\nVery easy to set up.","Q_Score":0,"Tags":"php,python,database","A_Id":6184556,"CreationDate":"2011-05-31T07:50:00.000","Title":"Database Based Job scheduler","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When accessing a MySQL database on low level using python, I use the MySQLdb module.\nI create a connection instance, then a cursor instance then I pass it to every function, that needs the cursor.\nSometimes I have many nested function calls, all desiring the mysql_cursor. Would it hurt to initialise the connection as global variable, so I can save me a parameter for each function, that needs the cursor?\nI can deliver an example, if my explanation was insufficient...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":301,"Q_Id":6190982,"Users Score":1,"Answer":"I think that database cursors are scarce resources, so passing them around can limit your scalability and cause management issues (e.g. which method is responsible for closing the connection)?\nI'd recommend pooling connections and keeping them open for the shortest time possible. Check out the connection, perform the database operation, map any results to objects or data structures, and close the connection. Pass the object or data structure with results around rather than passing the cursor itself. The cursor scope should be narrow.","Q_Score":2,"Tags":"python,connection,global-variables","A_Id":6191102,"CreationDate":"2011-05-31T17:03:00.000","Title":"What is the best way to handle connections (e.g. to mysql server using MySQLdb) in python, needed by multiple nested functions?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to push user account data from an Active Directory to our MySQL-Server. This works flawlessly but somehow the strings end up showing an encoded version of umlauts and other special characters.\nThe Active Directory returns a string using this sample format: M\\xc3\\xbcller\nThis actually is the UTF-8 encoding for M\u00fcller, but I want to write M\u00fcller to my database not M\\xc3\\xbcller.\nI tried converting the string with this line, but it results in the same string in the database:\ntempEntry[1] = tempEntry[1].decode(\"utf-8\")\nIf I run print \"M\\xc3\\xbcller\".decode(\"utf-8\") in the python console the output is correct.\nIs there any way to insert this string the right way? 
I need this specific format for a web developer who wants to have this exact format; I don't know why he is not able to convert the string using PHP directly.\nAdditional info: I am using MySQLdb; The table and column encoding is utf8_general_ci","AnswerCount":8,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77413,"Q_Id":6202726,"Users Score":0,"Answer":"and db.set_character_set('utf8'), imply that \nuse_unicode=True ?","Q_Score":37,"Tags":"python,unicode,utf-8","A_Id":7720395,"CreationDate":"2011-06-01T14:23:00.000","Title":"Writing UTF-8 String to MySQL with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to be able to include python code snippets in Excel (ideally, in a nice format -- all colors\/formats should be kept the same).\nWhat would be the best way to go about it?\nEDIT: I just want to store python code in an Excel spreadsheet for an easy overview -- I am not going to run it -- just want it to be nicely visible\/formatted as part of an Excel worksheet.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":486,"Q_Id":6216278,"Users Score":1,"Answer":"While Excel itself does not support scripting languages other than VBA, the open source OpenOffice and LibreOffice packages - which include a spreadsheet - can be scripted with Python. Still, they won't allow Python code to be pasted into the cells out of the box - but it is possible to write Python code which can act on the spreadsheet contents (and do all the other things Python can do).","Q_Score":2,"Tags":"python,excel","A_Id":6220700,"CreationDate":"2011-06-02T14:58:00.000","Title":"Include Python Code In Excel?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database.\nThanks!","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":15473,"Q_Id":6222381,"Users Score":8,"Answer":"Well conceptually you can store a list as a bunch of rows in a table using a one-to-many relation, or you can focus on how to store a list in a particular database backend. For example postgres can store an array in a particular cell using the sqlalchemy.dialects.postgres.ARRAY data type which can serialize a python array into a postgres array column.","Q_Score":6,"Tags":"python,database,sqlalchemy,pyramid","A_Id":6224703,"CreationDate":"2011-06-03T02:22:00.000","Title":"The best way to store a python list to a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? 
I am using the Pyramid framework with SQLAlchemy to communicate to a database.\nThanks!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":15473,"Q_Id":6222381,"Users Score":0,"Answer":"sqlalchemy.types.PickleType can store list","Q_Score":6,"Tags":"python,database,sqlalchemy,pyramid","A_Id":40277177,"CreationDate":"2011-06-03T02:22:00.000","Title":"The best way to store a python list to a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database.\nThanks!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":15473,"Q_Id":6222381,"Users Score":0,"Answer":"Use string(Varchar).\nFrom Zen of Python: \"Simple is better than complex.\"","Q_Score":6,"Tags":"python,database,sqlalchemy,pyramid","A_Id":6224600,"CreationDate":"2011-06-03T02:22:00.000","Title":"The best way to store a python list to a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a small Python CGI script that captures the User-Agent, parses the OS, browser name and version, maps it to a database, and returns a device grade (integer). Since this is only one table, it's a pretty simple operation, but I will likely have substantial traffic (10,000+ hits a day, potentially scaling much higher in the near and far future).\nWhich noSQL database would you recommend for this sort of application? I would also like to build an admin interface which allows for manual input and is searchable. I'm fairly new to Python and completely new to noSQL, and I'm having trouble finding any good info or libraries. Any suggestions?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":776,"Q_Id":6230793,"Users Score":2,"Answer":"It depends on your use-case. Are you planning on caching the records temporarily or do you want the records to persist? If the former, Redis would be the best choice because of its speed. If the latter, it would be better to choose either CouchDB or MongoDB because they can handle large datasets.","Q_Score":2,"Tags":"python,nosql","A_Id":6231573,"CreationDate":"2011-06-03T17:53:00.000","Title":"Recommendations for a noSQL database for use with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"MongoDB performs really well compared to our hacking of MySQL in de-normalized way. After database migration, I realized that we might need some server-side procedures to invoke after\/before database manipulation. Some sorta 3-tier architecture. I am just wondering the possible and easy way to prototype it. Are there any light server-side hooks for mongodb, just like server-side hooks for svn, git? 
\nex, post-commit, pre-commit, ...","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1855,"Q_Id":6273573,"Users Score":0,"Answer":"FWIW, one of the messages in the web UI seems to imply that some hooks do exist (\"adding sharding hook to enable versioning and authentication to remote servers\"), but they might be only avilable within the compiled binaries, not to clients.","Q_Score":2,"Tags":"python,mongodb,hook,server-side,3-tier","A_Id":19877756,"CreationDate":"2011-06-08T02:26:00.000","Title":"What is suggested way to have server-side hooks over mongodb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"MongoDB performs really well compared to our hacking of MySQL in de-normalized way. After database migration, I realized that we might need some server-side procedures to invoke after\/before database manipulation. Some sorta 3-tier architecture. I am just wondering the possible and easy way to prototype it. Are there any light server-side hooks for mongodb, just like server-side hooks for svn, git? \nex, post-commit, pre-commit, ...","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1855,"Q_Id":6273573,"Users Score":2,"Answer":"No, there are no features currently available in MongoDB equivalent to hooks or triggers. It'd be best to handle this sort of thing from within your application logic.","Q_Score":2,"Tags":"python,mongodb,hook,server-side,3-tier","A_Id":6277024,"CreationDate":"2011-06-08T02:26:00.000","Title":"What is suggested way to have server-side hooks over mongodb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script with several functions that all need to make database calls. I'm trying to get better at writing clean code rather than just throwing together scripts with horrible style. What is generally considered the best way to establish a global database connection that can be accessed anywhere in the script but is not susceptible to errors such as accidentally redefining the variable holding a connection. I'd imagine I should be putting everything in a module? Any links to actual code would be very useful as well. Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":317,"Q_Id":6281732,"Users Score":0,"Answer":"Use a model system\/ORM system.","Q_Score":3,"Tags":"python,database,coding-style,mysql-python","A_Id":6282794,"CreationDate":"2011-06-08T15:58:00.000","Title":"Proper way to establish database connection in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a python code that uses Sqlite in a multi-threaded program. A remote host calls some xmlrpc functions and new threads are created. Each function which is running in a new thread, uses sqlite for either inserting data into or reading data from the database. \nMy problem is that when call the server more than 5 time at the same time, the server breaks with \"segmentation fault\". And the output doesn't provide any other information. 
Any idea what can cause the problem?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1212,"Q_Id":6289821,"Users Score":2,"Answer":"If you read the sqlite documentation (http:\/\/www.sqlite.org\/threadsafe.html), you'll see that it says:\n\nSQLite support three different\n threading modes:\nSingle-thread. In this mode, all\n mutexes are disabled and SQLite is\n unsafe to use in more than a single\n thread at once.\nMulti-thread. In this mode, SQLite can\n be safely used by multiple threads\n provided that no single database\n connection is used simultaneously in\n two or more threads.\nSerialized. In serialized mode, SQLite\n can be safely used by multiple threads\n with no restriction.\n\nSo it would be that you're either in single-thread mode, or in multi-thread mode and reusing connections. Reusing the connection is only safe in sequential mode (which is slow)\nNow, the Python documentation states that it should not allow you to share connections. Are you using the python-sqlite3 module, or are you natively interfacing with the database?","Q_Score":0,"Tags":"python,multithreading,sqlite","A_Id":6289986,"CreationDate":"2011-06-09T08:04:00.000","Title":"Segmentation Fault in Python multi-threaded Sqlite use!","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a python code that uses Sqlite in a multi-threaded program. A remote host calls some xmlrpc functions and new threads are created. Each function which is running in a new thread, uses sqlite for either inserting data into or reading data from the database. \nMy problem is that when call the server more than 5 time at the same time, the server breaks with \"segmentation fault\". And the output doesn't provide any other information. Any idea what can cause the problem?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1212,"Q_Id":6289821,"Users Score":1,"Answer":"My APSW module is threadsafe and you can use that. The standard Python SQLite cannot be safely used concurrently across multiple threads.","Q_Score":0,"Tags":"python,multithreading,sqlite","A_Id":6313973,"CreationDate":"2011-06-09T08:04:00.000","Title":"Segmentation Fault in Python multi-threaded Sqlite use!","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose that I have a huge SQLite file (say, 500[MB]) stored in Amazon S3. \nCan a python script that is run on a small EC2 instance directly access and modify that SQLite file? or must I first copy the file to the EC2 instance, change it there and then copy over to S3? \nWill the I\/O be efficient?\nHere's what I am trying to do. As I wrote, I have a 500[MB] SQLite file in S3. I'd like to start say 10 different Amazon EC2 instances that will each read a subset of the file and do some processing (every instance will handle a different subset of the 500[MB] SQLite file). 
Then, once processing is done, every instance will update only the subset of the data it dealt with (as explained, there will be no overlap of data among processes).\nFor example, suppose that the SQLite file has say 1M rows:\ninstance 1 will deal with (and update) rows 0 - 100000\ninstance 2 will will deal with (and update) rows 100001 - 200000\n.........................\ninstance 10 will deal with (and update) rows 900001 - 1000000\n\nIs it at all possible? Does it sound OK? any suggestions \/ ideas are welcome.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7758,"Q_Id":6301795,"Users Score":0,"Answer":"Amazon EFS can be shared among ec2 instances. It's a managed NFS share. SQLITE will still lock the whole DB on write.\nThe SQLITE Website does not recommend NFS shares, though. But depending on the application you can share the DB read-only among several ec2 instances and store the results of your processing somewhere else, then concatenate the results in the next step.","Q_Score":4,"Tags":"python,sqlite,amazon-s3,amazon-ec2","A_Id":38705012,"CreationDate":"2011-06-10T03:54:00.000","Title":"Amazon EC2 & S3 When using Python \/ SQLite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose that I have a huge SQLite file (say, 500[MB]) stored in Amazon S3. \nCan a python script that is run on a small EC2 instance directly access and modify that SQLite file? or must I first copy the file to the EC2 instance, change it there and then copy over to S3? \nWill the I\/O be efficient?\nHere's what I am trying to do. As I wrote, I have a 500[MB] SQLite file in S3. I'd like to start say 10 different Amazon EC2 instances that will each read a subset of the file and do some processing (every instance will handle a different subset of the 500[MB] SQLite file). Then, once processing is done, every instance will update only the subset of the data it dealt with (as explained, there will be no overlap of data among processes).\nFor example, suppose that the SQLite file has say 1M rows:\ninstance 1 will deal with (and update) rows 0 - 100000\ninstance 2 will will deal with (and update) rows 100001 - 200000\n.........................\ninstance 10 will deal with (and update) rows 900001 - 1000000\n\nIs it at all possible? Does it sound OK? any suggestions \/ ideas are welcome.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":7758,"Q_Id":6301795,"Users Score":2,"Answer":"Since S3 cannot be directly mounted, your best bet is to create an EBS volume containing the SQLite file and work directly with the EBS volume from another (controller) instance. You can then create snapshots of the volume, and archive it into S3. Using a tool like boto (Python API), you can automate the creation of snapshots and the process of moving the backups into S3.","Q_Score":4,"Tags":"python,sqlite,amazon-s3,amazon-ec2","A_Id":6301870,"CreationDate":"2011-06-10T03:54:00.000","Title":"Amazon EC2 & S3 When using Python \/ SQLite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose that I have a huge SQLite file (say, 500[MB]). 
Can 10 different python instances access this file at the same time and update different records of it?. Note, the emphasis here is on different records.\nFor example, suppose that the SQLite file has say 1M rows:\ninstance 1 will deal with (and update) rows 0 - 100000\ninstance 2 will will deal with (and update) rows 100001 - 200000\n.........................\ninstance 10 will deal with (and update) rows 900001 - 1000000\n\nMeaning, each python instance will only be updating a unique subset of the file. Will this work, or will I have serious integrity issues?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1234,"Q_Id":6301816,"Users Score":4,"Answer":"Updated, thanks to Andr\u00e9 Caron. \nYou can do that, but only read operations supports concurrency in SQLite, since entire database is locked on any write operation. SQLite engine will return SQLITE_BUSY status in this situation (if it exceeds default timeout for access). Also consider that this heavily depends on how good file locking is implemented for given OS and file system. In general I wouldn't recommend to use proposed solution, especially considering that DB file is quite large, but you can try.\nIt will be better to use server process based database (MySQL, PostgreSQL, etc.) to implement desired app behaviour.","Q_Score":2,"Tags":"python,sqlite,concurrency","A_Id":6301903,"CreationDate":"2011-06-10T03:58:00.000","Title":"SQLite Concurrency with Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've Collective Intelligence book, but I'm not sure how it can be apply in practical.\nLet say I have a PHP website with mySQL database. User can insert articles with title and content in the database. For the sake of simplicity, we just compare the title.\n\nHow to Make Coffee?\n15 Things About Coffee.\nThe Big Question.\nHow to Sharpen A Pencil?\nGuy Getting Hit in Balls\n\nWe open 'How to Make Coffee?' article and because there are similarity in words with the second and fourth title, they will be displayed in Related Article section.\nHow can I implement this using PHP and mySQL? It's ok if I have to use Python. Thanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21630,"Q_Id":6302184,"Users Score":0,"Answer":"This can be simply achieved by using wildcards in SQL queries. If you have larger texts and the wildcard seems to be unable to capture the middle part of text then check if the substring of one matches the other. I hope this helps.\nBTW, your question title asks about implementing recommendation system and the question description just asks about matching a field among database records. Recommendation system is a broad topic and comes with many interesting algorithms (e.g, Collaborative filtering, content-based method, matrix factorization, neural networks, etc.). 
Please feel free to explore these advanced topics if your project is to that scale.","Q_Score":7,"Tags":"php,python,mysql,recommendation-engine","A_Id":47667603,"CreationDate":"2011-06-10T05:05:00.000","Title":"How to Implement A Recommendation System?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using MySQLdb in Python.\nI have an update that may succeed or fail:\n\n UPDATE table\n SET reserved_by = PID\n state = \"unavailable\"\n WHERE state = \"available\"\n AND id = REQUESTED_ROW_ID\n LIMIT 1;\n\nAs you may be able to infer, multiple processes are using the database, and I need processes to be able to securely grab rows for themselves, without race conditions causing problems.\nMy theory (perhaps incorrect) is that only one process will be able to succeed with this query (.rowcount=1) -- the others will fail (.rowcount=0) or get a different row (.rowcount=1).\nThe problem is, it appears that everything that happens through MySQLdb happens in a virtual world -- .rowcount reads =1, but you can't really know whether anything really happened, until you perform a .commit().\nMy questions:\n\nIn MySQL, is a single UPDATE atomic within itself? That is, if the same UPDATE above (with different PID values, but the same REQUESTED_ROW_ID) were sent to the same MySQL server at \"once,\" am I guaranteed that one will succeed and the other will fail?\nIs there a way to know, after calling \"conn.commit()\", whether there was a meaningful change or not?\n** Can I get a reliable .rowcount for the actual commit operation?\nDoes the .commit operation send the actual query (SET's and WHERE conditions intact,) or does it just perform the SETs on affected rows, independent the WHERE clauses that inspired them?\nIs my problem solved neatly by .autocommit?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":576,"Q_Id":6337798,"Users Score":0,"Answer":"Turn autocommit on. \nThe commit operation just \"confirms\" updates already done. The alternative is rollback, which \"undoes\" any updates already made.","Q_Score":0,"Tags":"python,mysql,connect,mysql-python,rowcount","A_Id":6339210,"CreationDate":"2011-06-14T00:11:00.000","Title":"How do I get the actual cursor.rowcount upon .commit?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do the following:\n\nHave a software running written in Python 2.7\nThis software connects to a database (Currently a MySQL database)\nThis software listen for connections on a port X on TCP\nWhen a connection is established, a client x request or command something, then the software use the database to store, remove or fetch information (Based on the request or command).\n\nWhat I currently have in head is the classic approach of connecting to the database, store the connection to the database in an object (as a variable) that is passed in the threads that are spawned by the connection listener, then these threads use the variable in the object to do what they need to do with the database connection. 
(I know that multi-processing is better then multi-threading in Python, but it's not related to my question at this time)\nNow my question, how should I use SQLAlchemy in this context? I am quite confused even although I have been reading quite a lot of documentation about it and there doesn't seem to be \"good\" examples on how to handle this kind of situation specifically even although I have been searching quite a lot.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":6337812,"Users Score":1,"Answer":"What is the problem here? SQLAlchemy maintains a thread-local connection pool..what else do you need?","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":6338431,"CreationDate":"2011-06-14T00:14:00.000","Title":"How to use SQLAlchemy in this context","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've trying to make large changes to a number of excel workbooks(over 20). Each workbook contains about 16 separate sheets, and I want to write a script that will loop through each workbook and the sheets contains inside and write\/modify the cells that I need. I need to keep all string validation, macros, and formatting. All the workbooks are in 2007 format.\nI've already looked at python excel libaries and PHPexcel, but macros, buttons, formulas, string validation, and formatting and not kept when the new workbook is written. Is there an easy way to do this, or will I have to open up each workbook individually and commit the changes. I'm trying to avoid creating a macro in VBscript and having to open up each workbook separately to commit the changes I need.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3284,"Q_Id":6348011,"Users Score":0,"Answer":"You can also use the PyWin32 libraries to script this with Python using typical COM techniques. This lets you use Python to do your processing, and still save all of the extra parts of each workbook that other Python Excel libraries may not handle.","Q_Score":4,"Tags":"python,vba,scripting,excel","A_Id":6361909,"CreationDate":"2011-06-14T18:09:00.000","Title":"Scripting changes to multiple excel workbooks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to implement an audit trail for a reasonably complicated relational database, whose schema is prone to change. One avenue I'm thinking of is using a DVCS to track changes.\n(The benefits I can imagine are: schemaless history, snapshots of entire system's state, standard tools for analysis, playback and migration, efficient storage, separate system, keeping DB clean. The database is not write-heavy and history is not not a core feature, it's more for the sake of having an audit trail. Oh and I like trying crazy new approaches to problems.)\nI'm not an expert with these systems (I only have basic git familiarity), so I'm not sure how difficult it would be to implement. I'm thinking of taking mercurial's approach, but possibly storing the file contents\/manifests\/changesets in a key-value data store, not using actual files.\nData rows would be serialised to json, each \"file\" could be an row. 
Alternatively an entire table could be stored in a \"file\", with each row residing on the line number equal to its primary key (assuming the tables aren't too big, I'm expecting all to have less than 4000 or so rows. This might mean that the changesets could be automatically generated, without consulting the rest of the table \"file\".\n(But I doubt it, because I think we need a SHA-1 hash of the whole file. The files could perhaps be split up by a predictable number of lines, eg 0 < primary key < 1000 in file 1, 1000 < primary key < 2000 in file 2 etc, keeping them smallish)\nIs there anyone familiar with the internals of DVCS' or data structures in general who might be able to comment on an approach like this? How could it be made to work, and should it even be done at all?\nI guess there are two aspects to a system like this: 1) mapping SQL data to a DVCS system and 2) storing the DVCS data in a key\/value data store (not files) for efficiency.\n(NB the json serialisation bit is covered by my ORM)","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":386,"Q_Id":6380623,"Users Score":0,"Answer":"If the database is not write-heavy (as you say), why not just implement the actual database tables in a way that achieves your goal? For example, add a \"version\" column. Then never update or delete rows, except for this special column, which you can set to NULL to mean \"current,\" 1 to mean \"the oldest known\", and go up from there. When you want to update a row, set its version to the next higher one, and insert a new one with no version. Then when you query, just select rows with the empty version.","Q_Score":2,"Tags":"python,git,mercurial,rdbms,audit-trail","A_Id":6380661,"CreationDate":"2011-06-17T01:55:00.000","Title":"Using DVCS for an RDBMS audit trail","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to implement an audit trail for a reasonably complicated relational database, whose schema is prone to change. One avenue I'm thinking of is using a DVCS to track changes.\n(The benefits I can imagine are: schemaless history, snapshots of entire system's state, standard tools for analysis, playback and migration, efficient storage, separate system, keeping DB clean. The database is not write-heavy and history is not not a core feature, it's more for the sake of having an audit trail. Oh and I like trying crazy new approaches to problems.)\nI'm not an expert with these systems (I only have basic git familiarity), so I'm not sure how difficult it would be to implement. I'm thinking of taking mercurial's approach, but possibly storing the file contents\/manifests\/changesets in a key-value data store, not using actual files.\nData rows would be serialised to json, each \"file\" could be an row. Alternatively an entire table could be stored in a \"file\", with each row residing on the line number equal to its primary key (assuming the tables aren't too big, I'm expecting all to have less than 4000 or so rows. This might mean that the changesets could be automatically generated, without consulting the rest of the table \"file\".\n(But I doubt it, because I think we need a SHA-1 hash of the whole file. 
The files could perhaps be split up by a predictable number of lines, eg 0 < primary key < 1000 in file 1, 1000 < primary key < 2000 in file 2 etc, keeping them smallish)\nIs there anyone familiar with the internals of DVCS' or data structures in general who might be able to comment on an approach like this? How could it be made to work, and should it even be done at all?\nI guess there are two aspects to a system like this: 1) mapping SQL data to a DVCS system and 2) storing the DVCS data in a key\/value data store (not files) for efficiency.\n(NB the json serialisation bit is covered by my ORM)","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":386,"Q_Id":6380623,"Users Score":2,"Answer":"I've looked into this a little on my own, and here are some comments to share.\nAlthough I had thought using mercurial from python would make things easier, there's a lot of functionality that the DVCS's have that aren't necessary (esp branching, merging). I think it would be easier to simply steal some design decisions and implement a basic system for my needs. So, here's what I came up with.\nBlobs\nThe system makes a json representation of the record to be archived, and generates a SHA-1 hash of this (a \"node ID\" if you will). This hash represents the state of that record at a given point in time and is the same as git's \"blob\".\nChangesets\nChanges are grouped into changesets. A changeset takes note of some metadata (timestamp, committer, etc) and links to any parent changesets and the current \"tree\".\nTrees\nInstead of using Mercurial's \"Manifest\" approach, I've gone for git's \"tree\" structure. A tree is simply a list of blobs (model instances) or other trees. At the top level, each database table gets its own tree. The next level can then be all the records. If there are lots of records (there often are), they can be split up into subtrees.\nDoing this means that if you only change one record, you can leave the untouched trees alone. It also allows each record to have its own blob, which makes things much easier to manage.\nStorage\nI like Mercurial's revlog idea, because it allows you to minimise the data storage (storing only changesets) and at the same time keep retrieval quick (all changesets are in the same data structure). This is done on a per record basis.\nI think a system like MongoDB would be best for storing the data (It has to be key-value, and I think Redis is too focused on keeping everything in memory, which is not important for an archive). It would store changesets, trees and revlogs. A few extra keys for the current HEAD etc and the system is complete.\nBecause we're using trees, we probably don't need to explicitly link foreign keys to the exact \"blob\" it's referring to. Justing using the primary key should be enough. I hope!\nUse case: 1. Archiving a change\nAs soon as a change is made, the current state of the record is serialised to json and a hash is generated for its state. This is done for all other related changes and packaged into a changeset. When complete, the relevant revlogs are updated, new trees and subtrees are generated with the new object (\"blob\") hashes and the changeset is \"committed\" with meta information.\nUse case 2. Retrieving an old state\nAfter finding the relevant changeset (MongoDB search?), the tree is then traversed until we find the blob ID we're looking for. We go to the revlog and retrieve the record's state or generate it using the available snapshots and changesets. 
The user will then have to decide if the foreign keys need to be retrieved too, but doing that will be easy (using the same changeset we started with).\nSummary\nNone of these operations should be too expensive, and we have a space efficient description of all changes to a database. The archive is kept separately to the production database allowing it to do its thing and allowing changes to the database schema to take place over time.","Q_Score":2,"Tags":"python,git,mercurial,rdbms,audit-trail","A_Id":6396514,"CreationDate":"2011-06-17T01:55:00.000","Title":"Using DVCS for an RDBMS audit trail","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to figure out how to use python's mysqldb. I can do my job with my current knownledge, but I want to use the best practices.\nShould I close properly my cursor? Exiting the program isn't close it autmatically? (Shouldn't I expect the object destructor to do it anyway?)\nShould I create new cursors for every query, or one cursor is enough for multiple different queries in the same DB?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":742,"Q_Id":6453067,"Users Score":2,"Answer":"Should I close properly my cursor?\n\nYes, you should. Explicit is better than implicit.\n\nShould I create new cursors for every\n query, or one cursor is enough for\n multiple different queries in the same\n DB?\n\nThis depends on how you use this cursor. For simple tasks it is enough to use one cursor. For some complex application it is better to create separate cursor for each batch of SQL-queries.","Q_Score":2,"Tags":"python,cursor,mysql-python","A_Id":6453159,"CreationDate":"2011-06-23T11:08:00.000","Title":"How to properly use mysqldb in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can Flask \/ SQLAlchemy be configured to create a new database connection if one is not present?\nI have an infrequently visited Python \/ Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a \"MySQL server has gone away\" error. Subsequent page views are fine, but it looks unprofessional to have this initial error.\nI'd like to know the correct way to handle this - advice like \"make a really long time out\", which would be about 4 days long in this case, doesn't seem correct. 
How can I test for the lack of a database connection and create one if needed?","AnswerCount":7,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":33654,"Q_Id":6471549,"Users Score":6,"Answer":"The pessimistic approach as described by @wim \n\npool_pre_ping=True\n\ncan now be done for Flask-SQLAlchemy using a config var -->\n\nSQLALCHEMY_POOL_PRE_PING = True","Q_Score":64,"Tags":"python,mysql,sqlalchemy,flask,database-connection","A_Id":58821330,"CreationDate":"2011-06-24T17:34:00.000","Title":"Avoiding \"MySQL server has gone away\" on infrequently used Python \/ Flask server with SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can Flask \/ SQLAlchemy be configured to create a new database connection if one is not present?\nI have an infrequently visited Python \/ Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a \"MySQL server has gone away\" error. Subsequent page views are fine, but it looks unprofessional to have this initial error.\nI'd like to know the correct way to handle this - advice like \"make a really long time out\", which would be about 4 days long in this case, doesn't seem correct. How can I test for the lack of a database connection and create one if needed?","AnswerCount":7,"Available Count":2,"Score":0.057080742,"is_accepted":false,"ViewCount":33654,"Q_Id":6471549,"Users Score":2,"Answer":"When I encountered this error I was storing a LONGBLOB \/ LargeBinary image ~1MB in size. I had to adjust the max_allowed_packet config setting in MySQL. \nI used mysqld --max-allowed-packet=16M","Q_Score":64,"Tags":"python,mysql,sqlalchemy,flask,database-connection","A_Id":51015137,"CreationDate":"2011-06-24T17:34:00.000","Title":"Avoiding \"MySQL server has gone away\" on infrequently used Python \/ Flask server with SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to read incoming data from a barcode scanner using pyserial. Then I have to store the contents into a MySQL database. I have the database part but not the serial part. Can someone show me examples of how to do this? I'm using a Windows machine.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":916,"Q_Id":6471569,"Users Score":1,"Answer":"You will find it easier to use a USB scanner. These will decode the scan, and send it as if it were typed on the keyboard, and entered with a trailing return. \nThe barcode is typically written with leading and trailing * characters, but these are not sent with the scan. \nThus you print \"*AB123*\" using a 3 of 9 font, and when it is scanned sys.stdin.readline().strip() will return \"AB123\".\nThere are more than a few options that can be set in the scanner, so you need to read the manual. 
I have shown the factory default above for a cheap nameless scanner I bought from Amazon.","Q_Score":1,"Tags":"python,pyserial","A_Id":6474062,"CreationDate":"2011-06-24T17:35:00.000","Title":"Reading incoming data from barcode","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I couldn't find any information about this in the documentation, but how can I get a list of tables created in SQLAlchemy?\nI used the class method to create the tables.","AnswerCount":14,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":133023,"Q_Id":6473925,"Users Score":99,"Answer":"There is a method in engine object to fetch the list of tables name. engine.table_names()","Q_Score":133,"Tags":"python,mysql,sqlalchemy,pyramid","A_Id":30554677,"CreationDate":"2011-06-24T21:25:00.000","Title":"SQLAlchemy - Getting a list of tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm dealing with some big (tens of millions of records, around 10gb) database files using SQLite. I'm doint this python's standard interface.\nWhen I try to insert millions of records into the database, or create indices on some of the columns, my computer slowly runs out of memory. If I look at the normal system monitor, it looks like the majority of the system memory is free. However, when I use top, it looks like I have almost no system memory free. If I sort the processes by their memory consuption, then non of them uses more than a couple percent of my memory (including the python process that is running sqlite).\nWhere is all the memory going? Why do top and Ubuntu's system monitor disagree about how much system memory I have? Why does top tell me that I have very little memory free, and at the same time not show which process(es) is (are) using all the memory?\nI'm running Ubuntu 11.04, sqlite3, python 2.7.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1060,"Q_Id":6491856,"Users Score":0,"Answer":"The memory may be not assigned to a process, but it can be e.g. a file on tmpfs filesystem (\/dev\/shm, \/tmp sometimes). You should show us the output of top or free (please note those tools do not show a single 'memory usage' value) to let us tell something more about the memory usage.\nIn case of inserting records to a database it may be a temporary image created for the current transaction, before it is committed to the real database. Splitting the insertion into many separate transactions (if applicable) may help.\nI am just guessing, not enough data.\nP.S. It seems I mis-read the original question (I assumed the computer slows down) and there is no problem. sehe's answer is probably better.","Q_Score":2,"Tags":"python,sqlite,memory,ubuntu,memory-leaks","A_Id":6491966,"CreationDate":"2011-06-27T11:03:00.000","Title":"Why does running SQLite (through python) cause memory to \"unofficially\" fill up?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a project that is serial based and has to update a database when a barcode is being read. 
Which programming language has better tools for working with a MySQL database and Serial communication? I'm debating right now between python and realbasic.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1691,"Q_Id":6498272,"Users Score":3,"Answer":"It's hard to imagine that Realbasic is a better choice than Python for any project.","Q_Score":0,"Tags":"python,mysql,serial-port,realbasic","A_Id":6498450,"CreationDate":"2011-06-27T20:02:00.000","Title":"What language is better for serial programming and working with MySQL database? Python? Realbasic?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a project that is serial based and has to update a database when a barcode is being read. Which programming language has better tools for working with a MySQL database and Serial communication? I'm debating right now between python and realbasic.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1691,"Q_Id":6498272,"Users Score":3,"Answer":"Python is a general purpose language with tremendous community support and a \"batteries-included\" philosophy that leads to simple designs that focus on the business problem at hand. It is a good choice for a wide variety of projects.\nThe only reasons not to choose Python would be:\n\nYou (or your team) have greater experience in another general purpose language with good library and community support.\nYou have a particular problem that is handled best by a specialty language that was written with that sort of problem in mind.\n\nThe only thing I know about RealBASIC is that I hadn't heard of it until now, so it's a lock that it doesn't have quite the community of Python. (Exhibit A: 60,000 Python questions on SO, only 49 RealBASIC questions.) And if it is a derivative of BASIC, it would not be a specialty language.\nPython seems a clear choice here, unless it means learning a new language, and you are proficient with RealBASIC.","Q_Score":0,"Tags":"python,mysql,serial-port,realbasic","A_Id":6498607,"CreationDate":"2011-06-27T20:02:00.000","Title":"What language is better for serial programming and working with MySQL database? Python? Realbasic?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When building a website, one have to decide how to store the session info, when a user is logged in.\nWhat is a pros and cons of storing each session in its own file versus storing it in a database?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2180,"Q_Id":6510075,"Users Score":3,"Answer":"I generally wouldn't ever store this information in a file - you run the risk of potentially swapping this file in and out of memory (yes it could be cached at times) but I would rather use an in-memory mechanism designed as such and you are then using something that is fairly nonstandard.\nIn ASP.Net \n\nyou can use an in-memory collection that is good for use on a single server. If you need multiple load balanced web servers (web farm) and a user could go to any other server as they come in for each request, this option is not good. If the web process restarts, the sessions are lost. They can also timeout. 
\nYou can use a state server in asp.net for multiple server access - this runs outside of your webserver's process. If the web process restarts - you are OK and multiple servers access this. This traffic going to the state server is not encrypted and you would ideally use IPSEC policies to secure the traffic in a more secure environment.\nYou can use sql server to manage state (automatically) by setting up the web.config to use sql server as your session database. This gives the advantage of a high performance database and multi server access.\nYou can use your own sessions in a database if you need them to persist to a long time outside of the normal mechanism and want tighter control on the database fields (maybe you want to query specific fields)\n\nAlso just out of curiosity - maybe you are referring to sessions as user preferences? In that case research asp.net profiles","Q_Score":7,"Tags":"php,asp.net,python,ruby-on-rails,ruby","A_Id":6510307,"CreationDate":"2011-06-28T16:45:00.000","Title":"What is the pros\/cons of storing session data in file vs database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When building a website, one have to decide how to store the session info, when a user is logged in.\nWhat is a pros and cons of storing each session in its own file versus storing it in a database?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2180,"Q_Id":6510075,"Users Score":1,"Answer":"I'm guessing, based on your previous questions, that this is being asked in the context of using perl's CGI::Application module, with CGI::Application::Plugin::Session. If you use that module with the default settings, it will write the session data into files stored in the \/tmp directory - which is very similar to what PHP does. If your app is running in a shared hosting environment, you probably do NOT want to do this, for security reasons, since other users may be able to view\/modify data in \/tmp. You can fix this by writing the files into a directory that only you have permission to read\/write (i.e., not \/tmp). While developing, I much prefer to use YAML for serialization, rather than the default (storable), since it is human-readable. If you have your own web server, and you're able to run your database (mysql) server on the same machine, then storing the session data in a database instead of a file will usually yield higher performance - especially if you're able to maintain a persistent database connection (i.e. using mod_perl or fastcgi). BUT - if your database is on a remote host, and you have to open a new connection each time you need to update session data, then performance may actually be worse, and writing to a file may be better. Note that you can also use sqlite, which looks like a database to your app, but is really just a file on your local file system. Regardless of performance, the database option may be undesirable in shared-host environments because of bandwidth limitations, and other resource restrictions. 
The performance difference is also probably negligible for a low-traffic site (i.e., a few thousand hits per day).","Q_Score":7,"Tags":"php,asp.net,python,ruby-on-rails,ruby","A_Id":6527272,"CreationDate":"2011-06-28T16:45:00.000","Title":"What is the pros\/cons of storing session data in file vs database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pyramid project that uses mongodb for storage. Now I'm trying to write a test but how do I specify connection to the mongodb?\nMore specifically, which database should I connect to (test?) and how do I use fixtures? In Django it creates a temporary database but how does it work in pyramid?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":594,"Q_Id":6515160,"Users Score":2,"Answer":"Just create a database in your TestCase.setUp and delete in TestCase.tearDown\nYou need mongodb running because there is no mongolite3 like sqlite3 for sql\nI doubt that django is able to create a temporary file to store a mongodb database. It probably just use sqlite:\/\/\/ which create a database with a memory storage.","Q_Score":2,"Tags":"python,mongodb,pyramid","A_Id":6934811,"CreationDate":"2011-06-29T02:30:00.000","Title":"How do i create unittest in pyramid with mongodb?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row.\nIs there any way that I can return an entire column in a django query set?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":6539687,"Users Score":0,"Answer":"You can use cursor.fetchall() instead of cursor.fetchone() to retrieve all rows.\nAnd then extract nessesary field:\n\nraw_items = cursor.fetchall()\nitems = [ item.field for item in raw_items ]","Q_Score":0,"Tags":"python,django","A_Id":6539716,"CreationDate":"2011-06-30T18:54:00.000","Title":"How do I use django db API to save all the elements of a given column in a dictionary?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row.\nIs there any way that I can return an entire column in a django query set?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":223,"Q_Id":6539687,"Users Score":1,"Answer":"dict(MyModel.objects.values_list('id', 'my_column')) will return a dictionary with all elements of my_column with the row's id as the key. 
But probably you're just looking for a list of all the values, which you should receive via MyModel.objects.values_list('my_column', flat=True)!","Q_Score":0,"Tags":"python,django","A_Id":6539798,"CreationDate":"2011-06-30T18:54:00.000","Title":"How do I use django db API to save all the elements of a given column in a dictionary?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a photo gallery with an album model (just title and date and stuff) and a photo model with a foriegn key to the album and three imageFields in it (regular, mid and thumb).\nWhen a user delete an album i need to delete all the photos reletaed to the album (from server) then all the DB records that point to the album and then the album itself...\nCouldn't find anything about this and actualy found so many answers what one say the oposite from the other.\nCan any one please clarify this point, how does this is beeing done in the real world?\nThank you very much,\nErez","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":80,"Q_Id":6550003,"Users Score":0,"Answer":"Here is a possible answer for the question i figured out:\n\nGetting the list of albums in a string, in my case separated by commas \nYou need to import shutil, then:\n@login_required\ndef remove_albums(request):\n if request.is_ajax():\n if request.method == 'POST':\n #if the ajax call for delete what ok we get the list of albums to delete\n albums_list = request.REQUEST['albums_list'].rsplit(',')\n for album in albums_list:\n obj_album = Album.objects.get(id=int(album))\n #getting the directory for the images than need to be deleted\n dir_path = MEDIA_ROOT + '\/images\/galleries\/%d' % obj_album.id\n #deleting the DB record\n obj_album.delete()\n #testing if there is a folder (there might be a record with no folder if no file was uploaded - deleting the album before uploading images)\n try:\n #deleting the folder and all the files in it\n shutil.rmtree(dir_path)\n except OSError:\n pass\n return HttpResponse('')\n\nSorry for how the code look like, don't know why, I can't make it show correct...\nHave fun and good luck :-)","Q_Score":1,"Tags":"python,django,django-models,django-views","A_Id":6553381,"CreationDate":"2011-07-01T15:30:00.000","Title":"How to delete an object and all related objects with all imageFields insite them (photo gallery)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"(1) What's the fastest way to check if an item I'm about to \"insert\" into a MongoDB collection is unique (and if so not insert it)\n(2) For an existing database, what's the fastest way to look at all the entries and remove duplicates but keep one copy i.e. like a \"set\" function: {a,b,c,a,a,b} -> {a,b,c}\n\nI am aware that technically speaking each entry is unique, since they get a unique ObjectID\nYou may assume the entries are completely flat key:value lists\nSolutions with indexing are fine\nI prefer Python code (i.e. 
mongo python API) if possible\n\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":824,"Q_Id":6567511,"Users Score":2,"Answer":"(1) Create a unique index on the related columns and catch the error at insertion time","Q_Score":0,"Tags":"python,mongodb","A_Id":6567552,"CreationDate":"2011-07-04T05:11:00.000","Title":"Fastest Way to (1) not insert duplicate entry (2) consolidate duplicates in Mongo DB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Our site has two separate projects connected to the same database. This is implemented by importing the models from project1 into project2 and using it to access and manipulate data.\nThis works fine on our test server, but we are planning deployment and we decided we would rather have the projects on two separate machines, with the database on a third one.\nI have been looking around for ideas on how to import the model from a project on another machine but that doesn't seem to be possible. \nAn obvious solution would be to put the models in a separate app and have it on both boxes, but that means code is duplicated and changes have to be applied twice.\nI'm looking for suggestions on how to deal with this and am wondering if other people have encountered similar issues. We'll be deploying on AWS if that helps. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":253,"Q_Id":6572203,"Users Score":1,"Answer":"This isn't really a Django question. It is more of a Python question.\nHowever to answer your question Django is going to have to be able to import these files one way or another. If they are on separate machines you really should refactor the code out into its own app and then install this app on each of the machines.\nThe only other way I can think of to do this is to make your own import hook that can import a file from across a network but that is a really bad idea for a multitude of reasons.","Q_Score":2,"Tags":"python,django,deployment,architecture,amazon-web-services","A_Id":6574660,"CreationDate":"2011-07-04T13:31:00.000","Title":"separate django projects on different machines using a common database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing my first web site, and am dealing with user registration.\nOne common problem to me like to everyone else is to detect user already exist.\nI am writing the app with python, and postgres as database.\nI have currently come up with 2 ideas:\n1) \nlock(mutex)\nu = select from db where name = input_name\nif u == null insert into db (name) values (input_name)\nelse return 'user already exist'\nunlock(mutex)\n2)\ntry: insert into db (name) values(input)\nexcept: return 'user already exist'\nThe first way is to use mutex lock for clear logic, while the second way using exception to indicate user existence. \nCan anyone discuss what are the pros and cons of both of the methods?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":235,"Q_Id":6580723,"Users Score":2,"Answer":"I think both will work, and both are equally bad ideas. 
:) My point is that implementing user authentication in python\/pg has been done so many times in the past that there's hardly justification for writing it yourself. Have you had a look at Django, for example? It will take care of this for you, and much more, and let you focus your efforts on your particular application.","Q_Score":0,"Tags":"python,sql,database","A_Id":6580794,"CreationDate":"2011-07-05T09:47:00.000","Title":"Detect users already exist in database on user registration","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing my first web site, and am dealing with user registration.\nOne common problem to me like to everyone else is to detect user already exist.\nI am writing the app with python, and postgres as database.\nI have currently come up with 2 ideas:\n1) \nlock(mutex)\nu = select from db where name = input_name\nif u == null insert into db (name) values (input_name)\nelse return 'user already exist'\nunlock(mutex)\n2)\ntry: insert into db (name) values(input)\nexcept: return 'user already exist'\nThe first way is to use mutex lock for clear logic, while the second way using exception to indicate user existence. \nCan anyone discuss what are the pros and cons of both of the methods?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":235,"Q_Id":6580723,"Users Score":0,"Answer":"Slightly different, I usually do a select query via AJAX to determine if a username already exists, that way I can display a message on the UI explaining that the name is already taken and suggest another before the submit the registration form.","Q_Score":0,"Tags":"python,sql,database","A_Id":6580784,"CreationDate":"2011-07-05T09:47:00.000","Title":"Detect users already exist in database on user registration","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to use Spyder with pyodbc to connect mysql using a PyQT4 gui framework. \nI have pyodbc in Spyder figure out.\nHow do I use PyQt4 to get info into gui's? I'm looking to use gui on Fedora and winx64.\nEdit: I figured out the fedora driver. Can anyone help me with QMYSQL driver.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":842,"Q_Id":6582404,"Users Score":0,"Answer":"Have you considered using PyQt's built-in MySQL support? This could make it a bit easier to display DB info, depending on what you want the interface to look like.","Q_Score":0,"Tags":"python,pyqt4,spyder","A_Id":6644662,"CreationDate":"2011-07-05T12:07:00.000","Title":"How connect Spyder to mysql on Winx64 and Fedora?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to have a purely in-memory SQLite database in Django, and I think I have it working, except for an annoying problem:\nI need to run syncdb before using the database, which isn't too much of a problem. 
The problem is that it needs to create a superuser (in the auth_user table, I think) which requires interactive input.\nFor my purposes, I don't want this -- I just want to create it in memory, and I really don't care about the password because I'm the only user. :) I just want to hard-code a password somewhere, but I have no idea how to do this programmatically.\nAny ideas?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1200,"Q_Id":6599716,"Users Score":3,"Answer":"Disconnect django.contrib.auth.management.create_superuser from the post_syncdb signal, and instead connect your own function that creates and saves a new superuser User with the desired password.","Q_Score":1,"Tags":"python,database,django,sqlite,in-memory-database","A_Id":6600219,"CreationDate":"2011-07-06T16:20:00.000","Title":"Django In-Memory SQLite3 Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there an elegant way to do an INSERT ... ON DUPLICATE KEY UPDATE in SQLAlchemy? I mean something with a syntax similar to inserter.insert().execute(list_of_dictionaries) ?","AnswerCount":9,"Available Count":1,"Score":-0.022218565,"is_accepted":false,"ViewCount":60718,"Q_Id":6611563,"Users Score":-1,"Answer":"As none of these solutions seem all the elegant. A brute force way is to query to see if the row exists. If it does delete the row and then insert otherwise just insert. Obviously some overhead involved but it does not rely on modifying the raw sql and it works on non orm stuff.","Q_Score":44,"Tags":"python,mysql,sqlalchemy","A_Id":17374720,"CreationDate":"2011-07-07T13:43:00.000","Title":"SQLAlchemy ON DUPLICATE KEY UPDATE","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python loader using Andy McCurdy's python library that opens multiple Redis DB connections and sets millions of keys looping through files of lines each containing an integer that is the redis-db number for that record. Alltogether, only 20 databases are open at the present time, but eventually there may be as many as 100 or more.\nI notice that the redis log (set to verbose) always tells me there are \"4 clients connected (0 slaves),... though I know that my 20 are open and are being used.\nSo I'm guessing this is about the connection pooling support built into the python library. Am I correct in that guess? If so the real question is is there a way to increase the pool size -- I have plenty of machine resources, a lot dedicated to Redis? Would increasing the pool size help performance as the number of virtual connections I'm making goes up?\nAt this point, I am actually hitting only ONE connection at a time though I have many open as I shuffle input records among them. But eventually there will be many scripts (2 dozen?) hitting Redis in parallel, mostly reading and I am wondering what effect increasing the pool size would have.\nThanks\nmatthew","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1371,"Q_Id":6628953,"Users Score":1,"Answer":"So I'm guessing this is about the connection pooling support built into the python library. 
Am I correct in that guess?\n\nYes.\n\nIf so the real question is is there a way to increase the pool size\n\nNot needed, it will increase connections up to 2**31 per default (andys lib). So your connections are idle anyways.\nIf you want to increase performance, you will need to change the application using redis.\n\nand I am wondering what effect increasing the pool size would have.\n\nNone, at least not in this case.\nIF redis becomes the bottleneck at some point, and you have a multi-core server. You must run multiple redis instances to increase performance, as it only runs on a single core. When you run multiple instances, and doing mostly reads, the slave feature can increase performance as the slaves can be used for all the reads.","Q_Score":1,"Tags":"python,configuration,redis,connection-pooling","A_Id":6703919,"CreationDate":"2011-07-08T18:43:00.000","Title":"configuring connection-pool size with Andy McCurdy's python-for-redis library","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient. \nIs this even a plausible strategy?\nOr is this the way they do it:\nWhen a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine?\nIf so, how do you do the countdown clock with javascript? And how would I be able to delete\/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2127,"Q_Id":6639247,"Users Score":2,"Answer":"I don't think this question has anything to do with SQL, really--except that you might retrieve an expiration time from SQL. What you really care about is just how to display the timeout real-time in the browser, right?\nObviously the easiest way is just to send a \"seconds remaining\" counter to the page, either on the initial load, or as part of an AJAX request, then use Javascript to display the timer, and update it every second with the current value. I would opt for using a \"seconds remaining\" counter rather than an \"end datetime\", because you can't trust a browser's clock to be set correctly--but you probably can trust it to count down seconds correctly.\nIf you don't trust Javascript, or the client's clock, to be accurate, you could periodically re-send the current \"seconds remaining\" value to the browser via AJAX. I wouldn't do this every second, maybe every 15 or 60 seconds at most.\nAs for deleting\/moving data when the clock expires, you'll need to do all of that in Javascript.\nI'm not 100% sure I answered all of your questions, but your questions seem a bit scattered anyway. 
If you need more clarification on the theory of operation, please ask.","Q_Score":0,"Tags":"javascript,python,django,time,countdown","A_Id":6639561,"CreationDate":"2011-07-10T04:52:00.000","Title":"Live countdown clock with django and sql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient. \nIs this even a plausible strategy?\nOr is this the way they do it:\nWhen a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine?\nIf so, how do you do the countdown clock with javascript? And how would I be able to delete\/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2127,"Q_Id":6639247,"Users Score":0,"Answer":"I have also encountered the same problem a while ago.\nFirst of all your problem is not related neither django nor sql. It is a general concept and it is not very easy to implement because of overhead in server.\nOne solution come into my mind is keeping start time of the process in the database.\nWhen someone request you to see remaingn time, read it from database, subtract the current time and server that time and in your browser initialize your javascript function with that value and countdown like 15 sec. After that do the same operation with AJAX without waiting user's request.\nHowever, there would be other implementations depending your application. If you explain your application in detail there could be other solutions.\nFor example, if you implement a questionnaire with limited time, then for every answer submit, you should pass the calculated javascript value for that second.","Q_Score":0,"Tags":"javascript,python,django,time,countdown","A_Id":6639878,"CreationDate":"2011-07-10T04:52:00.000","Title":"Live countdown clock with django and sql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Django project which has a mysql database backend. How can I export contents from my db to an Excel (xls, xlsx) format?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1028,"Q_Id":6649990,"Users Score":0,"Answer":"phpMyAdmin has an Export tab, and you can export in CSV. This can be imported into Excel.","Q_Score":0,"Tags":"python,mysql,django,excel","A_Id":6650011,"CreationDate":"2011-07-11T12:20:00.000","Title":"MySQLdb to Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I try to connect to database in a domain from my virtual machine. 
\nIt works on XP, but somehow does not work on Win7 and quitting with:\n\"OperationalError: (1042, \"Can't get hostname for your address\")\"\nNow I tried disable Firewall and stuff, but that doesn't matter anyway.\nI don't need the DNS resolving, which will only slow everything down.\nSo I want to use the option \"skip-name-resolve\", but there is no my.ini\nor my.cnf when using MySQLdb for Python, so how can I still use this option?\nThanks for your help\n-Alex","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":60490,"Q_Id":6668073,"Users Score":1,"Answer":"This is an option which needs to be set in the MySQL configuration file on the server. It can't be set by client APIs such as MySQLdb. This is because of the potential security implications.\nThat is, I may want to deny access from a particular hostname. With skip-name-resolve enabled, this won't work. (Admittedly, access control via hostname is probably not the best idea anyway.)","Q_Score":12,"Tags":"python,mysql,mysql-python,resolve","A_Id":6668116,"CreationDate":"2011-07-12T17:00:00.000","Title":"How to use the option skip-name-resolve when using MySQLdb for Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using an object database (ZODB) in order to store complex relationships between many objects but am running into performance issues. As a result I started to construct indexes in order to speed up object retrieval and insertion. Here is my story and I hope that you can help.\nInitially when I would add an object to the database I would insert it in a branch dedicated to that object type. In order to prevent multiple objects representing the same entity I added a method that would iterate over existing objects in the branch in order to find duplicates. This worked at first but as the database grew in size the time it took to load each object into memory and check attributes grew exponentially and unacceptably.\nTo solve that issue I started to create indexes based on the attributes in the object so that when an object would be added it would be saved in the type branch as well as within an attribute value index branch. For example, say I was saving an person object with attributes firstName = 'John' and lastName = 'Smith', the object would be appended to the person object type branch and would also be appended to lists within the attribute index branch with keys 'John' and 'Smith'.\nThis saved a lot of time with duplicate checking since the new object could be analysed and only the set of objects which intersect within the attribute indexes would need to be checked.\nHowever, I quickly ran into another issue with regards to dealing when updating objects. The indexes would need to updated to reflect the fact that they may not be accurate any more. This requires either remembering old values so that they could be directly accessed and the object removed or iterating over all values of an attribute type in order to find then remove the object. Either way performance is quickly beginning to degrade again and I can't figure out a way to solve it.\nHas you had this kind of issue before? 
What did you do solve it, or is this just something that I have to deal with when using OODBMS's?\nThank in advance for the help.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":601,"Q_Id":6668234,"Users Score":8,"Answer":"Yes, repoze.catalog is nice, and well documented.\nIn short : don't make indexing part of your site structure!\n\nLook at using a container\/item hierarchy to store and traverse content item objects; plan to be able to traverse content by either (a) path (graph edges look like a filesystem) or (b) by identifying singleton containers at some distinct location. \nIdentify your content using either RFC 4122 UUIDs (uuid.UUID type) or 64-bit integers.\nUse a central catalog to index (e.g. repoze.catalog); the catalog should be at a known location relative to the root application object of your ZODB. And your catalog will likely index attributes of objects and return record-ids (usually integers) on query. Your job is to map those integer ids to (perhaps indrecting via UUIDs) to some physical traversal path in the database where you are storing content. It helps if you use zope.location and zope.container for common interfaces for traversal of your object graph from root\/application downward.\nUse zope.lifecycleevent handlers to index content and keep things fresh.\n\nThe problem -- generalized\nZODB is too flexible: it is just a persistent object graph with transactions, but this leaves room for you to sink or swim in your own data-structures and interfaces. \nThe solution -- generalized\nUsually, just picking pre-existing idioms from the community around the ZODB will work: zope.lifecycleevent handlers, \"containerish\" traversal using zope.container and zope.location, and something like repoze.catalog.\nMore particular\nOnly when you exhaust the generalized idioms and know why they won't work, try to build your own indexes using the various flavors of BTrees in ZODB. I actually do this more than I care to admit, but usually have good cause.\nIn all cases, keep your indexes (search, discovery) and site (traversal and storage) structure distinct.\nThe idioms for the problem domain\n\nMaster ZODB BTrees: you likely want:\n\nTo store content objects as subclasses of Persistent in containers that are subclasses of OOBTree providing container interfaces (see below).\nTo store BTrees for your catalog or global indexes or use packages like repoze.catalog and zope.index to abstract that detail away (hint: catalog solutions typically store indexes as OIBTrees that will yield integer record ids for search results; you then typically have some sort of document mapper utility that translates those record ids into something resolvable in your application like a uuid (provided you can traverse the graph to the UUID) or a path (the way the Zope2 catalog does).\n\nIMHO, don't bother working with intids and key-references and such (these are less idiomatic and more difficult if you don't need them). Just use a Catalog and DocumentMap from repoze.catalog to get results in integer to uuid or path form, and then figure out how to get your object. Note, you likely want some utility\/singleton that has the job of retrieving your object given an id or uuid returned from a search.\nUse zope.lifecycleevent or similar package that provides synchronous event callback (handler) registrations. 
These handlers are what you should call whenever an atomic edit is made on your object (likely once per transaction, but not in transaction machinery).\nLearn the Zope Component Architecture; not an absolute requirement, but surely helpful, even if just to understand zope.interface interfaces of upstream packages like zope.container \nUnderstanding of how Zope2 (ZCatalog) does this: a catalog fronts for multiple indexes or various sorts, which each search for a query, each have specialized data structures, and each return integer record id sequences. These are merged across indexes by the catalog doing set intersections and returned as a lazy-mapping of \"brain\" objects containing metadata stubs (each brain has a getObject() method to get the actual content object). Getting actual objects from a catalog search relies upon the Zope2 idiom of using paths from the root application object to identify the location of the item cataloged.","Q_Score":5,"Tags":"python,indexing,zodb,object-oriented-database","A_Id":6674416,"CreationDate":"2011-07-12T17:14:00.000","Title":"Method for indexing an object database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using an object database (ZODB) in order to store complex relationships between many objects but am running into performance issues. As a result I started to construct indexes in order to speed up object retrieval and insertion. Here is my story and I hope that you can help.\nInitially when I would add an object to the database I would insert it in a branch dedicated to that object type. In order to prevent multiple objects representing the same entity I added a method that would iterate over existing objects in the branch in order to find duplicates. This worked at first but as the database grew in size the time it took to load each object into memory and check attributes grew exponentially and unacceptably.\nTo solve that issue I started to create indexes based on the attributes in the object so that when an object would be added it would be saved in the type branch as well as within an attribute value index branch. For example, say I was saving an person object with attributes firstName = 'John' and lastName = 'Smith', the object would be appended to the person object type branch and would also be appended to lists within the attribute index branch with keys 'John' and 'Smith'.\nThis saved a lot of time with duplicate checking since the new object could be analysed and only the set of objects which intersect within the attribute indexes would need to be checked.\nHowever, I quickly ran into another issue with regards to dealing when updating objects. The indexes would need to updated to reflect the fact that they may not be accurate any more. This requires either remembering old values so that they could be directly accessed and the object removed or iterating over all values of an attribute type in order to find then remove the object. Either way performance is quickly beginning to degrade again and I can't figure out a way to solve it.\nHas you had this kind of issue before? 
What did you do solve it, or is this just something that I have to deal with when using OODBMS's?\nThank in advance for the help.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":601,"Q_Id":6668234,"Users Score":0,"Answer":"Think about using an attribute hash (something like Java's hashCode()), then use the 32-bit hash value as the key. Python has a hash function, but I am not real familiar with it.","Q_Score":5,"Tags":"python,indexing,zodb,object-oriented-database","A_Id":6668904,"CreationDate":"2011-07-12T17:14:00.000","Title":"Method for indexing an object database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking at the Flask tutorial, and it suggests to create a new database connection for each web request. Is it the right way to do things ? I always thought that the database connection should only once be created once for each thread. Can that be done, while maintaining the application as thread-safe, with flask, or other python web servers.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":11438,"Q_Id":6688413,"Users Score":0,"Answer":"In my experience, it's often a good idea to close connections frequently. In particular, MySQL likes to close connections that have been idle for a while, and sometimes that can leave the persistent connection in a stale state that can make the application unresponsive.\nWhat you really want to do is optimize the \"dead connection time\", the fraction of the time a connection is up but isn't doing any work. In the case of creating a new connection with every request, that dead time is really just the setup and teardown time. If only make a connection once (per thread), and it never goes bad, then the dead time is the idle time. \nWhen your application is serving only a few requests, the number of connections that occur will also be small, and so there's not much advantage of keeping a connection open, but idle. On the other extreme, when the application is very busy, connections are almost never idle, and closing a connection that will just be reopened immediately is also wasted. In the middle, when new requests sometimes follow in flight requests, but sometimes not, you'll have to do some performance tuning on things like pool size, request timeout, and so on.\nA very busy app, which uses a connection pool to keep connections open will only ever see one kind of dead time; waiting for requests that will never return because the connection has gone bad. A simple solution to this problem is to execute a known, good query (which in MySQL is spelled SELECT 1) before providing a connection from the pool to a request and recycle the connection if it doesn't return quickly.","Q_Score":21,"Tags":"python,mysql,flask","A_Id":6698054,"CreationDate":"2011-07-14T04:13:00.000","Title":"How to preserve database connection in a python web server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The datetime module does date validation and math which is fine when you care about reality.\nI need an object that holds dates generated even if they were invalid. 
Date time is way too strict as sometimes I know year only or year and month only and sometimes I have a date like 2011-02-30.\nIs there a module out there that is like datetime but that can handle invalid dates?\nIf not, what's the best way to handle this while duplicating as little functionality as possible and still allowing date math when it is possible to perform?\nUPDATE\nMotivation for this is integration with multiple systems that use dates and don't care about invalid dates (mysql and perl) in addition to wanting the ability to tell basic general ranges of time. For fuzzy date math start from the beginning of the known unit of time (if I know year and month but not day, use the first, if i know year but no month or day, use january first). This last bit is not necessary but would be nice and I get why it is not common as people who need special case date math will probably build it themselves.\nOne of the major issues I have is loading dates from mysql into python using sqlalchemy and mysqldb -- if you load a value from a date column im mysql that looks like '2011-01-00' in mysql, you get None in python. This is not cool by any stretch.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2334,"Q_Id":6697770,"Users Score":0,"Answer":"I haven't heard of such module out there and don't think there is one.\nI would probably end up storing two dates for every instance: 1. the original input as a string, which could contain anything, even \"N\/A\", just for showing back the original value, and 2. parsed and \"normalized\" datetime object which is the closest representation of the input. Depending on the purpose I would allow Null\/None objects where it really couldn't be estimated (like the mentioned \"N\/A\" case) or not. This solution will allow you to revert\/change the \"estimation\" as you do not lose any information.\nIf you don't care about it so much, SQLAlchemy allows declaring your own column and data types for transparently converting such values back and forth into a string column in the DB.","Q_Score":6,"Tags":"python,datetime","A_Id":12212950,"CreationDate":"2011-07-14T17:56:00.000","Title":"allowing invalid dates in python datetime","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know of a way of accessing MS Excel from Python? Specifically I am looking to create new sheets and fill them with data, including formulae. \nPreferably I would like to do this on Linux if possible, but can do it from in a VM if there is no other way.","AnswerCount":6,"Available Count":2,"Score":0.1651404129,"is_accepted":false,"ViewCount":37522,"Q_Id":6698229,"Users Score":5,"Answer":"Long time after the original question, but last answer pushed it top of feed again. Others might benefit from my experience using python and excel.\nI am using excel and python quite bit. Instead of using the xlrd, xlwt modules directly, I normally use pandas. I think pandas uses these modules as imports, but i find it much easier using the pandas provided framework to create and read the spreadsheets. Pandas's Dataframe structure is very \"spreadsheet-like\" and makes life a lot easier in my opinion.\nThe other option that I use (not in direct answer to your problem) is DataNitro. It allows you to use python directly within excel. 
Different use case, but you would use it where you would normally have to write VBA code in Excel.","Q_Score":22,"Tags":"python,excel","A_Id":21573501,"CreationDate":"2011-07-14T18:34:00.000","Title":"Excel Python API","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know of a way of accessing MS Excel from Python? Specifically I am looking to create new sheets and fill them with data, including formulae. \nPreferably I would like to do this on Linux if possible, but can do it from in a VM if there is no other way.","AnswerCount":6,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":37522,"Q_Id":6698229,"Users Score":3,"Answer":"It's surely possible through the Excel object model via COM: just use win32com modules for Python. Can't remember more but I once controlled the Media Player through COM from Python. It was piece of cake.","Q_Score":22,"Tags":"python,excel","A_Id":6698343,"CreationDate":"2011-07-14T18:34:00.000","Title":"Excel Python API","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e.CSV, SQL, etc.) en mass and so far I've been looking at PHP since it has a library devoted to Paradox data processing however while I'm fairly confident in how to write the conversion code (i.e. simply calling a few file open, close, and write functions) I'm concerned about error detection and ensuring that when running the script, I don't spend hours waiting for it to run only to see 16k corrupt files exported.\nAlso, I'm not fully sure about the logic loop for calling the files. I'm thinking of having the program generate a list of all the files with the appropriate extension and then looping through the list, however I'm not sure if that's ideal for a directory of this size.\nThis is being run on a local Windows 7 x64 system with XAMPP setup (the database is all internal use) so I'm not sure if pure PHP is the best idea -- so I've been wondering if Python or some other lightweight scripting language might be better for handling this.\nThanks very much in advance for any insights and assistance,","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":734,"Q_Id":6709833,"Users Score":0,"Answer":"If you intend to just convert the data which I guess is a process you do only once you will run the script locally as a command script. For that you don't need a web site and thus XAMPP. What language you take is secondary except you say that PHP has a library. Does python or others have one?\nAbout your concern of error detection why not test your script with only one file first. If that conversion is successful you can build your loop and test this on maybe five files, i.e. have a counter that ends the process after that number. It that is still okay you can go on with the rest. You can also write log data and dump a result for every 100 files processed. 
This way you can see if your script is doing something or idling.","Q_Score":1,"Tags":"php,python,mysql,xampp,php-gtk","A_Id":6711276,"CreationDate":"2011-07-15T15:56:00.000","Title":"Batch converting Corel Paradox 4.0 Tables to CSV\/SQL -- via PHP or other scripts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e.CSV, SQL, etc.) en mass and so far I've been looking at PHP since it has a library devoted to Paradox data processing however while I'm fairly confident in how to write the conversion code (i.e. simply calling a few file open, close, and write functions) I'm concerned about error detection and ensuring that when running the script, I don't spend hours waiting for it to run only to see 16k corrupt files exported.\nAlso, I'm not fully sure about the logic loop for calling the files. I'm thinking of having the program generate a list of all the files with the appropriate extension and then looping through the list, however I'm not sure if that's ideal for a directory of this size.\nThis is being run on a local Windows 7 x64 system with XAMPP setup (the database is all internal use) so I'm not sure if pure PHP is the best idea -- so I've been wondering if Python or some other lightweight scripting language might be better for handling this.\nThanks very much in advance for any insights and assistance,","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":734,"Q_Id":6709833,"Users Score":1,"Answer":"This is doubtless far too late to help you, but for posterity...\nIf one has a Corel Paradox working environment, one can just use it to ease the transition.\nWe moved the Corel Paradox 9 tables we had into an Oracle schema we built by connecting to the schema (using an alias such as SCHEMA001) then writing this Procedure in a script from inside Paradox:\nProc writeTable(targetTable String)\n errorTrapOnWarnings(Yes)\n try\n tc.open(targetTable)\n tc.copy(\":SCHEMA001:\" + targetTable) \n tc.close()\n onFail\n errorShow()\n endTry\nendProc\nOne could highly refine this with more Paradox programming, but you get the idea. One thing we discovered, though, is that Paradox uses double quotes for the column names when it creates the Oracle version, which means you can get lower-case letters in column names in Oracle, which is a pain. 
We corrected that by writing a quick Oracle query to upper() all the resulting column names.\nWe called the procedure like so:\nVar\n targetTable String\n tc TCursor\nendVar\nmethod run(var eventInfo Event)\n targetTable = \"SomeTableName\" \n writeTable(targetTable)\n msgInfo(\"TransferData.ssl--script finished\",\n \"That's all, folks!\" )\n return\nendMethod","Q_Score":1,"Tags":"php,python,mysql,xampp,php-gtk","A_Id":39728385,"CreationDate":"2011-07-15T15:56:00.000","Title":"Batch converting Corel Paradox 4.0 Tables to CSV\/SQL -- via PHP or other scripts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general.\nI need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":14347,"Q_Id":6713715,"Users Score":0,"Answer":"SQLAlchemy do sanitize for you if you will use regular queries. Maybe the problem that you use like clause. Like require addition escape for such symbols: _%. Thus you will need replace methods if you want to quote like expression.","Q_Score":9,"Tags":"python,mysql,sqlalchemy,escaping,filepath","A_Id":7404552,"CreationDate":"2011-07-15T22:12:00.000","Title":"Escaping special characters in filepaths using SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general.\nI need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.","AnswerCount":4,"Available Count":4,"Score":-0.1488850336,"is_accepted":false,"ViewCount":14347,"Q_Id":6713715,"Users Score":-3,"Answer":"Why do you need to escape the file paths? 
As far as you are not manually writing select \/ insert queries, SQLAlchemy will take care of the escaping when it generates the query internally.\nThe file paths can be inserted as they are into the database.","Q_Score":9,"Tags":"python,mysql,sqlalchemy,escaping,filepath","A_Id":6720094,"CreationDate":"2011-07-15T22:12:00.000","Title":"Escaping special characters in filepaths using SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general.\nI need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.","AnswerCount":4,"Available Count":4,"Score":-1.0,"is_accepted":false,"ViewCount":14347,"Q_Id":6713715,"Users Score":-4,"Answer":"You don't need do anything SQLAlchemy will do it for you.","Q_Score":9,"Tags":"python,mysql,sqlalchemy,escaping,filepath","A_Id":7435697,"CreationDate":"2011-07-15T22:12:00.000","Title":"Escaping special characters in filepaths using SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general.\nI need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.","AnswerCount":4,"Available Count":4,"Score":-0.1488850336,"is_accepted":false,"ViewCount":14347,"Q_Id":6713715,"Users Score":-3,"Answer":"As I know there isn\u2019t what you are looking for in SQLAlchemy. Just go basestring.replace() method by yourself.","Q_Score":9,"Tags":"python,mysql,sqlalchemy,escaping,filepath","A_Id":7479678,"CreationDate":"2011-07-15T22:12:00.000","Title":"Escaping special characters in filepaths using SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working in python and using xlwt. \nI have got a sample excel sheet and have to generate same excel sheet from python. Now the problem is heading columns are highlighted using some color from excel color palette and I am not able to find the name of color. 
I need to generate exact copy of sample given to me.\nIs there any function in xlwt which let me read color of cell of one sheet and then put that color in my sheet?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1582,"Q_Id":6723242,"Users Score":0,"Answer":"Best you read the colours from the sample given to you with xlrd. \nIf there are only a few different colours and they stay the same over time, you can also open the file in Excel and use a colour picker tool to get the RGB values of the relevant cells.","Q_Score":3,"Tags":"python,excel,colors,cell,xlwt","A_Id":15435059,"CreationDate":"2011-07-17T10:29:00.000","Title":"Reading background color of a cell of an excel sheet from python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: \"Error loading MySQLdb module: No module named MySQLdb\"\nI thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together? Is this not true? \nI don't want to have to deal with installing mysql-python components as that can be frustrating to get working alone as I have tried before.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1094,"Q_Id":6738310,"Users Score":2,"Answer":"You'll need to install MySQL for python as Django needs this to do the connecting, once you have the package installed you shouldn't need to configure it though as Django just needs to import from it. \nEdit: from your comments there is a setuptools bundled but it has been replaced by the package distribute, install this python package and you should have access to easy_install which makes it really easy to get new packages. Assuming you've added PYTHONPATH\/scripts to your environment variables, you can call easy_install mysql_python","Q_Score":0,"Tags":"python,mysql,django,mysql-python,bitnami","A_Id":6738365,"CreationDate":"2011-07-18T19:29:00.000","Title":"Mysql-python not installed with bitnami django stack? \"Error loading MySQLdb module: No module named MySQLdb\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: \"Error loading MySQLdb module: No module named MySQLdb\"\nI thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together? Is this not true? \nI don't want to have to deal with installing mysql-python components as that can be frustrating to get working alone as I have tried before.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1094,"Q_Id":6738310,"Users Score":0,"Answer":"BitNami DjangoStack already includes the mysql-python components components. I guess you selected MySQL as the database when installing the BitNami Stack, right? (it also includes PostgreSQL and SQLite). Do you receive the error at installation time? Or later working with your Django project? 
\nIn which platform are you using the BitNami DjangoStack?","Q_Score":0,"Tags":"python,mysql,django,mysql-python,bitnami","A_Id":6981742,"CreationDate":"2011-07-18T19:29:00.000","Title":"Mysql-python not installed with bitnami django stack? \"Error loading MySQLdb module: No module named MySQLdb\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: \"Error loading MySQLdb module: No module named MySQLdb\"\nI thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together? Is this not true? \nI don't want to have to deal with installing mysql-python components as that can be frustrating to get working alone as I have tried before.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1094,"Q_Id":6738310,"Users Score":0,"Answer":"So I got this error after installing Bitnami Django stack on Windows Vista. Turns out that I had all components installed, but easy_install mysql_python didn't unwrap the entire package... ?\nI inst... uninst... inst... uninst multiple times, but no combination (using mysql for the startup Project) made any difference.\nIn the end, I simply renamed the egg file (in this case MySQL_python-1.2.3-py2.7-win32.egg) file to .zip and extracted the missing parts into a directory on my PYTHONPATH and everything worked like a charm.","Q_Score":0,"Tags":"python,mysql,django,mysql-python,bitnami","A_Id":12083825,"CreationDate":"2011-07-18T19:29:00.000","Title":"Mysql-python not installed with bitnami django stack? \"Error loading MySQLdb module: No module named MySQLdb\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am developing a Django app being a Web frontend to some Oracle database with another local DB keeping app's data such as Guardian permissions. The problem is that it can be modified from different places that I don't have control of.\nLet's say we have 3 models: User, Thesis and UserThesis.\nUserThesis - a table specifying relationship between Thesis and User (User being co-author of Thesis)\nScenario:\n\nUser is removed as an author of Thesis by removing entry in UserThesis table by some other app.\nUser tries to modify Thesis using our Django app. And he succeeds, because Guardian and Django do not know about change in UserThesis.\n\nI thought about some solutions:\n\nHaving some cron job look for changes in UserThesis by checking the modification date of entry. Easy to check for additions, removals would require looking on all relationships again.\nModifying Oracle DB schema to add Guardian DB tables and creating triggers on UserThesis table. I wouldn't like to do this, because of Oracle DB being shared among number of different apps.\nManually checking for relationship in views and templates (heavier load on Oracle).\n\nWhich one is the best? Any other ideas?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":159,"Q_Id":6775359,"Users Score":0,"Answer":"I decided to go with manually checking the permissions, caching it whenever I can. 
I ended up with get_perms_from_cache(self, user) model method which helps me a lot.","Q_Score":1,"Tags":"python,django,database-permissions,django-permissions","A_Id":7011483,"CreationDate":"2011-07-21T11:34:00.000","Title":"Django-guardian on DB with shared (non-exclusive) access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting files outside of the respective applications. Setting files could be created, altered or deleted without the django application being restarted.\nFor every request to the django application, there will be a http header or a url parameter which can be used to deduce which setting file to look at to determine which database credentials to use.\nMy first thought is to use a custom django middleware that would parse the settings files (possibly with caching) and create a new connection object on each request, patching it into django.db before any ORM activity.\nIs there a more graceful method to handle this situation? Are there any thread safety issues I should consider with the middleware approach?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1027,"Q_Id":6780827,"Users Score":2,"Answer":"rereading the file is a heavy penalty to pay when it's unlikely that the file has changed.\nMy usual approach is to use INotify to watch for configuration file changes, rather than trying to read a file on every request. Additionally, I tend to keep a \"current\" configuration, parsed from the file, and only replace it with a new value once i've finished parsing the config file and i'm certain it's valid. You could resolve some of your concerns about thread safety by setting the current configuration on each incoming request, so that the configuration can't change mid-way through a request.","Q_Score":1,"Tags":"python,django","A_Id":6782234,"CreationDate":"2011-07-21T18:26:00.000","Title":"Dynamic per-request database connections in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting files outside of the respective applications. 
Setting files could be created, altered or deleted without the django application being restarted.\nFor every request to the django application, there will be a http header or a url parameter which can be used to deduce which setting file to look at to determine which database credentials to use.\nMy first thought is to use a custom django middleware that would parse the settings files (possibly with caching) and create a new connection object on each request, patching it into django.db before any ORM activity.\nIs there a more graceful method to handle this situation? Are there any thread safety issues I should consider with the middleware approach?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1027,"Q_Id":6780827,"Users Score":0,"Answer":"You could start different instances with different settings.py files (by setting different DJANGO_SETTINGS_MODULE) on different ports, and redirect the requests to the specific apps. Just my 2 cents.","Q_Score":1,"Tags":"python,django","A_Id":6780942,"CreationDate":"2011-07-21T18:26:00.000","Title":"Dynamic per-request database connections in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use Windows 7 64 bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7.\nWhen I importing cx_Oracle module I get this error:\nTraceback (most recent call last):\n File \"C:\\Osebno\\test.py\", line 1, in \n import cx_oracle\nImportError: No module named cx_oracle\nCan someone please tell me what is wrong?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":17773,"Q_Id":6788937,"Users Score":0,"Answer":"It's not finding the module.\nThings to investigate: Do you have several python installations? Did it go to the right one? Do a global search for cx_oracle and see if it's in the correct place. Check your PYTHONPATH variable. Check Python's registry values HKLM\\Software\\Python\\Pyhoncore. Are they correct?","Q_Score":3,"Tags":"python,windows-7,oracle10g","A_Id":6788993,"CreationDate":"2011-07-22T10:51:00.000","Title":"Error when importing cx_Oracle module [Python]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use Windows 7 64 bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7.\nWhen I importing cx_Oracle module I get this error:\nTraceback (most recent call last):\n File \"C:\\Osebno\\test.py\", line 1, in \n import cx_oracle\nImportError: No module named cx_oracle\nCan someone please tell me what is wrong?","AnswerCount":5,"Available Count":3,"Score":0.1586485043,"is_accepted":false,"ViewCount":17773,"Q_Id":6788937,"Users Score":4,"Answer":"Have you tried import cx_Oracle (upper-case O) instead of import cx_oracle?","Q_Score":3,"Tags":"python,windows-7,oracle10g","A_Id":6789312,"CreationDate":"2011-07-22T10:51:00.000","Title":"Error when importing cx_Oracle module [Python]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use Windows 7 64 bit and Oracle 10g. 
I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7.\nWhen I importing cx_Oracle module I get this error:\nTraceback (most recent call last):\n File \"C:\\Osebno\\test.py\", line 1, in \n import cx_oracle\nImportError: No module named cx_oracle\nCan someone please tell me what is wrong?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":17773,"Q_Id":6788937,"Users Score":1,"Answer":"after installing the cx_Oracle download the instant client form oracle owth all DLLs , then copy then in the same directory of cx_Oracle.pyd , it will work directly\ntried and worked for me.","Q_Score":3,"Tags":"python,windows-7,oracle10g","A_Id":16885226,"CreationDate":"2011-07-22T10:51:00.000","Title":"Error when importing cx_Oracle module [Python]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The type of a field in a collection in my mongodb database is unicode string. This field currently does not have any data associated with it in any of the documents in the collection.\nI dont want the type to be string because,i want to add subfields to it from my python code using pymongo.\nThe collection already has many records in it.So, is it possible to change the type of the field to something like a dictionary in python for all the documents in the collection ?\nPlease Help\nThank You","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1765,"Q_Id":6789562,"Users Score":1,"Answer":"Sure, simply create a script that iterates over your current collection, reads the existing value and overwrite it with the new value (an embedded document in your case). You change the typ of the field by simply setting a new value for that field. E.g. setting a string field to an integer field :\ndb.test.update({field:\"string\"}, {$set:{field:23}})","Q_Score":2,"Tags":"python,mongodb,pymongo","A_Id":6789704,"CreationDate":"2011-07-22T11:50:00.000","Title":"Changing the type of a field in a collection in a mongodb database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a rather complex Excel 2010 file that I automate using python and win32com. For this I run windows in virtual box on an ubuntu machine.\nHowever, that same excel file solves\/runs fine on Ubuntu Maverick directly using wine 1.3. Any hope of automating Excel on wine so I can drop the VM?\nOr is that just crazy talk (which I suspect).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2617,"Q_Id":6847684,"Users Score":3,"Answer":"You'd need a Windows version of Python, not a Linux version -- I'm saying you'd have to run Python under wine as well.\nHave you tried with just a normal Windows install of Python on wine? 
I don't see any reason why this wouldn't work.\nThere is are numerous pages in a Google search that show Windows Python (32-bit) working fine.","Q_Score":2,"Tags":"python,linux,excel,win32com,wine","A_Id":6847960,"CreationDate":"2011-07-27T16:10:00.000","Title":"automating excel with win32com on linux with wine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have lots of data to operate on (write, sort, read). This data can potentially be larger than the main memory and doesn't need to be stored permanently.\nIs there any kind of library\/database that can store these data for me in memory and that does have and automagically fallback to disk if system runs in a OOM situation? The API and storage type is unimportant as long as it can store basic Python types (str, int, list, date and ideally dict).","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":105,"Q_Id":6942105,"Users Score":0,"Answer":"I will go for the in memory solution and let the OS swap. I can still replace the storage component if this will be really a problem. Thanks agf.","Q_Score":0,"Tags":"python","A_Id":7056331,"CreationDate":"2011-08-04T13:16:00.000","Title":"In memory database with fallback to disk on OOM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database and csv file that gets updated once a day. I managed to updated my table1 from this file by creating a separate log file with the record of the last insert.\nNo, I have to create a new table table2 where I keep calculations from the table1. \nMy issue is that those calculations are based on 10, 20 and 90 previous rows from table1. \nThe question is - how can I efficiently update table2 from the data of the table1 on a daily basis? I don't want to re-do the calculations everyday from the beginning of the table since it will be very time consuming for me.\nThanks for your help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":778,"Q_Id":6945953,"Users Score":0,"Answer":"The answer is \"as well as one could possibly expect.\"\nWithout seeing your tables, data, and queries, and the stats of your machine it is hard to be too specific. However in general updates basically doing three steps. This is a bit of an oversimplification but it allows you to estimate performance.\nFirst it selects the data necessary. Then it marks the rows that were updated as deleted, then it inserts new rows with the new data into the table. In general, your limit is usually the data selection. As long as you can efficiently run the SELECT query to get the data you want, update should perform relatively well.","Q_Score":0,"Tags":"python,postgresql","A_Id":15492291,"CreationDate":"2011-08-04T17:33:00.000","Title":"Table updates using daily data from other tables Postgres\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that sqlite3 isn\u00b4t really capable nor reliable when i use it inside a multiprocessing enviroment. 
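A pattern that tends to sidestep this entirely, and roughly what the question below is hinting at, is to let the worker processes only fetch and have a single process do every sqlite3 write; a rough sketch, with fetch_data standing in for the real web call:

    import sqlite3
    from multiprocessing import Pool

    def fetch_data(url):
        # placeholder for the real web call; returns a (url, payload) tuple
        return (url, "payload for %s" % url)

    if __name__ == "__main__":
        urls = ["http://example.com/%d" % i for i in range(100)]

        pool = Pool(processes=8)
        rows = pool.map(fetch_data, urls)   # the workers never touch the database
        pool.close()
        pool.join()

        conn = sqlite3.connect("results.db")   # only the parent process writes
        conn.execute("CREATE TABLE IF NOT EXISTS results (url TEXT, payload TEXT)")
        conn.executemany("INSERT INTO results VALUES (?, ?)", rows)
        conn.commit()
        conn.close()
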
Each process tries to write some data into the same database, so that a connection is used by multiple threads. I tried it with the check_same_thread=False option, but the number of insertions is pretty random: Sometimes it includes everything, sometimes not. Should I parallel-process only parts of the function (fetching data from the web), stack their outputs into a list and put them into the table all together or is there a reliable way to handle multi-connections with sqlite?","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":20258,"Q_Id":6969820,"Users Score":8,"Answer":"I've actually just been working on something very similar: \n\nmultiple processes (for me a processing pool of 4 to 32 workers)\neach process worker does some stuff that includes getting information\nfrom the web (a call to the Alchemy API for mine)\neach process opens its own sqlite3 connection, all to a single file, and each\nprocess adds one entry before getting the next task off the stack\n\nAt first I thought I was seeing the same issue as you, then I traced it to overlapping and conflicting issues with retrieving the information from the web. Since I was right there I did some torture testing on sqlite and multiprocessing and found I could run MANY process workers, all connecting and adding to the same sqlite file without coordination and it was rock solid when I was just putting in test data.\nSo now I'm looking at your phrase \"(fetching data from the web)\" - perhaps you could try replacing that data fetching with some dummy data to ensure that it is really the sqlite3 connection causing you problems. At least in my tested case (running right now in another window) I found that multiple processes were able to all add through their own connection without issues but your description exactly matches the problem I'm having when two processes step on each other while going for the web API (very odd error actually) and sometimes don't get the expected data, which of course leaves an empty slot in the database. My eventual solution was to detect this failure within each worker and retry the web API call when it happened (could have been more elegant, but this was for a personal hack).\nMy apologies if this doesn't apply to your case, without code it's hard to know what you're facing, but the description makes me wonder if you might widen your considerations.","Q_Score":8,"Tags":"python,sqlite,multiprocessing","A_Id":12809817,"CreationDate":"2011-08-07T00:05:00.000","Title":"SQLite3 and Multiprocessing","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm in the process of setting up a webserver from scratch, mainly for writing webapps with Python. On looking at alternatives to Apache+mod_wsgi, it appears that pypy plays very nicely indeed with pretty much everything I intend to use for my own apps. Not really having had a chance to play with PyPy properly, I feel this is a great opportunity to get to use it, since I don't need the server to be bulletproof. \nHowever, there are some PHP apps that I would like to run on the webserver for administrative purposes (PHPPgAdmin, for example). Is there an elegant solution that allows me to use PyPy within a PHP-compatible webserver like Apache? 
Or am I going to have to run CherryPy\/Paste or one of the other WSGI servers, with Apache and mod_wsgi on a separate port to provide administrative services?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":815,"Q_Id":6976578,"Users Score":-1,"Answer":"I know that mod_wsgi doesn't work with mod_php\nI heavily advise you, running PHP and Python applications on CGI level. \nPHP 5.x runs on CGI, for python there exists flup, that makes it possible to run WSGI Applications on CGI. \nTamer","Q_Score":5,"Tags":"php,python,apache,wsgi,pypy","A_Id":8920308,"CreationDate":"2011-08-07T23:41:00.000","Title":"PyPy + PHP on a single webserver","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a Python script which uses a MySQL database, which is locally hosted. The program will be delivered as source code. As a result, the MySQL password will be visible to bare eyes. Is there a good way to protect this? \nThe idea is to prevent some naughty people from looking at the source code, gaining direct access to MySQL, and doing something ... well, naughty.","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":14190,"Q_Id":6981064,"Users Score":16,"Answer":"Short answer\nYou can't.\nIf the password is stored in the artifact that's shipped to the end-user you must consider it compromised! Even if the artifact is a compiled binary, there are always (more or less complicated) ways to get at the password.\nThe only way to protect your resources is by exposing only a limited API to the end-user. Either build a programmatic API (REST, WS+SOAP, RMI, JavaEE+Servlets, ...) or only expose certain functionalities in your DB via SPROCs (see below).\nSome things first...\nThe question here should not be how to hide the password, but how to secure the database. Remember that passwords only are often a very weak protection and should not be considered the sole mechanism of protecting the DB. Are you using SSL? No? Well, then even if you manage to hide the password in the application code, it's still easy to sniff it on the network!\nYou have multiple options. All with varying degrees of security:\n\"Application Role\"\nCreate one database-user for the application. Apply authorization for this role. A very common setup is to only allow CRUD ops.\nPros\n\nvery easy to set-up\nPrevents DROP queries (f.ex. in SQL injections?)\n\nCons\n\nEverybody seeing the password has access to all the data in the database. Even if that data is normally hidden in the application.\nIf the password is compromised, the user can run UPDATE and DELETE queries without criteria (i.e.: delete\/update a whole table at once).\n\nAtomic auth&auth\nCreate one database user per application-\/end-user. This allows you to define atomic access rights even on a per-column basis. For example: User X can only select columns far and baz from table foo. And nothing else. But user Y can SELECT everything, but no updates, while user Z has full CRUD (select, insert, update, delete) access.\nSome databases allow you to reuse the OS-level credentials. This makes authentication to the user transparent (only needs to log-in to the workstation, that identity is then forwarded to the DB). 
This works easiest in a full MS-stack (OS=Windows, Auth=ActiveDirectory, DB=MSSQL) but is - as far as I am aware - also possible to achieve in other DBs.\nPros\n\nFairly easy to set up.\nVery atomic authorization scheme\n\nCons\n\nCan be tedious to set up all the access rights in the DB.\nUsers with UPDATE and DELETE rights can still accidentally (or intentionally?) delete\/update without criteria. You risk losing all the data in a table.\n\nStored Procedures with atomic auth&auth\nWrite no SQL queries in your application. Run everything through SPROCs. Then create db-accounts for each user and assign privileges to the SPROCs only.\nPros\n\nMost effective protection mechanism.\nSPROCs can force users to pass criteria to every query (including DELETE and UPDATE)\n\nCons\n\nnot sure if this works with MySQL (my knowledge in that area is flaky).\ncomplex development cycle: Everything you want to do, must first be defined in a SPROC.\n\nFinal thoughts\nYou should never allow database administrative tasks to the application. Most of the time, the only operations an application needs are SELECT, INSERT, DELETE and UPDATE. If you follow this guideline, there is hardly a risk involved by users discovering the password. Except the points mentioned above.\nIn any case, keep backups. I assume you want to project you database against accidental deletes or updates. But accidents happen... keep that in mind ;)","Q_Score":17,"Tags":"python,mysql","A_Id":6981725,"CreationDate":"2011-08-08T10:52:00.000","Title":"Safeguarding MySQL password when developing in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a Python script which uses a MySQL database, which is locally hosted. The program will be delivered as source code. As a result, the MySQL password will be visible to bare eyes. Is there a good way to protect this? \nThe idea is to prevent some naughty people from looking at the source code, gaining direct access to MySQL, and doing something ... well, naughty.","AnswerCount":4,"Available Count":2,"Score":-1.0,"is_accepted":false,"ViewCount":14190,"Q_Id":6981064,"Users Score":-5,"Answer":"Either use simple passwor like root.Else Don't use password.","Q_Score":17,"Tags":"python,mysql","A_Id":6981128,"CreationDate":"2011-08-08T10:52:00.000","Title":"Safeguarding MySQL password when developing in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find a solution for a problem I am working on. I have a python program which is is using a custom built sqlite3 install (which allows > 10 simultaneous connections) and in addition requires the use of Tix (which does not come as a stand install with the python package for the group I am distributing to.) \nI want to know if there is a way to specify to distutils to use this certain sqlite3 build and include this third party install of Tix, such that I can distribute the file as an rpm and not require the end user to install Tix or modify their sqlite3 install... 
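One way to picture the bundling approach that gets suggested for this is a setup.py along these lines (all package and file names here are invented), shipping the extra pieces as package data inside the application's own package so the resulting RPM carries everything along:

    # setup.py -- minimal sketch; layout and names are hypothetical
    from distutils.core import setup

    setup(
        name="myapp",
        version="1.0",
        packages=["myapp", "myapp.bundled"],
        package_data={
            "myapp.bundled": [
                "sqlite3/*.so",      # the custom sqlite3 build
                "tix/*.tcl",         # the Tix library files
                "tix/*.so",
            ],
        },
    )

The code inside the package would then load these bundled copies instead of whatever the system provides.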
\nAny help is greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":6986925,"Users Score":3,"Answer":"One possible solution: Create a custom package for that program containing the custom sqlite3\/etc. stuff and use relative imports to refer to those custom subpackages from a main module in your package, which you'd hook into with a simple importing script that would execute a your_package.run() function or something. You'd then use distutils to install your package in site-packages or whatever.","Q_Score":2,"Tags":"python,distutils","A_Id":6988101,"CreationDate":"2011-08-08T18:40:00.000","Title":"Packaging a Python Program with custom built libraries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to debug queries as they are executed and I was wondering if there is a way to have MySQLdb print out the actual query that it runs, after it has finished inserting the parameters and all that? From the documentation, it seems as if there is supposed to be a Cursor.info() call that will give information about the last query run, but this does not exist on my version (1.2.2).\nThis seems like an obvious question, but for all my searching I haven't been able to find the answer. Thanks in advance.","AnswerCount":10,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":67798,"Q_Id":7071166,"Users Score":126,"Answer":"We found an attribute on the cursor object called\u00a0cursor._last_executed that holds the last query string to run even when an exception occurs. This was easier and better for us in production than using profiling all the time or MySQL query logging as both of those have a performance impact and involve more code or more correlating separate log files, etc.\nHate to answer my own question but this is working better for us.","Q_Score":82,"Tags":"python,mysql,mysql-python","A_Id":7190914,"CreationDate":"2011-08-15T21:43:00.000","Title":"Print the actual query MySQLdb runs?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is this possible? Generation of Excel combobox in a cell using xlwt or similar module?\nWhen I load the xls using xlrd, then copy and save it using xlwt, the combobox from original xls is lost.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":597,"Q_Id":7094771,"Users Score":1,"Answer":"No, it's not possible. xlrd doesn't pick up the combo box and suchlike.","Q_Score":2,"Tags":"python,excel,combobox,xlwt,xlrd","A_Id":7266184,"CreationDate":"2011-08-17T14:45:00.000","Title":"Excel Combobox in Python xlwt module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm implementing a voting system for a relatively large website and I'm wondering where should I store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handing lots and lots of simple queries. 
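One pattern that comes up for this, sketched below with python-memcached and MySQLdb (the key names, credentials and flush interval are all arbitrary), is to count in memcached and only write through to MySQL every Nth vote:

    import memcache
    import MySQLdb

    mc = memcache.Client(["127.0.0.1:11211"])
    FLUSH_EVERY = 100          # write through to MySQL on every 100th vote

    def record_vote(item_id):
        key = "votes:%d" % item_id
        count = mc.incr(key)
        if count is None:               # key not in memcached yet
            mc.set(key, 1)
            count = 1
        if count % FLUSH_EVERY == 0:    # periodically persist the counter
            db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="site")
            cur = db.cursor()
            cur.execute("UPDATE items SET votes = %s WHERE id = %s", (count, item_id))
            db.commit()
            db.close()
        return count

The obvious trade-off is that up to FLUSH_EVERY - 1 votes can be lost if memcached restarts.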
\nMy best option so far is to use memcached as it seems perfect for this task (very fast and key\/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values.\nIs there something that is specifically designed for this task, preferably with a Python back end?","AnswerCount":6,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":145,"Q_Id":7112347,"Users Score":2,"Answer":"Can you accept some degree of vote loss? If so, you can do a hybrid solution. Every modulo 100 (10, something), update the SQL database with the current memcache value. You can also have a periodic script scan and update if required.","Q_Score":3,"Tags":"python,memcached,voting","A_Id":7112410,"CreationDate":"2011-08-18T18:33:00.000","Title":"Best way of storing incremental numbers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm implementing a voting system for a relatively large website and I'm wondering where should I store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handing lots and lots of simple queries. \nMy best option so far is to use memcached as it seems perfect for this task (very fast and key\/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values.\nIs there something that is specifically designed for this task, preferably with a Python back end?","AnswerCount":6,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":145,"Q_Id":7112347,"Users Score":0,"Answer":"Mongodb can work well.Since it can be faster or Google App Engine was designed to scale.","Q_Score":3,"Tags":"python,memcached,voting","A_Id":7112511,"CreationDate":"2011-08-18T18:33:00.000","Title":"Best way of storing incremental numbers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm implementing a voting system for a relatively large website and I'm wondering where should I store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handing lots and lots of simple queries. \nMy best option so far is to use memcached as it seems perfect for this task (very fast and key\/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values.\nIs there something that is specifically designed for this task, preferably with a Python back end?","AnswerCount":6,"Available Count":4,"Score":0.0665680765,"is_accepted":false,"ViewCount":145,"Q_Id":7112347,"Users Score":2,"Answer":"MySQL isn't very good at handing lots and lots of simple queries\n\nYou may have something drastically misconfigured in your MySQL server. MySQL should easily be able to handle 4000 queries per minute. 
There are benchmarks of MySQL handling over 25k INSERTs per second.","Q_Score":3,"Tags":"python,memcached,voting","A_Id":7112669,"CreationDate":"2011-08-18T18:33:00.000","Title":"Best way of storing incremental numbers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm implementing a voting system for a relatively large website and I'm wondering where should I store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handing lots and lots of simple queries. \nMy best option so far is to use memcached as it seems perfect for this task (very fast and key\/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values.\nIs there something that is specifically designed for this task, preferably with a Python back end?","AnswerCount":6,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":145,"Q_Id":7112347,"Users Score":0,"Answer":"If you like memcached but don't like the fact that it doesn't persist data then you should consider using Membase. Membase is basically memcached with sqlite as the persistence layer. It is very easy to set up and supports the memcached protocol so if you already have memcached set up you can use Membase as a drop in replacement.","Q_Score":3,"Tags":"python,memcached,voting","A_Id":7116659,"CreationDate":"2011-08-18T18:33:00.000","Title":"Best way of storing incremental numbers?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I looked through several SO-Questions for how to pickle a python object and store it into a database. The information I collected is:\n\nimport pickle or import cpickle. Import the latter, if performance is an issue.\nAssume dict is a python dictionary (or what so ever python object): pickled = pickle.dumps(dict).\nstore pickled into a MySQL BLOB Column using what so ever module to communicate with Database.\nGet it out again. And use pickle.loads(pickled) to restore the python dictionary.\n\nI just want to make sure I understood this right. Did I miss something critical? Are there sideeffects? Is it really that easy?\nBackground-Info: The only thing I want to do, is store Googlegeocoder-Responses, which are nested python dictionarys in my case. I am only using a little part of the response object and I don't know if I will ever need more of it later on. That's why I thought of storing the response to save me repetition of some million querys.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5063,"Q_Id":7117525,"Users Score":2,"Answer":"It's really that easy... so long as you don't need your DB to know anything about the dictionary. If you need any sort of structured data access to the contents of the dictionary, then you're going to have to get more involved.\nAnother gotcha might be what you intend to put in the dict. Python's pickle serialization is quite intelligent and can handle most cases without any need for adding custom support. However, when it doesn't work, it can be very difficult to understand what's gone wrong. 
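For reference, the basic round trip itself is only a few lines with MySQLdb (the table and column names here are made up); the parameterised insert matters so the binary pickle is escaped correctly:

    import cPickle as pickle
    import MySQLdb

    db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="geo")
    cur = db.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS geocache "
                "(query VARCHAR(255) PRIMARY KEY, response BLOB)")

    response = {"status": "OK", "results": [{"lat": 52.5, "lng": 13.4}]}   # stand-in for a geocoder reply

    pickled = pickle.dumps(response, pickle.HIGHEST_PROTOCOL)
    cur.execute("REPLACE INTO geocache (query, response) VALUES (%s, %s)",
                ("Berlin", MySQLdb.Binary(pickled)))
    db.commit()

    cur.execute("SELECT response FROM geocache WHERE query = %s", ("Berlin",))
    restored = pickle.loads(str(cur.fetchone()[0]))
    print(restored["results"][0]["lat"])
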
So if you can, restrict the contents of the dict to Python's built-in types. If you start adding instances of custom classes, keep them to simple custom classes that don't do any funny stuff with attribute storage or access. And beware of adding instances of classes or types from add-ons. In general, if you start running into hard-to-understand problems with the pickling or unpickling, look at the non-built-in types in the dict.","Q_Score":7,"Tags":"python,mysql,pickle","A_Id":7117674,"CreationDate":"2011-08-19T05:59:00.000","Title":"How to Pickle a python dictionary into MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get a django site deployed from a repository. I was almost there, and then changed something (I'm not sure what!!) and was back to square one.\nNow I'm trying to run .\/manage.py syncdb and get the following error:\n\ndjango.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: this is MySQLdb version (1, 2, 3, 'final', 0), but _mysql is version (1, 2, 2, 'final', 0)\n\nI've searched forums for hours and none of the solutions presented helped. I tried uninstalling and re-installing MySQL-python and upgrading it. I get the same error when trying to import it from the python command line interpreter.\nDoes anyone have any suggestions?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4064,"Q_Id":7137214,"Users Score":1,"Answer":"For those who come upon this question: \nIt turns out that ubuntu _mysql version was different from the one in my venv. Uninstalling that and re-installing in my venv did the trick.","Q_Score":6,"Tags":"mysql,django,deployment,ubuntu,mysql-python","A_Id":7352188,"CreationDate":"2011-08-21T08:27:00.000","Title":"Django MySQLdb version doesn't match _mysql version Ubuntu","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i'm trying to build a web server using apache as the http server, mod_wsgi + python as the logic handler, the server was supposed to handler long request without returning, meaning i want to keep writing stuff into this request.\nthe problem is, when the link is broken, the socket is in a CLOSE_WAIT status, apache will NOT notify my python program, which means, i have to write something to get an exception, says the link is broken, but those messages were lost and can't be restored.\ni tried to get the socket status before writing through \/proc\/net\/tcp, but it could not prevent a quick connect\/break connection.\nanybody has any ideas, please help, very much thanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":393,"Q_Id":7144011,"Users Score":1,"Answer":"You cant. It is a limitation of the API defined by the WSGI specification. 
So, nothing to do with Apache or mod_wsgi really as you will have the same issue with any WSGI server if you follow the WSGI specification.\nIf you search through the mod_wsgi mailing list on Google Groups you will find a number of discussions about this sort of problem in the past.","Q_Score":0,"Tags":"python,apache,webserver,mod-wsgi","A_Id":7145199,"CreationDate":"2011-08-22T06:52:00.000","Title":"apache server with mod_wsgi + python as backend, how can i be able to notified my connection status?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently working on two projects in python. One need python 2.5 and other 2.7.\nNow the problem is when I installed mysql python for 2.5 it required 32 bit version of mysql and it was not working with 64 bit version. So I installed 32 bit version. This project is done by using virtualenv.\nNow I need to run it on 2.7 and it wants 64 bit version of mysql.\nI cannot reinstall mysql as old project is still on.\nIs it possible to install both bit versions of mysql in my Snow Leopard 10.6? If possible then how?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":235,"Q_Id":7158929,"Users Score":0,"Answer":"It is possible but you'll need to to compile them by hand, start by creating separate folders for them to live in, then get the source and dependencies that they'll need and keep them separate, you'll need to alter the .\/configure commands to point them to the correct places and they should build fine.","Q_Score":0,"Tags":"mysql,osx-snow-leopard,32bit-64bit,mysql-python,python-2.5","A_Id":7159017,"CreationDate":"2011-08-23T09:34:00.000","Title":"Install both 32 bit and 64 bit versions of mysql on a same mac machine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Okay, so I'm connected to an oracle database in python 2.7 and cx_Oracle 5.1 compiled against the instant client 11.2. I've got a cursor to the database and running SQL is not an issue, except this:\n\n\n cursor.execute('ALTER TRIGGER :schema_trigger_name DISABLE',\n schema_trigger_name='test.test_trigger')\n\n\nor\n\n\n cursor.prepare('ALTER TRIGGER :schema_trigger_name DISABLE')\n cursor.execute(None,{'schema_trigger_name': 'test.test_trigger'})\n\n\nboth result in an error from oracle:\n\n\n Traceback (most recent call last):\n File \"connect.py\", line 257, in \n cursor.execute('ALTER TRIGGER :schema_trigger_name DISABLE',\n schema_trigger_name='test.test_trigger')\n cx_Oracle.DatabaseError: ORA-01036: illegal variable name\/number\n\n\nWhile running:\n\n\n cursor.execute('ALTER TRIGGER test.test_trigger DISABLE')\n\n\nworks perfectly. What's the issue with binding that variable?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1300,"Q_Id":7174741,"Users Score":0,"Answer":"You normally can't bind an object name in Oracle. 
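The usual workaround is to validate the identifier yourself and splice it into the statement text, keeping bind variables for actual values only; a sketch (the connect string is a placeholder):

    import re
    import cx_Oracle

    def set_trigger_state(cursor, schema_trigger_name, enable):
        # identifiers cannot be bound, so validate the name and build the SQL text directly
        if not re.match(r"^[A-Za-z][\w$#]*(\.[A-Za-z][\w$#]*)?$", schema_trigger_name):
            raise ValueError("suspicious trigger name: %r" % schema_trigger_name)
        state = "ENABLE" if enable else "DISABLE"
        cursor.execute("ALTER TRIGGER %s %s" % (schema_trigger_name, state))

    connection = cx_Oracle.connect("test/test@localhost/XE")   # hypothetical credentials
    cursor = connection.cursor()
    set_trigger_state(cursor, "test.test_trigger", False)
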
For variables it'll work but not for trigger_names, table_names etc.","Q_Score":1,"Tags":"python,oracle,cx-oracle","A_Id":7174814,"CreationDate":"2011-08-24T11:32:00.000","Title":"Exception binding variables with cx_Oracle in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As a personal project, I have been developing my own database software in C#. Many current database systems can use SQL commands for queries. Is there anyone here that could point me in the right direction of implementing such a system in a database software written completely from scratch? For example a user familiar with SQL could enter a statement as a string into an application, that statement will be analyzed by my application and the proper query will be run. Does anyone have any experience with something like that here? This is probably a very unusual questions haha. Basically what I am asking, are there any tools available out there that can dissect SQL statements or will I have to write my own from scratch for that?\nThanks in advance for any help!\n(I may transfer some of my stuff to Python and Java, so any potential answers need not be limited to C#)\nALSO: I am not using any current SQL database or anything like that, my system is completely from scratch, I hope my question makes sense. Basically I want my application to be able to interface with programs that send SQL commands.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2117,"Q_Id":7211204,"Users Score":3,"Answer":"A full-on database engine is a pretty serious undertaking. You're not going to sit down and have a complete engine next week, so I'd have thought you would want to write the SQL parser piecemeal: adding features to the parser as the features are supported in the engine.\nI'm guessing this is just something fun to do, rather than something you want working ASAP. Given that, I'd have thought writing an SQL parser is one of the best bits of the project! I've done lots of work with flat file database engines, because the response times required for queries don't allow a RDBMS. One of the most enjoyable bits has been adding support for SQL fragments in e.g. the UI, where response time isn't quite as vital.\nThe implementation I work on is plain old C, but in fact from what I've seen, most relational databases are still written primarily in C. 
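To make the piecemeal-parser idea concrete, here is a deliberately tiny sketch, written in Python purely for brevity rather than C#, that tokenises and parses nothing more than SELECT col, ... FROM table; each new engine feature would grow the grammar a little further:

    import re

    TOKEN = re.compile(r"\s*(?:(\*|,|=)|(\w+)|'([^']*)')")

    def tokenize(sql):
        tokens, pos = [], 0
        while pos < len(sql):
            match = TOKEN.match(sql, pos)
            if not match:
                raise ValueError("bad character at position %d" % pos)
            tokens.append(match.group(1) or match.group(2) or match.group(3))
            pos = match.end()
        return tokens

    def parse_select(sql):
        tokens = tokenize(sql)
        if tokens[0].upper() != "SELECT":
            raise ValueError("only SELECT is supported in this sketch")
        from_index = [t.upper() for t in tokens].index("FROM")
        columns = [t for t in tokens[1:from_index] if t != ","]
        table = tokens[from_index + 1]
        return {"columns": columns, "table": table}

    print(parse_select("SELECT id, title FROM books"))
    # {'columns': ['id', 'title'], 'table': 'books'}
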
And there is something satisfying about writing these things in a really low level language :)","Q_Score":1,"Tags":"c#,java,python,sql,database","A_Id":7211297,"CreationDate":"2011-08-26T22:36:00.000","Title":"C# custom database engine, how to implement SQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to pull only one column from a datastore table\nI have a Books model with\nid, key, title, author, isbn and price\neverything = db.GqlQuery('SELECT * FROM Books') gives me everything, but say i only want the title\nbooks = db.GqlQuery('SELECT title FROM Books')\nIve tried everything people have suggested but nothing seems to work\nAny help is much appreciated \nThanks","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1553,"Q_Id":7213991,"Users Score":3,"Answer":"You can't. GQL is not SQL, and the datastore is not a relational database. An entity is stored as a single serialized protocol buffer, and it's impossible to fetch part of an entity; the whole thing needs to be deserialized.","Q_Score":2,"Tags":"python,google-app-engine,gql,gqlquery","A_Id":7214401,"CreationDate":"2011-08-27T10:36:00.000","Title":"Google App Engine python, GQL, select only one column from datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am relatively new to SQLalchemy and have done basic database creation, insert, update and delete. I have found it quite simple to use so far. My question is:\nI want to move records from one database to another backup database. What is the simplest way to do this in SQLalchemy?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1472,"Q_Id":7216100,"Users Score":0,"Answer":"You would just go direct to the database utiltites and back it up there. Nothing to do with SQLAlchemy","Q_Score":2,"Tags":"python,sqlalchemy","A_Id":7216293,"CreationDate":"2011-08-27T17:17:00.000","Title":"What is the easiest way to move data from one database to another backup database using SQLalchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started.\nI was wondering if it will be a good practice to pull more data from database and make the final query in my code, let's say, with Python. Am I nuts? Would it be that bad for performance?\nNote, results are huge too, I am talking about an ERP in production developed by other people.","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":3111,"Q_Id":7279761,"Users Score":7,"Answer":"Let the DB figure out how best to retrieve the information that you want, else you'll have to duplicate the functionality of the RDBMS in your code, and that will be way more complex than your SQL queries. 
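A tiny, self-contained illustration of that point, using sqlite3 purely so it runs anywhere: the first query ships one row per customer back to the application, the second ships every order row and re-implements the grouping in Python.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, 10.0), (1, 2.5), (2, 99.0), (2, 1.0), (3, 7.0)])

    # let the database do the aggregation: one small result set comes back
    totals_db = dict(conn.execute(
        "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"))

    # versus pulling every row back and re-implementing the grouping in Python
    totals_py = {}
    for customer_id, amount in conn.execute("SELECT customer_id, amount FROM orders"):
        totals_py[customer_id] = totals_py.get(customer_id, 0) + amount

    assert totals_db == totals_py
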
\nPlus, you'll waste time transferring all that unneeded information from the DB to your app, so that you can filter and process it in code.\nAll this is true because you say you're dealing with large data.","Q_Score":9,"Tags":"python,sql,performance","A_Id":7279821,"CreationDate":"2011-09-02T06:05:00.000","Title":"Should I use complex SQL queries or process results in the application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started.\nI was wondering if it will be a good practice to pull more data from database and make the final query in my code, let's say, with Python. Am I nuts? Would it be that bad for performance?\nNote, results are huge too, I am talking about an ERP in production developed by other people.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3111,"Q_Id":7279761,"Users Score":3,"Answer":"I would have the business logic in the application, as much as possible. Complex business logic in queries are difficult to maintain. (when I finish understanding one I have already forgotten how it all started)Complex logic in stored procedures are ok. But with a typical python application, you would want your business logic to be in python.\nNow, the database is way better in handling data than your application code. So if your logic involves huge amount of data, you may get better performance with the logic in the database. But this will be for complex reports, bookkeeping operations and such, that operate on a large volume of data. You may want to use stored procedures, or systems that specialize in such operations (a data warehouse for reports) for these types of operations.\nNormal OLTP operations do not involve much of data. The database may be huge, but the data required for a typical transaction will be (typically) a very small part of it. Querying this in a large database may cause performance issues, but you can optimize this in several ways (indexes, full text searches, redundancy, summary tables... depends on your actual problem).\nEvery rule has exceptions, but as a general guideline, try to have your business logic in your application code. Stored procedures for complex logic. A separate data warehouse or a set of procedures for reporting.","Q_Score":9,"Tags":"python,sql,performance","A_Id":7280826,"CreationDate":"2011-09-02T06:05:00.000","Title":"Should I use complex SQL queries or process results in the application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started.\nI was wondering if it will be a good practice to pull more data from database and make the final query in my code, let's say, with Python. Am I nuts? 
Would it be that bad for performance?\nNote, results are huge too, I am talking about an ERP in production developed by other people.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":3111,"Q_Id":7279761,"Users Score":1,"Answer":"@Nivas is generally correct.\nThese are pretty common patterns\n\nDivision of labour - the DBAs have to return all the data the business need, but they only have a database to work with. The developers could work with the DBAs to do it better but departmental responsbilities make it nearly impossible. So SQL to do morethan retrieve data is used.\nlack of smaller functions. Could the massive query be broken down into smaller stages, using working tables? Yes, but I have known environments where a new table needs reams of approavals - a heavy Query is just written\n\nSo, in general, getting data out of the database - thats down to the database. But if a SQL query is too long its going to be hard for the RDBMS to optimise, and it probably means the query is spanning data, business logic and even presentation in one go.\nI would suggest a saner approach is usually to seperate out the \"get me the data\" portions into stored procedures or other controllable queries that populate staging tables. Then the business logic can be written into a scripting language sitting above and controlling the stored procedures. And presentation is left elsewhere. In essence solutions like cognos try to do this anyway.\nBut if you are looking at an ERP in production, the constraints and the solutions above probably already exist - are you talking to the right people?","Q_Score":9,"Tags":"python,sql,performance","A_Id":7282367,"CreationDate":"2011-09-02T06:05:00.000","Title":"Should I use complex SQL queries or process results in the application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My python application is dying, this oracle trace file is being generated. I am using cx_Oracle, how do I go about using this trace file to resolve this crash?\nora_18225_139690296567552.trc\nkpedbg_dmp_stack()+360<-kpeDbgCrash()+192<-kpureq2()+3194<-OCIStmtPrepare2()+157<-Cursor_InternalPrepare()+298<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":653,"Q_Id":7285135,"Users Score":0,"Answer":"Do you have an Oracle support contract? If I would file an SR and upload the trace to Oracle and have them tell you what it is complaining about. 
Those code calls are deep in their codebase from the looks of it.","Q_Score":1,"Tags":"python,cx-oracle","A_Id":7530424,"CreationDate":"2011-09-02T14:43:00.000","Title":"I have an Oracle Stack trace file Python cx_Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a regular application (like on Windows), when objects\/variables are created on a global level it is available to the entire program during the entire duration the program is running.\nIn a web application written in PHP for instance, all variables\/objects are destroyed at the end of the script so everything has to be written to the database.\na) So what about python running under apache\/modwsgi? How does that work in regards to the memory? \nb) How do you create objects that persist between web page requests and how do you ensure there isn't threading issues in apache\/modwsgi?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":7293290,"Users Score":0,"Answer":"All Python globals are created when the module is imported. When module is re-imported the same globals are used.\nPython web servers do not do threading, but pre-forked processes. Thus there is no threading issues with Apache.\nThe lifecycle of Python processes under Apache depends. Apache has settings how many child processes are spawned, keep in reserve and killed. This means that you can use globals in Python processes for caching (in-process cache), but the process may terminate after any request so you cannot put any persistent data in the globals. But the process does not necessarily need to terminate and in this regard Python is much more efficient than PHP (the source code is not parsed for every request - but you need to have the server in reload mode to read source code changes during the development).\nSince globals are per-process and there can be N processes, the processes share \"web server global\" state using mechanisms like memcached.\nUsually Python globals only contain\n\nSetting variables set during the process initialization\nCached data (session\/user neutral)","Q_Score":4,"Tags":"python,apache,memory-management,mod-wsgi","A_Id":7293404,"CreationDate":"2011-09-03T13:09:00.000","Title":"Memory model for apache\/modwsgi application in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table).\nI tried select count(*) from table, but that seems to access each row and is super slow.\nI also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack.\nAny ideas on how to find the table size quickly and cleanly?\n\nUsing Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3251,"Q_Id":7346079,"Users Score":0,"Answer":"To follow up on Thilo's answer, as a data point, I have a sqlite table with 2.3 million rows. Using select count(*) from table, it took over 3 seconds to count the rows. 
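A small harness along these lines makes that comparison easy to repeat on your own data; it assumes an existing database file and table (the names used here are placeholders), and note that count(column) only matches count(*) when the column is NOT NULL:

    import sqlite3
    import time

    conn = sqlite3.connect("big.db")   # hypothetical existing database file

    def timed_count(sql):
        start = time.time()
        n = conn.execute(sql).fetchone()[0]
        print("%-50s -> %d rows in %.3fs" % (sql, n, time.time() - start))

    timed_count("SELECT count(*) FROM mytable")
    timed_count("SELECT count(my_short_field) FROM mytable")   # candidate short, indexed column
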
I also tried using SELECT rowid FROM table, (thinking that rowid is a default primary indexed key) but that was no faster. Then I made an index on one of the fields in the database (just an arbitrary field, but I chose an integer field because I knew from past experience that indexes on short fields can be very fast, I think because the index is stored a copy of the value in the index itself). SELECT my_short_field FROM table brought the time down to less than a second.","Q_Score":2,"Tags":"python,sqlite","A_Id":34628302,"CreationDate":"2011-09-08T09:44:00.000","Title":"Fast number of rows in Sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table).\nI tried select count(*) from table, but that seems to access each row and is super slow.\nI also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack.\nAny ideas on how to find the table size quickly and cleanly?\n\nUsing Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":3251,"Q_Id":7346079,"Users Score":1,"Answer":"Do you have any kind of index on a not-null column (for example a primary key)? If yes, the index can be scanned (which hopefully does not take that long). If not, a full table scan is the only way to count all rows.","Q_Score":2,"Tags":"python,sqlite","A_Id":7346136,"CreationDate":"2011-09-08T09:44:00.000","Title":"Fast number of rows in Sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table).\nI tried select count(*) from table, but that seems to access each row and is super slow.\nI also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack.\nAny ideas on how to find the table size quickly and cleanly?\n\nUsing Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":3251,"Q_Id":7346079,"Users Score":1,"Answer":"Other way to get the rows number of a table is by using a trigger that stores the actual number of rows in other table (each insert operation will increment a counter). 
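A sketch of that trigger-based counter in SQLite (table names are arbitrary); a DELETE trigger is needed as well if rows can ever be removed:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT);
        CREATE TABLE row_count (n INTEGER NOT NULL);
        INSERT INTO row_count (n) VALUES (0);

        CREATE TRIGGER items_insert AFTER INSERT ON items
        BEGIN
            UPDATE row_count SET n = n + 1;
        END;

        CREATE TRIGGER items_delete AFTER DELETE ON items
        BEGIN
            UPDATE row_count SET n = n - 1;
        END;
    """)

    conn.executemany("INSERT INTO items (payload) VALUES (?)", [("a",), ("b",), ("c",)])
    print(conn.execute("SELECT n FROM row_count").fetchone()[0])   # -> 3
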
\nIn this way inserting a new record will be a little slower, but you can immediately get the number of rows.","Q_Score":2,"Tags":"python,sqlite","A_Id":7346821,"CreationDate":"2011-09-08T09:44:00.000","Title":"Fast number of rows in Sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm migrating a GAE\/Java app to Python (non-GAE) due new pricing, so I'm getting a little server and I would like to find a database that fits the following requirements:\n\nLow memory usage (or to be tuneable or predictible)\nFastest querying capability for simple document\/tree-like data identified by key (I don't care about performance on writing and I assume it will have indexes)\nBindings with Pypy 1.6 compatibility (or Python 2.7 at least)\n\nMy data goes something like this:\n\nId: short key string\nTitle\nCreators: an array of another data structure which has an id - used as key -, a name, a site address, etc.\nTags: array of tags. Each of them can has multiple parent tags, a name, an id too, etc.\nLicense: a data structure which describes its license (CC, GPL, ... you say it) with name, associated URL, etc.\nAddition time: when it was add in our site.\nTranslations: pointers to other entries that are translations of one creation.\n\nMy queries are very simple. Usual cases are:\n\nFilter by tag ordered by addition time.\nSelect a few (pagination) ordered by addition time.\n(Maybe, not done already) filter by creator.\n(Not done but planned) some autocomplete features in forms, so I'm going to need search if some fields contains a substring ('LIKE' queries).\n\nThe data volume is not big. Right now I have about 50MB of data but I'm planning to have a huge dataset around 10GB.\nAlso, I want to rebuild this from scratch, so I'm open to any option. What database do you think can meet my requirements?\nEdit: I want to do some benchmarks around different options and share the results. I have selected, so far, MongoDB, PostgreSQL, MySQL, Drizzle, Riak and Kyoto Cabinet.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":830,"Q_Id":7375415,"Users Score":1,"Answer":"I would recommend Postresql, only because it does what you want, can scale, is fast, rather easy to work with and stable.\nIt is exceptionally fast at the example queries given, and could be even faster with document querying.","Q_Score":3,"Tags":"python,database,nosql,rdbms","A_Id":7377444,"CreationDate":"2011-09-10T23:49:00.000","Title":"Low memory and fastest querying database for a Python project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Psycopg2 with PostgreSQL 8.4. While reading from a huge table, I suddenly get this cryptic error at the following line of code, after this same line of code has successfully fetched a few hundred thousand rows.\nsomerows = cursorToFetchData.fetchmany(30000)\npsycopg2.DataError: invalid value \"L\u00c3\" for \"DD\"\nDETAIL: Value must be an integer.\nMy problem is that I have no column named \"DD\", and about 300 columns in that table (I know 300 columns is a design flaw). I would appreciate a hint about the meaning of this error message, or how to figure out where the problem lies. 
I do not understand how Psycop2 can have any requirements about the datatype while fetching rows.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":176,"Q_Id":7375572,"Users Score":2,"Answer":"Can you paste in the data from the row that's causing the problem? At a guess I'd say it's a badly formatted date entry, but hard to say.\n(Can't comment, so has to be in a answer...)","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":7378101,"CreationDate":"2011-09-11T00:33:00.000","Title":"Cryptic Psycopg2 error message","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Psycopg2 with PostgreSQL 8.4. While reading from a huge table, I suddenly get this cryptic error at the following line of code, after this same line of code has successfully fetched a few hundred thousand rows.\nsomerows = cursorToFetchData.fetchmany(30000)\npsycopg2.DataError: invalid value \"L\u00c3\" for \"DD\"\nDETAIL: Value must be an integer.\nMy problem is that I have no column named \"DD\", and about 300 columns in that table (I know 300 columns is a design flaw). I would appreciate a hint about the meaning of this error message, or how to figure out where the problem lies. I do not understand how Psycop2 can have any requirements about the datatype while fetching rows.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":176,"Q_Id":7375572,"Users Score":1,"Answer":"This is not a psycopg error, it is a postgres error.\nAfter the error is raised, take a look at cur.query to see the query generated. Copy and paste it into psql and you'll see the same error. Then debug it from there.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":40247155,"CreationDate":"2011-09-11T00:33:00.000","Title":"Cryptic Psycopg2 error message","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large amount of data that I am pulling from an xml file that all needs to be validated against each other (in excess of 500,000 records). It is location data, so it has information such as: county, street prefix, street suffix, street name, starting house number, ending number. There are duplicates, house number overlaps, etc. and I need to report on all this data (such as where there are issues). Also, there is no ordering of the data within the xml file, so each record needs to be matched up against all others. \nRight now I'm creating a dictionary of the location based on the street name info, and then storing a list of the house number starting and ending locations. After all this is done, I'm iterating through the massive data structure that was created to find duplicates and overlaps within each list. I am running into problems with the size of the data structure and how many errors are coming up.\nOne solution that was suggested to me was to create a temporary SQLite DB to hold all data as it is read from the file, then run through the DB to find all issues with the data, report them out, and then destroy the DB. Is there a better\/more efficient way to do this? 
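For what it's worth, the temporary-database idea can be quite compact: load the parsed records into an in-memory (or on-disk) SQLite table, index the street fields, and let a self-join report overlapping house-number ranges. A rough sketch with invented field names:

    import sqlite3

    conn = sqlite3.connect(":memory:")        # use a filename instead if memory is tight
    conn.execute("""CREATE TABLE ranges (
                        id INTEGER PRIMARY KEY,
                        county TEXT, street TEXT,
                        start_no INTEGER, end_no INTEGER)""")
    conn.execute("CREATE INDEX idx_street ON ranges (county, street)")

    records = [                                # these rows would come from the XML parse
        ("Kent", "Main St", 100, 198),
        ("Kent", "Main St", 150, 250),         # overlaps the previous range
        ("Kent", "Oak Ave", 1, 99),
    ]
    conn.executemany(
        "INSERT INTO ranges (county, street, start_no, end_no) VALUES (?, ?, ?, ?)",
        records)

    overlaps = conn.execute("""
        SELECT a.id, b.id, a.county, a.street
        FROM ranges a JOIN ranges b
          ON a.county = b.county AND a.street = b.street AND a.id < b.id
        WHERE a.start_no <= b.end_no AND b.start_no <= a.end_no
    """).fetchall()
    for row in overlaps:
        print(row)
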
And any suggestions on a better way to approach this problem?\nAs an fyi, the xml file I'm reading in is over 500MB (stores other data than just this street information, although that is the bulk of it), but the processing of the file is not where I'm running into problems, it's only when processing the data obtained from the file.\nEDIT: I could go into more detail, but the poster who mentioned that there was plenty of room in memory for the data was actually correct, although in one case I did have to run this against 3.5 million records, in that instance I did need to create a temporary database.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1110,"Q_Id":7391148,"Users Score":0,"Answer":"Unless this data has already been sanitised against the PAF (UK Post office Address file - every address in UK basically) then you will have addresses in there that are the same actual house, but spelt differently, wrong postcode, postcode in wrong field etc. This will completely change your approach.\nCheck out if this is sanitised before you start. The person giving it to you will either say \"yes of course it has and I did it\" or they will look blankly - in which case no.\nIf it is sanitised, great, probably an external agency is supplying your data and they probably can do this for you, but I expect oyu are being asked because its cheaper. Get on.\nIf not, you have a range of problems and need to talk with your boss about what they want, how confidnet they want to be of matches etc.\nIn general the idea is to come up with a number of match algorithms per field, that output a confidence value that the two address under compare are the same. Then a certain number of these values are weighted, and a total confidnece value has to be passed to consider the two addresses a match\nBut I am not clear this is your problem, but I do suggest you check what your boss exactly wants - this is not a clearly understood area between marketing and technical depats.","Q_Score":0,"Tags":"python,xml,sanitization","A_Id":7392402,"CreationDate":"2011-09-12T16:40:00.000","Title":"Large temporary database to sanitize data in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Well, I might be doing some work in Python that would end up with hundreds of thousands, maybe millions of rows of data, each with entries in maybe 50 or more columns. I want a way to keep track of this data and work with it. Since I also want to learn Microsoft Access, I suggest putting the data in there. Is there any easy way to do this? I also want to learn SAS, so that would be fine too. Or, is there some other program\/method I should know for such a situation?\nThanks for any help!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":323,"Q_Id":7410458,"Users Score":1,"Answer":"Yes, you can talk to any ODBC database from Python, and that should include Access. 
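For illustration, the ODBC conversation from Python typically looks like this with pyodbc; the driver name and file path are the usual ones on Windows but should be treated as assumptions here:

    import pyodbc

    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\data\results.mdb")
    cur = conn.cursor()

    cur.execute("CREATE TABLE results (run_id INTEGER, value DOUBLE)")
    cur.executemany("INSERT INTO results (run_id, value) VALUES (?, ?)",
                    [(1, 0.25), (1, 0.50), (2, 0.75)])
    conn.commit()

    for row in cur.execute("SELECT run_id, COUNT(*) FROM results GROUP BY run_id"):
        print(row)
    conn.close()
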
You'll want the \"windows\" version of Python (which includes stuff like ODBC) from ActiveState.\nI'd be more worried about the \"millions of rows\" in Access, it can get a bit slow on retrieval if you're actually using it for relational tasks (that is, JOINing different tables together).\nI'd also take a look at your 50 column tables \u2014 sometimes you need 50 columns but more often it means you haven't decomposed your data sufficiently to get it in normal form.\nFinally, if you use Python to read and write an Access database I don't know if I'd count that as \"learning Access\". Really learning Access would be using the front end to create and maintain the database, creating forms and reports in Access (which would not be available from Python) and programming in Visual Basic for Applications (VBA).\nI really like SQLite as an embedded database solution, especially from Python, and its SQL dialect is probably \"purer\" than Access's.","Q_Score":1,"Tags":"python,database,ms-access","A_Id":7410499,"CreationDate":"2011-09-14T01:56:00.000","Title":"Is it possible to store data from Python in Access file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an application where a user can search for items near his location.\nWhen a user registers for my service, their long\/lat coordinates are taken (this is actually grabbed from a zip\/postcode and then gets looked up via Google for the long\/lats). This also happens when a user adds an item, they are asked for the zip\/postcode of the item, and that is converted to the long\/lat.\nMy question is how would i run a query using MySQL that would search within, say 20 miles, from the user's location and get all the items within that 20 mile radius?","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4012,"Q_Id":7413619,"Users Score":0,"Answer":"To be performant, you don't want to do a complete scan through the database and compute distances for each row, you want conditions that can be indexed. The simplest way to do this is to compute a box with a minimum\/maximum latitude and minimum\/maximum longitude, and use BETWEEN to exclude everything outside of those ranges. Since you're only dealing with US locations (zip code based), you won't have to worry about the transition between +180 and -180 degrees.\nThe only remaining problem is to compute the bounds of the box in lat\/long when your conditions are in miles. You need to convert miles to degrees. For latitude this is easy, just divide 360 degrees by the circumference of the earth and multiply by 20; 0.289625 degrees. Longitude is tougher because it varies by latitude, the circumference is roughly cosine(latitude)*24901.461; 20 miles is 20*360\/(cos(latitude)*24901.461).","Q_Score":3,"Tags":"python,mysql,geolocation,latitude-longitude","A_Id":7420726,"CreationDate":"2011-09-14T08:49:00.000","Title":"Find long\/lat's within 20 miles of user's long\/lat","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an application that needs to interface with another app's database. 
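As a worked version of the miles-to-degrees arithmetic in the geolocation answer above, here is a rough sketch that builds the indexable bounding box and the corresponding BETWEEN query; the table and column names are invented for illustration:

    import math

    EARTH_CIRCUMFERENCE_MILES = 24901.461

    def bounding_box(lat, lon, radius_miles=20):
        # One degree of latitude is roughly 24901.461 / 360 miles everywhere.
        lat_delta = radius_miles * 360.0 / EARTH_CIRCUMFERENCE_MILES
        # Longitude degrees shrink by cos(latitude).
        lon_delta = radius_miles * 360.0 / (math.cos(math.radians(lat)) * EARTH_CIRCUMFERENCE_MILES)
        return lat - lat_delta, lat + lat_delta, lon - lon_delta, lon + lon_delta

    min_lat, max_lat, min_lon, max_lon = bounding_box(40.7128, -74.0060)
    sql = ("SELECT id, lat, lon FROM items "
           "WHERE lat BETWEEN %s AND %s AND lon BETWEEN %s AND %s")
    # cursor.execute(sql, (min_lat, max_lat, min_lon, max_lon))
    # An exact great-circle distance check can then be applied to the few
    # rows that fall inside the box.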
I have read access but not write.\nCurrently I'm using sql statements via pyodbc to grab the rows and using python manipulate the data. Since I don't cache anything this can be quite costly.\nI'm thinking of using an ORM to solve my problem. The question is if I use an ORM like \"sql alchemy\" would it be smart enough to pick up changes in the other database?\nE.g. sql alchemy accesses a table and retrieves a row. If that row got modified outside of sql alchemy would it be smart enough to pick it up?\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nEdit: To be more clear\nI have one application that is simply a reporting tool lets call App A.\nI have another application that handles various financial transactions called App B.\nA has access to B's database to retrieve the transactions and generates various reports. There's hundreds of thousands of transactions. We're currently caching this info manually in python, if we need an updated report we refresh the cache. If we get rid of the cache, the sql queries combined with the calculations becomes unscalable.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":122,"Q_Id":7426564,"Users Score":2,"Answer":"I don't think an ORM is the solution to your problem of performance. By default ORMs tend to be less efficient than row SQL because they might fetch data that you're not going to use (eg. doing a SELECT * when you need only one field), although SQLAlchemy allows fine-grained control over the SQL generated.\nNow to implement a caching mechanism, depending on your application, you could use a simple dictionary in memory or a specialized system such as memcached or Redis.\nTo keep your cached data relatively fresh, you can poll the source at regular intervals, which might be OK if your application can tolerate a little delay. Otherwise you'll need the application that has write access to the db to notify your application or your cache system when an update occurs.\nEdit: since you seem to have control over app B, and you've already got a cache system in app A, the simplest way to solve your problem is probably to create a callback in app A that app B can call to expire cached items. Both apps need to agree on a convention to identify cached items.","Q_Score":1,"Tags":"python,sql,orm,sqlalchemy","A_Id":7429664,"CreationDate":"2011-09-15T06:12:00.000","Title":"How to interface with another database effectively using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise.\nMy current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) 
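A minimal sketch of the "plain dictionary plus periodic refresh" caching idea from the reporting-tool answer above; the SQL and the five-minute interval are placeholders and would need tuning to how stale a report may be:

    import time

    CACHE_TTL = 300          # seconds
    _cache = {"rows": None, "loaded_at": 0}

    def get_transactions(cursor):
        now = time.time()
        if _cache["rows"] is None or now - _cache["loaded_at"] > CACHE_TTL:
            cursor.execute("SELECT id, amount, posted_on FROM transactions")  # placeholder query
            _cache["rows"] = cursor.fetchall()
            _cache["loaded_at"] = now
        return _cache["rows"]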
Then I simply call get_by_key_name() on the code and I get the associated data.\nThe problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions.\nNote that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example.\nOne solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution.\nIdeally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however.\nStoring the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction.\nDoes anyone know of a suitable implementation for large global lookup tables?\nPlease understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":143,"Q_Id":7451163,"Users Score":1,"Answer":"If you can, try and fit the data into instance memory. If it won't fit in instance memory, you have a few options available to you.\nYou can store the data in a resource file that you upload with the app, if it only changes infrequently, and access it off disk. This assumes you can build a data structure that permits easy disk lookups - effectively, you're implementing your own read-only disk based table.\nLikewise, if it's too big to fit as a static resource, you could take the same approach as above, but store the data in blobstore.\nIf your data absolutely must be in the datastore, you may need to emulate your own read-modify-write transactions. Add a 'revision' property to your records. To modify it, fetch the record (outside a transaction), perform the required changes, then inside a transaction, fetch it again to check the revision value. If it hasn't changed, increment the revision on your own record and store it to the datastore.\nNote that the underlying RPC layer does theoretically support multiple independent transactions (and non-transactional operations), but the APIs don't currently expose any way to access this from within a transaction, short of horrible (and I mean really horrible) hacks, unfortunately.\nOne final option: You could run a backend provisioned with more memory, exposing a 'SpellCheckService', and make URLFetch calls to it from your frontends. 
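A rough sketch of the "revision property" read-modify-write emulation described in the datastore answer above, using the old google.appengine.ext.db API; the model name, property names and return-value convention are invented for illustration:

    from google.appengine.ext import db

    class LookupEntry(db.Model):                 # hypothetical kind
        value = db.StringProperty()
        revision = db.IntegerProperty(default=0)

    def update_entry(key_name, new_value, seen_revision):
        def txn():
            entry = LookupEntry.get_by_key_name(key_name)
            if entry is None or entry.revision != seen_revision:
                return False                     # someone changed it; re-read and retry
            entry.value = new_value
            entry.revision += 1
            entry.put()
            return True
        return db.run_in_transaction(txn)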
Remember, in-memory is always going to be much, much faster than any disk-based option.","Q_Score":0,"Tags":"python,google-app-engine,transactions,google-cloud-datastore,entity-groups","A_Id":7466485,"CreationDate":"2011-09-16T22:55:00.000","Title":"GAE Lookup Table Incompatible with Transactions?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise.\nMy current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data.\nThe problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions.\nNote that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example.\nOne solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution.\nIdeally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however.\nStoring the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction.\nDoes anyone know of a suitable implementation for large global lookup tables?\nPlease understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":143,"Q_Id":7451163,"Users Score":1,"Answer":"First, if you're under the belief that a namespace is going to help avoid key collisions, it's time to take a step back. A key consists of an entity kind, a namespace, a name or id, and any parents that the entity might have. It's perfectly valid for two different entity kinds to have the same name or id. 
So if you have, say, a LookupThingy that you're matching against, and have created each member by specifying a unique name, the key isn't going to collide with anything else.\nAs for the challenge of doing the equivalent of a spell-check against an unparented lookup table within a transaction, is it possible to keep the lookup table in code?\nOr can you think of an analogy that's closer to what you need? One that motivates the need to do the lookup within a transaction?","Q_Score":0,"Tags":"python,google-app-engine,transactions,google-cloud-datastore,entity-groups","A_Id":7452303,"CreationDate":"2011-09-16T22:55:00.000","Title":"GAE Lookup Table Incompatible with Transactions?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a function where I save a large number of models, (thousands at a time), this takes several minutes so I have written a progress bar to display progress to the user. The progress bar works by polling a URL (from Javascript) and looking a request.session value to see the state of the first call (the one that is saving).\nThe problem is that the first call is within a @transaction.commit_on_success decorator and because I am using Database Backed sessions when I try to force request.session.save() instead of it immediately committing it is appended to the ongoing transaction. This results in the progress bar only being updated once all the saves are complete, thus rendering it useless.\nMy question is, (and I'm 99.99% sure I already know the answer), can you commit statements within a transaction without doing the whole lot. i.e. I need to just commit the request.session.save() whilst leaving all of the others..\nMany thanks, Alex","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":839,"Q_Id":7472348,"Users Score":1,"Answer":"No, both your main saves and the status bar updates will be conducted using the same database connection so they will be part of the same transaction.\nI can see two options to avoid this.\n\nYou can either create your own, separate database connection and save the status bar updates using that. \nDon't save the status bar updates to the database at all and instead use a cache to store them. As long as you don't use the database cache backend (ideally you'd use memcached) this will work fine.\n\nMy preferred option would be the second one. 
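Following the second option in the Django progress-bar answer above (keep the progress out of the database entirely), a short sketch using Django's cache framework with a non-database backend such as memcached; the key scheme and view wiring are assumptions, not from the original post:

    from django.core.cache import cache
    from django.http import HttpResponse

    def save_many(request, objects):
        key = "import-progress-%s" % request.session.session_key
        total = len(objects)
        for i, obj in enumerate(objects, 1):
            obj.save()
            # Cache writes are not part of the open DB transaction, so the
            # polling view sees them immediately.
            cache.set(key, int(100.0 * i / total), 600)

    def progress(request):
        key = "import-progress-%s" % request.session.session_key
        return HttpResponse(str(cache.get(key, 0)))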
You'll need to delve into the Django internals to get your own database connection so that could is likely to end up fragile and messy.","Q_Score":3,"Tags":"python,sql,django","A_Id":7473401,"CreationDate":"2011-09-19T14:15:00.000","Title":"Force commit of nested save() within a transaction","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i use pymongo to test the performance of the mongodb.\ni use 100 threads, every thread excecute 5000 insert, and everything work ok.\nbut when i excecute 10000 insert in every thead, i meet some error:\n\"AutoReconnect: Connection reset by peer\"","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1676,"Q_Id":7479907,"Users Score":1,"Answer":"Driver can't remove dropped socket from connection from pool until your code try use it.","Q_Score":3,"Tags":"python,mongodb,pymongo","A_Id":18267147,"CreationDate":"2011-09-20T03:48:00.000","Title":"Mongodb : AutoReconnect, Connection reset by peer","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently writing a Python script to interact with an SQLite database but it kept returning that the database was \"Encrypted or Corrupted\".\nThe database is definitely not encrypted and so I tried to open it using the sqlite3 library at the command line (returned the same error) and with SQLite Manager add-on for Firefox...\nI had a copy of the same database structure but populated by a different instance of this program on a windows box, I tried to open it using SQLite Manager and it was fine, so as a quick test I loaded the \"Encrypted or Corrupted\" database onto a USB stick and plugged it into the windows machine, using the manager it opened first time without issues.\nDoes anyone have any idea what may be causing this?\nEDIT:\nOn the Linux machine I tried accessing it as root with no luck, I also tried chmoding it to 777 just as a test (on a copied version of the DB), again with no luck","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1207,"Q_Id":7511965,"Users Score":0,"Answer":"You should check the user privileges, the user on linux may not have enough privileges.","Q_Score":0,"Tags":"python,sql,database,sqlite","A_Id":7512015,"CreationDate":"2011-09-22T08:40:00.000","Title":"SQLite3 Database file - Corrupted\/Encrypted only on Linux","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an web based application written in PHP5, which basically is an UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes.\nI'm seeking advise on how to approach this development task? \nUpdate: I'll try scripting in the database using PLPGSQL in PostgreSQL. 
This will do for now, but I can't use my PHP classes this way. Lua approach is appealing and seems what is what I want (besides its not Python).","AnswerCount":6,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":570,"Q_Id":7528360,"Users Score":3,"Answer":"How about doing the scripting on the client. That will ensure maximum security and also save server resources.\nIn other words Javascript would be your scripting platform. What you do is expose the functionality of your backend as javascript functions. Depending on how your app is currently written that might require backend work or not.\nOh and by the way you are not limited to javascript for the actual language. Google \"compile to javascript\" and first hit should be a list of languages you can use.","Q_Score":11,"Tags":"php,python,dsl,plpgsql","A_Id":7660613,"CreationDate":"2011-09-23T11:36:00.000","Title":"Embed python\/dsl for scripting in an PHP web application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an web based application written in PHP5, which basically is an UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes.\nI'm seeking advise on how to approach this development task? \nUpdate: I'll try scripting in the database using PLPGSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. Lua approach is appealing and seems what is what I want (besides its not Python).","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":570,"Q_Id":7528360,"Users Score":0,"Answer":"You could do it without Python, by ie. parsing the user input for pre-defined \"tags\" and returning the result.","Q_Score":11,"Tags":"php,python,dsl,plpgsql","A_Id":7605372,"CreationDate":"2011-09-23T11:36:00.000","Title":"Embed python\/dsl for scripting in an PHP web application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to store windows path in MySQL without escaping the backslashes. How can I do this in Python? I am using MySQLdb to insert records into the database. When I use MySQLdb.escape_string(), I notice that the backslashes are removed.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1040,"Q_Id":7553200,"Users Score":0,"Answer":"Have a look at os.path.normpath(thePath)\nI can't remember if it's that one, but there IS a standard os.path formating function that gives double backslashes, that can be stored in a db \"as is\" and reused later \"as is\". 
I have no more windows machine and cannot test it anymore.","Q_Score":0,"Tags":"python,mysql","A_Id":7553317,"CreationDate":"2011-09-26T09:36:00.000","Title":"Storing windows path in MySQL without escaping backslashes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a pop3 and imap clients in python using available libs, which will download email headers (and subsequently entire email bodies) from various servers and save them in a mongodb database. The problem I'm facing is that this client downloads emails in addition to a user's regular email client. So with the assumption that a user might or might not leave emails on the server when downloading using his mail client, I'd like to fetch the headers but only collect them from a certain date, to avoid grabbing entire mailboxes every time I fetch the headers. \nAs far as I can see the POP3 list call will get me all messages on the server, even those I probably already downloaded. IMAP doesn't have this problem.\nHow do email clients handle this situation when dealing with POP3 servers?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1501,"Q_Id":7553606,"Users Score":3,"Answer":"Outlook logs in to a POP3 server and issues the STAT, LIST and UIDL commands; then if it decides the user has no new messages it logs out. I have observed Outlook doing this when tracing network traffic between a client and my DBMail POP3 server. I have seen Outlook fail to detect new messages on a POP3 server using this method. Thunderbird behaves similarly but I have never seen it fail to detect new messages.\nIssue the LIST and UIDL commands to the server after logging in. LIST gives you an index number (the message's linear position in the mailbox) and the size of each message. UIDL gives you the same index number and a computed hash value for each message.\nFor each user you can store the size and hash value given by LIST and UIDL. If you see the same size and hash value, assume it is the same message. When a given message no longer appears in this list, assume it has been deleted and clear it from your local memory.\nFor complete purity, remember the relative positions of the size\/hash pairs in the message list, so that you can support the possibility that they may repeat. (My guess on Outlook's new message detection failure is that sometimes these values do repeat, at least for DBMail, but Outlook remembers them even after they are deleted, and forever considers them not new. If it were me, I would try to avoid this behavior.)\nFootnote: Remember that the headers are part of the message. Do not trust anything in the header for this reason: dates, senders, even server hand-off information can be easily faked and cannot be assumed unique.","Q_Score":3,"Tags":"python,email,pop3","A_Id":7556750,"CreationDate":"2011-09-26T10:14:00.000","Title":"Download POP3 headers from a certain date (Python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been doing lots of searching and reading to solve this.\nThe main goal is let a Django-based web management system connecting to a device which runs a http server as well. 
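The LIST/UIDL approach in the POP3 answer above could look roughly like this with the standard-library poplib module (Python 2 era, matching the question); the credentials and the seen_uids bookkeeping are placeholders:

    import poplib

    def fetch_new_headers(host, user, password, seen_uids):
        """seen_uids: set of UIDL values already stored for this mailbox."""
        conn = poplib.POP3_SSL(host)
        conn.user(user)
        conn.pass_(password)
        new_headers = []
        for line in conn.uidl()[1]:                # lines look like "msgnum uid"
            msgnum, uid = line.split()
            if uid in seen_uids:
                continue
            headers = conn.top(int(msgnum), 0)[1]  # headers plus 0 body lines
            new_headers.append((uid, headers))
            seen_uids.add(uid)
        conn.quit()
        return new_headers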
Django will handle user request and ask device for the real data, then feedback to user.\nNow I have a \"kinda-work-in-concept\" solution:\n\nBrowser -> Apache Server: Browser have jQuery and HTML\/CSS to collect user request.\nApache Server-> Device HTTP Server:\n\nApache + mod_python(or somesay Apache + mod_wsgi?) , so I might control the Apache to do stuff like build up a session and cookies to record login.\nBut, this is the issue actually bugs me. \nHow to make it work? Using what to build up socket connection between this two servers?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":320,"Q_Id":7565812,"Users Score":0,"Answer":"If you have control over what runs on the device side, consider using XML-RPC to talk from client to server.","Q_Score":0,"Tags":"python,django,apache,mod-wsgi,mod-python","A_Id":7567682,"CreationDate":"2011-09-27T07:52:00.000","Title":"How to control Apache via Django to connect to mongoose(another HTTP server)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I know about the XLWT library, which I've used before on a Django project. XLWT is very neat but as far as I know, it doesn't support .xlsx which is the biggest obstacle in my case. I'm probably going to be dealing with more than 2**16 rows of information. Is there any other mature similar library? Or even better, is there a fork for the XLWT with this added functionality? I know there are libraries in C#, but if a python implementation already exists, it would be a lot better.\nThanks a bunch!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1504,"Q_Id":7576309,"Users Score":0,"Answer":"Export a CSV don't use .xlsx..","Q_Score":3,"Tags":"python,xls,xlsx,xlwt,openpyxl","A_Id":7576355,"CreationDate":"2011-09-27T22:14:00.000","Title":"Exporting to Excel .xlsx from a Python Pyramid project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am researching a project that would require hundreds of database writes per a minute. I have never dealt with this level of data writes before and I am looking for good scalable techniques and technologies.\nI am a comfortable python developer with experience in django and sql alchemy. I am thinking I will build the data interface on django, but I don't think that it is a good idea to go through the orm to do the amount of data writes I will require. I am definitely open to learning new technologies.\nThe solution will live on Amazon web services, so I have access to all their tools. Ultimately I am looking for advice on database selection, data writing techniques, and any other needs I may have that I do not realize. \nAny advice on where to start?\nThanks,\nCG","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":324,"Q_Id":7586999,"Users Score":0,"Answer":"You should actually be okay with low hundreds of writes per minute through SQLAlchemy (thats only a couple a second); if you're talking more like a thousand a minute, yeah that might be problematic.\nWhat kind of data do you have? If it's fairly flat (few tables, few relations), you might want to investigate a non-relational database such as CouchDB or Mongo. 
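A minimal sketch of the XML-RPC suggestion in the device answer above, assuming the device exposes a SimpleXMLRPCServer-style endpoint; the URL and remote method name are invented:

    import xmlrpclib                      # xmlrpc.client on Python 3
    from django.http import HttpResponse

    device = xmlrpclib.ServerProxy("http://device.local:8000/")

    def device_status(request):
        status = device.get_status()      # hypothetical remote method
        return HttpResponse(str(status))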
If you want to use SQL, I strongly reccommend PostgreSQL, it seems to deal with large databases and frequent writes a lot better than MySQL.\nIt also depends how complex the data is that you're inserting.\nI think unfortunately, you're going to just have to try a couple things and run benchmarks, as each situation is different and query optimizers are basically magic.","Q_Score":0,"Tags":"python,django,database-design,amazon-web-services","A_Id":7587624,"CreationDate":"2011-09-28T17:14:00.000","Title":"Setup for high volume of database writing","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am researching a project that would require hundreds of database writes per a minute. I have never dealt with this level of data writes before and I am looking for good scalable techniques and technologies.\nI am a comfortable python developer with experience in django and sql alchemy. I am thinking I will build the data interface on django, but I don't think that it is a good idea to go through the orm to do the amount of data writes I will require. I am definitely open to learning new technologies.\nThe solution will live on Amazon web services, so I have access to all their tools. Ultimately I am looking for advice on database selection, data writing techniques, and any other needs I may have that I do not realize. \nAny advice on where to start?\nThanks,\nCG","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":324,"Q_Id":7586999,"Users Score":0,"Answer":"If it's just a few hundred writes you still can do with a relational DB. I'd pick PostgreSQL (8.0+),\nwhich has a separate background writer process. It also has tuneable serialization levels so you\ncan enable some tradeoffs between speed and strict ACID compliance, some even at transaction level.\nPostgres is well documented, but it assumes some deeper understanding of SQL and relational DB theory to fully understand and make the most of it.\nThe alternative would be new fangled \"NO-SQL\" system, which can probably scale even better, but at the cost of buying into a very different technology system.\nAny way, if you are using python and it is not 100% critical to lose writes on shutdown or power loss, and you need a low latency, use a threadsafe Queue.Queue and worker threads to decouple the writes from your main application thread(s).","Q_Score":0,"Tags":"python,django,database-design,amazon-web-services","A_Id":7587774,"CreationDate":"2011-09-28T17:14:00.000","Title":"Setup for high volume of database writing","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Problem\nI am writing a program that reads a set of documents from a corpus (each line is a document). Each document is processed using a function processdocument, assigned a unique ID, and then written to a database. Ideally, we want to do this using several processes. 
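A bare-bones sketch of the Queue.Queue decoupling mentioned at the end of the high-volume-writes answer above (Python 2 module names, as in the question's era); the table, columns and connection factory are placeholders:

    import threading
    import Queue                                   # "queue" on Python 3

    write_queue = Queue.Queue()

    def db_writer(connect):
        conn = connect()                           # e.g. a psycopg2.connect callable
        cur = conn.cursor()
        while True:
            row = write_queue.get()
            if row is None:                        # sentinel: shut down
                break
            cur.execute("INSERT INTO measurements (sensor, value) VALUES (%s, %s)", row)
            conn.commit()
        conn.close()

    # Request handlers just enqueue and return immediately:
    #   write_queue.put((sensor_id, value))
    # threading.Thread(target=db_writer, args=(make_connection,)).start()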
The logic is as follows:\n\nThe main routine creates a new database and sets up some tables.\nThe main routine sets up a group of processes\/threads that will run a worker function.\nThe main routine starts all the processes.\nThe main routine reads the corpus, adding documents to a queue.\nEach process's worker function loops, reading a document from a queue, extracting the information from it using processdocument, and writes the information to a new entry in a table in the database. \nThe worker loops breaks once the queue is empty and an appropriate flag has been set by the main routine (once there are no more documents to add to the queue).\n\nQuestion\nI'm relatively new to sqlalchemy (and databases in general). I think the code used for setting up the database in the main routine works fine, from what I can tell. Where I'm stuck is I'm not sure exactly what to put into the worker functions for each process to write to the database without clashing with the others. \nThere's nothing particularly complicated going on: each process gets a unique value to assign to an entry from a multiprocessing.Value object, protected by a Lock. I'm just not sure whether what I should be passing to the worker function (aside from the queue), if anything. Do I pass the sqlalchemy.Engine instance I created in the main routine? The Metadata instance? Do I create a new engine for each process? Is there some other canonical way of doing this? Is there something special I need to keep in mind?\nAdditional Comments\nI'm well aware I could just not bother with the multiprocessing but and do this in a single process, but I will have to write code that has several processes reading for the database later on, so I might as well figure out how to do this now.\nThanks in advance for your help!","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":1473,"Q_Id":7603790,"Users Score":5,"Answer":"The MetaData and its collection of Table objects should be considered a fixed, immutable structure of your application, not unlike your function and class definitions. As you know with forking a child process, all of the module-level structures of your application remain present across process boundaries, and table defs are usually in this category.\nThe Engine however refers to a pool of DBAPI connections which are usually TCP\/IP connections and sometimes filehandles. The DBAPI connections themselves are generally not portable over a subprocess boundary, so you would want to either create a new Engine for each subprocess, or use a non-pooled Engine, which means you're using NullPool.\nYou also should not be doing any kind of association of MetaData with Engine, that is \"bound\" metadata. 
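A short sketch of the per-subprocess Engine (with NullPool) advice above; process_document, the queue protocol and the database URL are placeholders taken loosely from the question's setup, not a definitive implementation:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.pool import NullPool

    def worker(doc_queue, db_url):
        # Built inside the child process, so no DBAPI connection crosses the fork.
        engine = create_engine(db_url, poolclass=NullPool)
        session = sessionmaker(bind=engine)()
        while True:
            doc = doc_queue.get()
            if doc is None:
                break
            session.add(process_document(doc))   # hypothetical: returns a mapped object
            session.commit()
        session.close()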
This practice, while prominent on various outdated tutorials and blog posts, is really not a general purpose thing and I try to de-emphasize this way of working as much as possible.\nIf you're using the ORM, a similar dichotomy of \"program structures\/active work\" exists, where your mapped classes of course are shared between all subprocesses, but you definitely want Session objects to be local to a particular subprocess - these correspond to an actual DBAPI connection as well as plenty of other mutable state which is best kept local to an operation.","Q_Score":1,"Tags":"python,database,multithreading,sqlalchemy,multiprocessing","A_Id":7603832,"CreationDate":"2011-09-29T21:50:00.000","Title":"How to use simple sqlalchemy calls while using thread\/multiprocessing","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am pretty sure that I have managed to dork up my MySQLdb installation. I have all of the following installed correctly on a fresh install of OS X Lion:\n\nphpMyAdmin\nMySQL 5.5.16\nDjango 1.3.1\n\nAnd yet when I try to run \"from django.db import connection\" in a django console, I get the following:\n\n\n\n\nfrom django.db import connection Traceback (most recent call\n last): File \"\", line 1, in File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/init.py\",\n line 78, in \n connection = connections[DEFAULT_DB_ALIAS] File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/utils.py\",\n line 93, in getitem\n backend = load_backend(db['ENGINE']) File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/utils.py\",\n line 33, in load_backend\n return import_module('.base', backend_name) File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/utils\/importlib.py\",\n line 35, in import_module\n import(name) File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/backends\/mysql\/base.py\",\n line 14, in \n raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\n ImproperlyConfigured: Error loading MySQLdb module: dlopen(\/Users\/[my\n username]\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp\/_mysql.so,\n 2): Library not loaded: libmysqlclient.18.dylib Referenced from:\n \/Users\/[my\n username]\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp\/_mysql.so\n Reason: image not found\n\n\n\n\nI have no idea why this is happening, could somebody help walk me through this?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3834,"Q_Id":7605212,"Users Score":1,"Answer":"Install pip if you haven't already, and run \npip install MySQL-Python","Q_Score":2,"Tags":"python,mysql,django,macos,mysql-python","A_Id":7605229,"CreationDate":"2011-09-30T01:53:00.000","Title":"Having an issue with setting up MySQLdb on Mac OS X Lion in order to support Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I am pretty sure that I have managed to dork up my MySQLdb installation. 
I have all of the following installed correctly on a fresh install of OS X Lion:\n\nphpMyAdmin\nMySQL 5.5.16\nDjango 1.3.1\n\nAnd yet when I try to run \"from django.db import connection\" in a django console, I get the following:\n\n\n\n\nfrom django.db import connection Traceback (most recent call\n last): File \"\", line 1, in File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/init.py\",\n line 78, in \n connection = connections[DEFAULT_DB_ALIAS] File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/utils.py\",\n line 93, in getitem\n backend = load_backend(db['ENGINE']) File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/utils.py\",\n line 33, in load_backend\n return import_module('.base', backend_name) File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/utils\/importlib.py\",\n line 35, in import_module\n import(name) File\n \"\/Library\/Python\/2.7\/site-packages\/Django-1.3.1-py2.7.egg\/django\/db\/backends\/mysql\/base.py\",\n line 14, in \n raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\n ImproperlyConfigured: Error loading MySQLdb module: dlopen(\/Users\/[my\n username]\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp\/_mysql.so,\n 2): Library not loaded: libmysqlclient.18.dylib Referenced from:\n \/Users\/[my\n username]\/.python-eggs\/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp\/_mysql.so\n Reason: image not found\n\n\n\n\nI have no idea why this is happening, could somebody help walk me through this?","AnswerCount":3,"Available Count":2,"Score":0.3215127375,"is_accepted":false,"ViewCount":3834,"Q_Id":7605212,"Users Score":5,"Answer":"I found the following solution for this issue. It worked for me. I have encountered this problem when I was running python console from PyCharm.\nsudo ln -s \/usr\/local\/mysql\/lib\/libmysqlclient.18.dylib \/usr\/lib\/libmysqlclient.18.dylib","Q_Score":2,"Tags":"python,mysql,django,macos,mysql-python","A_Id":12027574,"CreationDate":"2011-09-30T01:53:00.000","Title":"Having an issue with setting up MySQLdb on Mac OS X Lion in order to support Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been spending the better part of the weekend trying to figure out the best way to transfer data from an MS Access table into an Excel sheet using Python. I've found a few modules that may help (execsql, python-excel), but with my limited knowledge and the modules I have to use to create certain data (I'm a GIS professional, so I'm creating spatial data using the ArcGIS arcpy module into an access table) \nI'm not sure what the best approach should be. All I need to do is copy 4 columns of data from access to excel and then format the excel. I have the formatting part solved.\nShould I:\nIterate through the rows using a cursor and somehow load the rows into excel?\nCopy the columns from access to excel?\nExport the whole access table into a sheet in excel?\nThanks for any suggestions.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":4767,"Q_Id":7630142,"Users Score":1,"Answer":"Another idea - how important is the formatting part? If you can ditch the formatting, you can output your data as CSV. 
Excel can open CSV files, and the CSV format is much simpler then the Excel format - it's so simple you can write it directly from Python like a text file, and that way you won't need to mess with Office COM objects.","Q_Score":2,"Tags":"python,excel,ms-access","A_Id":7636416,"CreationDate":"2011-10-03T00:23:00.000","Title":"Copy data from MS Access to MS Excel using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been spending the better part of the weekend trying to figure out the best way to transfer data from an MS Access table into an Excel sheet using Python. I've found a few modules that may help (execsql, python-excel), but with my limited knowledge and the modules I have to use to create certain data (I'm a GIS professional, so I'm creating spatial data using the ArcGIS arcpy module into an access table) \nI'm not sure what the best approach should be. All I need to do is copy 4 columns of data from access to excel and then format the excel. I have the formatting part solved.\nShould I:\nIterate through the rows using a cursor and somehow load the rows into excel?\nCopy the columns from access to excel?\nExport the whole access table into a sheet in excel?\nThanks for any suggestions.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":4767,"Q_Id":7630142,"Users Score":1,"Answer":"The best approach might be to not use Python for this task.\nYou could use the macro recorder in Excel to record the import of the External data into Excel. \nAfter starting the macro recorder click Data -> Get External Data -> New Database Query and enter your criteria. Once the data import is complete you can look at the code that was generated and replace the hard coded search criteria with variables.","Q_Score":2,"Tags":"python,excel,ms-access","A_Id":7630189,"CreationDate":"2011-10-03T00:23:00.000","Title":"Copy data from MS Access to MS Excel using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am importing text files into excel using xlwt module. But it allows only 256 columns to be stored. Are there any ways to solve this problem?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":18626,"Q_Id":7658513,"Users Score":0,"Answer":"If you trying to write to the columns in the for loop and getting this error, then re-initalize the column to 0 while iterating.","Q_Score":14,"Tags":"python","A_Id":70290332,"CreationDate":"2011-10-05T08:21:00.000","Title":"Python - Xlwt more than 256 columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am importing text files into excel using xlwt module. But it allows only 256 columns to be stored. Are there any ways to solve this problem?","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":18626,"Q_Id":7658513,"Users Score":1,"Answer":"Is that a statement of fact or should xlwt support more than 256 columns? What error do you get? 
What does your code look like?\nIf it truly does have a 256 column limit, just write your data in a csv-file using the appropriate python module and import the file into Excel.","Q_Score":14,"Tags":"python","A_Id":7658627,"CreationDate":"2011-10-05T08:21:00.000","Title":"Python - Xlwt more than 256 columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Python with Celery and RabbitMQ to make a web spider to count the number of links on a page.\nCan a database, such as MySQL, be written into asynchronously? Is it OK to commit the changes after every row added, or is it required to batch them (multi-add) and then commit after a certain number of rows\/duration?\nI'd prefer to use SQLAlchemy and MySQL, unless there is a more recommended combination for Celery\/RabbitMQ. I also see NoSQL (CouchDB?) recommended.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":728,"Q_Id":7659246,"Users Score":1,"Answer":"For write intensive operation like Counters and Logs NoSQL solution are always the best choice. Personally I use a mongoDB for this kind of tasks.","Q_Score":0,"Tags":"python,database,asynchronous,rabbitmq,celery","A_Id":7780116,"CreationDate":"2011-10-05T09:31:00.000","Title":"Python Celery Save Results in Database Asynchronously","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to edit several excel files (.xls) without changing the rest of the sheet. The only thing close so far that I've found is the xlrd, xlwt, and xlutils modules. The problem with these is it seems that xlrd evaluates formulae when reading, then puts the answer as the value of the cell. Does anybody know of a way to preserve the formulae so I can then use xlwt to write to the file without losing them? I have most of my experience in Python and CLISP, but could pick up another language pretty quick if they have better support. Thanks for any help you can give!","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":8902,"Q_Id":7665486,"Users Score":1,"Answer":"As of now, xlrd doesn't read formulas. It's not that it evaluates them, it simply doesn't read them.\nFor now, your best bet is to programmatically control a running instance of Excel, either via pywin32 or Visual Basic or VBScript (or some other Microsoft-friendly language which has a COM interface). If you can't run Excel, then you may be able to do something analogous with OpenOffice.org instead.","Q_Score":0,"Tags":"python,excel,formula,xlwt,xlrd","A_Id":7667880,"CreationDate":"2011-10-05T17:52:00.000","Title":"Is there any way to edit an existing Excel file using Python preserving formulae?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using postgresql and python and I need to store data group by week of the year. 
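The "just write a CSV" suggestions above (for the Access-to-Excel export and for the 256-column xlwt limit alike) come down to a few lines with the standard csv module; the filename, header and rows iterable are placeholders:

    import csv

    with open("export.csv", "wb") as f:            # "wb" is the Python 2 convention for csv
        writer = csv.writer(f)
        writer.writerow(["name", "address", "city", "price"])
        for row in rows:                           # rows: any iterable of sequences
            writer.writerow(row)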
So, there's plenty alternatives:\n\nweek and year in two separated fields\na date pointing to the start of the week (or a random day of the week)\nAnd, the one I like: an interval type.\n\nI never use it, but reading the docs, seems to fit. But then, reading psycopg docs I found interval mapped to python timedelta object... seems weird to me, a timedelta is just a difference.\nSo, there are two question here, really:\n\nCan I handle this choice using psycopg2? \nIs the better alternative?\n\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":661,"Q_Id":7668822,"Users Score":4,"Answer":"The PostgreSQL interval type isn't really what you're looking for -- it's specifically intended for storing an arbitrary length of time, ranging anywhere from a microsecond to a few million years. An interval has no starting or ending point; it's just a measure of \"how long\".\nIf you're specifically after storing which week an event is associated with, you're probably better off with either of your first two options.","Q_Score":0,"Tags":"python,postgresql,psycopg2","A_Id":7668912,"CreationDate":"2011-10-05T23:11:00.000","Title":"psycopg2: interval type for storing weeks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script to access data in an established database and unfortunately, I'm breaking the DB. I'm able to recreate the issue from the command line:\n\n [user@box tmp]# python\n Python 2.7.2 (default, Sep 19 2011, 15:02:41) \n [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> import pgdb\n >>> db = pgdb.connect('localhost:my_db:postgres')\n >>> cur = db.cursor()\n >>> cur.execute(\"SELECT * FROM mytable LIMIT 10\")\n >>> cur.close()\n >>> \n\nAt this point any activity to mytable is greatly degraded and \"select * from pg_stat_activity\" shows my connection as \"IDLE in transaction\". If I call db.close() everything is fine, but my script loops infinitely and I didn't think I'd need to open and close the db connection with each loop. I don't think it has anything to do with the fact that I'm not using the data above as in my real script I am calling fetchone() (in a loop) to process the data. I'm not much of a DB guy so I'm not sure what other info would be useful. My postgres version is 9.1.0 and python is 2.7.2 as shown above.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1471,"Q_Id":7669434,"Users Score":2,"Answer":"I suggest using psycopg2 instead of pgdb. pgdb uses the following semantics:\nconnect() -> open database connection, begin transaction\ncommit() -> commit, begin transaction\nrollback() -> rollback, begin transaction\nexecute() -> execute statement \npsycopg2, on the other hand, uses the following semantics:\nconnect() -> open database connection\ncommit() -> commit\nrollback() -> rollback\nexecute() -> begin transaction unless already in transaction, execute statement \nso, as Amber mentioned, you can do a rollback or commit after your select statement and terminate the transaction. 
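Returning to the week-storage question above: if the chosen alternative is "a date pointing to the start of the week", the normalisation is a one-liner, sketched here with an arbitrary example date:

    import datetime

    def week_start(d):
        """Monday of the ISO week containing date d; store this in a DATE column."""
        return d - datetime.timedelta(days=d.weekday())

    week_start(datetime.date(2011, 10, 6))   # -> datetime.date(2011, 10, 3)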
Unfortunately, with pgdb, you will immediately start a new transaction after you rollback or commit (even if you haven't performed any work).\nFor many database systems, pgdb's behavior is fine, but because of the way PostgreSQL handles transactions, it can cause trouble for you if you've got lots of connections accessing the same tables (trouble specifically with vacuum).\nWhy does pgdb start a transaction right away? The Python DB-API (2.0) spec calls for it to do so. Seems kind of silly to me, but that's the way the spec is written.","Q_Score":2,"Tags":"python,postgresql,pgdb","A_Id":7670330,"CreationDate":"2011-10-06T01:15:00.000","Title":"python pgdb hanging database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script to access data in an established database and unfortunately, I'm breaking the DB. I'm able to recreate the issue from the command line:\n\n [user@box tmp]# python\n Python 2.7.2 (default, Sep 19 2011, 15:02:41) \n [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> import pgdb\n >>> db = pgdb.connect('localhost:my_db:postgres')\n >>> cur = db.cursor()\n >>> cur.execute(\"SELECT * FROM mytable LIMIT 10\")\n >>> cur.close()\n >>> \n\nAt this point any activity to mytable is greatly degraded and \"select * from pg_stat_activity\" shows my connection as \"IDLE in transaction\". If I call db.close() everything is fine, but my script loops infinitely and I didn't think I'd need to open and close the db connection with each loop. I don't think it has anything to do with the fact that I'm not using the data above as in my real script I am calling fetchone() (in a loop) to process the data. I'm not much of a DB guy so I'm not sure what other info would be useful. My postgres version is 9.1.0 and python is 2.7.2 as shown above.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1471,"Q_Id":7669434,"Users Score":2,"Answer":"Try calling db.rollback() before you close the cursor (or if you're doing a write operation, db.commit()).","Q_Score":2,"Tags":"python,postgresql,pgdb","A_Id":7669476,"CreationDate":"2011-10-06T01:15:00.000","Title":"python pgdb hanging database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pretty recent (but not newborn) to both Python, SQLAlchemy and Postgresql, and trying to understand inheritance very hard.\nAs I am taking over another programmer's code, I need to understand what is necessary, and where, for the inheritance concept to work.\nMy questions are:\n\nIs it possible to rely only on SQLAlchemy for inheritance? In other words, can SQLAlchemy apply inheritance on Postgresql database tables that were created without specifying INHERITS=?\nIs the declarative_base technology (SQLAlchemy) necessary to use inheritance the proper way. 
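Putting the two pgdb answers above together, the fix for the idle-in-transaction session is simply to end the implicit transaction once the rows are fetched; the connection string and table are the ones from the question:

    import pgdb

    db = pgdb.connect('localhost:my_db:postgres')
    cur = db.cursor()
    cur.execute("SELECT * FROM mytable LIMIT 10")
    rows = cur.fetchall()
    cur.close()
    db.rollback()   # or db.commit(); either ends the transaction pgdb opened implicitly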
If so, we'll have to rewrite everything, so please don't discourage me.\nAssuming we can use Table instance, empty Entity classes and mapper(), could you give me a (very simple) example of how to go through the process properly (or a link to an easily understandable tutorial - I did not find any easy enough yet).\n\nThe real world we are working on is real estate objects. So we basically have\n- one table immobject(id, createtime)\n- one table objectattribute(id, immoobject_id, oatype)\n- several attribute tables: oa_attributename(oa_id, attributevalue)\nThanks for your help in advance.\nVincent","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1451,"Q_Id":7672569,"Users Score":4,"Answer":"Welcome to Stack Overflow: in the future, if you have more than one question; you should provide a separate post for each. Feel free to link them together if it might help provide context.\nTable inheritance in postgres is a very different thing and solves a different set of problems from class inheritance in python, and sqlalchemy makes no attempt to combine them.\nWhen you use table inheritance in postgres, you're doing some trickery at the schema level so that more elaborate constraints can be enforced than might be easy to express in other ways; Once you have designed your schema; applications aren't normally aware of the inheritance; If they insert a row; it just magically appears in the parent table (much like a view). This is useful, for instance, for making some kinds of bulk operations more efficient (you can just drop the table for the month of january).\nThis is a fundamentally different idea from inheritance as seen in OOP (in python or otherwise, with relational persistence or otherwise). In that case, the application is aware that two types are related, and that the subtype is a permissible substitute for the supertype. \"A holding is an address, a contact has an address therefore a contact can have a holding.\" \nWhich of these, (mostly orthogonal) tools you need depends on the application. You might need neither, you might need both. \n\n\nSqlalchemy's mechanisms for working with object inheritance is flexible and robust, you should use it in favor of a home built solution if it is compatible with your particular needs (this should be true for almost all applications).\nThe declarative extension is a convenience; It allows you to describe the mapped table, the python class and the mapping between the two in one 'thing' instead of three. It makes your code more \"DRY\"; It is however only a convenience layered on top of \"classic sqlalchemy\" and it isn't necessary by any measure.\nIf you find that you need table inheritance that's visible from sqlalchemy; your mapped classes won't be any different from not using those features; tables with inheritance are still normal relations (like tables or views) and can be mapped without knowledge of the inheritance in the python code.","Q_Score":1,"Tags":"python,postgresql,inheritance,sqlalchemy","A_Id":7675115,"CreationDate":"2011-10-06T09:40:00.000","Title":"Python, SQLAlchemy and Postgresql: understanding inheritance","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"At work we want our next generation product to be based on a graph database. 
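Since the inheritance question above asks for a very simple classical-mapping example (a Table instance, a plain class and mapper(), no declarative), here is a sketch using the immobject table from the question; the column list and connection URL are abbreviated/invented:

    from sqlalchemy import Table, Column, Integer, DateTime, MetaData, create_engine
    from sqlalchemy.orm import mapper, sessionmaker

    metadata = MetaData()

    immobject_table = Table('immobject', metadata,
        Column('id', Integer, primary_key=True),
        Column('createtime', DateTime))

    class ImmObject(object):      # plain class; no declarative_base needed
        pass

    mapper(ImmObject, immobject_table)

    engine = create_engine('postgresql://user:pass@localhost/realestate')  # placeholder URL
    metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(ImmObject())
    session.commit()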
I'm looking for suggestions as to what database engine might be appropriate for our new project:\nOut product is intended to keep track of a large number of prices for goods. Here's a simplistic example of what it does - supposing you wanted to estimate the price of gasoline in the UK - you know that Gasoline is refined from crude-oil. If you new the price of crude oil in the UK you could estimate the price of anything simply by adding the cost of refining, transporting (etc). Actually things are more complex because there are a number of sources of crude-oil and hundreds of refined oil products. The prices of oil products can be affected by the availability of other energy sources (e.g. nuclear, wind, natural gas) and the demand. It's kind of complex!\nThe idea is that we want to model the various inter-related goods and their costs of refining, transportation (etc) as an asyclic directed graph. The idea being, when an event causes a price to change then we want to be quickly able to determine what kinds of things are affected and re-calculate those prices ASAP. \nEssentially we need a database which can represent the individual commodities as nodes in the graph. Each node will store a number of curves and surfaces of information pertaining to the product. \nWe want to represent the various costs & transformations (e.g. refining, transportation) as labels on the edges. As with the nodes, the information we want to store could be quite complex - not just single values but curves and surfaces. \nThe calculations we do are all linear with respect to the size of the objects, however since the graph could be very big we need to be able to traverse the graph very quickly. \nWe are Java and Python centric - ideally we are after a product that runs on the JVM but has really good APIs for both Python and Java. We don't care so much about other languages... but .Net would be nice to have (even though it might be years before we get round to doing something with it).\nWe'd definitely like something which was high-performance - but more importantly the system needs to have a degree of hardware fault tolerance. For example, we'd like to distribute the database across a number of physical servers. In the event that any of the servers go down we'd like to be able to continue without an interruption. \nOh, and we are really lazy. We dont want to spend much time writing infrastructure - so if the database came with tools that allow us to do as much as possible of this kind of thing with very little work that's fine by us. It would also be a real bonus if there was a grid technology associated with the graph DB, that way we could push a sequence of re-calculate jobs onto a compute grid and have much of our calculation done in paralell. \nSo, that's a description of the kind of thing we want to build. What I want to know is whether there are any mature technologies which will help us achieve this. 
As I mentioned before, we have a preference for Python & JVM, however if the technology is really good and comes with great bindings for Python + Java we'd consider almost anything.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":303,"Q_Id":7674895,"Users Score":3,"Answer":"Neo4J is the most mature graphDB I know of - and is java, with bindings for python too, or REST","Q_Score":2,"Tags":"java,python,database,graph","A_Id":7675078,"CreationDate":"2011-10-06T13:27:00.000","Title":"I'm looking for a graph-database for a Java\/Python centric organization","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a bunch of utility scripts in Visual FoxPro, which we use to interactively cleanse\/format data. We'd like to start migrating this code to make use of other database platforms, like MySQL or SQLite. \nFor instance we have a script that we run which converts the name and\/or address lines to proper upper\/lower case. This code goes through an entire table and analyzes\/fixes each row. There are others, that do things like parse and standardize the address and even duplicate detection...\nWe're thinking of migrating the code to Python and possibly using something like SQLAlchemy as a \"middleman\". \nIn Visual FoxPro the database\/tables are integrated so we can just open the table and run commands. MySQL is different in that we need to extract data from it, then work on that extracted data, then update the table. \nWhat would be the best approach? \nI see several possibilities:\n1) Extract the the entire data set to be worked on, say all the address fields, if that's what we're going to be working with, then updating it all and writing it all back...\n2) Extract the data set in chunks, so as to not potentially consume vast amounts of system memory... then update and write back\n3) Generate SQL code, perhaps with the help of a tool like SQLAlchemy, that gets sent to and executed by the server...\n4) ??? Anything else I didn't think of?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":789,"Q_Id":7681017,"Users Score":0,"Answer":"It seems like you're trying to do several things all at once. Could you take a step-by-step approach? Perhaps cleansing the data as they are right now using your normal, usual scripts. Then migrate the database to MySQL.\nIt is easy to migrate the database if VisualFoxPro offers a way to export the database to, say, CSV. You can then import that CSV into MySQL directly, with very little trouble. That gives you two databases that should be functionally identical. Of course, you have to prove that they are indeed identical, which isn't too hard but is time-consuming. You might be able to use SQLAlchemy to help.\nWhen the MySQL database is right, that's the time to port your cleansing scripts to Python or something and get those working. 
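\nAs a rough illustration of the CSV-to-MySQL import step, a minimal sketch (assuming the MySQLdb driver; the file name, credentials and the customers table are placeholders, not your real schema):\n\n import csv\n import MySQLdb\n\n conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='legacy')\n cur = conn.cursor()\n f = open('customers.csv', 'rb')\n reader = csv.reader(f)\n header = reader.next()  # first row holds the column names; only interpolate trusted headers into SQL\n placeholders = ', '.join(['%s'] * len(header))\n sql = 'INSERT INTO customers (%s) VALUES (%s)' % (', '.join(header), placeholders)\n cur.executemany(sql, list(reader))  # parameterized insert of every data row\n conn.commit()\n f.close()\n conn.close()\n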
\nThat's how I would approach this problem: break it into pieces and not try to do too much in any single step.\nHTH","Q_Score":1,"Tags":"python,mysql,sqlalchemy,foxpro,data-cleaning","A_Id":7681237,"CreationDate":"2011-10-06T22:03:00.000","Title":"What's the best language\/technique to perform advanced data cleansing and formatting on a SQL\/MySQL\/PostgreSQL table?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Pylons application using SQLAlchemy with SQLite as backend. I would like to know if every read operation going to SQLite will always lead to a hard disk read (which is very slow compared to RAM) or some caching mechanisms are already involved. \n\ndoes SQLite maintain a subset of the database in RAM for faster access ? \nCan the OS (Linux) do that automatically ? \nHow much speedup could I expect by using a production database (MySQL or PostgreSQL) instead of SQLite?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":339,"Q_Id":7710895,"Users Score":3,"Answer":"Yes, SQLite has its own memory cache. Check PRAGMA cache_size for instance. Also, if you're looking for speedups, check PRAGMA temp_store. There is also API for implementing your own cache.\nThe SQLite database is just a file to the OS. Nothing is 'automatically' done for it. To ensure caching does happen, there are sqlite.h defines and runtime pragma settings.\nIt depends, there are a lot of cases when you'll get a slowdown instead.","Q_Score":5,"Tags":"python,sqlite,sqlalchemy","A_Id":7712124,"CreationDate":"2011-10-10T09:35:00.000","Title":"Are SQLite reads always hitting disk?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Best practice question about setting Mongo indexes. Mongoengine, the Python ORM wrapper, allows you to set indexes in the Document meta class. \nWhen is this meta class introspected and the index added? Can I build a collection via a mongoengine Document class and then add an index after the fact?\nIf I remove the index from the meta class, is the index automatically removed from the corresponding collection?\nThanks,","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2747,"Q_Id":7758898,"Users Score":6,"Answer":"You can add an index at any time and ensureIndex will be called behind the scenes so it will be added if it doesn't exist.\nIf you remove an index from the meta - you will have to use pymongo or the shell to remove the index.","Q_Score":7,"Tags":"python,mongodb,indexing,mongoengine","A_Id":9082609,"CreationDate":"2011-10-13T18:43:00.000","Title":"How does MongoEngine handle Indexes (creation, update, removal)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"which is better for production with web2py? please more insights.\nI'm very new 2 web2py and i am working on a small pharmacy mgt system. \npls which is better for production postgres or mysql? if postgres, step by step installation guide pls so to smoothly work with web2py. 
thanks","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1454,"Q_Id":7761339,"Users Score":0,"Answer":"I say. Whatever you can work with from console. Some events may require fixing db from fingertip, you may also want to have some other ongoing actions in db and it might need to be done outside web2py.\nPosgreSQL is my choice as there are much less irregular behaviours thus its easier to grasp...","Q_Score":2,"Tags":"python,web2py","A_Id":9021132,"CreationDate":"2011-10-13T22:50:00.000","Title":"which is better for production with web2py?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using memcached on a web site, and I am currently needing to open connections to a database and socket each time a function is called. In the case of the db connection, I am having to decide at runtime, which database to connect to.\nBecause of the (default) stateless nature of web apps, I am having to tear down (i.e. close) the connection after each function call. I am wondering if it is possible to store (i.e. cache) the socket connection and the database connections in memcache - do that I have a pool of db connections and a socket connection already open that I can use whenever the function is called.\nIs this safe ? \n[[Additional Info]]\nI will be interfacing to memcached primarily, with PHP and Python\nBTW - memcached is running on the same machine (so physical address issues should not arise).","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":7783860,"Users Score":0,"Answer":"Both languages support database connections which live beyond the lifetime of a single request. Don't use memcache for that!","Q_Score":0,"Tags":"php,python,memcached","A_Id":7785856,"CreationDate":"2011-10-16T11:03:00.000","Title":"Is it safe to store a connection (effectively a pointer) in memcache?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using memcached on a web site, and I am currently needing to open connections to a database and socket each time a function is called. In the case of the db connection, I am having to decide at runtime, which database to connect to.\nBecause of the (default) stateless nature of web apps, I am having to tear down (i.e. close) the connection after each function call. I am wondering if it is possible to store (i.e. cache) the socket connection and the database connections in memcache - do that I have a pool of db connections and a socket connection already open that I can use whenever the function is called.\nIs this safe ? \n[[Additional Info]]\nI will be interfacing to memcached primarily, with PHP and Python\nBTW - memcached is running on the same machine (so physical address issues should not arise).","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":150,"Q_Id":7783860,"Users Score":0,"Answer":"I am wondering if it is possible to store (i.e. 
cache) the socket connection and the database connections in memcache \n\nNo.","Q_Score":0,"Tags":"php,python,memcached","A_Id":7785929,"CreationDate":"2011-10-16T11:03:00.000","Title":"Is it safe to store a connection (effectively a pointer) in memcache?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a web application that allows a user to upload an image. When the image is uploaded, it needs to be resized to one or more sizes, each of which needs to be sent to Amazon s3 for storage. Metadata and urls for each size of the image are stored in a single database record on the web server. I'm using a message queue to perform the resizing and uploading asynchronously (as there is potential for large images and multiple resizes per request). When the resize\/upload task completes, the database record needs to be updated with the url. \nMy problem is that the worker executing the task will not have access to the database. I was thinking of firing off a http callback from the worker back to the web application after the task is complete with the appropriate information for updating the database record. Are there any other alternatives or reasons I should do this another way?\nI'm using python\/pylons for the web backend, mysql for the database and celery\/amqp for messaging.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":639,"Q_Id":7802504,"Users Score":2,"Answer":"It seems that your goal is not to decouple the database from the MQ, but rather from the workers. As such, you can create another queue that receives completion notifications, and have another single worker that picks up the notifications and updates the database appropriately.","Q_Score":0,"Tags":"python,database,pylons,message-queue,celery","A_Id":7802664,"CreationDate":"2011-10-18T04:41:00.000","Title":"Best practice for decoupling a database from a message queue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use Berkeley DB(BDB) in nginx. When a request arrives, nginx passes the URI as a key to BDB and checks if that key has a value in BDB file.\nI actually did in an example. I add some data in BDB, and run nginx, it's OK. I can access it.\nBut when I add some data in running BDB with nginx (using Python), I can't get the new data. Even I use the another python interpreter access the BDB file, it was actually has the new data. \nSteps of the request in nginx:\n\nstart up nginx, and it will init my plugin (BDB env and init)\na request comes in\ncontrol in plugin, check if key(uri) has a value. 
If true, return it, or pass\n...rest of process","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":237,"Q_Id":7817567,"Users Score":1,"Answer":"it supports\n\nA Single Process With One Thread\nA Single Process With Multiple Threads\nGroups of Cooperating Processes\nGroups of Unrelated Processes","Q_Score":1,"Tags":"python,nginx,berkeley-db","A_Id":8835081,"CreationDate":"2011-10-19T06:52:00.000","Title":"Does Berkeley DB only support one processor operation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there way to execute DDL script from Python with kinterbasdb library for Firebird database?\nBasically I'd like to replicate 'isql -i myscript.sql' command.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":412,"Q_Id":7825066,"Users Score":2,"Answer":"It has been a while since I used kinterbasdb, but as far as I know you should be able to do this with any query command which can also be used for INSERT, UPDATE and DELETE (ie nothing that produces a resultset). So Connection.execute_immediate and Cursor.execute should work.\nDid you actually try this.\nBTW: With Firebird it is advisable not to mix DDL and DML in one transaction.\nEDIT:\nI just realised that you might have meant a full DDL script with multiple statements, if that is what you mean, then: no you cannot, you need to execute each statement individually.\nYou might be able to use an EXECUTE BLOCK statement, but you may need to modify your script so much that it would be easier to simply try to split the actual script into individual statements.","Q_Score":1,"Tags":"python,firebird,ddl,kinterbasdb","A_Id":7832347,"CreationDate":"2011-10-19T16:58:00.000","Title":"How to run DDL script with kinterbasdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm very new to Python and I'm trying to write a sort of recipe organizer to get acquainted with the language. Basically, I am unsure how how I should be storing the recipes.\nFor now, the information I want to store is: \n\nRecipe name\nIngredient names\nIngredient quantities\nPreparation\n\nI've been thinking about how to do this with the built-in sqlite3, but I know nothing about database architecture, and haven't been able to find a good reference.\nI suppose one table would contain recipe names and primary keys. Preparation could be in a different table with the primary key as well. Would each ingredient\/quantity pair need its own table. \nIn other words, there would be a table for ingredientNumberOne, and each recipe's first ingredient, with the quantity, would go in there. Then each time recipe comes along with more ingredients than there are tables, a new table would be created. \nAm I even correct in assuming that sqlite3 is sufficient for this task?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2639,"Q_Id":7827859,"Users Score":2,"Answer":"Just a general data modeling concept: you never want to name anything \"...NumberOne\", \"...NumberTwo\". Data models designed in this way are very difficult to query. You'll ultimately need to visit each of N tables for 1 to N ingredients. 
Also, each table in the model would ultimately have the same fields making maintenance a nightmare. \nRather, just have one ingredient table that references the \"recipe\" table. \nUltimately, I just realized this doesn't exactly answer the question, but you could implement this solution in Sqlite. I just get worried when good developers start introducing bad patterns into the data model. This comes from a guy who's been on both sides of the coin.","Q_Score":0,"Tags":"python,database,database-design","A_Id":7827955,"CreationDate":"2011-10-19T20:42:00.000","Title":"How to store recipe information with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a simple bookmarking app using django which uses sqlite3 as the database backend.\nCan I upload it to appengine and use it? What is \"Django-nonrel\"?","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":582,"Q_Id":7838667,"Users Score":5,"Answer":"Unfortunately, no you can't. Google App Engine does not allow you to write files, and that is needed by SQLite.\nUntil recently, it had no support of SQL at all, preferring a home-grown solution (see the \"CAP theorem\" as for why). This motivated the creation of projects like \"Django-nonrel\" which is a version of Django that does not require a relational database.\nRecently, they opened a beta service that proposes a MySQL database. But beware that it is fundamentally less reliable, and that it is probably going to be expensive.\nEDIT: As Nick Johnson observed, this new service (Google Cloud SQL) is fundamentally less scalable, but not fundamentally less reliable.","Q_Score":3,"Tags":"python,django,google-app-engine,web-applications,sqlite","A_Id":7838935,"CreationDate":"2011-10-20T15:52:00.000","Title":"Can I deploy a django app which uses sqlite3 as backend on google app engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using GeoDjango with PostGIS. Then I am into trouble on how to get the nearest record from the given coordinates from my postgres db table.","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":5401,"Q_Id":7846355,"Users Score":2,"Answer":"I have no experience with GeoDjango, but on PostgreSQL\/PostGIS you have the st_distance(..) function. So, you can order your results by st_distance(geom_column, your_coordinates) asc and see what are the nearest rows.\nIf you have plain coordinates (no postgis geometry), you can convert your coordinates to a point with the geometryFromText function.\nIs that what you were looking for? 
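\nTo make that concrete, a minimal sketch run from Python with psycopg2 (the table, columns and SRID are made-up placeholders, and the exact function names vary between PostGIS versions):\n\n import psycopg2\n\n lon, lat = -0.1278, 51.5074  # example plain coordinates\n conn = psycopg2.connect('dbname=mydb user=me')\n cur = conn.cursor()\n wkt = 'POINT(%s %s)' % (lon, lat)  # build WKT text from the plain coordinates\n cur.execute('SELECT id, name FROM places ORDER BY ST_Distance(geom, ST_GeomFromText(%s, 4326)) ASC LIMIT 1', (wkt,))\n nearest = cur.fetchone()\n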
If not, try to be more explicit.","Q_Score":13,"Tags":"python,postgresql,postgis,geodjango","A_Id":7904142,"CreationDate":"2011-10-21T07:36:00.000","Title":"How can I query the nearest record in a given coordinates(latitude and longitude of string type)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's the best combination of tools to import daily data feed (in .CSV format) to a MSSQL server table?\nEnvironment and acceptable tools:\n - Windows 2000\/XP\n - ruby or python\nMS SQL Server is on a remote server, the importing process has to be done on a Windows client machine.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3142,"Q_Id":7847818,"Users Score":0,"Answer":"And what about DTS services? It's integral part of MS SQL server starting with early versions and it allows you to import text-based data to server tables","Q_Score":0,"Tags":"python,sql-server,ruby,windows,csv","A_Id":7847885,"CreationDate":"2011-10-21T10:00:00.000","Title":"Import CSV to MS SQL Server programmatically","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to open the chromium site data (in ~\/.config\/chromium\/Default) with python-sqlite3 but it gets locked whenever chromium is running, which is understandable since transactions may be made. Is there a way to open it in read-only mode, ensuring that I can't corrupt the integrity of the db while chromium is using it?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":7130,"Q_Id":7857755,"Users Score":6,"Answer":"Chromium is holding a database lock for long periods of time? Yuck! That's really not a very good idea at all. Still, not your fault\u2026\nYou could try just copying the database file (e.g., with the system utility cp) and using that snapshot for reading purposes; SQLite keeps all its committed state in a single file per database. Yes, there's a chance of seeing a partial transaction, but you will definitely not have lock problems on Unix as SQLite definitely doesn't use mandatory locks. (This might well not work on Windows due to the different locking scheme there.)","Q_Score":15,"Tags":"python,database,sqlite","A_Id":7857866,"CreationDate":"2011-10-22T06:05:00.000","Title":"Is it possible to open a locked sqlite database in read only mode?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is what I have :-\n\nUbuntu 11.10.\nDjango 1.3 \nPython 2.7\n\nWhat I want to do is build an app that is similar to top-coder and I have the skeletal version of the app sketched out. The basic requirements would be:-\n1. Saving the code. \n2. Saving the user name and ranks.(User-profile)\n3. Should allow a teacher to create multiple choice questions too.( Similar to Google docs). \nI have basic knowledge of Django and have built couple of (basic) apps before. Rather than building an online tool, is it possible to build something very similar to conf2py that sits on top of web2py, in Django. 
\nLets call this small project examPy( I know, very original), is it possible to build an app that acts more a plug-in to Django or is my concept of Django absolutely wrong? \nThe primary question being:\nAs I want to learn a new DB and have worked on postgres in Django, should I chose CouchDB or MongoDB for Django? \nAnswers can be explanations or links to certain documentations or blogs that can tell me the pros and cons.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2560,"Q_Id":7859775,"Users Score":3,"Answer":"I've used mongo-engine with Django but you need to create a file specifically for Mongo documents eg. Mongo_models.py. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates whats stored in Mongo. Django forms are designed to plug into any data back end ( with a bit of craft ). \nIf you go this route you can dodge Django non-rel which is still not part of Django 1.4. In addition I believe django-nonrel is on hiatus right now.\nI've used both CouchDB and Mongo extensively. CouchDB has a lovely interface. My colleague is working on something similar for Mongo. Mongo's map and reduce are far faster than CouchDB. Mongo is more responsive loading and retrieving data. The python libraries for Mongo are easier to get working with ( both pymongo and mongo-engine are excellent )\nBe sure you read the Mongo production recommendations! Do not run one instance on the same node as Django or prepare to be savagely burned when traffic peaks. Mondo works great with Memcache\/Redis where one can store reduced data for rapid lookups.\nBEWARE: If you have very well defined and structured data that can be described in documents or models then don't use Mongo. Its not designed for that and something like PostGreSQL will work much better.\n\nI use PostGreSQL for relational or well structured data because its good for that. Small memory footprint and good response.\nI use Redis to cache or operate in memory queues\/lists because its very good for that. great performance providing you have the memory to cope with it.\nI use Mongo to store large JSON documents and to perform Map and reduce on them ( if needed ) because its very good for that. Be sure to use indexing on certain columns if you can to speed up lookups.\n\nDon't use a circle to fill a square hole. It won't fill it.","Q_Score":3,"Tags":"python,django,mongodb,couchdb","A_Id":10204764,"CreationDate":"2011-10-22T13:09:00.000","Title":"Mongo DB or Couch DB with django for building an app that is similar to top coder?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There seems to be many choices for Python to interface with SQLite (sqlite3, atpy) and HDF5 (h5py, pyTables) -- I wonder if anyone has experience using these together with numpy arrays or data tables (structured\/record arrays), and which of these most seamlessly integrate with \"scientific\" modules (numpy, scipy) for each data format (SQLite and HDF5).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3647,"Q_Id":7883646,"Users Score":23,"Answer":"Most of it depends on your use case. 
\nI have a lot more experience dealing with the various HDF5-based methods than traditional relational databases, so I can't comment too much on SQLite libraries for python...\nAt least as far as h5py vs pyTables, they both offer very seamless access via numpy arrays, but they're oriented towards very different use cases.\nIf you have n-dimensional data that you want to quickly access an arbitrary index-based slice of, then it's much more simple to use h5py. If you have data that's more table-like, and you want to query it, then pyTables is a much better option.\nh5py is a relatively \"vanilla\" wrapper around the HDF5 libraries compared to pyTables. This is a very good thing if you're going to be regularly accessing your HDF file from another language (pyTables adds some extra metadata). h5py can do a lot, but for some use cases (e.g. what pyTables does) you're going to need to spend more time tweaking things. \npyTables has some really nice features. However, if your data doesn't look much like a table, then it's probably not the best option.\nTo give a more concrete example, I work a lot with fairly large (tens of GB) 3 and 4 dimensional arrays of data. They're homogenous arrays of floats, ints, uint8s, etc. I usually want to access a small subset of the entire dataset. h5py makes this very simple, and does a fairly good job of auto-guessing a reasonable chunk size. Grabbing an arbitrary chunk or slice from disk is much, much faster than for a simple memmapped file. (Emphasis on arbitrary... Obviously, if you want to grab an entire \"X\" slice, then a C-ordered memmapped array is impossible to beat, as all the data in an \"X\" slice are adjacent on disk.) \nAs a counter example, my wife collects data from a wide array of sensors that sample at minute to second intervals over several years. She needs to store and run arbitrary querys (and relatively simple calculations) on her data. pyTables makes this use case very easy and fast, and still has some advantages over traditional relational databases. (Particularly in terms of disk usage and speed at which a large (index-based) chunk of data can be read into memory)","Q_Score":12,"Tags":"python,sqlite,numpy,scipy,hdf5","A_Id":7891137,"CreationDate":"2011-10-25T01:06:00.000","Title":"exporting from\/importing to numpy, scipy in SQLite and HDF5 formats","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two List. first list element Name Age Sex and second list element test 10 female. I want to insert this data into database. In first list having MySQL Column and in second MySQL Column Values.I'm trying to make this query. INSERT INTO (LIST1) VALUES (List2) =>INSERT INTO table (name,age,sex) values (test,10,female) Is it possible? thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":7886024,"Users Score":0,"Answer":"Try getting this to work using the MySQL gui. 
Once that works properly, then you can try to get it to work with Python using the SQL statements that worked in MySQL.","Q_Score":0,"Tags":"python","A_Id":7886073,"CreationDate":"2011-10-25T07:32:00.000","Title":"related to List (want to insert into database)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The most common SQLite interface I've seen in Python is sqlite3, but is there anything that works well with NumPy arrays or recarrays? By that I mean one that recognizes data types and does not require inserting row by row, and extracts into a NumPy (rec)array...? Kind of like R's SQL functions in the RDB or sqldf libraries, if anyone is familiar with those (they import\/export\/append whole tables or subsets of tables to or from R data tables).","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":7905,"Q_Id":7901853,"Users Score":1,"Answer":"This looks a bit older but is there any reason you cannot just do a fetchall() instead of iterating and then just initializing numpy on declaration?","Q_Score":6,"Tags":"python,arrays,sqlite,numpy,scipy","A_Id":12100118,"CreationDate":"2011-10-26T11:15:00.000","Title":"NumPy arrays with SQLite","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running Ubuntu, Flask 0.8, mod_wsgi 3 and apache2. When an error occurs, I am unable to get Flask's custom 500 error page to trigger (and not the debug mode output either). It works fine when I run it without WSGI via app.run(debug=True).\nI've tried setting WSGIErrorOverride to both On and Off in apache settings but same result.\nAnyone has gotten this issue? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":873,"Q_Id":7940745,"Users Score":1,"Answer":"Are you sure the error is actually coming from Flask if you are getting a generic Apache 500 error page? You should look in the Apache error log to see what error messages are in there first. The problem could be configuration or your WSGI script file being wrong or failing due to wrong sys.path etc.","Q_Score":2,"Tags":"python,apache,wsgi,flask","A_Id":7942317,"CreationDate":"2011-10-29T18:13:00.000","Title":"Using Python Flask, mod_wsgi, apache2 - unable to get custom 500 error page","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm create a blog using django.\nI'm getting an 'operational error: FATAL: role \"[database user]\" does not exist.\nBut i have not created any database yet, all i have done is filled in the database details in setting.py.\nDo i have to create a database using psycopg2? If so, how do i do it? \nIs it:\npython\n\n\n\nimport psycopg2\n psycopg2.connect(\"dbname=[name] user=[user]\")\n\n\n\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1391,"Q_Id":7941623,"Users Score":0,"Answer":"Generally, you would create the database externally before trying to hook it up with Django.\nIs this your private server? 
If so, there are command-line tools you can use to set up a PostgreSQL user and create a database.\nIf it is a shared hosting situation, you would use CPanel or whatever utility your host provides to do this. For example, when I had shared hosting, I was issued a database user and password by the hosting administrator. Perhaps you were too.\nOnce you have this set up, there are places in your settings.py file to put your username and password credentials, and the name of the database.","Q_Score":0,"Tags":"python,database,django,psycopg2","A_Id":7942855,"CreationDate":"2011-10-29T20:52:00.000","Title":"how do i create a database in psycopg2 and do i need to?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm create a blog using django.\nI'm getting an 'operational error: FATAL: role \"[database user]\" does not exist.\nBut i have not created any database yet, all i have done is filled in the database details in setting.py.\nDo i have to create a database using psycopg2? If so, how do i do it? \nIs it:\npython\n\n\n\nimport psycopg2\n psycopg2.connect(\"dbname=[name] user=[user]\")\n\n\n\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1391,"Q_Id":7941623,"Users Score":0,"Answer":"before connecting to database, you need to create database, add user, setup access for user you selected. \nReffer to installation\/configuration guides for Postgres.","Q_Score":0,"Tags":"python,database,django,psycopg2","A_Id":7941712,"CreationDate":"2011-10-29T20:52:00.000","Title":"how do i create a database in psycopg2 and do i need to?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Does a canonical user id exist for a federated user created using STS? When using boto I need a canonical user id to grant permissions to a bucket. \nHere's a quick tour through my code:\n\nI've successfully created temporary credentials using boto's STS module (using a \"master\" account), and this gives me back:\n\nfederated_user_arn\nfederated_user_id\npacked_policy_size\naccess_key\nsecret_key\nsession_token\nexpiration\n\nThen I create the bucket using boto:\nbucket = self.s3_connection.create_bucket('%s_store' % (app_id))\nNow I want to grant permissions I'm left with two choices in boto:\nadd_email_grant(permission, email_address, recursive=False, headers=None)\nadd_user_grant(permission, user_id, recursive=False, headers=None, display_name=None)\n\nThe first method isn't an option since there isn't an email attached to the federated user, so I look at the second. Here the second parameter (\"userid\") is to be \"The canonical user id associated with the AWS account your are granting the permission to.\" But I can't seem to find a way to come with this for the federated user.\nDo canonical user ids even exist for federated users? Am I overlooking an easier way to grant permissions to federated users?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":718,"Q_Id":8032576,"Users Score":1,"Answer":"Contacted the author of boto and learned of:\nget_canonical_user_id() for the S3Connection class.\nThis will give you the canonical user ID for the credentials associated with the connection. 
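\nA minimal sketch of that workflow (the key names and bucket are placeholders, and how you pass the temporary session token depends on your boto version):\n\n import boto\n\n # connect with the federated user's temporary credentials\n temp_conn = boto.connect_s3('TEMP_ACCESS_KEY', 'TEMP_SECRET_KEY')\n temp_conn.get_all_buckets()  # any operation, so the connection learns its canonical ID\n canonical_id = temp_conn.get_canonical_user_id()\n\n # then grant that ID access from the master account's connection\n master_conn = boto.connect_s3('MASTER_ACCESS_KEY', 'MASTER_SECRET_KEY')\n bucket = master_conn.get_bucket('example-bucket')\n bucket.add_user_grant('READ', canonical_id)\n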
The connection will have to have been used for some operation (e.g.: listing buckets).\nVery awkward, but possible.","Q_Score":2,"Tags":"python,amazon-s3,amazon-web-services,boto,amazon-iam","A_Id":8074814,"CreationDate":"2011-11-07T03:57:00.000","Title":"Do AWS Canonical UserIDs exist for AWS Federated Users (temporary security credentials)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I store several properties of objects in hashsets. Among other things, something like \"creation date\". There are several hashsets in the db.\nSo, my question is, how can I find all objects older than a week for example? Can you suggest an algorithm what faster than O(n) (naive implementation)?\nThanks,\nOles","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":462,"Q_Id":8039566,"Users Score":2,"Answer":"My initial thought would be to store the data elsewhere, like relational database, or possibly using a zset.\nIf you had continuous data (meaning it was consistently set at N interval time periods), then you could store the hash key as the member and the date (as a int timestamp) as the value. Then you could do a zrank for a particular date, and use zrevrange to query from the first rank to the value you get from zrank.","Q_Score":1,"Tags":"python,redis","A_Id":8039797,"CreationDate":"2011-11-07T16:37:00.000","Title":"Redis: find all objects older than","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running several thousand python processes on multiple servers which go off, lookup a website, do some analysis and then write the results to a central MySQL database.\nIt all works fine for about 8 hours and then my scripts start to wait for a MySQL connection.\nOn checking top it's clear that the MySQL daemon is overloaded as it is using up to 90% of most of the CPUs.\nWhen I stop all my scripts, MySQL continues to use resources for some time afterwards.\nI assume it is still updating the indexes? - If so, is there anyway of determining which indexes it is working on, or if not what it is actually doing?\nMany thanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":197,"Q_Id":8048742,"Users Score":0,"Answer":"There are a lot of tweaks that can be done to improve the performance of MySQL. Given your workload, you would probably benefit a lot from mysql 5.5 and higher, which improved performance on multiprocessor machines. Is the machine in question hitting VM? if it is paging out, then the performance of mysql will be horrible. \nMy suggestions:\n\ncheck version of mysql. If possible, get the latest 5.5 version.\nLook at the config files for mysql called my.cnf. Make sure that it makes sense on your machine. There are example config files for small, medium, large, etc machines to run MySQL. I think the default setup is for a machine with < 1 Gig of ram. 
\nAs the other answer suggests, turn on slow query logging.","Q_Score":1,"Tags":"python,mysql,linux,indexing","A_Id":9198763,"CreationDate":"2011-11-08T10:06:00.000","Title":"Python processes and MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry, but does this make sense? the ORM means: Object Relational Mapper, and here, there is Relational, and NoSql is not RDBMS! so why the use of an ORM in a NoSql solution? because i see updates of ORMs for Python!","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":4264,"Q_Id":8051614,"Users Score":3,"Answer":"Interesting question. Although NoSQL databases do not have a mechanism to identify relationships, it does not mean that there are no logical relationships between the data that you are storing. Most of the time, you are handling & enforcing those relationships in code manually if you're using a NoSQL database.\nHence, I feel that ORMs can still help you here. If you do have data that is related, but need to use a NoSQL database, an ORM can still help you in maintaining clean data.\nFor Example, I use Amazon SimpleDB for the lower cost, but my data still has relationships, which need to be maintained. Currently, I'm doing that manually. Maybe an ORM would help me as well.","Q_Score":8,"Tags":"python,orm,mongodb","A_Id":8051721,"CreationDate":"2011-11-08T14:03:00.000","Title":"why the use of an ORM with NoSql (like MongoDB)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry, but does this make sense? the ORM means: Object Relational Mapper, and here, there is Relational, and NoSql is not RDBMS! so why the use of an ORM in a NoSql solution? because i see updates of ORMs for Python!","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":4264,"Q_Id":8051614,"Users Score":2,"Answer":"ORM is an abstraction layer. Switching to a different engine is much easier when the queries are abstracted away, and hidden behind a common interface (it doesn't always work that well in practice, but it's still easier than without).","Q_Score":8,"Tags":"python,orm,mongodb","A_Id":8051652,"CreationDate":"2011-11-08T14:03:00.000","Title":"why the use of an ORM with NoSql (like MongoDB)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry, but does this make sense? the ORM means: Object Relational Mapper, and here, there is Relational, and NoSql is not RDBMS! so why the use of an ORM in a NoSql solution? because i see updates of ORMs for Python!","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":4264,"Q_Id":8051614,"Users Score":13,"Answer":"Firstly, they are not ORM (since they don't have any relations among them), they are ODM (Object Document Mapper)\nMain usage of these ODM frameworks here same as the some common feature of ORM, thus \n\nproviding the abstraction over your data model. you can have your data modelled in your application irrespective of the target software. 
\nMost ODM's build to leverage the existing language features and use the familiar pattern to manipulate data instead to learn new language syntax's of the new software.\n\nWhen i use mongoid (Ruby ODM for mongo), i can query mongo the way i do it in active model (mostly).\nSince they don't have the relation among them, these ODM's provide the way to define the relations in your models and simulate the relationships. These are all abstracted from the developer so they can code the same way they do with the relational data.","Q_Score":8,"Tags":"python,orm,mongodb","A_Id":8051825,"CreationDate":"2011-11-08T14:03:00.000","Title":"why the use of an ORM with NoSql (like MongoDB)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm thinking in create a webapplication with cakephp but consuming python's appengine webservice. But, to install cakephp etc, I need to configure the database. Appengine uses another kind of datastorage, with is different from mysql, etc.\nI was thinking in store the data in appengine, and using the python webservices, and with the cakephp application comunicating with the webservice, for insert and retrieve data.\nIs there any good resource for this, or is it unpossible.\nObs: also opened for a possibility for developing the webapplicaiton completely in python running in appengine. If anyone has a good resource.\nThanks.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":378,"Q_Id":8069649,"Users Score":0,"Answer":"You can not run PHP on GAE. If you run PHP somewhere, it is a bad architecture to go over the internet for your data. It will be slooooow and a nightmare to develop in.\nYou should store your data where you run your php, unless you must have a distributed, globally scaling architecture, which afaiu not the case.","Q_Score":0,"Tags":"php,python,google-app-engine,cakephp","A_Id":8070747,"CreationDate":"2011-11-09T18:21:00.000","Title":"Connect appengine with cakephp","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am currently working on a pyramid system that uses sqlalchemy.\nThis system will include a model (let's call it Base) that is stored in a\ndatabase table. This model should be extensible by the user on runtime. Basically, the user\nshould be able to subclass the Base and create a new model (let's call this one 'Child').\nChilds should be stored in another database table.\nAll examples available seem to handle database reflection on a predefined model.\nWhat would be the best way to generate complete model classes via database reflection?","AnswerCount":4,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1319,"Q_Id":8122078,"Users Score":4,"Answer":"This doesn't seem to have much to do with \"database reflection\", but rather dynamic table creation. This is a pretty dangerous operation and generally frowned upon.\nYou should try to think about how to model the possible structure your users would want to add to the Base and design your schema around that. 
Sometimes these flexible structures can benefit a lot from vertical tables when you don't know what the columns may be.\nDon't forget that there's an entire class of data storage systems out there that provide more flexible support for \"schemaless\" models. Something like Mongo or ZODB might make more sense here.","Q_Score":3,"Tags":"python,reflection,sqlalchemy,pyramid","A_Id":8125931,"CreationDate":"2011-11-14T13:09:00.000","Title":"Model Creation by SQLAlchemy database reflection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can we remove the database name and username that appears on top left hand side corner in openERP window after openERP logo.In which file do we need to make changes to remove that.\nThanks,\nSameer","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":8151033,"Users Score":0,"Answer":"It's in the openerp-web module. The location depends on your particular configuration. The relevant code can be found in the file addons\/web\/static\/src\/xml\/base.xml. Search for header_title and edit the contents of the h1 tag of that class.","Q_Score":0,"Tags":"python,openerp","A_Id":12295904,"CreationDate":"2011-11-16T11:37:00.000","Title":"Removing Database name and username from top Left hand side corner.","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to convert xlsx file to xls format using python. The reason is that im using xlrd library to parse xls files, but xlrd is not able to parse xlsx files.\nSwitching to a different library is not feasible for me at this stage, as the entire project is using xlrd, so a lot of changes will be required.\nSo, is there any way i can programatically convert an xlsx file to xls using python ?\nPlease Help\nThank You","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1806,"Q_Id":8151243,"Users Score":0,"Answer":"xlrd-0.9.2.tar.gz (md5) can extract data from Excel spreadsheets (.xls and .xlsx, versions 2.0 on-wards) on any platform.","Q_Score":0,"Tags":"python,excel,xls,xlsx,xlrd","A_Id":21996139,"CreationDate":"2011-11-16T11:54:00.000","Title":"xlrd library not working with xlsx files.any way to covert xlsx to xls using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"if I use cx_Oracle 5.0.4, I can connect from python console, and works under apache+django+mod_wsgi\nbut when I update cx_Oracle 5.1.1, I can connect from python console, BUT same code doesn't work under apache+django+mod_wsgi\nFile \"C:\\Python27\\lib\\site-packages\\django\\db\\backends\\oracle\\base.py\", line 24, in \n raise ImproperlyConfigured(\"Error loading cx_Oracle module: %s\" % e)\n TemplateSyntaxError: Caught ImproperlyConfigured while rendering: Error loading cx_Oracle module: DLL load failed: The specified module could not be found. 
\nPS: python 2.7 \nPSS: I have instaled MSVC 2008 Redistributable x86","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1227,"Q_Id":8151815,"Users Score":1,"Answer":"Need a solution as well. \nI have the same setup on WinXP (Apache 2.2.21\/ mod_wsgi 3.3\/ python 2.7.2\/ cx_Oracle 5.x.x). I found that cx_Oracle 5.1 also fails with the same error. Only 5.0.4 works. \nHere is the list of changes that were made from 5.0.4 to 5.1:\n\nRemove support for UNICODE mode and permit Unicode to be passed through in\neverywhere a string may be passed in. This means that strings will be\npassed through to Oracle using the value of the NLS_LANG environment\nvariable in Python 3.x as well. Doing this eliminated a bunch of problems\nthat were discovered by using UNICODE mode and also removed an unnecessary\nrestriction in Python 2.x that Unicode could not be used in connect strings\nor SQL statements, for example.\nAdded support for creating an empty object variable via a named type, the\nfirst step to adding full object support.\nAdded support for Python 3.2.\nAccount for lib64 used on x86_64 systems. Thanks to Alex Wood for supplying\nthe patch.\nClear up potential problems when calling cursor.close() ahead of the\ncursor being freed by going out of scope.\nAvoid compilation difficulties on AIX5 as OCIPing does not appear to be\navailable on that platform under Oracle 10g Release 2. Thanks to\nPierre-Yves Fontaniere for the patch.\nFree temporary LOBs prior to each fetch in order to avoid leaking them.\nThanks to Uwe Hoffmann for the initial patch.","Q_Score":1,"Tags":"python,apache,cx-oracle","A_Id":8158089,"CreationDate":"2011-11-16T12:36:00.000","Title":"cx_Oracle 5.1.1 under apache+mod_wsgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working with SQLAlchemy for the first time and was wondering...generally speaking is it enough to rely on python's default equality semantics when working with SQLAlchemy vs id (primary key) equality?\nIn other projects I've worked on in the past using ORM technologies like Java's Hibernate, we'd always override .equals() to check for equality of an object's primary key\/id, but when I look back I'm not sure this was always necessary.\nIn most if not all cases I can think of, you only ever had one reference to a given object with a given id. And that object was always the attached object so technically you'd be able to get away with reference equality.\nShort question: Should I be overriding eq() and hash() for my business entities when using SQLAlchemy?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4581,"Q_Id":8179068,"Users Score":1,"Answer":"I had a few situations where my sqlalchemy application would load multiple instances of the same object (multithreading\/ different sqlalchemy sessions ...). It was absolutely necessary to override eq() for those objects or I would get various problems. 
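\nFor illustration, a minimal sketch of a primary-key based override (the model name and column are made up; adjust to your own declarative base):\n\n from sqlalchemy import Column, Integer\n from sqlalchemy.ext.declarative import declarative_base\n\n Base = declarative_base()\n\n class Customer(Base):\n     __tablename__ = 'customer'\n     id = Column(Integer, primary_key=True)\n\n     def __eq__(self, other):\n         # equal when both instances map to the same persistent row\n         return isinstance(other, Customer) and self.id is not None and self.id == other.id\n\n     def __hash__(self):\n         return hash(self.id)\n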
This could be a problem in my application design, but it probably doesn't hurt to override eq() just to be sure.","Q_Score":11,"Tags":"python,sqlalchemy","A_Id":8179370,"CreationDate":"2011-11-18T07:24:00.000","Title":"sqlalchemy id equality vs reference equality","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Essentially I have a large database of transactions and I am writing a script that will take some personal information and match a person to all of their past transactions. \nSo I feed the script a name and it returns all of the transactions that it has decided belong to that customer.\nThe issue is that I have to do this for almost 30k people and the database has over 6 million transaction records.\nRunning this on one computer would obviously take a long time, I am willing to admit that the code could be optimized but I do not have time for that and I instead want to split the work over several computers. Enter Celery:\nMy understanding of celery is that I will have a boss computer sending names to a worker computer which runs the script and puts the customer id in a column for each transaction it matches.\nWould there be a problem with multiple worker computers searching and writing to the same database?\nAlso, have I missed anything and\/or is this totally the wrong approach?\nThanks for the help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":8230617,"Users Score":2,"Answer":"No, there wouldn't be any problem multiple worker computers searching and writing to the same database since MySQL is designed to be able to handle this. Your approach is good.","Q_Score":2,"Tags":"python,mysql,celery","A_Id":8230713,"CreationDate":"2011-11-22T16:56:00.000","Title":"relatively new programmer interested in using Celery, is this the right approach","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently trying to build and install the mySQLdb module for Python, but the command \npython setup.py build\ngives me the following error \nrunning build\nrunning build_py\ncopying MySQLdb\/release.py -> build\/lib.macosx-10.3-intel-2.7\/MySQLdb\nerror: could not delete 'build\/lib.macosx-10.3-intel-2.7\/MySQLdb\/release.py': Permission denied\nI verified that I'm a root user and when trying to execute the script using sudo, I then get a gcc-4.0 error: \nrunning build\nrunning build_py\ncopying MySQLdb\/release.py -> build\/lib.macosx-10.3-fat-2.7\/MySQLdb\nrunning build_ext\nbuilding '_mysql' extension\ngcc-4.0 -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I\/usr\/local\/mysql\/include -I\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/include\/python2.7 -c _mysql.c -o build\/temp.macosx-10.3-fat-2.7\/_mysql.o -Os -g -fno-common -fno-strict-aliasing -arch x86_64\nunable to execute gcc-4.0: No such file or directory\nerror: command 'gcc-4.0' failed with exit status 1\nWhich is odd, because I'm using XCode 4 with Python 2.7. I've tried the easy_install and pip methods, both of which dont work and give me a permission denied error on release.py. I've chmodded that file to see if that was the problem but no luck. 
Thoughts?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":912,"Q_Id":8236963,"Users Score":1,"Answer":"Make sure that gcc-4.0 is in your PATH. Also, you can create an alias from gcc to gcc-4.0.\nTake care about 32b and 64b versions. Mac OS X is a 64b operating system and you should right flags to make sure you're compiling for 64b architecture.","Q_Score":4,"Tags":"python,mysql,django,mysql-python","A_Id":8260644,"CreationDate":"2011-11-23T03:28:00.000","Title":"Errors When Installing MySQL-python module for Python 2.7","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a SQL dump of a legacy DB, and a folder with images, and those are referenced by some rows of certain tables, and I need to migrate that data to the new Django models. The specific problem is how to \"perform\" the upload, but in a management command.\nWhen the table with the field referenced is migrated to it's corresponding model, I need to also set the image field of the model, and I also need to process the filename accordingly to the upload_to parameter for the ImageField.\nHow to programmatically populate the image field from a file path or a file descriptor?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":438,"Q_Id":8280859,"Users Score":0,"Answer":"One approach would be to create a utility django project specifying your legacy database in settings.py. Then use the inspectdb management command to create a django model representation of your legacy database. And finally use dumpdata to get you data in JSON format.\nYou could then finally make your own JSON script that inserts your old data in your new models.","Q_Score":1,"Tags":"python,django,data-migration,filefield","A_Id":8280938,"CreationDate":"2011-11-26T19:05:00.000","Title":"Migrate a legacy DB to Django, with image files","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Background: I am writing a matching script in python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction.\nRather than doing multiple queries on the database (which is pretty slow) would it be faster to get all of the records where the last name (which in this case we will say never changes) is \"Smith\" and then have all of those records loaded into memory as you go though each looking for matches for a specific \"John Smith\" using various data points.\nWould this be faster, is it feasible in python, and if so does anyone have any recommendations for how to do it?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":8299614,"Users Score":0,"Answer":"Your strategy is reasonable though I would first look at doing as much of the work as possible in the database query using LIKE and other SQL functions. 
It should be possible to make a query that matches complex criteria.","Q_Score":0,"Tags":"python,mysql","A_Id":8299759,"CreationDate":"2011-11-28T17:14:00.000","Title":"Could someone give me their two cents on this optimization strategy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background: I am writing a matching script in python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction.\nRather than doing multiple queries on the database (which is pretty slow) would it be faster to get all of the records where the last name (which in this case we will say never changes) is \"Smith\" and then have all of those records loaded into memory as you go though each looking for matches for a specific \"John Smith\" using various data points.\nWould this be faster, is it feasible in python, and if so does anyone have any recommendations for how to do it?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":98,"Q_Id":8299614,"Users Score":2,"Answer":"Regarding: \"would this be faster:\"\nThe behind-the-scenes logistics of the SQL engine are really optimized for this sort of thing. You might need to create an SQL PROCEDURE or a fairly complex query, however.\nCaveat, if you're not particularly good at or fond of maintaining SQL, and this isn't a time-sensitive query, then you might be wasting programmer time over CPU\/IO time in getting it right.\nHowever, if this is something that runs often or is time-sensitive, you should almost certainly be building some kind of JOIN logic in SQL, passing in the appropriate values (possibly wildcards), and letting the database do the filtering in the relational data set, instead of collecting a larger number of \"wrong\" records and then filtering them out in procedural code.\nYou say the database is \"pretty slow.\" Is this because it's on a distant host, or because the tables aren't indexed for the types of searches you're doing? \u2026 If you're doing a complex query against columns that aren't indexed for it, that can be a pain; you can use various SQL tools including ANALYZE to see what might be slowing down a query. Most SQL GUI's will have some shortcuts for such things, as well.","Q_Score":0,"Tags":"python,mysql","A_Id":8299780,"CreationDate":"2011-11-28T17:14:00.000","Title":"Could someone give me their two cents on this optimization strategy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making an internal API in Python (pardon my terms) that provides a layer over MySQL and Solr (databases) with only simple computing. A Python program that spawns from scratch waits 80ms for Solr, while taking negligible time by itself. \nI am worried about the incomplete threading support of Python. 
So which of the modern Thrift servers allows high-performance request handling?\n\nIn Python, I could make a WSGI app under Apache workers that:\n\npooling resources such as DB connection objects\nhigh performance with minimum processes\ngraceful dropping of requests\n(relatively) graceful code reloading\na keep-alive mechanism (restart the application if it crashes)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1136,"Q_Id":8308610,"Users Score":1,"Answer":"Apparently TProcessPoolServer is a good server and forks different processes, avoiding threading issues.","Q_Score":1,"Tags":"python,thrift","A_Id":8842820,"CreationDate":"2011-11-29T09:42:00.000","Title":"Performance of TNonblockingServer, TThreadPoolServer for DB-bound server in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The idea is to make a script that would get stored procedure and udf contents (code) every hour (for example) and add it to SVN repo. As a result we have sql versioning control system.\nDoes anyone know how to backup stored procedure code using Python (sqlAlchemy, pyodbc or smth). \nI'v done this via C# before using SQL Management Objects.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":244,"Q_Id":8412636,"Users Score":1,"Answer":"There is no easy way to access SMO from Python (because there is no generic solution for accessing .NET from Python), so I would write a command-line tool in C# and call it from Python using the subprocess module. Perhaps you could do something with ctypes but I have no idea if that's feasible.\nBut, perhaps a more important question is why you want or need to do this. Does the structure of your database really change so often? If so, presumably you have no real control over it so what benefit does source control have in that scenario? How do you deploy database changes in the first place? Usually changes go from source control into production, not the other way around, so the 'master' source of DDL (including tables, indexes etc.) is SVN, not the database. But you haven't given much information about what you really need to achieve, so perhaps there is a good reason for needing to do this in your environment.","Q_Score":0,"Tags":"c#,python,sql-server,stored-procedures,smo","A_Id":8414549,"CreationDate":"2011-12-07T09:00:00.000","Title":"backup msSQL stored proc or UDF code via Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am parsing the USDA's food database and storing it in SQLite for query purposes. Each food has associated with it the quantities of the same 162 nutrients. It appears that the list of nutrients (name and units) has not changed in quite a while, and since this is a hobby project I don't expect to follow any sudden changes anyway. But each food does have a unique quantity associated with each nutrient.\nSo, how does one go about storing this kind of information sanely. 
My priorities are multi-programming language friendly (Python and C++ having preference), sanity for me as coder, and ease of retrieving nutrient sets to sum or plot over time.\nThe two things that I had thought of so far were 162 columns (which I'm not particularly fond of, but it does make the queries simpler), or a food table that has a link to a nutrient_list table that then links to a static table with the nutrient name and units. The second seems more flexible i ncase my expectations are wrong, but I wouldn't even know where to begin on writing the queries for sums and time series.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":373,"Q_Id":8431451,"Users Score":4,"Answer":"Use the second (more normalized) approach.\nYou could even get away with fewer tables than you mentioned:\n\ntblNutrients\n-- NutrientID\n-- NutrientName\n-- NutrientUOM (unit of measure)\n-- Otherstuff \ntblFood\n-- FoodId\n-- FoodName\n-- Otherstuff \ntblFoodNutrients\n-- FoodID (FK)\n-- NutrientID (FK)\n-- UOMCount \n\nIt will be a nightmare to maintain a 160+ field database.\nIf there is a time element involved too (can measurements change?) then you could add a date field to the nutrient and\/or the foodnutrient table depending on what could change.","Q_Score":2,"Tags":"c++,python,sql,sqlite","A_Id":8431705,"CreationDate":"2011-12-08T13:07:00.000","Title":"How to store data with large number (constant) of properties in SQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question about making the decision whether to use MySQL database or Mongo database, the problem with my decision is that I am highly depending on these things:\n\nI want to select records between two dates (period)\n\nHowever is this possible?\nMy Application won't do any complex queries, just basic crud. It has Facebook integration so sometimes I got to JOIN the users table at the current setup.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":295,"Q_Id":8437213,"Users Score":1,"Answer":"MySQL(SQL) or MongoDB(NoSQL), both can work for your needs. but idea behind using RDBMS\/NoSQL is the requirement of your application \n\nif your application care about speed and no relation between the data is necessary and your data schema changes very frequently, you can choose MongoDB, faster since no joins needed, every data is a stored as document \nelse, go for MySQL","Q_Score":1,"Tags":"mysql,mongodb,mysql-python","A_Id":8437321,"CreationDate":"2011-12-08T20:25:00.000","Title":"MongoDB or MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question about making the decision whether to use MySQL database or Mongo database, the problem with my decision is that I am highly depending on these things:\n\nI want to select records between two dates (period)\n\nHowever is this possible?\nMy Application won't do any complex queries, just basic crud. 
It has Facebook integration so sometimes I got to JOIN the users table at the current setup.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":295,"Q_Id":8437213,"Users Score":3,"Answer":"Either DB will allow you to filter between dates and I wouldn't use that requirement to make the decision. Some questions you should answer: \n\nDo you need to store your data in a relational system, like MySQL? Relational databases are better at cross entity joining.\nWill your data be very complicated, but you will only make simple queries (e.g. by an ID), if so MongoDB may be a better fit as storing and retrieving complex data is a cinch.\nWho and where will you be querying the data from? MySql uses SQL for querying, which is a much more well known skill than mongo's JSON query syntax.\n\nThese are just three questions to ask. In order to make a recommendation, we'll need to know more about your application?","Q_Score":1,"Tags":"mysql,mongodb,mysql-python","A_Id":8437280,"CreationDate":"2011-12-08T20:25:00.000","Title":"MongoDB or MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a flat-file, portable key-value store in Python. I'll be using strings for keys and either strings or lists for values. I looked at ZODB but I'd like something which is more widely used and is more actively developed. Do any of the dmb modules in Python require system libraries or a database server (like mysql or the likes) or can I write to file with any of them?\nIf a dbm does not support a python lists, I imagine that I can just serialize it?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4198,"Q_Id":8528001,"Users Score":2,"Answer":"You can look at the shelve module. It uses pickle under the hood, and allows you to create a key-value look up that persists between launches.\nAdditionally, the json module with dump and load methods would probably work pretty well as well.","Q_Score":2,"Tags":"python","A_Id":8528030,"CreationDate":"2011-12-15T23:22:00.000","Title":"Flat file key-value store in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm user of a Python application that has poorly indexed tables and was wondering if it's possible to improve performance by converting the SQLite database into an in-memory database upon application startup. My thinking is that it would minimize the issue of full table scans, especially since SQLite might be creating autoindexes, as the documentation says that is enabled by default. How can this be accomplished using the SQLAlchemy ORM (that is what the application uses)?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1450,"Q_Id":8549326,"Users Score":0,"Answer":"Whenever you set a variable in Python you are instantiating an object. This means you are allocating memory for it.\nWhen you query sqlite you are simply reading information off the disk into memory.\nsqlalchemy is simply an abstraction. 
You read the data from disk into memory in the same way, by querying the database and setting the returned data to a variable.","Q_Score":2,"Tags":"python,sqlite,sqlalchemy","A_Id":8549565,"CreationDate":"2011-12-18T01:59:00.000","Title":"Is it possible to read a SQLite database into memory with SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm user of a Python application that has poorly indexed tables and was wondering if it's possible to improve performance by converting the SQLite database into an in-memory database upon application startup. My thinking is that it would minimize the issue of full table scans, especially since SQLite might be creating autoindexes, as the documentation says that is enabled by default. How can this be accomplished using the SQLAlchemy ORM (that is what the application uses)?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1450,"Q_Id":8549326,"Users Score":1,"Answer":"At the start of the program, move the database file to a ramdisk, point SQLAlchemy to it and do your processing, and then move the SQLite file back to non-volatile storage.\nIt's not a great solution, but it'll help you determine whether caching your database in memory is worthwhile.","Q_Score":2,"Tags":"python,sqlite,sqlalchemy","A_Id":10002601,"CreationDate":"2011-12-18T01:59:00.000","Title":"Is it possible to read a SQLite database into memory with SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been learning python by building a webapp on google app engine over the past five or six months. I also just finished taking a databases class this semester where I learned about views, and their benefits.\nIs there an equivalent with the GAE datastore using python?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":8570066,"Users Score":5,"Answer":"Read-only views (the most common type) are basically queries against one or more tables to present the illusion of new tables. If you took a college-level database course, you probably learned about relational databases, and I'm guessing you're looking for something like relational views.\nThe short answer is No.\nThe GAE datastore is non-relational. It doesn't have tables. It's essentially a very large distributed hash table that uses composite keys to present the (very useful) illusion of Entities, which are easy at first glance to mistake for rows in a relational database.\nThe longer answer depends on what you'd do if you had a view.","Q_Score":2,"Tags":"python,sql,google-app-engine","A_Id":8570245,"CreationDate":"2011-12-20T02:21:00.000","Title":"Is there an equivalent of a SQL View in Google App Engine Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am writing a quick and dirty script which requires interaction with a database (PG).\nThe script is a pragmatic, tactical solution to an existing problem. 
however, I envisage that the script will evolve over time into a more \"refined\" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pour over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg.\nThe advantages for psycopg2 (as I currently understand it) is that:\n\nwritten in C, so faster than sqlAlchemy (written in Python)?\nNo abstraction layer over the DBAPI since works with one db and one db only (implication -> fast)\n(For now), I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight)\n\nDisadvantages:\n\nI KNOW that I will want an ORM further down the line\npsycopg2 is (\"dated\"?) - don't know how long it will remain around for\n\nAre my perceptions of SqlAlchemy (slow\/interpreted, bloated, steep learning curve) true - IS there anyway I can use sqlAlchemy in the \"rough and ready\" way I want to use psycopg - namely:\n\nexecute SQL statements directly without having to mess about with the ORM layer, etc.\n\nAny examples of doing this available?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":40683,"Q_Id":8588126,"Users Score":108,"Answer":"SQLAlchemy is a ORM, psycopg2 is a database driver. These are completely different things: SQLAlchemy generates SQL statements and psycopg2 sends SQL statements to the database. SQLAlchemy depends on psycopg2 or other database drivers to communicate with the database!\nAs a rather complex software layer SQLAlchemy does add some overhead but it also is a huge boost to development speed, at least once you learned the library. SQLAlchemy is a excellent library and will teach you the whole ORM concept, but if you don't want to generate SQL statements to begin with then you don't want SQLAlchemy.","Q_Score":57,"Tags":"python,postgresql,sqlalchemy,psycopg2","A_Id":8588766,"CreationDate":"2011-12-21T10:08:00.000","Title":"SQLAlchemy or psycopg2?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a quick and dirty script which requires interaction with a database (PG).\nThe script is a pragmatic, tactical solution to an existing problem. however, I envisage that the script will evolve over time into a more \"refined\" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pour over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg.\nThe advantages for psycopg2 (as I currently understand it) is that:\n\nwritten in C, so faster than sqlAlchemy (written in Python)?\nNo abstraction layer over the DBAPI since works with one db and one db only (implication -> fast)\n(For now), I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight)\n\nDisadvantages:\n\nI KNOW that I will want an ORM further down the line\npsycopg2 is (\"dated\"?) 
- don't know how long it will remain around for\n\nAre my perceptions of SqlAlchemy (slow\/interpreted, bloated, steep learning curve) true - IS there anyway I can use sqlAlchemy in the \"rough and ready\" way I want to use psycopg - namely:\n\nexecute SQL statements directly without having to mess about with the ORM layer, etc.\n\nAny examples of doing this available?","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":40683,"Q_Id":8588126,"Users Score":11,"Answer":"To talk with database any one need driver for that. If you are using client like SQL Plus for oracle, MysqlCLI for Mysql then it will direct run the query and that client come with DBServer pack. \nTo communicate from outside with any language like java, c, python, C#... We need driver to for that database. psycopg2 is driver to run query for PostgreSQL from python. \nSQLAlchemy is the ORM which is not same as database driver. It will give you flexibility so you can write your code without any database specific standard. ORM provide database independence for programmer. If you write object.save in ORM then it will check, which database is associated with that object and it will generate insert query according to the backend database.","Q_Score":57,"Tags":"python,postgresql,sqlalchemy,psycopg2","A_Id":8589254,"CreationDate":"2011-12-21T10:08:00.000","Title":"SQLAlchemy or psycopg2?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to compare massive database dumps in xls format to parse for changes day-to-day (gross, right?). I'm currently doing this in the most backwards way possible, and using xlrd to turn the xls into csv files, and then I'm running diffs to compare them.\nSince it's a database, and I don't have a means of knowing if the data ever stays in the same order after something like an item deletion, I can't do a compare x line to x line between the files, so doing lists of tuples or something wouldn't make the most sense to me.\nI basically need to find every single change that could have happened on any row REGARDLESS of that row's position in the actual dump, and the only real \"lookup\" I could think of is SKU as a unique ID (it's a product table from an ancient DB system), but I need to know a lot more than just products being deleted or added, because they could modify pricing or anything else in that item.\nShould I be using sets? And once I've loaded 75+ thousand lines of this database file into a \"set\", is my ram usage going to be hysterical?\nI thought about loading in each row of the xls as a big concatenated string to add to a set. Is that an efficient idea? I could basically get a list of rows that differ between sets and then go back after those rows in the original db file to find my actual differences.\nI've never worked with data parsing on a scale like this. I'm mostly just looking for any advice to not make this process any more ridiculous than it has to be, and I came here after not really finding something that seemed specific enough to my case to feel like good advice. Thanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19719,"Q_Id":8609737,"Users Score":0,"Answer":"You could load the data into a database, and compare the databases. 
If you think that is easier.\nThe key question you might need to think about is: can you sort the data somehow?\nSorted sets are so much easier to handle.\nP.S. 75000 lines is not very much. Anything that fits into main memory of a regular computer is not much. Add a couple of 0s.","Q_Score":2,"Tags":"python,database,diff,set,compare","A_Id":8609909,"CreationDate":"2011-12-22T21:08:00.000","Title":"Python comparing two massive sets of data in the MOST efficient method possible","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using postgresql in ubuntu, and now i am working on python. I want to connect postgresql with an android application. Is there any way to connect postgresql with an android application?\n Any reply would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":377,"Q_Id":8613246,"Users Score":2,"Answer":"Better way is to Use RestFUL API or WebService as front end for your Android device to connect to your PostgreSQL backend. I am not sure if it is possible to directly connect your android device to postgre SQL.","Q_Score":0,"Tags":"android,python,web-services,postgresql,ubuntu-10.04","A_Id":8613304,"CreationDate":"2011-12-23T07:11:00.000","Title":"what is the way to connect postgresql with android in ubuntu","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script where by a user registers his\/her username, but a function checks whether this username is already in the db or not. But I'm stuck on how to match my query with the input. Here is the code:\n\ndef checker(self, insane): \n t = (insane,) \n cmd = \"SELECT admin_user FROM admin_db where admin_user = \\\"%s\\\";\" %t\n self.cursor.execute(cmd)\n namer = self.cursor.fetchone()\n print namer\n if namer == insane: \n print(\"Username is already chosen!\")\n exit(1)\n else:\n pass\n\nSince namer returns as (u'maverick',) It doesn't match with the input. How should I go about implementing that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1309,"Q_Id":8617837,"Users Score":1,"Answer":"The DB fetch models return a tuple for each row. Since you've only selected a single field, you can simply access namer[0] to get the actual value.","Q_Score":0,"Tags":"python,sql,sqlite","A_Id":8617856,"CreationDate":"2011-12-23T15:40:00.000","Title":"Matching user's input with query from sqlite3 db in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've looked in the documentation and haven't seen (from first sight) anything about cache in Pyramid. Maybe I missed something... Or maybe there are some third party packages to help with this.\nFor example, how to cache db query (SQLAlchemy), how to cache views? 
Could anyone give some link to examples or documentation?\nAppreciate any help!\nEDITED:\nHow to use memcache or database type cache or filebased cache?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":4486,"Q_Id":8651061,"Users Score":6,"Answer":"Your options are pyramid_beaker and dogpile.cache \npyramid_beaker was written to offer beaker caching for sessions. it also lets you configure beaker cache regions, which can be used elsewhere.\ndogpile.cache is a replacement for beaker. it hasn't been integrated to offer session support or environment.ini based setup yet. however it addresses a lot of miscellaneous issues and shortcomings with beaker.\nyou can't\/shouldn't cache a SqlAlchemy query or results. weird and bad things will happen, because the SqlAlchemy objects are bound to a database session. it's much better to convert the sqlalchemy results into another object\/dict and cache those.","Q_Score":3,"Tags":"python,sqlalchemy,pyramid","A_Id":14859955,"CreationDate":"2011-12-28T01:50:00.000","Title":"How to cache using Pyramid?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm engaged in developing a turn-based casual MMORPG game server.\nThe low level engine(NOT written by us) which handle networking,\nmulti-threading, timer, inter-server communication, main game loop etc, was\nwritten by C++. The high level game logic was written by Python.\nMy question is about the data model design in our game.\nAt first we simply try to load all data of a player into RAM and a shared data\ncache server when client login and schedule a timer periodically flush data into\ndata cache server and data cache server will persist data into database.\nBut we found this approach has some problems\n1) Some data needs to be saved or checked instantly, such as quest progress,\nlevel up, item & money gain etc.\n2) According to game logic, sometimes we need to query some offline player's\ndata.\n3) Some global game world data needs to be shared between different game\ninstances which may be running on a different host or a different process on the\nsame host. This is the main reason we need a data cache server sits between game\nlogic server and database.\n4) Player needs freely switch between game instances.\nBelow is the difficulty we encountered in the past:\n1) All data access operation should be asynchronized to avoid network I\/O\nblocking the main game logic thread. We have to send message to database or\ncache server and then handle data reply message in callback function and\ncontinue proceed game logic. It quickly become painful to write some moderate\ncomplex game logic that needs to talk several times with db and the game logic\nis scattered in many callback functions makes it hard to understand and\nmaintain.\n2) The ad-hoc data cache server makes things more complex, we hard to maintain\ndata consistence and effectively update\/load\/refresh data.\n3) In-game data query is inefficient and cumbersome, game logic need to query\nmany information such as inventory, item info, avatar state etc. Some\ntransaction machanism is also needed, for example, if one step failed the entire\noperation should be rollback. We try to design a good data model system in RAM,\nbuilding a lot of complex indexs to ease numerous information query, adding\ntransaction support etc. 
Quickly I realized what we are building is a in-memory\ndatabase system, we are reinventing the wheel...\nFinally I turn to the stackless python, we removed the cache server. All data\nare saved in database. Game logic server directly query database. With stackless\npython's micro tasklet and channel, we can write game logic in a synchronized\nway. It is far more easy to write and understand and productivity greatly\nimproved.\nIn fact, the underlying DB access is also asynchronized: One client tasklet\nissue request to another dedicate DB I\/O worker thread and the tasklet is\nblocked on a channel, but the entire main game logic is not blocked, other\nclient's tasklet will be scheduled and run freely. When DB data reply the\nblocked tasklet will be waken up and continue to run on the 'break\npoint'(continuation?).\nWith above design, I have some questions:\n1) The DB access will be more frequently than previous cached solution, does the\nDB can support high frequent query\/update operation? Does some mature cache\nsolution such as redis, memcached is needed in near future?\n2) Are there any serious pitfalls in my design? Can you guys give me some better\nsuggestions, especially on in-game data management pattern.\nAny suggestion would be appreciated, thanks.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":3298,"Q_Id":8660622,"Users Score":2,"Answer":"It's difficult to comment on the entire design\/datamodel without greater understanding of the software, but it sounds like your application could benefit from an in-memory database.* Backing up such databases to disk is (relatively speaking) a cheap operation. I've found that it is generally faster to: \nA) Create an in-memory database, create a table, insert a million** rows into the given table, and then back-up the entire database to disk \nthan \nB) Insert a million** rows into a table in a disk-bound database.\nObviously, single record insertions\/updates\/deletions also run faster in-memory. I've had success using JavaDB\/Apache Derby for in-memory databases.\n*Note that the database need not be embedded in your game server.\n**A million may not be an ideal size for this example.","Q_Score":8,"Tags":"python,database,python-stackless","A_Id":8660848,"CreationDate":"2011-12-28T19:57:00.000","Title":"Need suggestion about MMORPG data model design, database access and stackless python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm engaged in developing a turn-based casual MMORPG game server.\nThe low level engine(NOT written by us) which handle networking,\nmulti-threading, timer, inter-server communication, main game loop etc, was\nwritten by C++. 
The high level game logic was written by Python.\nMy question is about the data model design in our game.\nAt first we simply try to load all data of a player into RAM and a shared data\ncache server when client login and schedule a timer periodically flush data into\ndata cache server and data cache server will persist data into database.\nBut we found this approach has some problems\n1) Some data needs to be saved or checked instantly, such as quest progress,\nlevel up, item & money gain etc.\n2) According to game logic, sometimes we need to query some offline player's\ndata.\n3) Some global game world data needs to be shared between different game\ninstances which may be running on a different host or a different process on the\nsame host. This is the main reason we need a data cache server sits between game\nlogic server and database.\n4) Player needs freely switch between game instances.\nBelow is the difficulty we encountered in the past:\n1) All data access operation should be asynchronized to avoid network I\/O\nblocking the main game logic thread. We have to send message to database or\ncache server and then handle data reply message in callback function and\ncontinue proceed game logic. It quickly become painful to write some moderate\ncomplex game logic that needs to talk several times with db and the game logic\nis scattered in many callback functions makes it hard to understand and\nmaintain.\n2) The ad-hoc data cache server makes things more complex, we hard to maintain\ndata consistence and effectively update\/load\/refresh data.\n3) In-game data query is inefficient and cumbersome, game logic need to query\nmany information such as inventory, item info, avatar state etc. Some\ntransaction machanism is also needed, for example, if one step failed the entire\noperation should be rollback. We try to design a good data model system in RAM,\nbuilding a lot of complex indexs to ease numerous information query, adding\ntransaction support etc. Quickly I realized what we are building is a in-memory\ndatabase system, we are reinventing the wheel...\nFinally I turn to the stackless python, we removed the cache server. All data\nare saved in database. Game logic server directly query database. With stackless\npython's micro tasklet and channel, we can write game logic in a synchronized\nway. It is far more easy to write and understand and productivity greatly\nimproved.\nIn fact, the underlying DB access is also asynchronized: One client tasklet\nissue request to another dedicate DB I\/O worker thread and the tasklet is\nblocked on a channel, but the entire main game logic is not blocked, other\nclient's tasklet will be scheduled and run freely. When DB data reply the\nblocked tasklet will be waken up and continue to run on the 'break\npoint'(continuation?).\nWith above design, I have some questions:\n1) The DB access will be more frequently than previous cached solution, does the\nDB can support high frequent query\/update operation? Does some mature cache\nsolution such as redis, memcached is needed in near future?\n2) Are there any serious pitfalls in my design? Can you guys give me some better\nsuggestions, especially on in-game data management pattern.\nAny suggestion would be appreciated, thanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3298,"Q_Id":8660622,"Users Score":6,"Answer":"I've worked with one MMO engine that operated in a somewhat similar fashion. 
It was written in Java, however, not Python.\nWith regards to your first set of points:\n1) async db access We actually went the other route, and avoided having a \u201cmain game logic thread.\u201d All game logic tasks were spawned as new threads. The overhead of thread creation and destruction was completely lost in the noise floor compared to I\/O. This also preserved the semantics of having each \u201ctask\u201d as a reasonably straightforward method, instead of the maddening chain of callbacks that one otherwise ends up with (although there were still cases of this.) It also meant that all game code had to be concurrent, and we grew increasingly reliant upon immutable data objects with timestamps.\n2) ad-hoc cache We employed a lot of WeakReference objects (I believe Python has a similar concept?), and also made use of a split between the data objects, e.g. \u201cPlayer\u201d, and the \u201cloader\u201d (actually database access methods) e.g. \u201cPlayerSQLLoader;\u201d the instances kept a pointer to their Loader, and the Loaders were called by a global \u201cfactory\u201d class that would handle cache lookups versus network or SQL loads. Every \u201cSetter\u201d method in a data class would call the method changed, which was an inherited boilerplate for myLoader.changed (this);\nIn order to handle loading objects from other active servers, we employed \u201cproxy\u201d objects that used the same data class (again, say, \u201cPlayer,\u201d) but the Loader class we associated was a network proxy that would (synchronously, but over gigabit local network) update the \u201cmaster\u201d copy of that object on another server; in turn, the \u201cmaster\u201d copy would call changed itself.\nOur SQL UPDATE logic had a timer. If the backend database had received an UPDATE of the object within the last ($n) seconds (we typically kept this around 5), it would instead add the object to a \u201cdirty list.\u201d A background timer task would periodically wake and attempt to flush any objects still on the \u201cdirty list\u201d to the database backend asynchronously.\nSince the global factory maintained WeakReferences to all in-core objects, and would look for a single instantiated copy of a given game object on any live server, we would never attempt to instantiate a second copy of one game object backed by a single DB record, so the fact that the in-RAM state of the game might differ from the SQL image of it for up to 5 or 10 seconds at a time was inconsequential.\nOur entire SQL system ran in RAM (yes, a lot of RAM) as a mirror to another server who tried valiantly to write to disc. (That poor machine burned out RAID drives on average of once every 3-4 months due to \u201cold age.\u201d RAID is good.)\nNotably, the objects had to be flushed to database when being removed from cache, e.g. due to exceeding the cache RAM allowance.\n3) in-memory database \u2026 I hadn't run across this precise situation. We did have \u201ctransaction-like\u201d logic, but it all occurred on the level of Java getters\/setters.\nAnd, in regards to your latter points:\n1) Yes, PostgreSQL and MySQL in particular deal well with this, particularly when you use a RAMdisk mirror of the database to attempt to minimize actual HDD wear and tear. In my experience, MMO's do tend to hammer the database more than is strictly necessary, however. Our \u201c5 second rule\u201d* was built specifically to avoid having to solve the problem \u201ccorrectly.\u201d Each of our setters would call changed. 
In our usage pattern, we found that an object typically had either 1 field changed, and then no activity for some time, or else had a \u201cstorm\u201d of updates happen, where many fields changed in a row. Building proper transactions or so (e.g. informing the object that it was about to accept many writes, and should wait for a moment before saving itself to the DB) would have involved more planning, logic, and major rewrites of the system; so, instead, we bypassed the situation.\n2) Well, there's my design above :-)\nIn point of fact, the MMO engine I'm presently working on uses even more reliance upon in-RAM SQL databases, and (I hope) will be doing so a bit better. However, that system is being built using an Entity-Component-System model, rather than the OOP model that I described above.\nIf you already are based on an OOP model, shifting to ECS is a pretty paradigm shift and, if you can make OOP work for your purposes, it's probably better to stick with what your team already knows.\n*- \u201cthe 5 second rule\u201d is a colloquial US \u201cfolk belief\u201d that after dropping food on the floor, it's still OK to eat it if you pick it up within 5 seconds.","Q_Score":8,"Tags":"python,database,python-stackless","A_Id":8660935,"CreationDate":"2011-12-28T19:57:00.000","Title":"Need suggestion about MMORPG data model design, database access and stackless python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Apologies for the longish description.\nI want to run a transform on every doc in a large-ish Mongodb collection with 10 million records approx 10G. Specifically I want to apply a geoip transform to the ip field in every doc and either append the result record to that doc or just create a whole other record linked to this one by say id (the linking is not critical, I can just create a whole separate record). Then I want to count and group by say city - (I do know how to do the last part).\nThe major reason I believe I cant use map-reduce is I can't call out to the geoip library in my map function (or at least that's the constraint I believe exists).\nSo I the central question is how do I run through each record in the collection apply the transform - using the most efficient way to do that. \nBatching via Limit\/skip is out of question as it does a \"table scan\" and it is going to get progressively slower.\nAny suggestions?\nPython or Js preferred just bec I have these geoip libs but code examples in other languages welcome.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":477,"Q_Id":8663432,"Users Score":1,"Answer":"Since you have to go over \"each record\", you'll do one full table scan anyway, then a simple cursor (find()) + maybe only fetching few fields (_id, ip) should do it. python driver will do the batching under the hood, so maybe you can give a hint on what's the optimal batch size (batch_size) if the default is not good enough. 
\nIf you add a new field and it doesn't fit the previously allocated space, mongo will have to move it to another place, so you might be better off creating a new document.","Q_Score":3,"Tags":"python,mongodb,mapreduce","A_Id":8666791,"CreationDate":"2011-12-29T02:33:00.000","Title":"How do I transform every doc in a large Mongodb collection without map\/reduce?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Apologies for the longish description.\nI want to run a transform on every doc in a large-ish Mongodb collection with 10 million records approx 10G. Specifically I want to apply a geoip transform to the ip field in every doc and either append the result record to that doc or just create a whole other record linked to this one by say id (the linking is not critical, I can just create a whole separate record). Then I want to count and group by say city - (I do know how to do the last part).\nThe major reason I believe I cant use map-reduce is I can't call out to the geoip library in my map function (or at least that's the constraint I believe exists).\nSo I the central question is how do I run through each record in the collection apply the transform - using the most efficient way to do that. \nBatching via Limit\/skip is out of question as it does a \"table scan\" and it is going to get progressively slower.\nAny suggestions?\nPython or Js preferred just bec I have these geoip libs but code examples in other languages welcome.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":477,"Q_Id":8663432,"Users Score":0,"Answer":"Actually I am also attempting another approach in parallel (as plan B) which is to use mongoexport. I use it with --csv to dump a large csv file with just the (id, ip) fields. Then the plan is to use a python script to do a geoip lookup and then post back to mongo as a new doc on which map-reduce can now be run for count etc. Not sure if this is faster or the cursor is. We'll see.","Q_Score":3,"Tags":"python,mongodb,mapreduce","A_Id":8677503,"CreationDate":"2011-12-29T02:33:00.000","Title":"How do I transform every doc in a large Mongodb collection without map\/reduce?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a row of data in dict format. Is there an easy way to insert it into a mysql table. I know that I can write a custom function to convert dict into a custom sql query, but I am looking for a more direct alternative.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1650,"Q_Id":8674426,"Users Score":2,"Answer":"MySQLDB does not come with anything which allows a direct operation like that. This is a common problem with a variety of answers, including a custom function for this purpose.\nIn my experience, it is best to buckle down and just write the paramaterized SQL most of the time. If you have the same thing going on a lot, then I would consider factoring it into a utility function.\nHOWEVER, if you are hand-writing static SQL using parameters, then most of the security and bug related issues are taken care of. When you start basing your SQL on a dictionary of data that came from where (?), you need to be much more careful. 
\nIn summary, your code will likely be more readable and maintainable and secure if you simply write the queries, use parameters, and document well.\n(Note: some proponents of ORM, etc... may disagree... this is an opinion based on a lot of experience on what was simple, reliable, and worked for our team)","Q_Score":2,"Tags":"python,dictionary,insert,mysql-python","A_Id":8674504,"CreationDate":"2011-12-29T22:52:00.000","Title":"MySQLdb inserting a dict into a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a row of data in dict format. Is there an easy way to insert it into a mysql table. I know that I can write a custom function to convert dict into a custom sql query, but I am looking for a more direct alternative.","AnswerCount":2,"Available Count":2,"Score":0.4621171573,"is_accepted":false,"ViewCount":1650,"Q_Id":8674426,"Users Score":5,"Answer":"Well... According to the documentation for paramstyle:\n\nSet to 'format' = ANSI C printf format codes, e.g. '...WHERE name=%s'.\n If a mapping object is used for conn.execute(), then the interface\n actually uses 'pyformat' = Python extended format codes, e.g.\n '...WHERE name=%(name)s'. However, the API does not presently allow\n the specification of more than one style in paramstyle\n\nSo, it should be just a matter of:\ncurs.execute(\"INSERT INTO foo (col1, col2, ...) VALUES (%(key1)s, %(key2)s, ...)\", dictionary)\nwhere key1, key2, etc. would be keys from the dictionary.\nDisclaimer: I haven't tried this myself :)\nEdit: yeah, tried it. It works.","Q_Score":2,"Tags":"python,dictionary,insert,mysql-python","A_Id":8674547,"CreationDate":"2011-12-29T22:52:00.000","Title":"MySQLdb inserting a dict into a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Assuming I have a schema with the name \"my_schema\", how can I create tables with \"django syncdb\" for that particular schema? Or is there any other alternatives for quickly creating tables from my django models? 
I think, by default django creates tables for the \"public\" schema.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4285,"Q_Id":8680673,"Users Score":0,"Answer":"I have used following info and work for me \n'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': 'dab_name',\n 'USER': 'username',\n 'PASSWORD': 'password',\n 'HOST': 'localhost',\n 'PORT': '5432',\n 'OPTIONS': {\n 'options': '-c search_path=tours' #schema name\n }\n }\nTested on postgresql 9 and django 1.10.2\nThanks @romke","Q_Score":7,"Tags":"python,database,django","A_Id":42578440,"CreationDate":"2011-12-30T14:52:00.000","Title":"How to specify schema name while running \"syncdb\" in django?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For example in CQL, \nSELECT * from abc_dimension ORDER BY key ASC;\nseems to be not working.\nAny help?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3328,"Q_Id":8751293,"Users Score":0,"Answer":"Latest versions of Cassandra support aggregations within single partition only.","Q_Score":2,"Tags":"python,cassandra,cql","A_Id":43361173,"CreationDate":"2012-01-05T23:13:00.000","Title":"does cassandra cql support aggregation functions, like group by and order by","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are sketching out how a system would work. The problem is that have a set of items with a computed value for each item. Say for example you like players in the nba and there are a certain set of players that you have shown preferences about.\nExamples might be:\n\nnumber of games played\nrebounding \nscoring\nassists\nminutes played per game \nplayers that your other friends like\nlikelihood of being traded - you often want players that are going to be traded\n\nThere's approx 500 players in the nba. From a performance pov, querying is cost prohibitive - throwing in other people' s preferences etc.... We have been thinking of doing an alternative approaches. One approach is a NoSQL where each user gets written a document of each player. To be honest, this seems like too many unkowns as I have zero experience. Another approach is where each person in the system would get a table dedicated to them. Perhaps write out the table definition via cron on a nightly basis and when the user logs in, do a create table statement and then have a dedicated query against that. This sounds really ugly to me too (although feasible). We could also certainly have a single table where each user has a row for each player. I'd rather not premise the whole system off self-joins though. It seems to take querying off-line and we could feasibly measure 1000 players against these different parameters.\nAre there other ideas that I'm missing? I don't want anything too esoteric - preferably just MySQL and Python. Would be using InnoDB and not so concerned about splitting up the tables per database per host issue.\nAny other ideas or realword experience would be appreciated? 
I'm sure this has been solved many times before.\nthx","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":8753559,"Users Score":0,"Answer":"I'm using mongodb right now for the first time and I find it to be really awesome in the way it lets you represent a document pretty much like an object oriented class structure. You can easily have a document per user that stores any number of embedded documents. Your players can be in a nested dictionary or a list, and you can index on player names. Then when you request a user, there are NO joins. You have all your data. Plus there is a flexible schema so you can always just add more fields whenever you want.\nAs for mysql table-per-user, I agree its really messy and can get out of control.\nAnother alternative is you could look into a key-value store like Redis for caching purposes since its in memory, it would be fast as well.","Q_Score":0,"Tags":"python,mysql,database-design","A_Id":8754480,"CreationDate":"2012-01-06T04:56:00.000","Title":"Caching computed results for each user - would having a table dedicated to each user make sense?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any python3 modules for importing(reading) excel .xls files or is there any work in progress on porting one of the python2 modules? All I'm coming up with is xlrd for python2.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":530,"Q_Id":8788041,"Users Score":0,"Answer":"I believe the maintainer of xlrd is working on porting it, but it's not yet ready.","Q_Score":1,"Tags":"excel,python-3.x,xls","A_Id":8797959,"CreationDate":"2012-01-09T11:55:00.000","Title":"python3 module for importing xls files","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use xlrd to read data from excel files.\nFor integers stored in the files, let's say 63, the xlrd interprets it as 63.0 of type number. \nWhy can't xlrd recognize 63 as an integer? \nAssume sheet.row(1)[0].value gives us 63.0. How can I convert it back to 63.","AnswerCount":6,"Available Count":1,"Score":0.1651404129,"is_accepted":false,"ViewCount":30994,"Q_Id":8825681,"Users Score":5,"Answer":"I'm reminded of this gem from the xlrd docs:\n\nDates in Excel spreadsheets\nIn reality, there are no such things. What you have are floating point\n numbers and pious hope.\n\nThe same is true of integers. Perhaps minus the pious hope.","Q_Score":20,"Tags":"python,xlrd","A_Id":8826320,"CreationDate":"2012-01-11T19:48:00.000","Title":"Integers from excel files become floats?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a regular desktop application which is written in Python\/GTK and SQLObject as ORM. My goal is to create a webinterface where a user can login and sync\/edit the database. 
My application is split up in different modules, so the database and gtk code are completly separate, so I would like to run the same database code on the webserver too.\nSo, I would like to know if there's a webframework which could handle these criteria:\n\nUser authentication\nUse my own database code\/SQLObject\nSome widgets to build a basic ui\n\nThis would be my first webproject, so I'm a bit confused by all searchresults. CherryPy, Turbogears, web2py, Pyramid? I would be happy if someone could give me some pointers what would be a good framework in my situation.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":375,"Q_Id":8895942,"Users Score":0,"Answer":"Try the pyramid, it does not impose anything you like as opposed to Django. And has a wealth of features for building Web applications at any level.","Q_Score":0,"Tags":"python,sqlobject","A_Id":8896107,"CreationDate":"2012-01-17T14:05:00.000","Title":"Choosing Python\/SQLObject webframework","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Well i have a question that i feel i've been answered several times, from what i found here. However, as a newbie, i can't really understand how to perform a really basic operation.\nHere's the thing :\n\ni have an .xls and when i use xlrd to get a value i'm just using \nsh.cell(0,0) (assuming that sh is my sheet);\nif what is in the cell is a string i get something like text:u'MyName' and i only want to keep the string 'MyName';\nif what is in the cell is a number i get something like number:201.0 and i only want to keep the integer 201.\n\nIf anyone can indicate me what i should to only extract the value, formatted as i want, thank you.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27320,"Q_Id":8909342,"Users Score":0,"Answer":"The correct answer to this is to simply use the Cell.value function. This will return a number or a Unicode string depending on what the cell contains.","Q_Score":9,"Tags":"python,xlrd","A_Id":43124531,"CreationDate":"2012-01-18T11:29:00.000","Title":"Python xlrd : how to convert an extracted value?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We recently moved a Zeo instance over to a new server environment and one of the changes was the file system now has the database files stored on an NFS share.\nWhen trying to start zeo, we've been getting lock file errors which after researching seems to be because of a known issue of lock files being created on an NFS share. \nMy question is, can we maintain the data (.fs) files on the share but have the lock files created on the server's filesystem? We want to maintain the data being stored on the SAN so moving the data over to box is really not an option.\nAny help would be greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":260,"Q_Id":8914644,"Users Score":1,"Answer":"This is likely not a good setup. Your best bet is to work-around NFS in spite of it: maybe a loopback ext3 filesystem mounted on a regular file on the NFS volume -- NFSv3 should have few practical limits to filesize that you won't have natively. 
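To make the Cell.value advice above concrete, a small sketch that also recovers integers from the floats xlrd hands back; the file name and cell position are placeholders.

    import xlrd

    book = xlrd.open_workbook('input.xls')
    sh = book.sheet_by_index(0)

    cell = sh.cell(0, 0)
    if cell.ctype == xlrd.XL_CELL_TEXT:
        value = cell.value              # already a unicode string, e.g. u'MyName'
    elif cell.ctype == xlrd.XL_CELL_NUMBER:
        value = cell.value              # a float, e.g. 201.0
        if value == int(value):
            value = int(value)          # turn 201.0 back into 201
    print(value)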
Only you will be able to measure if this performs well enough. Otherwise, you should know that (generally) no networked database performs well or without side-effects over NFS.","Q_Score":0,"Tags":"python,zope,zodb","A_Id":8916946,"CreationDate":"2012-01-18T17:36:00.000","Title":"Zeo\/ZODB lock file location, possible to change?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to provide individuals with their financial statement, and I am using S3. So far what I am doing is making the file public-read and creating a unique Key, using uuid.uuid4(). \nWould this be acceptable, or how else could I make this more secure? Sending authentication keys for each individual is not an option.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":260,"Q_Id":8935191,"Users Score":1,"Answer":"Even though version 4 UUIDs are supposed to incorporate random data, I wouldn't want to rely on the fact that the RNG used by Python's uuid.uuid4() being securely random. The Python docs make no mention about the quality of the randomness, so I'd be afraid that you might end up with guessable UUID's.\nI'm not a crypto expert, so I won't suggest a specific alternative, but I would suggest using something that is designed to produce crypto-quailty random data, and transform that into something that can be used as an S3 key (I'm not sure what the requirements on S3 key data might be, but I'd guess they're supposed to be something like a filename).\nTo be honest, having no security other than an unguessable name still leaves me with a bad feeling. It seems to easy to have an unintentional leak of the names, as Ian Clelland suggests in his comment.","Q_Score":1,"Tags":"python,security,amazon-s3","A_Id":8935272,"CreationDate":"2012-01-20T00:08:00.000","Title":"Sending 'secure' financial statements on S3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python, I am firing Sparql queries to get data from dbpedia.\nAt a point approximately firing 7,000 queries, my script is hangs at line results = sparql.query().convert()\nwhich is already executed atleast 5000 times in the loop\nAny idea what could be the issue in it","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":157,"Q_Id":8965123,"Users Score":3,"Answer":"try splitting up the .query() and .convert() into two separate lines. 
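One way to follow the "crypto-quality randomness" advice above using only the standard library (secrets, Python 3.6+); the key prefix and file extension are invented. As the answer notes, an unguessable name is still only obscurity, so the leak concern remains.

    import secrets

    def make_statement_key(user_id):
        token = secrets.token_urlsafe(32)        # roughly 256 bits of randomness
        return 'statements/%s/%s.pdf' % (user_id, token)

    print(make_statement_key(1001))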
I would guess that .query() is where it's hanging, and I would further guess that you are being rate-limited by DBPedia, but I can't find any information on what their limits might be.","Q_Score":1,"Tags":"python,sparql,mysql-python,dbpedia","A_Id":8965215,"CreationDate":"2012-01-22T22:02:00.000","Title":"python script hangs at results = sparql.query().convert()","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"First of all, I am sorry if this question doesn't belong to SO since I don't know where else to post it, anyway...\nI am looking for a decent python based database development RAD framework with nice data aware widgets and grids. A desktop framework would be much preferable to a web framework (I've developed heavy DB-centric apps in django but the web dev experience is still painful compared to a desktop one), although a web framework will do as long as there are powerful data-centric widgets to go along with it.\nIdeally, it should be as useful as say Delphi or MSAccess \/ VBA (I used to develop using those a long time ago). For the record, I have very good development experience in django and wxPython and as I've said developing heavy data-centric web apps is tough and wxPython although very powerful lacks DB-related widgets.\nPlease note that the use of Python is mandatory because I've been using this language exclusively for all my projects in the last few years and I can't bear the idea of switching back to more mundane languages.\nThanks for any suggestion...","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":684,"Q_Id":9045723,"Users Score":1,"Answer":"I am also looking for something similar to Kexi. Unfortunately python scripting is not supported in Kexi for windows. I would like to find something better than MS Access, and it does not have to be based on python. So far I have looked at quite a few IDE's but have not found anything where a GUI and database application can be built us as quickly as in access.\n I think of all the best one I have seen is Alpha 5. There could be something based on Net Beans but I really do not know. Oracle APEX is another one I have heard about but it doesn't support desktop applications (as far as I know).","Q_Score":3,"Tags":"python,database,widget,rad","A_Id":15711432,"CreationDate":"2012-01-28T13:49:00.000","Title":"Python database widgets\/environment like MSAccess","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building a web App with mongoDB as the backend. Some of the documents need to store a collection of items in some sort of list, and then the system will need to frequently check if a specified item is present in that list. Using Python's 'in' operator takes Big-O(N) time, n being the size of the list. Since these list can get quite large, I want something faster than that. Python's 'set' type does this operation in constant time (and enforces uniqueness, which is good in my case), but is considered an invalid data type to put in MongoDB.\nSo what's the best way to do this? Is there some way to just use a regular list and exploit mongo's indexing features? 
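A sketch of the suggestion above: keep .query() and .convert() on separate lines so it is obvious which one blocks, and pause between requests in case DBpedia is throttling the client. The query text and the resources_to_fetch iterable are placeholders.

    import time
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)

    for resource in resources_to_fetch:          # assumed to exist
        sparql.setQuery("SELECT ?p ?o WHERE { <%s> ?p ?o } LIMIT 10" % resource)
        response = sparql.query()                # the network call happens here
        results = response.convert()             # parsing happens here
        time.sleep(1)                            # be gentle with the endpoint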
Again, I want to know, for a given document in a collection, does a list inside that document contain particular element?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2402,"Q_Id":9115979,"Users Score":6,"Answer":"You can represent a set using a dictionary. Your elements become the keys, and all the values can be set to a constant such as 1. The in operator checks for the existence of a key.\nEDIT. MongoDB stores a dict as a BSON document, where the keys must be strings (with some additional restrictions), so the above advice is of limited use.","Q_Score":5,"Tags":"python,mongodb","A_Id":9116463,"CreationDate":"2012-02-02T16:29:00.000","Title":"Mongodb with Python's \"set()\" type","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have Ubuntu and installed python3 since my script is written in it. Since I use MYSQL with MySQLdb, I installed \napt-get install python-mysqldb\nhowever this installed MySQLdb to Python (which is 2.6 on Ubuntu) and not to Python3.\n\nHow can I install MySQLdb for Python3\nShould I use it at all or switch to PyMSQL\n\nSorry, I have just started working with Python today...","AnswerCount":7,"Available Count":1,"Score":0.0285636566,"is_accepted":false,"ViewCount":14751,"Q_Id":9146320,"Users Score":1,"Answer":"If you are planning to switch from MySQLDB then I recommend you to use MySQL connector Python\nWhy ?\n\nit work with both Python 2 and 3\nBecause it is official Oracle driver for MySQL for working with Python.\nIt is purely written in Python\nYou can also use C extension to connect to MySQL.\nMySQL Connector\/Python is implementing the MySQL Client\/Server protocol completely in Python. This means you don't have to compile anything or MySQL doesn't even have to be installed on the machine.\nAlso, it has great support for connection pooling","Q_Score":9,"Tags":"python,mysql,ubuntu,installation","A_Id":51553512,"CreationDate":"2012-02-05T02:06:00.000","Title":"install python MySQLdb to python3 not python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a project in which i use JDBC with MySQL to store some user information, Java REST for the server and Python REST for the client.\nMy question is: by default(i haven't changed anything in the configurations), are the http requests from the client serialized on the server's side? I ask this because i'd like to know if i need to make the database insert\/delete querys thread-safe or something.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":253,"Q_Id":9158578,"Users Score":1,"Answer":"Of course they need to be thread safe. You should be writing your Java server as if it were single threaded, because a Java EE app server will assign a thread per incoming request.\nYou also need to think about database isolation and table locking. Will you allow \"dirty reads\" or should your transactions be serializable? Should you SELECT FOR UPDATE? 
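A sketch of the dictionary-as-set idea from the accepted answer, with pymongo: the keys must be strings, the values are just a constant, and the membership test is pushed down to the server instead of pulling the list into Python. All names here are invented.

    from pymongo import MongoClient

    coll = MongoClient().mydb.docs               # database/collection names are placeholders

    coll.insert_one({'_id': 'doc1',
                     'items': {'apple': 1, 'banana': 1}})

    # membership test without loading the whole set client-side:
    # ask whether the key exists inside the embedded document
    found = coll.find_one({'_id': 'doc1', 'items.apple': {'$exists': True}})
    print(found is not None)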
That's a database setting, separate from threading considerations.","Q_Score":0,"Tags":"java,python,mysql,rest,jdbc","A_Id":9158709,"CreationDate":"2012-02-06T10:16:00.000","Title":"Serialization with JDBC with MySQL, JAVA REST and Python REST","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Why use py manage.py test ? \nWhat's the point? It creates the table anyway... if I wanted to test it, then I wouldn't want it to create the actual table!!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":126,"Q_Id":9160411,"Users Score":0,"Answer":"Test is meant to perform both the upgrade and the downgrade steps. You want to verify that the application is usable in both states. So the idea would be to upgrade, run tests, downgrade, run tests, and verify you don't break things.\nIf the test run fails, it gives you a chance to clean it up, reset, and try again. Usually, I'd say that the test run must completely cleanly before the migration is considered \"good\" and able to be committed to the code base.","Q_Score":0,"Tags":"python,mysql,database,sqlalchemy","A_Id":9161223,"CreationDate":"2012-02-06T12:53:00.000","Title":"In SQLAlchemy-migrate, what's the point of using \"test\"?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted your advice for the best design approach at the following Python project.\nI am building a web service system that is split into 2 parts: \n\nThis part grabs realtime data from a 3rd party API and puts the data in a DB. \nThis part exposes a json API to access data from the DB mentioned in 1). \n\nSome background info - 2) runs on django, and exposes the API via view methods. It uses SQLAlchemy instead of the django ORM. \nMy questions are:\n- Should 1) and 2) run on the same machine, considering that they both access the same MySQL DB?\n- What should 1) run on? I was thinking about just running cron jobs with Python scripts that also use SQLAlchemy. This is because I don't see a need for an entire web framework here, especially because this needs to work super fast. Is this the best approach?\n- Data size - 1) fetches about 60,000 entries and puts them in the DB every 1 minute (an entry contains of about 12 Float values and a few Dates and Integers). What is the best way to deal with the ever growing amount of data here? Would you split the DB? If so, into what? \nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":319,"Q_Id":9182936,"Users Score":0,"Answer":"I would say, run the two on the same machien to start with, and see how the performance goes. Why spend money on a second machine if you don\u2019t have to?\nAs for \u201cdealing with the ever growing amount of data\u201d\u2014do you need to keep old data around? If not, your second task can simply delete old data when it\u2019s done with it. 
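For context, a sqlalchemy-migrate change script is roughly shaped like the sketch below (the table and columns are examples); "manage.py test" runs upgrade() and then downgrade() against the database, which is why both halves have to reverse each other cleanly.

    from sqlalchemy import Table, Column, Integer, String, MetaData

    meta = MetaData()
    accounts = Table('accounts', meta,
                     Column('id', Integer, primary_key=True),
                     Column('email', String(255)))

    def upgrade(migrate_engine):
        accounts.create(migrate_engine)      # applied by "upgrade" and by "test"

    def downgrade(migrate_engine):
        accounts.drop(migrate_engine)        # must undo upgrade() exactly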
Provided all the records are properly time-stamped, you don\u2019t need to worry about race conditions between the two tasks.","Q_Score":2,"Tags":"python,real-time","A_Id":9189761,"CreationDate":"2012-02-07T19:56:00.000","Title":"Realtime data server architecture","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on a system where a bunch of modules connect to a MS SqlServer DB to read\/write data. Each of these modules are written in different languages (C#, Java, C++) as each language serves the purpose of the module best. \nMy question however is about the DB connectivity. As of now, all these modules use the language-specific Sql Connectivity API to connect to the DB. Is this a good way of doing it ? \nOr alternatively, is it better to have a Python (or some other scripting lang) script take over the responsibility of connecting to the DB? The modules would then send in input parameters and the name of a stored procedure to the Python Script and the script would run it on the database and send the output back to the respective module. \nAre there any advantages of the second method over the first ? \nThanks for helping out!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":9202562,"Users Score":0,"Answer":"If we assume that each language you use will have an optimized set of classes to interact with databases, then there shouldn't be a real need to pass all database calls through a centralized module.\nUsing a \"middle-ware\" for database manipulation does offer a very significant advantage. You can control, monitor and manipulate your database calls from a central and single location. So, for example, if one day you wake up and decide that you want to log certain elements of the database calls, you'll need to apply the logical\/code change only in a single piece of code (the middle-ware). You can also implement different caching techniques using middle-ware, so if the different systems share certain pieces of data, you'd be able to keep that data in the middle-ware and serve it as needed to the different modules.\nThe above is a very advanced edge-case and it's not commonly used in small applications, so please evaluate the need for the above in your specific application and decide if that's the best approach. \nDoing things the way you do them now is fine (if we follow the above assumption) :)","Q_Score":2,"Tags":"python,sql-server,architecture","A_Id":9456223,"CreationDate":"2012-02-08T22:32:00.000","Title":"DB Connectivity from multiple modules","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any tutorials about how to set-up sqlalchemy for windows? I went to www.sqlalchemy.org and they don't have clear instructions about set-up for windows. 
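The "delete old data once it has been processed" point from the real-time architecture answer above, as a small sketch keyed on a timestamp column; the table, column and connection details are invented.

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='app',
                           passwd='password', db='realtime')
    cur = conn.cursor()
    # run from the second task (or a cron job) after processing is done
    cur.execute("DELETE FROM readings WHERE created_at < NOW() - INTERVAL 1 DAY")
    conn.commit()
    conn.close()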
When I opened the zipped package, I see distribute_setup, ez_setup and setup.py among other files but it doesn't see to install sqlalchemy.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":9330,"Q_Id":9221888,"Users Score":1,"Answer":"The Command pip install sqlalchemy will download the necessary files and run setup.py install for you.","Q_Score":1,"Tags":"python,sql,database,orm,sqlalchemy","A_Id":29831712,"CreationDate":"2012-02-10T02:21:00.000","Title":"Configuring sqlalchemy for windows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There have been many questions along these lines but I'm struggling to apply them to my scenario. Any help would be be greatly appreciated!\nWe currently have a functioning mySQL database hosted on a website, data is entered from a website and via PHP it is put into the database.\nAt the same time we want to now create a python application that works offline. It should carry out all the same functions as the web version and run totally locally, this means it needs a copy of the entire database to run locally and when changes are made to such local database they are synced next time there is an internet connection available.\nFirst off I have no idea what the best method would be to run such a database offline. I was considering just setting up a localhost, however this needs to be distributable to many machines. Hence setting up a localhost via an installer of some sort may be impractical no?\nSecondly synchronization? Not a clue on how to go about this!\nAny help would be very very very appreciated.\nThank you!","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2804,"Q_Id":9237481,"Users Score":0,"Answer":"How high-performance does your local application need to be? Also, how reliable is the locally available internet connection? If you don't need extremely high performance, why not just leave the data in the remote MySQL server? \nIf you're sure you need access to local data I'd look at MySQL's built-in replication for synchronization. It's really simple to setup\/use and you could use it to maintain a local read-only copy of the remote database for quick data access. You'd simply build into your application the ability to perform write queries on the remote server and do read queries against the local DB. The lag time between the two servers is generally very low ... like on the order of milliseconds ... but you do still have to contend with network congestion preventing a local slave database from being perfectly in-sync with the master instantaneously.\nAs for the python side of things, google mysql-python because you'll need a python mysql binding to work with a MySQL database. Finally, I'd highly recommend SQLalchemy as an ORM with python because it'll make your life a heck of a lot easier.\nI would say an ideal solution, however, would be to set up a remote REST API web service and use that in place of directly accessing the database. Of course, you may not have the in-house capabilities, the time or the inclination to do that ... 
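The read-local/write-remote split suggested in the replication answer above could look roughly like this with two MySQLdb connections; hosts, credentials and queries are all placeholders.

    import MySQLdb

    local_ro = MySQLdb.connect(host='127.0.0.1', user='app', passwd='pw', db='appdb')
    remote_rw = MySQLdb.connect(host='db.example.com', user='app', passwd='pw', db='appdb')

    def fetch_profile(user_id):
        cur = local_ro.cursor()                  # fast read from the local replica
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        return cur.fetchone()

    def update_email(user_id, email):
        cur = remote_rw.cursor()                 # writes always go to the master
        cur.execute("UPDATE users SET email = %s WHERE id = %s", (email, user_id))
        remote_rw.commit()                       # replication pushes it back down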
which is also okay :)","Q_Score":2,"Tags":"php,python,mysql,localhost,sync","A_Id":9237543,"CreationDate":"2012-02-11T03:04:00.000","Title":"Python sync with mySQL for local application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There have been many questions along these lines but I'm struggling to apply them to my scenario. Any help would be be greatly appreciated!\nWe currently have a functioning mySQL database hosted on a website, data is entered from a website and via PHP it is put into the database.\nAt the same time we want to now create a python application that works offline. It should carry out all the same functions as the web version and run totally locally, this means it needs a copy of the entire database to run locally and when changes are made to such local database they are synced next time there is an internet connection available.\nFirst off I have no idea what the best method would be to run such a database offline. I was considering just setting up a localhost, however this needs to be distributable to many machines. Hence setting up a localhost via an installer of some sort may be impractical no?\nSecondly synchronization? Not a clue on how to go about this!\nAny help would be very very very appreciated.\nThank you!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2804,"Q_Id":9237481,"Users Score":0,"Answer":"Are you planning to run mysql on your local python offline apps ? I would suggest something like sqlite. As for keeping things in sync, it also depends on the type of data that needs to be synchronized. One question that needs to be answered:\nAre the data generated by these python apps something that is opague ? If yes (i.e. it doesn't have any relations to other entities), then you can queue the data locally and push it up to the centrally hosted website.","Q_Score":2,"Tags":"php,python,mysql,localhost,sync","A_Id":9237521,"CreationDate":"2012-02-11T03:04:00.000","Title":"Python sync with mySQL for local application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a database front-end with python and glade. I need to present SQL query results in the form of database tables inside my app's window (schema followed by tuples\/records). Both the schema and the database entries are dynamic because the schema could be that of a join operation or in general altered and the number of tuples could be any valid number.One possible solution could be to format a given table with python, create a text object in my GUI and change its' value to that produced by python. 
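For the sqlite-plus-queue suggestion in the second answer, a minimal "outbox" sketch: local changes are queued in SQLite and flushed whenever a connection is available. The push_to_server() helper is assumed to exist (for example an HTTP POST to the existing PHP endpoint).

    import json
    import sqlite3

    conn = sqlite3.connect('local.db')
    conn.execute("CREATE TABLE IF NOT EXISTS outbox "
                 "(id INTEGER PRIMARY KEY, payload TEXT)")

    def queue_change(change):
        conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(change),))
        conn.commit()

    def sync():
        rows = conn.execute("SELECT id, payload FROM outbox").fetchall()
        for row_id, payload in rows:
            if push_to_server(json.loads(payload)):   # assumed helper
                conn.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        conn.commit()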
Advices and suggestions are very welcome.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2104,"Q_Id":9299934,"Users Score":3,"Answer":"Given that the number and name of the columns to display isn't known beforehand, you could just create a gtk.TreeView widget in glade and modify it as you need in the application code.\nThis widget could be updated to use a new model using gtk.TreeView.set_model and the columns could be adapted to match the information to be dsplayed with the gtk.TreeView.{append,remove,insert}_column columns.\nRegarding the model, you coud create a new gtk.ListStore with appropriate columns depending on the results from the database.\nI hope this helps.","Q_Score":3,"Tags":"python,user-interface,gtk,pygtk,glade","A_Id":9302750,"CreationDate":"2012-02-15T19:28:00.000","Title":"GUI for database tables with pygtk and glade","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having a problem with the sessions in my python\/wsgi web app. There is a different, persistent mysqldb connection for each thread in each of 2 wsgi daemon processes. Sometimes, after deleting old sessions and creating a new one, some connections still fetch the old sessions in a select, which means they fail to validate the session and ask for login again.\nDetails: Sessions are stored in an InnoDB table in a local mysql database. After authentication (through CAS), I delete any previous sessions for that user, create a new session (insert a row), commit the transaction, and redirect to the originally requested page with the new session id in the cookie. For each request, a session id in the cookie is checked against the sessions in the database.\nSometimes, a newly created session is not found in the database after the redirect. Instead, the old session for that user is still there. (I checked this by selecting and logging all of the sessions at the beginning of each request). Somehow, I'm getting cached results. I tried selecting the sessions with SQL_NO_CACHE, but it made no difference.\nWhy am I getting cached results? Where else could the caching occur, and how can stop it or refresh the cache? Basically, why do the other connections fail to see the newly inserted data?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5316,"Q_Id":9318347,"Users Score":16,"Answer":"MySQL defaults to the isolation level \"REPEATABLE READ\" which means you will not see any changes in your transaction that were done after the transaction started - even if those (other) changes were committed. \nIf you issue a COMMIT or ROLLBACK in those sessions, you should see the changed data (because that will end the transaction that is \"in progress\").\nThe other option is to change the isolation level for those sessions to \"READ COMMITTED\". 
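A sketch of the accepted answer with PyGTK: rebuild the ListStore and the TreeView columns from whatever the query returned. It assumes treeview is the gtk.TreeView created in Glade and cursor is an already-executed DB-API cursor.

    import gtk

    def show_results(treeview, cursor):
        headers = [d[0] for d in cursor.description]
        store = gtk.ListStore(*([str] * len(headers)))     # render everything as text
        for row in cursor.fetchall():
            store.append(['' if v is None else str(v) for v in row])

        for old in treeview.get_columns():                 # drop the previous columns
            treeview.remove_column(old)
        for i, title in enumerate(headers):
            treeview.append_column(
                gtk.TreeViewColumn(title, gtk.CellRendererText(), text=i))
        treeview.set_model(store)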
Maybe there is an option to change the default level as well, but you would need to check the manual for that.","Q_Score":9,"Tags":"python,mysql,session,caching,wsgi","A_Id":9318495,"CreationDate":"2012-02-16T20:09:00.000","Title":"Why are some mysql connections selecting old data the mysql database after a delete + insert?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to store information in a database table that will be constantly receiving and sending data back and forth to an iPhone App\/Python Socket. The problem is, if I were to have my own servers, what is the maximum queries I can sustain?\nThe reason I'm asking is because if I were to have thousands of people using the clients and multiple queries are going a second, I'm afraid something will go wrong.\nIs there a different way of storing user information without MySQL? Or is MySQL OK for what I am doing?\nThank you!","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1960,"Q_Id":9322523,"Users Score":3,"Answer":"The maximum load is going to vary based on the design of your application and the power of the hardware that you put it on. A well designed application on reasonable hardware will far outperform what you need to get the project off the ground. \nIf you are unexpectedly successful, you will have money to put into real designers, real programmers and a real business plan. Until then, just have fun hacking away and see if you can bring your idea to reality.","Q_Score":4,"Tags":"iphone,python,mysql,objective-c,database","A_Id":9322806,"CreationDate":"2012-02-17T03:37:00.000","Title":"How many SQL queries can be run at a time?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will start a project ( not commercial, just for learning ) but I would like to choose to work with the right tools as I would if I were doing it for a company.\nFirst of all small description of what I will need.\nIt would be a server-client(s) application.\nFor the server:\n- GUI for Windows\n- ORM \n- Database without installation (sqlite ???)\n- GUI builder (RAD Tool)\n- Ability to create easily REST Services\nClients would be android devices\n- GUI for android mobile\nClients would connect to the server and get some initial settings and then start to \nsend information to the server.\nServer should be able to display properly the information collected from the clients and \nedit them if needed.\nOpen source technologies are mandatatory.\nFirst I am thinking to use sqlite ( I should not make any installation except the programm). Any alternatives here?\nFor the server maybe python with a gui library and sql alchemy. What about Camelot?\nAnd for the clients (android) java. 
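Two concrete ways out of the stale-snapshot behaviour described in the accepted answer, on a long-lived MySQLdb connection; the connection details and the sessions table are placeholders.

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='app', passwd='pw', db='webapp')

    # Option 1: make this connection use READ COMMITTED from now on
    cur = conn.cursor()
    cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")

    # Option 2: end the open transaction at the start of every request so the
    # next SELECT sees rows committed by other connections
    def load_session(session_id):
        conn.commit()                    # close the old snapshot
        cur = conn.cursor()
        cur.execute("SELECT user_id FROM sessions WHERE id = %s", (session_id,))
        return cur.fetchone()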
I think there are no other options here.\nCan you make some comments on the above choices?\nMaybe you can suggest something different which will make the development faster...","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":657,"Q_Id":9354695,"Users Score":0,"Answer":"As you have asserted: client is java only.\nOn server:\n\nGUI for Windows : WPF \nORM - Database without installation : SQLCE 4.0 - Maybe use codefirst \nGUI builder (RAD Tool) : Visual Studio lets you do that for WPF apps \nAbility to create easily REST Services : Use WCF\n\nhope that helps","Q_Score":0,"Tags":"android,python,client-server","A_Id":9354732,"CreationDate":"2012-02-20T00:15:00.000","Title":"Right tools for GUI windows program","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Python (relatively new to programing in general) and I have created a small python script that scrape some data off of a site once a week and stores it to a local database (I'm trying to do some statistical analysis on downloaded music). I've tested it on my Mac and would like to put it up onto my server (VPS with WiredTree running CentOS 5), but I have no idea where to start.\nI tried Googling for it, but apparently I'm using the wrong terms as \"deploying\" means to create an executable file. The only thing that seems to make sense is to set it up inside Django, but I think that might be overkill. I don't know...\nEDIT: More clarity","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":865,"Q_Id":9356926,"Users Score":1,"Answer":"Copy script to server\ntest script manually on server\nset cron, \"crontab -e\" to a value that will test it soon\nonce you've debugged issues set cron to the appropriate time.","Q_Score":1,"Tags":"python,django,centos","A_Id":9357006,"CreationDate":"2012-02-20T06:14:00.000","Title":"Deploying a Python Script on a Server (CentOS): Where to start?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a column heading Fee. 
Using xlwt in python, I successfully generated the required excel.This column is always blank at the creation of Excel file.\nIs it possible to have the Fee column preformatted to 'Currency' and 'two decimal places', so that when I write manually in the Fee column of the Excel file after downloading, 23 should change into $23.00 ??","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2721,"Q_Id":9375637,"Users Score":11,"Answer":"I got it working like this:\ncurrency_style = xlwt.XFStyle()\ncurrency_style.num_format_str = \"[$$-409]#,##0.00;-[$$-409]#,##0.00\"\nsheet.write(row+2, col, val, style=currency_style)","Q_Score":5,"Tags":"python,excel,xlwt","A_Id":9376306,"CreationDate":"2012-02-21T10:10:00.000","Title":"Preformat to currency and two decimal places in python using xlwt for excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a static folder that is managed by apache where images are stored.\nI wonder if it's possible by configuring apache to send all files from that folder as downloadable files, not opening them as images inside browser? I suppose I can do it by creating a special view in Flask, but I think it would be nicer if I could do it with some more simple solution.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":219,"Q_Id":9378664,"Users Score":1,"Answer":"You can force the contents to be a downloadable attachment using http headers.\nIn PHP that would be:\n\n$fileName = 'dummy.jpg';\nheader(\"Content-Disposition: attachment; filename=$fileName\");\n\nThen, the script dumps the raw contents of the file.","Q_Score":4,"Tags":"python,browser,flask","A_Id":9378819,"CreationDate":"2012-02-21T13:46:00.000","Title":"Send image as an attachment in browser","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In order to demonstrate the security feature of Oracle one has to call OCIServerVersion() or OCIServerRelease() when the user session has not yet been established.\nWhile having the database parameter sec_return_server_release_banner = false.\nI am using Python cx_Oracle module for this, but I am not sure how to get the server version before establishing the connection. Any ideas?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1064,"Q_Id":9389381,"Users Score":0,"Answer":"With-out establishing a connection,. No you can never asking anything. It's like going to Google Page.(Internet Architecture - wether you call it sessionless or session based)\nAs for Authentical, if no permission are set - Oracle uses a username 'nobody' as a user and thus gives every user a session.\nI am a user of Oracle APEX, and I use Python, PLSQL regurlary.\nThat's one nice question. 
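Building on the accepted answer, one approach that is often enough for a blank-but-preformatted column is to write empty cells carrying the currency style, so values typed in later pick up the $ format; the sheet name and the number of rows are arbitrary here.

    import xlwt

    wb = xlwt.Workbook()
    sheet = wb.add_sheet('fees')

    currency_style = xlwt.XFStyle()
    currency_style.num_format_str = "[$$-409]#,##0.00;-[$$-409]#,##0.00"

    sheet.write(0, 0, 'Fee')                       # column heading
    for row in range(1, 101):                      # pre-format 100 blank cells
        sheet.write(row, 0, '', currency_style)

    wb.save('fees.xls')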
Thanks.","Q_Score":7,"Tags":"python,cx-oracle","A_Id":21155146,"CreationDate":"2012-02-22T05:08:00.000","Title":"python cx_oracle and server information","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to add a feature to my behind the firewall webapp that exposes and ODBC interface so users can connect with a spreadsheet program to explore our data.\nWe don't use a RDBMS so I want to emulate the server side of the connection.\nI've searched extensively for a library or framework that helps to implement the server side component of an ODBC connection with no luck. Everything I can find is for the other side of the equation - connecting one's client program to a database using an ODBC driver.\nIt would be great to use Python but at this point language preference is secondary, although it does have to run on *nix.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":9432332,"Users Score":0,"Answer":"The server side of ODBC is already done, it is your RDBMS.\nODBC is a client side thing, most implementations are just a bridge between ODBC interface and the native client interface for you-name-your-RDBMS-here.\nThat is why you will not find anything about the server side of ODBC... :-)\nImplementing a RDBMS (even with a subset of SQL) is no easy quest. My advice is to expose your underlying database storage, the best solution depends on what database are you using.\nIf its a read-only interface, expose a database mirror using some sort of asynchronous replication.\nIf you want it read\/write, trust me, you better don't. If your customer is savvy, expose an API, it he isn't you don't want him fiddling with your database. :-)\n[updated]\nIf your data is not stored on a RDBMS, IMHO there is no point in exposing it through a relational interface like ODBC. The advice to use some sort of asynchronous replication with a relational database is still valid and probably the easiest approach. \nOtherwise you will have to reinvent the wheel implementing an SQL parser, network connection, authentication and related logic. If you think it's worth, go for it!","Q_Score":1,"Tags":"python,odbc","A_Id":9432486,"CreationDate":"2012-02-24T14:26:00.000","Title":"Are there any libraries (or frameworks) that aid in implementing the server side of ODBC?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using plone to build my site.\nIn one page template, I have the and this form:
    but i want to have the put where the file was uploaded (like c:...)\nI hope somebody can help me.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":890,"Q_Id":9446769,"Users Score":1,"Answer":"You are not saving the file on the filesystem, but in the Zope object database. You'd have to use python code (not a python script) to open a filepath with the open built-in function to save the data to.","Q_Score":2,"Tags":"python,plone,zope","A_Id":9456924,"CreationDate":"2012-02-25T18:27:00.000","Title":"upload file with python script in plone","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a handful of servers all connected over WAN links (moderate bandwidth, higher latency) that all need to be able to share info about connected clients. Each client can connect to any of the servers in the 'mesh'. Im looking for some kind of distributed database each server can host and update. It would be important that each server is able to get updated with the current state if its been offline for any length of time.\nIf I can't find anything, the alternative will be to pick a server to host a MySQL DB all the servers can insert to; but I'd really like to remove this as a single-point-of-failure if possible. (and the downtime associated with promoting a slave to master)\nIs there any no-single-master distributed data store you have used before and would recommend?\nIt would most useful if any solution has Python interfaces.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":981,"Q_Id":9456954,"Users Score":0,"Answer":"What you describe reminds me of an Apache Cassandra cluster configured so that each machine hosts a copy of the whole dataset and reads and writes succeed when they reach a single node (I never did that, but I think it's possible). Nodes should be able to remain functional when WAN links are down and receive pending updates as soon as they get back on-line. Still, there is no magic - if conflicting updates are issued on different servers or outdated replicas are used to generate new data, consistency problems will arise on any architecture you select.\nA second issue is that for every local write, you'll get n-1 remote writes and your servers may spend a lot of time and bandwidth debating who has the latest record.\nI strongly suggest you fire up a couple EC2 instances and play with their connectivity to check if everything works the way you expect. This seems to be in the \"creative misuse\" area and your mileage may vary wildly, if you get any at all.","Q_Score":1,"Tags":"python,database,linux,datastore,distributed-system","A_Id":9466181,"CreationDate":"2012-02-26T20:41:00.000","Title":"Distributed state","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am playing around with python and thought I would make a simple language learning program... ie: lanuageA | languageB | type of word | synonym | antonym |\nbasically flash cards...\nI have made a crude version using python and json, and have just started playing with sqlite3... Is a database a better way to organize the information, and for pulling things out and referencing against each other. 
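What the answer describes, sketched as trusted filesystem code rather than a Script (Python) object: take the uploaded file object out of the request and write its bytes to a real path. The form field name and target directory are examples, and the FileUpload attributes are assumed from Zope's usual request handling.

    import os

    def save_upload(request, target_dir='/var/uploads'):
        upload = request.form['file_upload']       # assumed Zope FileUpload object
        filename = os.path.basename(upload.filename)
        path = os.path.join(target_dir, filename)
        with open(path, 'wb') as out:              # the plain open() built-in
            out.write(upload.read())
        return path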
and against user input. Or would it be easier to use nested dictionaries?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":371,"Q_Id":9484814,"Users Score":0,"Answer":"If your data fits in memory and you only require to access elements by key, a dictionary is probably just enough for your needs.","Q_Score":2,"Tags":"python,json,sqlite","A_Id":9484859,"CreationDate":"2012-02-28T15:32:00.000","Title":"json or sqlite3 for a dictionary","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am playing around with python and thought I would make a simple language learning program... ie: lanuageA | languageB | type of word | synonym | antonym |\nbasically flash cards...\nI have made a crude version using python and json, and have just started playing with sqlite3... Is a database a better way to organize the information, and for pulling things out and referencing against each other. and against user input. Or would it be easier to use nested dictionaries?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":371,"Q_Id":9484814,"Users Score":1,"Answer":"Who is going to modify your data?\n\nIf you plan to only modify the word set yourself (as a developer, not a user of an application), you can use JSON to keep the data on the disk\nIf you want to allow users of your application to add\/edit\/remove flashcards, you should use a database (sqlite3 is OK), because otherwise you would have to save the whole data file after each small change made by the user. You could, of course, split the data into separate JSON files, add thread locks, etc., but that's what database engines are for.","Q_Score":2,"Tags":"python,json,sqlite","A_Id":9485889,"CreationDate":"2012-02-28T15:32:00.000","Title":"json or sqlite3 for a dictionary","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am fetching records from gae model using cursor() and with_cursor() logic as used in paging. but i am not sure how to check that there is no any other record in db that is pointed by cursor. i am fetching these records in chunks within some iterations.when i got my required results in the first iteration then in next iteration I want to check there is no any record in model but I not get any empty\/None value of cursor at this stage.please let me know how to perform this check with cursors in google app engine with python.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":550,"Q_Id":9521289,"Users Score":0,"Answer":"i am not 100% sure about that but what i used to do is compare the last cursor with the actual cursor and i think i noticed that they were the same so i came to the conclusion that it was the last cursor.","Q_Score":0,"Tags":"python,google-app-engine","A_Id":9521520,"CreationDate":"2012-03-01T17:47:00.000","Title":"cursor and with_cursor() in GAE","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm new to Django and have only been using sqlite3 as a database engine in Django. 
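If the sqlite3 route from the second answer is chosen, a minimal sketch of the flash-card table and a lookup; the columns simply mirror the fields listed in the question.

    import sqlite3

    conn = sqlite3.connect('cards.db')
    conn.execute("""CREATE TABLE IF NOT EXISTS cards (
                        id INTEGER PRIMARY KEY,
                        language_a TEXT, language_b TEXT,
                        word_type TEXT, synonym TEXT, antonym TEXT)""")
    conn.execute("INSERT INTO cards (language_a, language_b, word_type) "
                 "VALUES (?, ?, ?)", ('dog', 'perro', 'noun'))
    conn.commit()

    # check a user's answer against the stored translation
    row = conn.execute("SELECT language_b FROM cards WHERE language_a = ?",
                       ('dog',)).fetchone()
    print(row[0] == 'perro')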
Now one of the applications I'm working on is getting pretty big, both in terms of models' complexity and requests\/second.\nHow do database engines supported by Django compare in terms of performance? Any pitfalls in using any of them? And the last but not least, how easy is it to switch to another engine once you've used one for a while?","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":41120,"Q_Id":9540154,"Users Score":6,"Answer":"MySQL and PostgreSQL work best with Django. I would highly suggest that when you choose one that you change your development settings to use it while development (opposed to using sqlite3 in dev mode and a \"real\" database in prod) as there are subtle behavioral differences that can caused lots of headaches in the future.","Q_Score":48,"Tags":"python,database,django,sqlite","A_Id":9540685,"CreationDate":"2012-03-02T20:49:00.000","Title":"Which database engine to choose for Django app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to think through a script that I need to create. I am most likely going to be using php unless there would be a better language to do this with e.g. python or ror. I only know a little bit of php so this will definitely be a learning experience for me and starting fresh with a different language wouldn't be a problem if it would help in the long run. \nWhat I am wanting to do is create a website where people can sign up for WordPress hosting. Right now I have the site set up with WHMCS. If I just leave it how it is I will have manually go in and install WordPress every time a customer signs up. I would like an automated solution that creates a database and installs WordPress as soon as the customer signs up. With WHMCS I can run a script as soon as a customer signs up and so far I understand how to create a database, download WordPress, and install WordPress. The only thing is I can't figure out how to make it work with more than one customer because with each customer there will be a new database. What I need the script to do is when customer A signs up, the script will create a database name \"customer_A\" (that name is just an example) and when, lets say my second customer signs up, the script will create a database named \"customer_B\". \nIs there a possible solution to this?\nThanks for the help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":9543171,"Users Score":0,"Answer":"I did this yesterday.\nmy process was to add a row to a master accounts table, get the auto inc id, use that along with the company name to create the db name. so in my case the db's are \nRoot_1companyname1\nRoot_2companyname2\n..\nRoot_ is optional of course.\nAsk if you have any questions.","Q_Score":0,"Tags":"php,python,wordpress","A_Id":9543194,"CreationDate":"2012-03-03T03:30:00.000","Title":"Automate database creation with incremental name?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python Flask app I'm writing, and I'm about to start on the backend. 
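A sketch of the answer's approach: insert a row into a master accounts table, use the auto-increment id plus the company name to build the database name (following the customer_<n> naming from the question), then create the database. Identifier names cannot be bound as query parameters, so the company name is reduced to alphanumerics first; all connection details are placeholders.

    import re
    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='root', passwd='pw', db='master')

    def provision(company):
        cur = conn.cursor()
        cur.execute("INSERT INTO accounts (company) VALUES (%s)", (company,))
        conn.commit()
        account_id = cur.lastrowid
        safe = re.sub(r'[^A-Za-z0-9]', '', company).lower()
        db_name = 'customer_%d_%s' % (account_id, safe)
        cur.execute("CREATE DATABASE `%s`" % db_name)     # identifiers can't be parameterized
        return db_name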
The main part of it involves users POSTing data to the backend, usually a small piece of data every second or so, to later be retrieved by other users. The data will always be retrieved within under an hour, and could be retrieved in as low as a minute. I need a database or storage solution that can constantly take in and store the data, purge all data that was retrieved, and also perform a purge on data that's been in storage for longer than an hour.\nI do not need any relational system; JSON\/key-value should be able to handle both incoming and outgoing data. And also, there will be very constant reading, writing, and deleting.\nShould I go with something like MongoDB? Should I use a database system at all, and instead write to a directory full of .json files constantly, or something? (Using only files is probably a bad idea, but it's kind of the extent of what I need.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":168,"Q_Id":9544618,"Users Score":1,"Answer":"You might look at mongoengine we use it in production with flask(there's an extension) and it has suited our needs well, there's also mongoalchemy which I haven't tried but seems to be decently popular. \nThe downside to using mongo is that there is no expire automatically, having said that you might take a look at using redis which has the ability to auto expire items. There are a few ORMs out there that might suit your needs.","Q_Score":3,"Tags":"python,database,flask","A_Id":9545480,"CreationDate":"2012-03-03T08:17:00.000","Title":"In need of a light, changing database\/storage solution","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"there's something I'm struggling to understand with SQLAlchamy from it's documentation and tutorials. \nI see how to autoload classes from a DB table, and I see how to design a class and create from it (declaratively or using the mapper()) a table that is added to the DB. \nMy question is how does one write code that both creates the table (e.g. on first run) and then reuses it?\nI don't want to have to create the database with one tool or one piece of code and have separate code to use the database.\nThanks in advance,\nPeter","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":9554204,"Users Score":0,"Answer":"I think you're perhaps over-thinking the situation. 
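A sketch of the Redis option mentioned at the end of the answer: every incoming item is written with a one-hour expiry, and is deleted explicitly once a reader has retrieved it. The key layout and the use of KEYS (fine for a sketch, not ideal at scale) are choices made up here.

    import json
    import redis

    r = redis.StrictRedis(host='localhost', port=6379, db=0)

    def store(channel_id, payload):
        key = 'data:%s:%s' % (channel_id, payload['ts'])
        r.setex(key, 3600, json.dumps(payload))     # auto-purged after an hour

    def consume(channel_id):
        for key in r.keys('data:%s:*' % channel_id):
            value = r.get(key)
            if value is not None:
                yield json.loads(value)
                r.delete(key)                       # purge once retrieved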
If you want to create the database afresh, you normally just call Base.metadata.create_all() or equivalent, and if you don't want to do that, you don't call it.\nYou could try calling it every time and handling the exception if it goes wrong, assuming that the database is already set up.\nOr you could try querying for a certain table and if that fails, call create_all() to put everything in place.\nEvery other part of your app should work in the same way whether you perform the db creation or not.","Q_Score":0,"Tags":"python,database,sqlalchemy","A_Id":9554925,"CreationDate":"2012-03-04T10:41:00.000","Title":"SQLAlchamy Database Construction & Reuse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a multi-player game in Python with a Flask frontend, and I'm using it as an opportunity to learn more about the NoSQL way of doing things.\nRedis seems to be a good fit for some of the things I need for this app, including storage of server-side sessions and other transient data, e.g. what games are in progress, who's online, etc. There are also several good Flask\/Redis recipes that have made things very easy so far.\nHowever, there are still some things in the data model that I would prefer lived inside a traditional RDBMS, including user accounts, logs of completed games, etc. It's not that Redis can't do these things, but I just think the RDBMS is more suited to them, and since Redis wants everything in memory, it seems to make sense to \"warehouse\" some of this data on disk.\nThe one thing I don't quite have a good strategy for is how to make these two data stores live happily together. Using ORMs like SQLAlchemy and\/or redisco seems right out, because the ORMs are going to want to own all the data that's part of their data model, and there are inevitably times I'm going to need to have classes from one ORM know about classes from the other one (e.g. \"users are in the RDBMS, but games are in Redis, and games have users participating in them.)\nDoes anyone have any experience deploying python web apps using a NoSQL store like Redis for some things and an RDBMS for others? If so, do you have any strategies for making them work together?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":532,"Q_Id":9557552,"Users Score":3,"Answer":"You should have no problem using an ORM because, in the end, it just stores strings, numbers and other values. So you could have a game in progress, and keep its state in Redis, including the players' IDs from the SQL player table, because the ID is just a unique integer.","Q_Score":4,"Tags":"python,nosql,redis,rdbms,flask","A_Id":9557895,"CreationDate":"2012-03-04T18:17:00.000","Title":"Redis and RDBMS coexistence (hopefully cooperation) in Flask applications","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a game where each player has a score. I would like to have a global scoreboard where players can compare their scores, see how well they are placed and browse the scoreboard. 
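The point of the answer in code: Base.metadata.create_all() only creates tables that are missing, so the same startup path works on the first run and on every later run. The connection URL and the model are examples, written in SQLAlchemy 1.x declarative style.

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'users'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine('sqlite:///app.db')
    Base.metadata.create_all(engine)          # safe to call on every start
    Session = sessionmaker(bind=engine)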
\nUnfortunately I cannot find an efficient way to program this: storing the current player position in the scoreboard means I have to update a large part of the scoreboard when a player improves his score, and not storing the position means I have to recompute it each time I need it (which would also require a lot of computations).\nIs there a better solution to this problem? Or is one of the above solutions \"good enough\" to be used practically with a lot of users and a lot of updates?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":857,"Q_Id":9576578,"Users Score":0,"Answer":"The ORDER BY clause was made for that and doesn't look so slow.","Q_Score":0,"Tags":"python,sql","A_Id":9576946,"CreationDate":"2012-03-06T01:20:00.000","Title":"Scoreboard using Python and SQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table address. This table is constantly getting new row inserts, appox 1 row per second. Lets called it process1.\nIn parallel, I need to iterate over SELECT * from address results inserted till now via process1. This is Process2. It should wait for Process1 to insert new rows if it reaches the end, ie, there are no more rows to process (iterate) in address.\nBoth Process1 and 2 are very long. Several hours or maybe days. \nHow should process2 look like in python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":112,"Q_Id":9607711,"Users Score":0,"Answer":"Add a TIMESTAMP column and select rows with a newer timestamp than the latest processed.","Q_Score":0,"Tags":"python,mysql","A_Id":9607771,"CreationDate":"2012-03-07T19:21:00.000","Title":"python dynamically select rows from mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying rewrite a simple Rails application I made a while ago with cherrypy and Python3. So far I have been unable to find a Python replacement for ActiveRecord (the persistence part of the application). Most of the recommendations I've found on StackOverflow have been for SQL Alchemy. I looked into this and it seems much too complicated to get up and running. After reading its online docs and a book from Amazon, It's still not clear how to even proceed; not a good sign. \nSo my question is, what are developers using to persist data in their python3 web applications? \nAlso, I looked into Django but python3 is a requirement so that's out. \nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":456,"Q_Id":9678989,"Users Score":0,"Answer":"I have developed a transparent persistent storage system for python this is currently in an alpha-stage. Once you create a persistent object, you can access and modify its attributes using standard python syntax (obj.x=3;) and the persistence is done behind the scenes (by overloading the setattr methods, etc.). Contact me if you are interested in learning more. 
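A sketch of the ORDER BY approach to the scoreboard: a player's position is just a count of better scores, and the visible board is an ORDER BY with LIMIT/OFFSET. It assumes a scores(player_id, score) table with an index on score; sqlite3 is used only to keep the example self-contained.

    import sqlite3

    conn = sqlite3.connect('game.db')

    def rank_of(player_id):
        cur = conn.execute("SELECT COUNT(*) + 1 FROM scores WHERE score > "
                           "(SELECT score FROM scores WHERE player_id = ?)",
                           (player_id,))
        return cur.fetchone()[0]

    def board_page(page, per_page=20):
        return conn.execute("SELECT player_id, score FROM scores "
                            "ORDER BY score DESC LIMIT ? OFFSET ?",
                            (per_page, page * per_page)).fetchall()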
-Stefan","Q_Score":2,"Tags":"python,web-applications,persistence,cherrypy","A_Id":9832722,"CreationDate":"2012-03-13T05:49:00.000","Title":"Persistence for a python (cherrypy) web application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying rewrite a simple Rails application I made a while ago with cherrypy and Python3. So far I have been unable to find a Python replacement for ActiveRecord (the persistence part of the application). Most of the recommendations I've found on StackOverflow have been for SQL Alchemy. I looked into this and it seems much too complicated to get up and running. After reading its online docs and a book from Amazon, It's still not clear how to even proceed; not a good sign. \nSo my question is, what are developers using to persist data in their python3 web applications? \nAlso, I looked into Django but python3 is a requirement so that's out. \nThanks","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":456,"Q_Id":9678989,"Users Score":1,"Answer":"SQL Alchemy is a industrial standard is no choice. But it's not as difficult as it seems at first sight","Q_Score":2,"Tags":"python,web-applications,persistence,cherrypy","A_Id":9679132,"CreationDate":"2012-03-13T05:49:00.000","Title":"Persistence for a python (cherrypy) web application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I seem to remember reading somewhere that google app engine automatically caches the results of very frequent queries into memory so that they are retrieved faster. \nIs this correct?\nIf so, is there still a charge for datastore reads on these queries?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1313,"Q_Id":9689588,"Users Score":1,"Answer":"I think that app engine does not cache anything for you. While it could be that, internally, it caches some things for a split second, I don't think you should rely on that.\nI think you will be charged the normal number of read operations for every entity you read from every query.","Q_Score":3,"Tags":"python,google-app-engine,memcached,bigtable","A_Id":9689883,"CreationDate":"2012-03-13T18:06:00.000","Title":"Does app engine automatically cache frequent queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I seem to remember reading somewhere that google app engine automatically caches the results of very frequent queries into memory so that they are retrieved faster. \nIs this correct?\nIf so, is there still a charge for datastore reads on these queries?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1313,"Q_Id":9689588,"Users Score":1,"Answer":"No, it doesn't. However depending on what framework you use for access to the datastore, memcache will be used. Are you developing in java or python? On the java side, Objectify will cache GETs automatically but not Queries. 
Keep in mind that there is a big difference in terms of performance and cachability between gets and queries in both python and java.\nYou are not charged for datastore reads for memcache hits.","Q_Score":3,"Tags":"python,google-app-engine,memcached,bigtable","A_Id":9690080,"CreationDate":"2012-03-13T18:06:00.000","Title":"Does app engine automatically cache frequent queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a spreadsheet with about 1.7m lines, totalling 1 GB, and need to perform various queries on it. Being most comfortable with Python, my first approach was to hack together a bunch of dictionaries keyed in a way that would facilitate the queries I was trying to make. E.g. if I needed to be able to access everyone with a particular area code and age, I would make an areacode_age 2-dimensional dict. I ended up needing quite a few of these, which multiplied my memory footprint (to the order of ~10GB), and even though I had enough RAM to support this, the process was still quite slow.\nAt this point, it seemed like I was playing a sucker's game. \"Well this is what relational databases were made for, right?\", I thought. I imported sqlite3 and imported my data into an in-memory database. I figure databases are built for speed and this will solve my problems.\nIt turns out though, that doing a query like \"SELECT (a, b, c) FROM foo WHERE date1<=d AND date2>e AND name=f\" takes 0.05 seconds. Doing this for my 1.7m rows would take 24 hours of compute time. My hacky approach with dictionaries was about 3 orders of magnitude faster for this particular task (and, in this example, I couldn't key on date1 and date2 obviously, so I was getting every row that matched name and then filtering by date).\nSo, my question is, why is this so slow, and how can I make it fast? And what is the Pythonic approach? Possibilities I've been considering:\n\nsqlite3 is too slow, and I need something more heavyweight\nI need to somehow change my schema or my queries to be more... optimized?\nthe approaches I've tried so far are entirely wrong and I need a whole new tool of some kind\nI read somewhere that, in sqlite 3, doing repeated calls to cursor.execute is much slower than using cursor.executemany. It turns out that executemany isn't even compatible with select statements though, so I think this was a red herring.\n\nThanks.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":572,"Q_Id":9694967,"Users Score":4,"Answer":"sqlite3 is too slow, and I need something more heavyweight\n\nFirst, sqlite3 is fast, sometime faster than MySQL\nSecond, you have to use index, put a compound index in (date1, date2, name) will speed thing up significantly","Q_Score":2,"Tags":"python,database,sqlite,indexing,bigdata","A_Id":9695095,"CreationDate":"2012-03-14T02:05:00.000","Title":"Querying (pretty) big relational data in Python in a reasonable amount of time?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am new in python and i want to read office 2010 excel file without changing its style. Currently its working fine but changing date format. 
i want it as they are in excel file.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":719,"Q_Id":9757361,"Users Score":1,"Answer":"i want it as they are in excel file.\n\nA date is recorded in an Excel file (both 2007+ XLSX files and earlier XLS files) as a floating point number of days (and fraction thereof) since some date in 1899\/1900 or 1904. Only the \"number format\" that is recorded against the cell can be used to distinguish whether a date or a number was intended.\nYou will need to be able to retrieve the actual float value and the \"number format\" and apply the format to the float value. If the \"number format\" being used is one of the standard ones, this should be easy enough to do. Customised number formats are another matter. Locale-dependant formats likewise.\nTo get detailed help, you will need to give examples of what raw data you have got and what you want to \"see\" and how it is now being presented (\"changing date format\").","Q_Score":0,"Tags":"python,openpyxl","A_Id":9757506,"CreationDate":"2012-03-18T09:49:00.000","Title":"How to read office 2010 excelfile using openpyxl without changing style","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating python app using relatively big SQL database (250k rows). Application needs GUI where most important part of it would be to present results of SQL queries.\nSo I'm looking for a best way to quickly present data in tables in GUI.\nMost preferably I'd be using wx - as it has seamless connection to main application I'm working with. And what I need is least effort between SQL query a and populating GUI table. \nI used once wx.grid, but it seemed to be limited functionality. Also I know of wx.grid.pygridtablebase - what is the difference?\nWhat would be easiest way to do this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":420,"Q_Id":9762841,"Users Score":1,"Answer":"You could use wx.grid or one of the ListCtrls. There's an example of a grid with 100 million cells in the wxPython demo that you could use for guidance on projects with lots of information. For ListCtrls, you would want to use a Virtual ListCtrl using the wx.LC_VIRTUAL flag. 
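A rough sketch of that virtual approach, assuming the rows have already been fetched from SQLite into a plain list of tuples, might look like:

import wx

class ResultList(wx.ListCtrl):
    # Virtual list: wx only asks for the cells it actually needs to draw.
    def __init__(self, parent, rows):
        wx.ListCtrl.__init__(self, parent, style=wx.LC_REPORT | wx.LC_VIRTUAL)
        self.rows = rows                      # e.g. the result of cursor.fetchall()
        self.InsertColumn(0, "Name")
        self.InsertColumn(1, "Value")
        self.SetItemCount(len(rows))          # tell the control how many rows exist

    def OnGetItemText(self, item, col):
        # Called on demand for visible cells only, so large result sets stay cheap.
        return str(self.rows[item][col])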
There's an example of that in the demo as well.","Q_Score":1,"Tags":"python,sqlite,user-interface,wxpython,wxwidgets","A_Id":9771997,"CreationDate":"2012-03-18T22:22:00.000","Title":"Most seamless way to present data in gui","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":">>> _cursor.execute(\"select * from bitter.test where id > 34\")\n 1L\n >>> _cursor.fetchall()\n ({'priority': 1L, 'default': 0, 'id': 35L, 'name': 'chinanet'},)\n >>> _cursor.execute(\"select * from bitter.test where id > 34\")\n 1L\n >>> _cursor.fetchall()\n ({'priority': 1L, 'default': 0, 'id': 35L, 'name': 'chinanet'},)\n >>> \n\n\nthe first time, i run cursor.execute and cursor.fetchall, i got the right result.\nbefore the second time i run execute and fetchall\ni insert data into mysql which id id 36, i also run commit command in mysql\nbut cursor.execute\/fetchall counld only get the data before without new data","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":677,"Q_Id":9764963,"Users Score":2,"Answer":"I guess you're using InnoDB. This is default for an InnoDB transaction.\n\nREPEATABLE READ\nThis is the default isolation level for InnoDB. For consistent reads,\n there is an important difference from the READ COMMITTED isolation\n level: All consistent reads within the same transaction read the\n snapshot established by the first read. This convention means that if\n you issue several plain (nonlocking) SELECT statements within the same\n transaction, these SELECT statements are consistent also with respect\n to each other. See Section 13.2.8.2, \u201cConsistent Nonlocking Reads\u201d.\n\nI haven't tested yet but forcing MySQLdb to start a new transaction by issuing a commit() on the current connection or create a new connection might solve the issue.","Q_Score":2,"Tags":"python,mysql-python","A_Id":9765239,"CreationDate":"2012-03-19T04:13:00.000","Title":"cursor fetch wrong records from mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a new way to connect to MySQL from Python with Mac OS X Lion (10.7.x)?\nAll the material I can find only seems to support Snow Leopard (10.6) and older.\nI've tried installing pyodbc, but can't get the odbc drivers to register with the operating system (maybe a 10.6 -> 10.7 compatibility issue?)","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":391,"Q_Id":9791587,"Users Score":0,"Answer":"Turns out the newest MySql_python worked great. just had to run sudo python setup.py install","Q_Score":1,"Tags":"python,mysql,macos","A_Id":9792010,"CreationDate":"2012-03-20T17:09:00.000","Title":"Python MySQL On Mac OS X Lion","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any alternative to xlrd, xlwt and xlutils for handling MS Excel in python? 
As far as I know, their licensing does not allow it to be used for commercial purposes and I was wondering if there are any alternatives to that other than using COM.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3189,"Q_Id":9805426,"Users Score":1,"Answer":"openpyxl is definitely worth a test drive, but keep in mind that it supports only XLSX files,\nwhile xlrd\/xlwt support only XLS files.","Q_Score":2,"Tags":"python","A_Id":10435892,"CreationDate":"2012-03-21T13:17:00.000","Title":"Alternative to xlrd, xlwt and xlutils in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a music catalogue application using PyQt to display the GUI. I have a problem with choosing a database engine. There are simply too many options.\nI can use:\n-PyQt built-in QSql\n-sqlite3\n-SQLAlchemy (Elixir)\n-SQLObject\n-Python DB-API\nProbably there are far more options, this list is what I got from google (I'm open to any other propositions). If I decide to use some ORM, which database system should I use? MySql, PostgreSQL or other? I know some MySql, but I heard a lot of good things about PostgreSQL, on the other hand sqlite3 seems to be most popular in desktop applications. I would be grateful for any advice.\nEDIT:\nThe application is meant to work on Linux and Windows. I think database size should be around 100-10k entries.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":475,"Q_Id":9859343,"Users Score":1,"Answer":"You would be better off using an ORM (Object-Relational Mapping) library that would allow you to design in an OOP way, and let it take care of the database for you. There are many advantages, but one of the greatest is that you won't be tied to a database engine. You can use sqlite for development, and keep your project compatible with PostgreSQL, MySQL and even Oracle DB, switching between them with a single change in a configuration parameter.\nGiven that, my ORM of choice is SQLAlchemy, due to its maturity, being well known and widely used (but others could be fine as well).","Q_Score":2,"Tags":"python,database,sqlite,sqlalchemy","A_Id":9860250,"CreationDate":"2012-03-25T10:09:00.000","Title":"Databases and python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a music catalogue application using PyQt to display the GUI. I have a problem with choosing a database engine. There are simply too many options.\nI can use:\n-PyQt built-in QSql\n-sqlite3\n-SQLAlchemy (Elixir)\n-SQLObject\n-Python DB-API\nProbably there are far more options, this list is what I got from google (I'm open to any other propositions). If I decide to use some ORM, which database system should I use? MySql, PostgreSQL or other? I know some MySql, but I heard a lot of good things about PostgreSQL, on the other hand sqlite3 seems to be most popular in desktop applications. I would be grateful for any advice.\nEDIT:\nThe application is meant to work on Linux and Windows. 
I think database size should be around 100-10k entries.","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":475,"Q_Id":9859343,"Users Score":4,"Answer":"SQLite3 has the advantage of shipping with Python, so it doesn't require any installation. It has a number of nice features (easy-of-use, portability, ACID, storage in a single file, and it is reasonably fast). \nSQLite makes a good starting-off point. Python's DB API assures a consistent interface to all the popular DBs, so it shouldn't be difficult to switch to another DB later if you change you mind. \nThe decision about whether to use an ORM is harder and it is more difficult to change your mind later. If you can isolate the DB access in just a few functions, then you may not need an ORM at all.","Q_Score":2,"Tags":"python,database,sqlite,sqlalchemy","A_Id":9859423,"CreationDate":"2012-03-25T10:09:00.000","Title":"Databases and python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small issue(for lack of a better word) with MySQL db. I am using Python.\nSo I have this table in which rows are inserted regularly. As regularly as 1 row \/sec.\nI run two Python scripts together. One that simulates the insertion at 1 row\/sec. I have also turned autocommit off and explicitly commit after some number of rows, say 10. \nThe other script is a simple \"SELECT count(*) ...\" query on the table. This query doesn't show me the number of rows the table currently has. It is stubbornly stuck at whatever number of rows the table had initially when the script started running. I have even tried \"SELECT SQL_NO_CACHE count(*) ...\" to no effect.\nAny help would be appreciated.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":392,"Q_Id":9866319,"Users Score":1,"Answer":"If autocommit is turned off in the reader as well, then it will be doing the reads inside a transaction and thus not seeing the writes the other script is doing.","Q_Score":2,"Tags":"python,mysql","A_Id":9868793,"CreationDate":"2012-03-26T03:37:00.000","Title":"Python MySQL- Queries are being unexpectedly cached","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small issue(for lack of a better word) with MySQL db. I am using Python.\nSo I have this table in which rows are inserted regularly. As regularly as 1 row \/sec.\nI run two Python scripts together. One that simulates the insertion at 1 row\/sec. I have also turned autocommit off and explicitly commit after some number of rows, say 10. \nThe other script is a simple \"SELECT count(*) ...\" query on the table. This query doesn't show me the number of rows the table currently has. It is stubbornly stuck at whatever number of rows the table had initially when the script started running. I have even tried \"SELECT SQL_NO_CACHE count(*) ...\" to no effect.\nAny help would be appreciated.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":392,"Q_Id":9866319,"Users Score":0,"Answer":"My guess is that either the reader or writer (most likely the writer) is operating inside a transaction which hasn't been committed. 
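With MySQLdb and the default REPEATABLE READ level, the reader keeps seeing the snapshot from its first SELECT until its own transaction ends; a minimal sketch of a reader that closes that snapshot before every poll (connection details and table name are made up) would be:

import time
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
cur = conn.cursor()
while True:
    conn.rollback()                               # end the old transaction/snapshot
    cur.execute("SELECT COUNT(*) FROM my_table")  # hypothetical table name
    print cur.fetchone()[0]                       # now reflects the writer's commits
    time.sleep(1)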
Try ensuring that the writer is committing after each write, and try a ROLLBACK from the reader to make sure that it isn't inside a transaction either.","Q_Score":2,"Tags":"python,mysql","A_Id":9867231,"CreationDate":"2012-03-26T03:37:00.000","Title":"Python MySQL- Queries are being unexpectedly cached","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have several Shelve i.e. .db files that I wish to merge together into one single database.\nThe only method I could think of was to iterate through each database rewriting each iteration to the new database, but this takes too long.\nIs there a better way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":320,"Q_Id":9915062,"Users Score":0,"Answer":"Shelves are mappings, and mappings have an update() method.","Q_Score":0,"Tags":"python,database,shelve","A_Id":9915108,"CreationDate":"2012-03-28T20:20:00.000","Title":"How can I merge Shelve files\/databases?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some *.xls (excel 2003) files\uff0c and I want to convert those files into xlsx (excel 2007).\nI use the uno python package, when I save the documents,\nI can set the Filter name: MS Excel 97\nBut there is no Filter name like 'MS Excel 2007',\nHow can set the the filter name to convert xls to xlsx ?","AnswerCount":17,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99140,"Q_Id":9918646,"Users Score":0,"Answer":"This is a solution for MacOS with old xls files (e.g. Excel 97 2004).\nThe best way I found to deal with this format, if excel is not an option, is to open the file in openoffice and save it to another format as csv files.","Q_Score":43,"Tags":"python,uno","A_Id":67111357,"CreationDate":"2012-03-29T03:20:00.000","Title":"how to convert xls to xlsx","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to create Excel spreadsheet with nice format from Python. I thought of doing it by: \n\nI start in Excel as it is very easy to format: I write in Excel the\nmodel I want, with the good format\nI read this from Python\nI create from Python an Excel spreadsheet with the same format\n\nIn the end, the purpose is to create from Python Excel spreadsheets, but formatting with xlwt takes a lot of time, so I thought of formatting first in Excel to help. \nI have researched for easy ways to doing this but haven't found any. I can stick to my current working solution, using xlwt in Python to create formatted Excel, but it is quite awkward to use. \nThanks for any reply","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4524,"Q_Id":9920935,"Users Score":0,"Answer":"You said:\n\nformatting with xlwt takes a lot of time\n\nand \n\nit is quite awkward to use\n\nPerhaps you are not using easyxf? 
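With easyxf the formatting code typically collapses into something short; a sketch (the style strings and file name here are only examples):

import xlwt

book = xlwt.Workbook()
sheet = book.add_sheet('Report')
# easyxf turns a short, readable string into a full XFStyle object
header = xlwt.easyxf('font: bold on; align: horiz center')
money = xlwt.easyxf(num_format_str='#,##0.00')
sheet.write(0, 0, 'Total', header)
sheet.write(1, 0, 1234.5, money)
book.save('report.xls')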
If so, check out the tutorial that you can access via www.python-excel.org, and have a look at examples\/xlwt_easyxf_simple_demo.py in your xlwt installation.","Q_Score":3,"Tags":"python,excel,format,xlwt","A_Id":10001613,"CreationDate":"2012-03-29T07:32:00.000","Title":"Easily write formatted Excel from Python: Start with Excel formatted, use it in Python, and regenerate Excel from Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to ask you what programming language I should use to develop a horizontally scalable database. I don't care too much about performance.\nCurrently, I only know PHP and Python, but I wonder if Python is good for scalability.\nOr is this even possible in Python?\nThe reason I don't use an existing system is that I need deep insight into the system, and there is no database out there that can store indexes the way I want. (It's a mix of non relational, sparse free multidimensional, and graph design)\nEDIT:\nI already have most of the core code written in Python and investigated ways to improve adding data for that type of database design, which limits the use of other databases even more.\nEDIT 2:\nForgot to note, the database tables are several hundred gigabytes.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":284,"Q_Id":9927372,"Users Score":1,"Answer":"The development of a scalable database is language independent. I cannot say much about PHP, but I can tell you good things about Python: it's easy to read, easy to learn, etc. In my opinion it makes the code much cleaner than other languages.","Q_Score":0,"Tags":"python,programming-languages,database-programming","A_Id":9927520,"CreationDate":"2012-03-29T14:25:00.000","Title":"Programming a scalable database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to ask you what programming language I should use to develop a horizontally scalable database. I don't care too much about performance.\nCurrently, I only know PHP and Python, but I wonder if Python is good for scalability.\nOr is this even possible in Python?\nThe reason I don't use an existing system is that I need deep insight into the system, and there is no database out there that can store indexes the way I want. (It's a mix of non relational, sparse free multidimensional, and graph design)\nEDIT:\nI already have most of the core code written in Python and investigated ways to improve adding data for that type of database design, which limits the use of other databases even more.\nEDIT 2:\nForgot to note, the database tables are several hundred gigabytes.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":284,"Q_Id":9927372,"Users Score":0,"Answer":"Since this is clearly a request for \"opinion\", I thought I'd offer my $.02.\nWe looked at MongoDB 12 months ago, and started to really like it...but for one issue. MongoDB limits the largest database to the amount of physical RAM installed on the MongoDB server. For our tests, this meant we were limited to 4 GB databases. This didn't fit our needs, so we walked away (too bad really, because Mongo looked great).\nWe moved back to home turf, and went with PostgreSQL for our project. 
It is an exceptional system, with lots to like.\nBut we've kept an eye on the NoSQL crowd ever since, and it looks like Riak is doing some really interesting work. \n(fyi -- it's also possible the MongoDB project has resolved the DB size issue -- we haven't kept up with that project).","Q_Score":0,"Tags":"python,programming-languages,database-programming","A_Id":9927811,"CreationDate":"2012-03-29T14:25:00.000","Title":"Programming a scalable database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to ask you what programming language I should use to develop a horizontally scalable database. I don't care too much about performance.\nCurrently, I only know PHP and Python, but I wonder if Python is good for scalability.\nOr is this even possible in Python?\nThe reasons I don't use an existing system is, I need deep insight into the system, and there is no database out there that can store indexes the way I want. (It's a mix of non relational, sparse free multidimensional, and graph design)\nEDIT:\nI already have most of the core code written in Python and investigated ways to improve adding data for that type of database design, what limits the use of other databases even more.\nEDIT 2:\nForgot to note, the database tables are several hundred gigabytes.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":284,"Q_Id":9927372,"Users Score":0,"Answer":"Betweent PHP & Python, definitely Python. Where I work, the entire system is written in Python and it scales quite well.\np.s.: Do take a look at Mongo Db though.","Q_Score":0,"Tags":"python,programming-languages,database-programming","A_Id":9927445,"CreationDate":"2012-03-29T14:25:00.000","Title":"Programming a scalable database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a web application build in Django + Python that interact with web services (written in JAVA).\nNow all the database management part is done by web-services i.e. all CRUD operations to actual database is done by web-services.\n\nNow i have to track all User Activities done on my website in some log table.\nLike If User posted a new article, then a new row is created into Articles table by web-services and side by side, i need to add a new row into log table , something like \"User : Raman has posted a new article (with ID, title etc)\"\nI have to do this for all Objects in my database like \"Article\", \"Media\", \"Comments\" etc\n\nNote : I am using PostgreSQL\n\nSo what is the best way to achieve this..?? (Should I do it in PostgreSQL OR JAVA ..??..And How..??)","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2693,"Q_Id":9942206,"Users Score":0,"Answer":"In your log table you can have various columns, including:\n\nuser_id (the user that did the action)\nactivity_type (the type of activity, such as view or commented_on)\nobject_id (the actual object that it concerns, such as the Article or Media)\nobject_type (the type of object; this can be used later, in combination with object_id to lookup the object in the database)\n\nThis way, you can keep track of all actions the users do. 
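Purely to illustrate those columns, here is a hedged sketch as a Django model (the timestamp field is an extra assumption, and in this setup the real table might just as well be created by the Java web services that own the schema):

from django.db import models

class ActivityLog(models.Model):
    user_id = models.IntegerField()                        # the user that did the action
    activity_type = models.CharField(max_length=50)        # e.g. 'posted', 'commented_on'
    object_id = models.IntegerField()                      # id of the Article, Media, ...
    object_type = models.CharField(max_length=50)          # which table object_id refers to
    created_at = models.DateTimeField(auto_now_add=True)   # assumption: when it happened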
You'd need to update this table whenever something happens that you wish to track.","Q_Score":0,"Tags":"java,python,django,postgresql,user-activity","A_Id":9942327,"CreationDate":"2012-03-30T11:39:00.000","Title":"How to store all user activites in a website..?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web application build in Django + Python that interact with web services (written in JAVA).\nNow all the database management part is done by web-services i.e. all CRUD operations to actual database is done by web-services.\n\nNow i have to track all User Activities done on my website in some log table.\nLike If User posted a new article, then a new row is created into Articles table by web-services and side by side, i need to add a new row into log table , something like \"User : Raman has posted a new article (with ID, title etc)\"\nI have to do this for all Objects in my database like \"Article\", \"Media\", \"Comments\" etc\n\nNote : I am using PostgreSQL\n\nSo what is the best way to achieve this..?? (Should I do it in PostgreSQL OR JAVA ..??..And How..??)","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2693,"Q_Id":9942206,"Users Score":1,"Answer":"So, you have UI <-> Web Services <-> DB\nSince the web services talk to the DB, and the web services contain the business logic (i.e. I guess you validate stuff there, create your queries and execute them), then the best place to 'log' activities is in the services themselves.\nIMO, logging PostgreSQL transactions is a different thing. It's not the same as logging 'user activities' anymore.\nEDIT: This still means you create DB schema for 'logs' and write them to DB.\nSecond EDIT: Catching log worthy events in the UI and then logging them from there might not be the best idea either. You will have to rewrite logging if you ever decide to replace the UI, or for example, write an alternate UI for, say mobile devices, or something else.","Q_Score":0,"Tags":"java,python,django,postgresql,user-activity","A_Id":9942819,"CreationDate":"2012-03-30T11:39:00.000","Title":"How to store all user activites in a website..?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to implement an script that reads the content(files and folders) of a certain directory and writes it in a database. My goal is create an software that allows me to organize those files and folders relating description and tags to them, without affecting the correspondig physical files in the disk.\nBut for now I'm facing a logical problem: How do I make a direct connection between that physical file and the database register? I want that, even if the physical file, for some reason, is edited or moved to another folder inside the root directory, the software is still able to relate that file with its original register in the database.\nMy first idea was to use a checksum hash to identify every file but, I'm guessing that if the file is edited, so does the hash, doesn't it? 
Besides that, I also think that a folder itself can't be checked that way.\nAnother solution that came to my mind was adding a unique key at the beginning of every file and folder name in the directory. That may work, but it seems to me like an improvised solution and, therefore, I'm hoping that there may be another way to do it that I haven't considered yet.\nDoes anyone have any advice on that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":114,"Q_Id":9977888,"Users Score":0,"Answer":"You can't.\nIt looks like there is no way to identify the file: neither by content nor by pathname.\nOne workaround might be: use the path as the id (and use it as the reference in the DB) and do not use system tools (like mv) to move files but your own script which updates the file system and the database.","Q_Score":0,"Tags":"php,python,database,windows,linux","A_Id":9978197,"CreationDate":"2012-04-02T14:00:00.000","Title":"How to link a file to a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am requesting a web page and want to cache the page data as a raw html string. (First I escaped the data string.) I use sqlite3 to save my data. When I tried to give the byte_string in a dictionary or tuple, using placeholders in the request, it raised a \"Programming Error\" saying to convert the application to use unicode strings. I save it as the SQLITE3 TEXT datatype.\nI tried data.encode(\"utf-8\") and encode(\"utf-8\"); both raise the same error:\nUnicodeDecodeError: 'utf8' codec can't decode byte 0xf6 in position 11777: invalid start byte\nI know it contains a strange character, this character is '\u00f6'. How can I solve this problem?\nDo I need to use the BLOB datatype of sqlite3?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":9991854,"Users Score":0,"Answer":"You should .decode with the correct encoding. In this case Latin 1 or CP1252. 
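A small sketch of that, assuming the page really is CP1252 and a hypothetical pages table:

import sqlite3

def store_page(db_path, raw_html):
    # raw_html is the byte string from the HTTP response; decode it first
    text = raw_html.decode('cp1252')          # 0xf6 becomes u'\xf6', a proper unicode object
    conn = sqlite3.connect(db_path)
    conn.execute("INSERT INTO pages (html) VALUES (?)", (text,))
    conn.commit()
    conn.close()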
\u00bb\u00f6\u00ab is obviously not 0xf6 in UTF-8 so why should it work?","Q_Score":0,"Tags":"python,unicode,utf-8,sqlite","A_Id":9991929,"CreationDate":"2012-04-03T10:57:00.000","Title":"How to convert a stringbyte(raw html string) to sqlite3 TEXT supporting unicode in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Ok so I have a script that connects to a mssql db and i need to run as a service which I have already accomplished but when I run it as a service it overrides my credentials that I have put in when i connect to the db with the ad computer account.\nIt runs perfect when i run it on its own and not as a service.\nMy Connection String is:\n\n'DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDB;UID=DOMAIN\\myusername;PWD=A;Trusted_Connection=True'\n\nThe Error is:\n\nError: ('28000', \"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'DOMAIN\\COMPUTERNAME')\n\nAny Advice?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":7735,"Q_Id":10000256,"Users Score":2,"Answer":"In the last project I worked on, I found that DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName is sufficient to initiate a db connection in trusted mode.\nIf it still does not work, it is probably either\n1) the account DEEPTHOUGHT on mssql server is not set up properly. \n2) the runAs in the service is not set up properly (why error message mentions 'ComputerName' instead of 'DEEPTHOUGHT'?)","Q_Score":4,"Tags":"python,sql-server,py2exe,pyodbc","A_Id":10001004,"CreationDate":"2012-04-03T19:46:00.000","Title":"Failed to Login as 'Domain\\ComputerName' pyodbc with py2exe","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a unique datastore key for users authenticated via openid with the python 2.7 runtime for the google apps engine. \nShould I use User.federated_identity() or User.federated_provider() + User.federated_identity()? \nIn other words is User.federated_identity() unique for ALL providers or just one specific provider?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":194,"Q_Id":10002209,"Users Score":2,"Answer":"User.federated_identity() \"Returns the user's OpenID identifier.\", which is unique by definition (it's a URL that uniquely identifies the user).","Q_Score":2,"Tags":"python,google-app-engine,authentication","A_Id":10023490,"CreationDate":"2012-04-03T22:13:00.000","Title":"Generating a unique data store key from a federated identity","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm writing a Oracle of Bacon type website that involves a breadth first search on a very large directed graph (>5 million nodes with an average of perhaps 30 outbound edges each). This is also essentially all the site will do, aside from display a few mostly text pages (how it works, contact info, etc.). I currently have a test implementation running in Python, but even using Python arrays to efficiently represent the data, it takes >1.5gb of RAM to hold the whole thing. 
Clearly Python is the wrong language for a low-level algorithmic problem like this, so I plan to rewrite most of it in C using the Python\/C bindings. I estimate that this'll take about 300 mb of RAM.\nBased on my current configuration, this will run through mod_wsgi in apache 2.2.14, which is set to use mpm_worker_module. Each child apache server will then load up the whole python setup (which loads the C extension) thus using 300 mb, and I only have 4gb of RAM. This'll take time to load and it seems like it'd potentially keep the number of server instances lower than it could otherwise be. If I understand correctly, data-heavy (and not client-interaction-heavy) tasks like this would typically get divorced from the server by setting up an SQL database or something of the sort that all the server processes could then query. But I don't know of a database framework that'd fit my needs.\nSo, how to proceed? Is it worth trying to set up a database divorced from the webserver, or in some other way move the application a step farther out than mod_wsgi, in order to maybe get a few more server instances running? If so, how could this be done? \nMy first impression is that the database, and not the server, is always going to be the limiting factor. It looks like the typical Apache mpm_worker_module configuration has ServerLimit 16 anyways, so I'd probably only get a few more servers. And if I did divorce the database from the server I'd have to have some way to run multiple instances of the database as well (I already know that just one probably won't cut it for the traffic levels I want to support) and make them play nice with the server. So I've perhaps mostly answered my own question, but this is a kind of odd situation so I figured it'd be worth seeing if anyone's got a firmer handle on it. Anything I'm missing? Does this implementation make sense? Thanks in advance!\nTechnical details: it's a Django website that I'm going to serve using Apache 2.2.14 on Ubuntu 10.4.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":214,"Q_Id":10017645,"Users Score":1,"Answer":"First up, look at daemon mode of mod_wsgi and don't use embedded mode as then you can control separate to Apache child processes the number of Python WSGI application processes. Secondly, you would be better off putting the memory hungry bits in a separate backend process. You might use XML-RPC or other message queueing system to communicate with the backend processes, or even perhaps see if you can use Celery in some way.","Q_Score":2,"Tags":"python,database,django,apache,mod-wsgi","A_Id":10020054,"CreationDate":"2012-04-04T19:06:00.000","Title":"Maximizing apache server instances with large mod_wsgi application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been doing some HA testing of our database and in my simulation of server death I've found an issue.\nMy test uses Django and does this:\n\nConnect to the database\nDo a query\nPull out the network cord of the server\nDo another query\n\nAt this point everything hangs indefinitely within the mysql_ping function. As far as my app is concerned it is connected to the database (because of the previous query), it's just that the server is taking a long time to respond...\nDoes anyone know of any ways to handle this kind of situation? 
connect_timeout doesn't work as I'm already connected. read_timeout seems like a somewhat too blunt instrument (and I can't even get that working with Django anyway).\nSetting the default socket timeout also doesn't work (and would be vastly too blunt as this would affect all socket operations and not just MySQL).\nI'm seriously considering doing my queries within threads and using Thread.join(timeout) to perform the timeout.\nIn theory, if I can do this timeout then reconnect logic should kick in and our automatic failover of the database should work perfectly (kill -9 on affected processes currently does the trick but is a bit manual!).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":181,"Q_Id":10018055,"Users Score":0,"Answer":"I would think this would be more inline with setting a read_timeout on your front-facing webserver. Any number of reasons could exist to hold up your django app indefinitely. While you have found one specific case there could be many more (code errors, cache difficulties, etc).","Q_Score":2,"Tags":"python,django,mysql-python","A_Id":10192810,"CreationDate":"2012-04-04T19:35:00.000","Title":"How can I detect total MySQL server death from Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am having a problem and not sure if this is possible at all, so if someone could point me in the right direction.\nI need to open a file from a webpage, open it in excel and save the file.\nThe problem I am running into the file name on the website has a file name ( not an active link ) and then it will have a \"download \" button that is not specific to the file I need to download. So instead of the download button being \"file1todaysdate\", they are nothing that I could use from day to day.\nIs there a way I could locate file name then grab the file from the download icon? then save in excel? If not sorry for wasting time.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1137,"Q_Id":10023418,"Users Score":0,"Answer":"Examine the Content-Disposition header of the response to discover what the server wants you to call the file.","Q_Score":2,"Tags":"python","A_Id":10023435,"CreationDate":"2012-04-05T06:05:00.000","Title":"Python File Download","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to do a parser, that reads several excel files. I need values usually at the bottom of a row where you find a sum of all upper elements. So the cell value is actually \"=sum()\" or \"A5*0.5\" lets say... To a user that opens this file with excel it appears like a number, which is fine. But if I try to read this value with ws.cell(x, y).value I do not get anything.\nSo my question is how to read this kind of fields with xlrd, if it is possible to read it like ws.cell(x, y).value or something similar?\nthanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2953,"Q_Id":10029641,"Users Score":0,"Answer":"As per the link for your question,I have posted above, the author of xlrd says, 'The work is 'in-progress' but is not likely to be available soon as the focus of xlrd lies elsewhere\". 
By this, I assume that there is nothing much you can do about it. Note: this is based on author's comment on Jan, 2011.","Q_Score":2,"Tags":"python,excel,xlrd","A_Id":10029868,"CreationDate":"2012-04-05T13:36:00.000","Title":"how to read formulas with xlrd","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have Python process 1 on machine 1 and Python process 2 on machine 2. Both processes are the same and process data sent by a load balancer.\nBoth processes need to interact with a database - in my case Postgres so each process needs to know what database it should talk to, it needs to have the right models on each machine etc. It's just too tightly coupled. \nThe ideal would be to have a separate process dealing with the database stuff like connections, keeping up with db model changes, requests to the databases etc. What my process 1 and process 2 should do is just say I have some JSON data that needs to be saved or updated on this table or I need this data in json format. \nMaybe I'm asking the impossible but is there any Python solution that would at least make life a little easier when it comes to having distributed processes interacting with relational databases in the most decoupled way possible?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":157,"Q_Id":10044862,"Users Score":1,"Answer":"If it's db connection information you're interested in, I recently wrote a service for this. Each process has token(s) set in configuration and uses those to query the service for db connection info. The data layer uses that info to create connections, no DSN's are stored. On the server side, you just maintain a dictionary of token->DSN mappings.\nYou could do connection pooling with bpgergo's suggestion, but you should still include an authentication or identification method. That way, if there's a network intrusion, malicious clients may not be able to impersonate one of the clients.\nThe service implementation is broken into a few parts:\n\nA RESTful service that supports calls of the form http:\/\/192.168.1.100\/getConnection?token=mytokenstring\nA key-value storage system that stores a mapping like {'mytokenstring': {'dbname': 'db', 'ip': '192.168.1.101', 'user': 'dbuser', 'password': 'password', ..}\n\nThis system shouldn't be on the front end network, but if your web tier is compromised, this approach doesn't buy you any protection for the db.\n\nA db object that on instantiation, retrieves a dsn using an appropriate token and creates a new db connection.\n\nYou should re-use this connection object for the rest of the page response if you can. The response time from the service will be fast, but there's a lot more overhead required for db connections.\n\n\nOnce implemented, some care is required for handing schema incompatibilities when switching the dsn info behind a token. 
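On the client side the lookup stays tiny; a sketch under the assumptions above (the URL and JSON field names come from the example mapping, and psycopg2 is assumed only because the question mentions Postgres):

import json
import urllib2
import psycopg2

def connect_from_token(token):
    # Ask the internal service for the DSN mapped to this token
    resp = urllib2.urlopen('http://192.168.1.100/getConnection?token=%s' % token)
    info = json.loads(resp.read())    # e.g. {'dbname': ..., 'ip': ..., 'user': ..., 'password': ...}
    return psycopg2.connect(host=info['ip'], dbname=info['dbname'],
                            user=info['user'], password=info['password'])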
You may be able to resolve this by pinning a token to a user session, etc.","Q_Score":1,"Tags":"python,database,distributed-computing","A_Id":10045103,"CreationDate":"2012-04-06T14:25:00.000","Title":"Any Python solution for having distributed processes interact with relational databases in the most decoupled way possible?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The documentation for Pandas has numerous examples of best practices for working with data stored in various formats.\nHowever, I am unable to find any good examples for working with databases like MySQL for example.\nCan anyone point me to links or give some code snippets of how to convert query results using mysql-python to data frames in Pandas efficiently ?","AnswerCount":13,"Available Count":1,"Score":0.0614608973,"is_accepted":false,"ViewCount":123656,"Q_Id":10065051,"Users Score":4,"Answer":"pandas.io.sql.frame_query is deprecated. Use pandas.read_sql instead.","Q_Score":97,"Tags":"python,pandas","A_Id":27531471,"CreationDate":"2012-04-08T18:01:00.000","Title":"python-pandas and databases like mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing an application designed for a secretary to use. She has a stack of hundreds of ballot forms which have a number of questions on them, and wishes to input this data into a program to show the total votes for each answer. Each question has a number of answers.\nFor example:\nQ: \"Re-elect current president of the board\"\nA: Choice between \"Yes\" or \"No\" or \"Neutral\"\nYear on year the questions can change, as well as the answers, but the current application used in the company is hard coded with the questions and answers of last year.\nMy aim is to create an app (in Django\/Python) which allows the secretary to add\/delete questions and answers as she wishes. I am relatively new to Django... I have created an app in University and know how to create basic models and implement the Twitter bootstrap for the GUI.\nBut I'm a little confused about how to enable the secretary to add custom fields in (which are obviously defined in SQL). Does anyone have any small tips on how to get started? 
By the way, I recognize that this could be achievable using the admin part of the website and would welcome any suggestions about that.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":10066573,"Users Score":5,"Answer":"You really don't want to implement each question\/answer as a separate DB field.\nInstead, make a table of questions and a table of answers, and have a field in the answers table (in general, a ForeignKey) to indicate which question a given answer is associated with.","Q_Score":0,"Tags":"python,sql,django","A_Id":10066588,"CreationDate":"2012-04-08T21:14:00.000","Title":"Adding field to SQL table from Django Application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to build a web application which uses Python, php and MongoDB.\n\nPython - For offline database population on my local home machine and then exporting the db to the VPS. Later I am planning to schedule this job using cron.\nPHP - For web scripting.\n\nThe VPS I wish to buy supports Python and LAMP Stack but not mongoDB (myhosting.com LAMP stack VPS) by default. Now since mongoDB isn't supported by default, I would have to install mongoDB manually on the VPS. So what I want to know is: had my VPS supported mongoDB by default, would I have benefited in terms of performance and scalability?\nAlso, can someone please suggest a VPS suitable for my case.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":810,"Q_Id":10073934,"Users Score":1,"Answer":"If the vps you are looking at restricts the packages you can install, and you need something that they prohibit, I would look for another vps. Both Rackspace and Amazon offer a range of instances, and numerous supported OSes. With either of them you choose your operating system and are free to install whatever you want.","Q_Score":1,"Tags":"php,python,mongodb,vps","A_Id":10074035,"CreationDate":"2012-04-09T13:28:00.000","Title":"Performance of MongoDB on VPS or cloud service not having mongoDB installed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a desktop python application whose data backend is a MySQL database, but whose previous database was a network-accessed xml file(s). When it was xml-powered, I had a thread spawned at the launch of the application that would simply check the xml file for changes and whenever the date modified changed (due to any user updating it), the app would refresh itself so multiple users could use and see the changes of the app as they went about their business.\nNow the program has matured and is venturing toward an online presence so it can be used anywhere. Xml is out the window and I'm using MySQL with SQLAlchemy as the database access method. The plot thickens, however, because the information is no longer stored in one xml file but rather it is split into multiple tables in the SQL database. This complicates the idea of some sort of 'last modified' table value or structure. Thus the question, how do you inform the users that the data has changed and the app needs to refresh? 
Here are some of my thoughts:\n\nEach table needs a last-modified column (this seems like the worst option ever)\nA separate table that holds some last modified column?\nSome sort of push notification through a server?\nIt should be mentioned that I have the capability of running perhaps a very small python script on the same server hosting the SQL db that perhaps the app could connect to and (through sockets?) it could pass information to and from all connected clients?\n\nSome extra information:\n\nThe information passed back and forth would be pretty low-bandwidth. Mostly text with the potential of some images (rarely over 50k).\nNumber of clients at present is very small, in the tens. But the project could be picked up by some bigger companies with client numbers possibly getting into the hundreds. Even still the bandwidth shouldn't be a problem for the foreseeable future.\n\nAnyway, somewhat new territory for me, so what would you do? Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":450,"Q_Id":10091108,"Users Score":0,"Answer":"As I understand this is not a client-server application, but rather an application that has a common remote storage.\nOne idea would be to change to web services (this would solve most of your problems on the long run).\nAnother idea (if you don't want to switch to web) is to refresh periodically the data in your interface by using a timer.\nAnother way (and more complicated) would be to have a server that receives all the updates, stores them in the database and then pushes the changes to the other connected clients.\nThe first 2 ideas you mentioned will have maintenance, scalability and design uglyness issues.\nThe last 2 are a lot better in my opinion, but I still stick to web services as being the best.","Q_Score":0,"Tags":"python,mysql,notifications","A_Id":10091535,"CreationDate":"2012-04-10T14:55:00.000","Title":"Best way to inform user of an SQL Table Update?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Looking for any advice I can get. \nI have 16 virtual CPUs all writing to a single remote MongoDB server. The machine that's being written to is a 64-bit machine with 32GB RAM, running Windows Server 2008 R2. After a certain amount of time, all the CPUs stop cold (no gradual performance reduction), and any attempt to get a Remote Desktop Connection hangs. \nI'm writing from Python via pymongo, and the insert statement is \"[collection].insert([document], safe=True)\"\nI decided to more actively monitor my server as the distributed write job progressed, remoting in from time to time and checking the Task Manager. What I see is a steady memory creep, from 0.0GB all the way up to 29.9GB, in a fairly linear fashion. My leading theory is therefore that my writes are filling up the memory and eventually overwhelming the machine. \nAm I missing something really basic? I'm new to MongoDB, but I remember that when writing to a MySQL database, inserts are typically followed by commits, where it's the commit statement that actually makes sure the record is written. 
Here I'm not doing any commits...?\nThanks,\nDave","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":10114431,"Users Score":0,"Answer":"Try it with journaling turned off and see if the problem remains.","Q_Score":0,"Tags":"mongodb,python-2.7,windows-server-2008-r2,pymongo,distributed-transactions","A_Id":10157192,"CreationDate":"2012-04-11T21:43:00.000","Title":"Distributed write job crashes remote machine with MongoDB server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We moved our SQL Server 2005 database to a new physical server, and since then it has been terminating any connection that persist for 30 seconds.\nWe are experiencing this in Oracle SQL developer and when connecting from python using pyodbc\nEverything worked perfectly before, and now python returns this error after 30 seconds:\n('08S01', '[08S01] [FreeTDS][SQL Server]Read from the server failed (20004) (SQLExecDirectW)')","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":549,"Q_Id":10145201,"Users Score":1,"Answer":"First of all what you need is profile the sql server to see if any activity is happening. Look for slow running queries, CPU and memory bottlenecks. \nAlso you can include the timeout in the querystring like this: \n\"Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=SSPI;Connection Timeout=30\"; \nand extend that number if you want.\nBut remember \"timeout\" doesn't means time connection, this is just the time to wait while trying to establish a connection before terminating.\nI think this problem is more about database performance or maybe a network issue.","Q_Score":0,"Tags":"python,sql,sql-server,sql-server-2005,oracle-sqldeveloper","A_Id":10145890,"CreationDate":"2012-04-13T17:06:00.000","Title":"SQL Server 2005 terminating connections after 30 sec","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've extended sorl-thumbnail's KVStoreBase class, and made a key-value backend that uses a single MongoDB collection.\nThis was done in order to avoid installing a discrete key-value store (e.g. Redis).\nShould I clear the collection every once in a while?\nWhat are the downsides?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":298,"Q_Id":10146087,"Users Score":0,"Answer":"Only clear the collection if low disk usage is more important to you than fast access times.\nThe downsides are that your users will all hit un-cached thumbs simultaneously (And simultaneously begin recomputing them).\nJust run python manage.py thumbnail cleanup\n\nThis cleans up the Key Value Store from stale cache. It removes references to images that do not exist and thumbnail references and their actual files for images that do not exist. 
It removes thumbnails for unknown images.","Q_Score":2,"Tags":"django,mongodb,python-imaging-library,sorl-thumbnail","A_Id":11557675,"CreationDate":"2012-04-13T18:11:00.000","Title":"Using sorl-thumbnail with MongoDB storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently writing a script in Python which uploads data to a localhost MySql DB. I am now looking to relocate this MySql DB to a remote server with a static IP address. I have a web hosting facility but this only allows clients to connect to the MySql DB if I specify the domain \/ IP address from which clients will connect. My Python script will be ran on a number of computers that will connect via a mobile broadband dongle and therefore, the IP addresses will vary on a day-to-day basis as the IP address is allocated dynamically.\nAny suggestions on how to overcome this issue either with my web hosting facility (cPanel) or alternatively, any suggestions on MySql hosting services that allow remote access from any IP addresses (assuming they successfully authenticate with passwords etc...) Would SSH possibly address this and allow me to transmit data?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":1360,"Q_Id":10157380,"Users Score":2,"Answer":"Go to Cpanel and add the wildcard % on remote Mysql Connection options (cPanel > Remote MySQL)","Q_Score":0,"Tags":"python,mysql,mysql-python","A_Id":10157409,"CreationDate":"2012-04-14T21:07:00.000","Title":"Remote Access to MySql DB (Hosting Options)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to read phonenumber field from xls using xlrd (python). But, I always get float no.\ne.g. I get phone number as 8889997777.0\nHow can I get rid of floating format and convert it to string to store it in my local mongodb within python as string as regular phone number e.g. 8889997777","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2371,"Q_Id":10169949,"Users Score":0,"Answer":"Did you try using int(phoneNumberVar) or in your case int(8889997777.0)?","Q_Score":2,"Tags":"python,string,floating-point,xls,xlrd","A_Id":10169963,"CreationDate":"2012-04-16T07:06:00.000","Title":"python xlrd reading phone nunmber from xls becomes float","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to read phonenumber field from xls using xlrd (python). But, I always get float no.\ne.g. I get phone number as 8889997777.0\nHow can I get rid of floating format and convert it to string to store it in my local mongodb within python as string as regular phone number e.g. 8889997777","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":2371,"Q_Id":10169949,"Users Score":4,"Answer":"You say:\n\npython xlrd reading phone nunmber from xls becomes float\n\nThis is incorrect. It is already a float inside your xls file. 
xlrd reports exactly what it finds.\nYou can use str(int(some_float_value)) to do what you want to do.","Q_Score":2,"Tags":"python,string,floating-point,xls,xlrd","A_Id":10170261,"CreationDate":"2012-04-16T07:06:00.000","Title":"python xlrd reading phone nunmber from xls becomes float","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to know if somebody knows a way to customize the csv output in htsql, and especially the delimiter and the encoding ?\nI would like to avoid iterating over each result and find a way through configuration and\/or extensions.\nThank in advance.\nAnthony","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":170,"Q_Id":10205990,"Users Score":3,"Answer":"If you want TAB as a delimiter, use tsv format (e.g. \/query\/:tsv instead of \/query\/:csv).\nThere is no way to specify the encoding other than UTF-8. You can reencode the output manually on the client.","Q_Score":1,"Tags":"python,sql,htsql","A_Id":10210348,"CreationDate":"2012-04-18T08:52:00.000","Title":"Customizing csv output in htsql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new in OpenERP, I have installed OpenERP v6. I want to know how can I insert data in database? Which files I have to modify to do the job? (files for the SQL code)","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":794,"Q_Id":10208147,"Users Score":0,"Answer":"OpenERP works with PostgreSQl as the Back-end Structure.\nPostgresql is managed by pgadmin3 (Postgres GUI),you can write sql queries there and can add\/delete records from there.\nIt is not advisable to insert\/remove data directly into Database!!!!","Q_Score":2,"Tags":"python,postgresql,openerp","A_Id":10208766,"CreationDate":"2012-04-18T11:10:00.000","Title":"OpenERP: insert Data code","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new in OpenERP, I have installed OpenERP v6. I want to know how can I insert data in database? Which files I have to modify to do the job? (files for the SQL code)","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":794,"Q_Id":10208147,"Users Score":0,"Answer":"The addition of columns in the .py files of the corresponding modules you want to chnage will insert coumns to the pgadmin3 also defenition of classes will create tables...when the fields are displayed in xml file and values are entered to the fields through the interface the values get stored to the table values to the database...","Q_Score":2,"Tags":"python,postgresql,openerp","A_Id":10225346,"CreationDate":"2012-04-18T11:10:00.000","Title":"OpenERP: insert Data code","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web application that has been done using Cakephp with MySql as the DB. 
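Putting the xlrd answer above into a complete snippet, under the assumption that the phone number sits in the first row, second column of the first sheet (the file name and cell position are placeholders):

```python
import xlrd

book = xlrd.open_workbook("contacts.xls")
sheet = book.sheet_by_index(0)
raw = sheet.cell_value(0, 1)    # comes back as a float, e.g. 8889997777.0
phone = str(int(raw))           # "8889997777", ready to store as a string
print(phone)
```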
The webapp also exposes a set of web services that get and update data to the MySQL DB. I will like to extend the app to provide a fresh set of web services but will like to use a python based framework like web2py\/django etc. Since both will be working of the same DB will it cause any problems? The reason I want to do it is because the initial app\/web services was done by somebody else and now I want to extend it and am more comfortable using python\/web2py that php\/cakephp.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":10233187,"Users Score":0,"Answer":"This is one of the reasons to use RDBMS to provide access for different users and applications to the same data. There should absolutely no problem with this.","Q_Score":0,"Tags":"php,python,mysql,django,cakephp","A_Id":10233231,"CreationDate":"2012-04-19T17:08:00.000","Title":"Same MySql DB working with a php and a python framework","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to drop a few tables with the \"DROP TABLE\" command but for a unknown reason, the program just \"sits\" and doesn't delete the table that I want it to in the database.\nI have 3 tables in the database:\nProduct, Bill and Bill_Products which is used for referencing products in bills.\nI managed to delete\/drop Product, but I can't do the same for bill and Bill_Products.\nI'm issuing the same \"DROP TABLE Bill CASCADE;\" command but the command line just stalls. I've also used the simple version without the CASCADE option.\nDo you have any idea why this is happening?\nUpdate:\nI've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.\nSo, for that matter i issued a simple SELECT * from Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to last such a long time when there's an empty table) it printed out the table and it's contents, which are none. (so apparently there are no references left from Products to Bill).","AnswerCount":8,"Available Count":4,"Score":0.1243530018,"is_accepted":false,"ViewCount":55426,"Q_Id":10317114,"Users Score":5,"Answer":"Had the same problem.\nThere were not any locks on the table.\nReboot helped.","Q_Score":42,"Tags":"python,database,django,postgresql","A_Id":19072541,"CreationDate":"2012-04-25T13:50:00.000","Title":"Postgresql DROP TABLE doesn't work","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to drop a few tables with the \"DROP TABLE\" command but for a unknown reason, the program just \"sits\" and doesn't delete the table that I want it to in the database.\nI have 3 tables in the database:\nProduct, Bill and Bill_Products which is used for referencing products in bills.\nI managed to delete\/drop Product, but I can't do the same for bill and Bill_Products.\nI'm issuing the same \"DROP TABLE Bill CASCADE;\" command but the command line just stalls. 
I've also used the simple version without the CASCADE option.\nDo you have any idea why this is happening?\nUpdate:\nI've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.\nSo, for that matter i issued a simple SELECT * from Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to last such a long time when there's an empty table) it printed out the table and it's contents, which are none. (so apparently there are no references left from Products to Bill).","AnswerCount":8,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":55426,"Q_Id":10317114,"Users Score":2,"Answer":"Old question but ran into a similar issue. Could not reboot the database so tested a few things until this sequence worked :\n\ntruncate table foo;\ndrop index concurrently foo_something; times 4-5x\nalter table foo drop column whatever_foreign_key; times 3x\nalter table foo drop column id;\ndrop table foo;","Q_Score":42,"Tags":"python,database,django,postgresql","A_Id":40749694,"CreationDate":"2012-04-25T13:50:00.000","Title":"Postgresql DROP TABLE doesn't work","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to drop a few tables with the \"DROP TABLE\" command but for a unknown reason, the program just \"sits\" and doesn't delete the table that I want it to in the database.\nI have 3 tables in the database:\nProduct, Bill and Bill_Products which is used for referencing products in bills.\nI managed to delete\/drop Product, but I can't do the same for bill and Bill_Products.\nI'm issuing the same \"DROP TABLE Bill CASCADE;\" command but the command line just stalls. I've also used the simple version without the CASCADE option.\nDo you have any idea why this is happening?\nUpdate:\nI've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.\nSo, for that matter i issued a simple SELECT * from Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to last such a long time when there's an empty table) it printed out the table and it's contents, which are none. (so apparently there are no references left from Products to Bill).","AnswerCount":8,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":55426,"Q_Id":10317114,"Users Score":0,"Answer":"The same thing happened for me--except that it was because I forgot the semicolon. 
face palm","Q_Score":42,"Tags":"python,database,django,postgresql","A_Id":69412889,"CreationDate":"2012-04-25T13:50:00.000","Title":"Postgresql DROP TABLE doesn't work","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to drop a few tables with the \"DROP TABLE\" command but for a unknown reason, the program just \"sits\" and doesn't delete the table that I want it to in the database.\nI have 3 tables in the database:\nProduct, Bill and Bill_Products which is used for referencing products in bills.\nI managed to delete\/drop Product, but I can't do the same for bill and Bill_Products.\nI'm issuing the same \"DROP TABLE Bill CASCADE;\" command but the command line just stalls. I've also used the simple version without the CASCADE option.\nDo you have any idea why this is happening?\nUpdate:\nI've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.\nSo, for that matter i issued a simple SELECT * from Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to last such a long time when there's an empty table) it printed out the table and it's contents, which are none. (so apparently there are no references left from Products to Bill).","AnswerCount":8,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":55426,"Q_Id":10317114,"Users Score":4,"Answer":"I ran into this today, I was issuing a:\nDROP TABLE TableNameHere\nand getting ERROR: table \"tablenamehere\" does not exist. I realized that for case-sensitive tables (as was mine), you need to quote the table name:\nDROP TABLE \"TableNameHere\"","Q_Score":42,"Tags":"python,database,django,postgresql","A_Id":60367779,"CreationDate":"2012-04-25T13:50:00.000","Title":"Postgresql DROP TABLE doesn't work","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ive been reading the Django Book and its great so far, unless something doesn't work properly. I have been trying for two days to install the psycogp2 plugin with no luck.\ni navigate to the unzipped directory and run setup.py install and it returns \"You must have postgresql dev for building a serverside extension or libpq-dev for client side.\"\nI don't know what any of this means, and google returns results tossing a lot of terms I don't really understand.\nIve been trying to learn django for abut a week now plus linux so any help would be great. Thanks\nBtw, I have installed postgresql and pgadminIII from installer pack. \nI also tried sudo apt-get post.... 
and some stuff happens...but Im lost.","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":4157,"Q_Id":10321568,"Users Score":-1,"Answer":"sudo apt-get install python-psycopg2 should work fine since it worked solution for me as well.","Q_Score":6,"Tags":"python,django","A_Id":20124244,"CreationDate":"2012-04-25T18:25:00.000","Title":"Django with psycopg2 plugin","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Ive been reading the Django Book and its great so far, unless something doesn't work properly. I have been trying for two days to install the psycogp2 plugin with no luck.\ni navigate to the unzipped directory and run setup.py install and it returns \"You must have postgresql dev for building a serverside extension or libpq-dev for client side.\"\nI don't know what any of this means, and google returns results tossing a lot of terms I don't really understand.\nIve been trying to learn django for abut a week now plus linux so any help would be great. Thanks\nBtw, I have installed postgresql and pgadminIII from installer pack. \nI also tried sudo apt-get post.... and some stuff happens...but Im lost.","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":4157,"Q_Id":10321568,"Users Score":3,"Answer":"I'm working on Xubuntu (12.04) and I have encountered the same error when I wanted to install django-toolbelt. I solved this error with the following operations :\n\nsudo apt-get install python-dev\nsudo apt-get install libpq-dev\nsudo apt-get install python-psycopg2\n\nI hope this informations may be helpful for someone else.","Q_Score":6,"Tags":"python,django","A_Id":22528687,"CreationDate":"2012-04-25T18:25:00.000","Title":"Django with psycopg2 plugin","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a daemon process witch spawns child processes using multiprocessing to do some work, each child process opens its own connection handle do DB (postgres in my case). Jobs to processes are passed via Queue and if queue is empty processes invoke sleep for some time, and recheck queue\nHow can I implement \"graceful shutdown\" on SIGTERM? Each subprocess should terminate as fast as possible, with respect of closing\/terminating current cursor\/transaction and db connection, and opened files.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":403,"Q_Id":10322422,"Users Score":5,"Answer":"Store all the open files\/connections\/etc. 
in a global structure, and close them all and exit in your SIGTERM handler.","Q_Score":3,"Tags":"python,database,multiprocessing,signals","A_Id":10322481,"CreationDate":"2012-04-25T19:27:00.000","Title":"Gracefull shutdown, close db connections, opened files, stop work on SIGTERM, in multiprocessing","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:\n\nI already do this, thank you.\nI know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.\nEven if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once\n\nSo, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":621,"Q_Id":10329486,"Users Score":1,"Answer":"I don't know if this is in any way applicable but I am just putting it up there for completeness and experts can downvote me at will... not to mention i have concerns about its performance in some cases.\nI was once tasked with protecting an aging web app written in classic asp against sql injection (they were getting hit pretty bad at the time)\nI dint have time to go through all code (not may choice) so I added a method to one of our standard include files that looked at everything being submitted by the user (iterated through request params) and checked it for blacklisted html tags (e.g. script tags) and sql injection signs (e.g. \";--\" and \"';shutdown\")..\nIf it found one it redirected the user told them they submission was suspicious and if they have an issue call or email.. blah blah.\nIt also recorded the injection attempt in a table (once it have been escaped) and details about the IP address time etc of the attack.. \nOverall it worked a treat.. 
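The graceful-shutdown answer a little further up (keep open files and DB connections in a global structure, close them in the SIGTERM handler) might look roughly like this in each child process; the registry name and cleanup order are my own choices:

```python
import signal
import sys

OPEN_RESOURCES = []          # file objects, DB connections, cursors, ...

def handle_sigterm(signum, frame):
    # Close everything we registered, then exit as quickly as possible.
    for resource in OPEN_RESOURCES:
        try:
            resource.close()
        except Exception:
            pass             # best effort; keep closing the rest
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```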
at least the attacks stopped.\nevery web technology i have used has some way of fudging something like this in there and it only took me about a day to dev and test..\nhope it helps, I would not call it an industry standard or anything\ntl;dr?:\nCheck all request params against a blacklist of strings","Q_Score":0,"Tags":"python,sql,sql-injection","A_Id":10329694,"CreationDate":"2012-04-26T08:09:00.000","Title":"protecting against sql injection attacks beyond parameter binding","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:\n\nI already do this, thank you.\nI know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.\nEven if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once\n\nSo, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":621,"Q_Id":10329486,"Users Score":2,"Answer":"I already do this, thank you.\n\n\nGood; with just this, you can be totally sure (yes, totally sure) that user inputs are being interpreted only as values. You should direct your energies toward securing your site against other kinds of vulnerabilities (XSS and CSRF come to mind; make sure you're using SSL properly, et-cetera).\n\n\nI know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.\n\n\nWell, there's no such thing as fool proof because fools are so ingenious. If your your audience is determined to undermine all of your hard work for securing their data, you can't really do anything about it. what you can do is determine which drivers you believe are secure, and generate a big scary warning when you detect that your users are using something else.\n\n\nEven if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once\n\n\n\nSo don't do that!\nDuring development, log every sql statement sent to your driver. check, on a regular basis, that user data is never in this log (or logged as a separate event, for the parameters).\nSQL injection is basically string formatting. You can usually follow each database transaction backwards to the original sql; if user data is formatted into that somewhere along the way, you have a problem. When scanning over projects, I find that I'm able to locate these at a rate of about one per minute, with effective use of grep and my editor of choice. 
unless you have tens of thousands of different sql statements, going over each one shouldn't really be prohibitively difficult.\nTry to keep your database interactions well isolated from the rest of your application. mixing sql in with the rest of your code makes it hard to mantain, or do the checks I've described above. Ideally, you should go through some sort of database abstraction, (a full ORM or maybe something thinner), so that you can work on just your database related code when that's the task at hand.","Q_Score":0,"Tags":"python,sql,sql-injection","A_Id":10336420,"CreationDate":"2012-04-26T08:09:00.000","Title":"protecting against sql injection attacks beyond parameter binding","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:\n\nI already do this, thank you.\nI know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.\nEven if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once\n\nSo, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":621,"Q_Id":10329486,"Users Score":0,"Answer":"So, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. \n\nThis strategy is doomed to fail.","Q_Score":0,"Tags":"python,sql,sql-injection","A_Id":10336013,"CreationDate":"2012-04-26T08:09:00.000","Title":"protecting against sql injection attacks beyond parameter binding","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:\n\nI already do this, thank you.\nI know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.\nEven if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once\n\nSo, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. 
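To make the parameter-binding point in the accepted answer above concrete, here is the shape of a query done both ways with a DB-API driver such as MySQLdb; the table and column names are placeholders:

```python
def find_user(conn, user_input):
    cur = conn.cursor()
    # Unsafe: string formatting splices user data into the SQL text.
    # cur.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

    # Safe: the statement and the value travel separately to the driver.
    cur.execute("SELECT * FROM users WHERE name = %s", (user_input,))
    return cur.fetchall()
```

Grepping for %-formatting or string concatenation next to execute() calls is a quick way to run the audit that answer describes.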
I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.","AnswerCount":4,"Available Count":4,"Score":-0.049958375,"is_accepted":false,"ViewCount":621,"Q_Id":10329486,"Users Score":-1,"Answer":"Well in php, I use preg_replace to protect my website from being attacked by sql injection. preg_match can also be used. Try searching an equivalent function of this in python.","Q_Score":0,"Tags":"python,sql,sql-injection","A_Id":10329550,"CreationDate":"2012-04-26T08:09:00.000","Title":"protecting against sql injection attacks beyond parameter binding","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have developed an application using DJango 1.3.1, Python 2.7.2 using Database as SQL server 2008. All these are hosted in Win 2008 R2 operating system on VM. The clients has windows 7 as o\/s.\nWe developed application keeping in view with out VM, all of sudden client has come back saying they can only host the application on VM. Now the challnege is to access application from client to server which is on VM.\nIf anyone has done this kind of applications, request them share step to access the applicaiton on VM. \nAs I am good at standalone systems, not having knowledge on VM accessbility. \nWe have done all project and waiting to someone to respond ASAP.\nThanks in advance for your guidence.\nRegards,\nShiva.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":422,"Q_Id":10331518,"Users Score":0,"Answer":"Maybe this could help you a bit, although my set-up is slightly different. I am running an ASP.NET web app developed on Windows7 via VMware fusion on OS X. I access the web app from outside the VM (browser of Mac or other computers\/phones within the network).\nHere are the needed settings:\n\nNetwork adapter set to (Bridged), so that the VM has its own IP address\nConfigure the VM to have a static IP\n\nAt this point, the VM is acting as its own machine, so you can access it as if it were another server sitting on the network.","Q_Score":0,"Tags":"django,wxpython,sql-server-2008-r2,vmware,python-2.7","A_Id":10331810,"CreationDate":"2012-04-26T10:21:00.000","Title":"Steps to access Django application hosted in VM from Windows 7 client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"What's the best way to create an intentionally empty query in SQLAlchemy?\nFor example, I've got a few functions which build up the query (adding WHERE clauses, for example), and at some points I know that the the result will be empty.\nWhat's the best way to create a query that won't return any rows? 
Something like Django's QuerySet.none().","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":9094,"Q_Id":10345327,"Users Score":34,"Answer":"If you need the proper return type, just return session.query(MyObject).filter(sqlalchemy.sql.false()).\nWhen evaluated, this will still hit the DB, but it should be fast.\nIf you don't have an ORM class to \"query\", you can use false() for that as well:\nsession.query(sqlalchemy.false()).filter(sqlalchemy.false())","Q_Score":36,"Tags":"python,sqlalchemy","A_Id":12837029,"CreationDate":"2012-04-27T05:41:00.000","Title":"SQLAlchemy: create an intentionally empty query?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using django-1.4 , sqlite3 , django-facebookconnect\nFollowing instructions in Wiki to setup .\n\"python manage.py syncdb\" throws an error .\n\nCreating tables ...\nCreating table auth_permission\nCreating table auth_group_permissions\nCreating table auth_group\nCreating table auth_user_user_permissions\nCreating table auth_user_groups\nCreating table auth_user\nCreating table django_content_type\nCreating table django_session\nCreating table django_site\nCreating table blog_post\nCreating table blog_comment\nCreating table django_admin_log\nTraceback (most recent call last):\n File \"manage.py\", line 10, in \n execute_from_command_line(sys.argv)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/core\/management\/init.py\", line 443, in execute_from_command_line\n utility.execute()\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/core\/management\/init.py\", line 382, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/core\/management\/base.py\", line 196, in run_from_argv\n self.execute(*args, **options.dict)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/core\/management\/base.py\", line 232, in execute\n output = self.handle(*args, **options)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/core\/management\/base.py\", line 371, in handle\n return self.handle_noargs(**options)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/core\/management\/commands\/syncdb.py\", line 91, in handle_noargs\n sql, references = connection.creation.sql_create_model(model, self.style, seen_models)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/django\/db\/backends\/creation.py\", line 44, in sql_create_model\n col_type = f.db_type(connection=self.connection)\nTypeError: db_type() got an unexpected keyword argument 'connection'\n\nIs there any solution ??","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":282,"Q_Id":10356581,"Users Score":1,"Answer":"You should use django-facebook instead, it does that and more and it is actively supported :)","Q_Score":0,"Tags":"python,django,facebook,sqlite","A_Id":10486708,"CreationDate":"2012-04-27T19:17:00.000","Title":"Getting db_type() error while using django-facebook connect for DjangoApp","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a large dataset of events in a Postgres database that is too large to analyze in memory. 
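The empty-query answer above boils down to filtering on a constant false clause; a minimal sketch, assuming session and MyObject come from the surrounding application:

```python
import sqlalchemy

def none_query(session, MyObject):
    # Still hits the database when evaluated, but can never match a row.
    return session.query(MyObject).filter(sqlalchemy.sql.false())
```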
Therefore I would like to quantize the datetimes to a regular interval and perform group by operations within the database prior to returning results. I thought I would use SqlSoup to iterate through the records in the appropriate table and make the necessary transformations. Unfortunately I can't figure out how to perform the iteration in such a way that I'm not loading references to every record into memory at once. Is there some way of getting one record reference at a time in order to access the data and update each record as needed?\nAny suggestions would be most appreciated!\nChris","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":333,"Q_Id":10359617,"Users Score":1,"Answer":"After talking with some folks, it's pretty clear the better answer is to use Pig to process and aggregate my data locally. At the scale, I'm operating it wasn't clear Hadoop was the appropriate tool to be reaching for. One person I talked to about this suggests Pig will be orders of magnitude faster than in-DB operations at the scale I'm operating at which is about 10^7 records.","Q_Score":0,"Tags":"python,postgresql,sqlsoup","A_Id":10360094,"CreationDate":"2012-04-28T00:57:00.000","Title":"Data Transformation in Postgres Using SqlSoup","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that gets data from a USB weather station, now it puts the data into MySQL whenever the data is received from the station.\nI have a MySQL class with an insert function, what i want i that the function checks if it has been run the last 5 minutes if it has, quit.\nCould not find any code on the internet that does this.\nMaybe I need to have a sub-process, but I am not familiar with that at all.\nDoes anyone have an example that I can use?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":489,"Q_Id":10366424,"Users Score":0,"Answer":"Just derive to a new class and override the insert function. In the overwriting function, check last insert time and call father's insert method if it has been more than five minutes, and of course update the most recent insert time.","Q_Score":1,"Tags":"python,python-2.7","A_Id":10366467,"CreationDate":"2012-04-28T18:36:00.000","Title":"Python, function quit if it has been run the last 5 minutes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that gets data from a USB weather station, now it puts the data into MySQL whenever the data is received from the station.\nI have a MySQL class with an insert function, what i want i that the function checks if it has been run the last 5 minutes if it has, quit.\nCould not find any code on the internet that does this.\nMaybe I need to have a sub-process, but I am not familiar with that at all.\nDoes anyone have an example that I can use?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":489,"Q_Id":10366424,"Users Score":0,"Answer":"Each time the function is run save a file with the current time. 
When the function is run again check the time stored in the file and make sure it is old enough.","Q_Score":1,"Tags":"python,python-2.7","A_Id":10366452,"CreationDate":"2012-04-28T18:36:00.000","Title":"Python, function quit if it has been run the last 5 minutes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a social app in django, the architecture of the site will be very similar to facebook\nThere will be posts, posts will have comments\nBoth posts and comments will have meta data like date, author, tags, votes\nI decided to go with nosql database because of the ease with which we can add new features.\nI finalized on mongodb as i can easily store a post and its comments in a single document. I'm having second thoughts now, would REDIS be better than mongo for this kind of app?\nUpdate:\nI have decided to go with mongodb, will use redis for user home page and home page if necessary.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":1158,"Q_Id":10396315,"Users Score":2,"Answer":"There's a huge distinction to be made between Redis and MongoDB for your particular needs, in that Redis, unlike MongoDB, doesn't facilitate value queries.\nYou can use MongoDB to embed the comments within the post document, which means you get the post and the comments in a single query, yet you could also query for post documents based on tags, the author, etc.\nYou'll definitely want to go with MongoDB. Redis is great, but it's not a proper fit for what I'd believe you'll need from it.","Q_Score":0,"Tags":"python,django,database-design,mongodb,redis","A_Id":10396700,"CreationDate":"2012-05-01T10:16:00.000","Title":"mongo db or redis for a facebook like site?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a social app in django, the architecture of the site will be very similar to facebook\nThere will be posts, posts will have comments\nBoth posts and comments will have meta data like date, author, tags, votes\nI decided to go with nosql database because of the ease with which we can add new features.\nI finalized on mongodb as i can easily store a post and its comments in a single document. I'm having second thoughts now, would REDIS be better than mongo for this kind of app?\nUpdate:\nI have decided to go with mongodb, will use redis for user home page and home page if necessary.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1158,"Q_Id":10396315,"Users Score":0,"Answer":"First, loosely couple your app and your persistence so that you can swap them out at a very granular level. For example, you want to be able to move one service from mongo to redis as your needs evolve. Be able to measure your services and appropriately respond to them individually.\nSecond, you are unlikely to find one persistence solution that fits every workflow in your application at scale. Don't be afraid to use more than one. 
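Both answers to the run-at-most-every-five-minutes question above come down to remembering when the insert last ran; an in-process version of that check could look like this (class and method names are illustrative):

```python
import time

class WeatherDB:
    MIN_INTERVAL = 5 * 60                 # seconds

    def __init__(self):
        self._last_insert = 0.0

    def insert(self, reading):
        now = time.time()
        if now - self._last_insert < self.MIN_INTERVAL:
            return                        # ran within the last 5 minutes: quit
        self._last_insert = now
        self._do_insert(reading)

    def _do_insert(self, reading):
        pass                              # the real MySQL INSERT goes here
```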
Mongo is a good tool for a set of problems, as is Redis, just not necessarily the same problems.","Q_Score":0,"Tags":"python,django,database-design,mongodb,redis","A_Id":10403789,"CreationDate":"2012-05-01T10:16:00.000","Title":"mongo db or redis for a facebook like site?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a social app in django, the architecture of the site will be very similar to facebook\nThere will be posts, posts will have comments\nBoth posts and comments will have meta data like date, author, tags, votes\nI decided to go with nosql database because of the ease with which we can add new features.\nI finalized on mongodb as i can easily store a post and its comments in a single document. I'm having second thoughts now, would REDIS be better than mongo for this kind of app?\nUpdate:\nI have decided to go with mongodb, will use redis for user home page and home page if necessary.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1158,"Q_Id":10396315,"Users Score":1,"Answer":"These things are subjective and can be looked at in different directions. But if you have already decided to go with a nosql solution and is trying to determine between mongodb and redis I think it is better to go with mongodb as I guess you should be able to save a big number of posts and also mongodb documents are better suited to represent posts. \nRedis can only save upto the max memory limit but is super fast. So if you need to index some kind of things you can save posts in mongodb and then keep the id's of posts in redis to access faster.","Q_Score":0,"Tags":"python,django,database-design,mongodb,redis","A_Id":10396466,"CreationDate":"2012-05-01T10:16:00.000","Title":"mongo db or redis for a facebook like site?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am facing an issue with setting a value of Excel Cell.\nI get data from a table cell in MS-Word Document(dcx) and print it on output console.\nProblem is that the data of the cell is just a word, \"Hour\", with no apparent other leading or trailing printable character like white-spaces. But when I print it using python's print() function, it shows some unexpected character, more like a small \"?\" in a rectangle.\nI don't know where does it come from.\nAnd when I write the same variable that holds the word, \"Hour\", to an Excel cell it shows a bold dot(.) 
in the cell.\nWhat can be the problem?\nAny help is much appreciated.\nI Am Using Python 3.2 And PyWin32 3.2 On Win7.\nThanks.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3883,"Q_Id":10423593,"Users Score":3,"Answer":"Try using value.rstrip('\\r\\n') to remove any carriage returns (\\r) or newlines (\\n) at the end of your string value.","Q_Score":3,"Tags":"python,excel,ms-word,character","A_Id":10423918,"CreationDate":"2012-05-03T00:30:00.000","Title":"Unwanted character in Excel Cell In Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sometimes an application requires quite a few SQL queries before it can do anything useful. I was wondering if there is a way to send those as a batch to the database, to avoid the overhead of going back and forth between the client and the server?\nIf there is no standard way to do it, I'm using the python bindings of MySQL.\nPS: I know MySQL has an executemany() function, but that's only for the same query executed many times with different parameters, right?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":10434523,"Users Score":0,"Answer":"This process works best on inserts\n\nMake all you SQL queries into Stored Procedures. These eventually will become child stored procedures\nCreate Master Store procedure to run all other Stored Procedures.\nModify master Stored procedure to accept values required by child Stored Procedures\nModify master Stored procedure to accept commands using \"if\" statements to know which\nchild stored procedures to run\n\nIf you need return data from Database use 1 stored procedure at the time.","Q_Score":0,"Tags":"mysql,sql,mysql-python","A_Id":10434644,"CreationDate":"2012-05-03T15:27:00.000","Title":"Grouping SQL queries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say I have 100 servers each running a daemon - lets call it server - that server is responsible for spawning a thread for each user of this particular service (lets say 1000 threads per server). Every N seconds each thread does something and gets information for that particular user (this request\/response model cannot be changed). The problem I a have is sometimes a thread hangs and stops doing something. I need some way to know that users data is stale, and needs to be refreshed.\nThe only idea I have is every 5N seconds have the thread update a MySQL record associated with that user (a last_scanned column in the users table), and another process that checks that table every 15N seconds, if the last_scanned column is not current, restart the thread.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":221,"Q_Id":10440277,"Users Score":1,"Answer":"The general way to handle this is to have the threads report their status back to the server daemon. 
If you haven't seen a status update within the last 5N seconds, then you kill the thread and start another.\nYou can keep track of the current active threads that you've spun up in a list, then just loop through them occasionally to determine state.\nYou of course should also fix the errors in your program that are causing threads to exit prematurely.\nPremature exits and killing a thread could also leave your program in an unexpected, non-atomic state. You should probably also have the server daemon run a cleanup process that makes sure any items in your queue, or whatever you're using to determine the workload, get reset after a certain period of inactivity.","Q_Score":1,"Tags":"python,distributed-computing","A_Id":10440880,"CreationDate":"2012-05-03T22:39:00.000","Title":"Distributed server model","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have come across a requirement that needs to access a set of databases in a Mongodb server, using TurboGear framework. There I need to list down the Databases, and allow the user to select one and move on. As far as I looked, TurboGear does facilitate multiple databases to use, but those needs to be specify beforehand in the development.ini.\nIs there a way to just connect to the db server(or to a particular database first) and then get the list of databases and select one on the fly?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":305,"Q_Id":10495324,"Users Score":2,"Answer":"For SQLAlchemy you can achieve something like that using a smarter Session.\nJust subclass the sqlalchemy.orm.Session class and override the get_bind(self, mapper=None, clause=None) method.\nThat method is called each time the session has to decide which engine to use and is expected to return the engine itself. You can then store a list of engines wherever you prefer and return the correct one.\nWhen using Ming\/MongoDB the same can probably be achieved by subclassing the ming.Session in model\/session.py and overridding the ming.Session.db property to return the right database.","Q_Score":2,"Tags":"mongodb,python-3.x,turbogears2","A_Id":10650606,"CreationDate":"2012-05-08T08:42:00.000","Title":"How to change the database on the fly in python using TurboGear framework?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have one table with Time32 column and large number of rows. My problem is next. \nWhen my table reaches thousand million rows, I want start archiving every row older than specified value. For creating query I will use Time32 column which represents timestamp for collected data in row. So,using this query I want delete old rows in working table, and store in other table reserved for storing archive records. Is it possible? If yes, what is most efficient way? \nI know for whereAppend() method, but this method only copy records, not delete from actual table. Thaks for advice. 
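A bare-bones version of the heartbeat scheme in the answer above (worker threads stamp a shared structure, the daemon restarts anything stale); every name here is illustrative:

```python
import time

HEARTBEAT = {}                  # user_id -> time of last successful report
STALE_AFTER = 5 * 60            # the "5N seconds" threshold from the answer

def record_heartbeat(user_id):
    # Called by each worker thread after it finishes a request/response cycle.
    HEARTBEAT[user_id] = time.time()

def find_stale_workers():
    now = time.time()
    return [uid for uid, last in HEARTBEAT.items() if now - last > STALE_AFTER]
```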
Cheers!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1114,"Q_Id":10496821,"Users Score":1,"Answer":"The general way to archive records from one table of a given database to another one is to copy records into the target table, and then to delete the same records in the origin table.\nThat said, depending of your database engine and the capabilities of the language built on top of that, you can write atomic query commands that do an atomic 'copy then delete' for you, but it is dependent of your database engine capabilities.\nIn your case of old records archiving, a robust approach can be to copy the records you want to archive by chunks by copying blocks of n records (n sized to your amount of data you can temporary clone, it is a trade-off between temporary additional size and the overhead of a copy delete action), then deleting those n records, and so on until to archive all the records fulfilling your condition Time32 field older than a given timestamp threshold.","Q_Score":1,"Tags":"python,database,python-2.7,hdf5,pytables","A_Id":10497547,"CreationDate":"2012-05-08T10:26:00.000","Title":"Pytables - Delete rows from table by some criteria","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I looked at the sqlite.org docs, but I am new to this, so bear with me. (I have a tiny bit of experience with MySQL, and I think using it would be an overkill for what I am trying to do with my application.)\nFrom what I understand I can initially create an SQLite db file locally on my MAC and add entrees to it using a Firefox extension. I could then store any number of tables and images (as binary). Once my site that uses this db is live, I could upload the db file to any web hosting service to any directory. In my site I could have a form that collects data and sends a request to write that data to the db file. Then, I could have an iOS app that connects to the db and reads the data. Did I get this right?\nWould I be able to run a Python script that writes to SQLite? What questions should I ask a potential hosting service? (I want to leave MediaTemple, so I am looking around...)\nI don't want to be limited to a Windows server, I am assuming SQLite would run on Unix? Or, does it depend on a hosting service? Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":143,"Q_Id":10517900,"Users Score":1,"Answer":"I could upload the db file to any web hosting service to any directory\n\nSupposing that the service has the libraries installed to handle sqlite, and that sqlite is installed.\n\nWould I be able to run a Python script that writes to SQLite\n\nYes, well, maybe. As of Python 2.5, Python includes sqlite support as part of it's standard library.\n\nWhat questions should I ask a potential hosting service\n\nUsually, in their technical specs they will list what databases\/libraries\/languages are supported. I have successfully ran Python sites w\/ sqlite databases on Dreamhost.\n\nSQLite would run on Unix\n\nMost *nix flavors have pre-packaged sqlite installation binaries. 
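Since the answer above notes that sqlite3 has shipped in the standard library since Python 2.5, a minimal round trip on the host looks like this (the database file name is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("site.db")
conn.execute("CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO entries (body) VALUES (?)", ("hello",))
conn.commit()
print(conn.execute("SELECT id, body FROM entries").fetchall())
conn.close()
```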
The hosting provider should be able to tell you this as well.","Q_Score":0,"Tags":"python,sqlite,web-hosting","A_Id":10518010,"CreationDate":"2012-05-09T14:13:00.000","Title":"Understanding SQLite conceptually","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install MYSQLdb on a windows client. The goal is, from the Windows client, run a python script that connects to a MySQL server on a LINUX client. Looking at the setup code (and based on the errors I am getting when I try to run setup.py for mysqldb, it appears that I have to have my own version of MySQL on the windows box. Is there a way (perhaps another module) that will let me accomplish this? I need to have people on multiple boxes run a script that will interact with a MySQL database on a central server.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1771,"Q_Id":10541085,"Users Score":1,"Answer":"You don't need the entire MySQL database server, only the MySQL client libraries.","Q_Score":0,"Tags":"python,mysql,windows","A_Id":10541253,"CreationDate":"2012-05-10T19:44:00.000","Title":"Install MYSQLdb python module without MYSQL local install","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know that pyramid comes with a scaffold for sqlalchemy. But what if I'm using the pyramid_jqm scaffold. How would you integrate or use sqlalchemy then? When I create a model.py and import from sqlalchemy I get an error that he couldnt find the module.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":101,"Q_Id":10551042,"Users Score":2,"Answer":"You have to setup your project in the same way that the alchemy scaffold is constructed. Put \"sqlalchemy\" in your setup.py requires field and run \"python setup.py develop\" to install the dependency. This is all just python and unrelated to Pyramid.","Q_Score":0,"Tags":"python,sqlalchemy,pyramid","A_Id":10555714,"CreationDate":"2012-05-11T12:08:00.000","Title":"Using sqlalchemy in pyramid_jqm","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am absolute beginner using google app engine with python 2.7. I was successful with creating helloworld app, but then any changes I do to the original app doesn't show in localhost:8080. Is there a way to reset\/refresh the localhost. I tried to create new projects\/directories with different content but my localhost constantly shows the old \"Hello world!\" I get the following in the log window:\n\nWARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.\n WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\\users\\tomek\\appdata\\local\\temp\\dev_appserver.datastore\n WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python \"PIL\" module. 
ImportError: No module named _imaging\n\nPlease help...","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":437,"Q_Id":10575184,"Users Score":2,"Answer":"Those warnings shouldn't prevent you from seeing new 'content,' they simply mean that you are missing some libraries necessary to run local versions of CloudSQL (MySQL) and the Images API.\nFirst to do is try to clear your browser cache. What changes did you make to your Hello World app?","Q_Score":0,"Tags":"python,google-app-engine","A_Id":10575238,"CreationDate":"2012-05-13T20:57:00.000","Title":"Localhost is not refreshing\/reseting","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am absolute beginner using google app engine with python 2.7. I was successful with creating helloworld app, but then any changes I do to the original app doesn't show in localhost:8080. Is there a way to reset\/refresh the localhost. I tried to create new projects\/directories with different content but my localhost constantly shows the old \"Hello world!\" I get the following in the log window:\n\nWARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.\n WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\\users\\tomek\\appdata\\local\\temp\\dev_appserver.datastore\n WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python \"PIL\" module. ImportError: No module named _imaging\n\nPlease help...","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":437,"Q_Id":10575184,"Users Score":0,"Answer":"Press CTRL-F5 in your browser, while on the page. Forces a cache refresh.","Q_Score":0,"Tags":"python,google-app-engine","A_Id":10593822,"CreationDate":"2012-05-13T20:57:00.000","Title":"Localhost is not refreshing\/reseting","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am absolute beginner using google app engine with python 2.7. I was successful with creating helloworld app, but then any changes I do to the original app doesn't show in localhost:8080. Is there a way to reset\/refresh the localhost. I tried to create new projects\/directories with different content but my localhost constantly shows the old \"Hello world!\" I get the following in the log window:\n\nWARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.\n WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\\users\\tomek\\appdata\\local\\temp\\dev_appserver.datastore\n WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python \"PIL\" module. ImportError: No module named _imaging\n\nPlease help...","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":437,"Q_Id":10575184,"Users Score":0,"Answer":"You can try opening up the DOM reader (Mac: alt+command+i, Windows: shift+control+i) the reload the page. 
It's weird, but it works for me.","Q_Score":0,"Tags":"python,google-app-engine","A_Id":41388817,"CreationDate":"2012-05-13T20:57:00.000","Title":"Localhost is not refreshing\/reseting","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have found that ultramysql meets my requirement. But it has no document, and no windows binary package.\nI have a program heavy on internet downloads and mysql inserts. So I use gevent to solve the multi-download-tasks problem. After I downloaded the web pages, and parsed the web pages, I get to insert the data into mysql.\nIs monkey.patch_all() make mysql operations async?\nCan anyone show me a correct way to go.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1197,"Q_Id":10580835,"Users Score":1,"Answer":"Postgres may be better suited due to its asynchronous capabilities","Q_Score":4,"Tags":"python,mysql,gevent","A_Id":12335813,"CreationDate":"2012-05-14T09:41:00.000","Title":"How to use mysql in gevent based programs in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have found that ultramysql meets my requirement. But it has no document, and no windows binary package.\nI have a program heavy on internet downloads and mysql inserts. So I use gevent to solve the multi-download-tasks problem. After I downloaded the web pages, and parsed the web pages, I get to insert the data into mysql.\nIs monkey.patch_all() make mysql operations async?\nCan anyone show me a correct way to go.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1197,"Q_Id":10580835,"Users Score":1,"Answer":"I think one solution is use pymysql. Since pymysql use python socket, after monkey patch, should be work with gevent.","Q_Score":4,"Tags":"python,mysql,gevent","A_Id":13006283,"CreationDate":"2012-05-14T09:41:00.000","Title":"How to use mysql in gevent based programs in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm currently developing an application which connects to a database using sqlalchemy. The idea consists of having several instances of the application running in different computers using the same database. I want to be able to see changes in the database in all instances of the application once they are commited. I'm currently using sqlalchemy event interface, however it's not working when I have several concurrent instances of the application. I change something in one of the instances, but there are no signals emitted in the other instances.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1036,"Q_Id":10601947,"Users Score":0,"Answer":"You said it, you are using SQLAlchemy's event interface, it is not the one of the RDBMS, and SQLAlchemy does not communicate with the other instances connected to that DB.\nSQLAlchemy's event system calls a function in your own process. It's up to you to make this function send a signal to the rest of them via the network (or however they are connected). 
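The concurrency answer above points out that SQLAlchemy's events only fire in the committing process, so the notification has to be sent over the network yourself. Below is a minimal sketch using Redis pub/sub as that transport; Redis, the channel name and refresh_local_state() are assumptions for illustration only.

```python
import json
import redis
from sqlalchemy import event
from sqlalchemy.orm import Session

r = redis.StrictRedis(host="localhost", port=6379)

@event.listens_for(Session, "after_commit")
def broadcast_commit(session):
    # Fires only in the process that committed; fan the news out ourselves.
    r.publish("db_changes", json.dumps({"event": "commit"}))

def listen_for_changes():
    # Every other instance runs this loop (e.g. in a background thread).
    pubsub = r.pubsub()
    pubsub.subscribe("db_changes")
    for message in pubsub.listen():
        if message["type"] == "message":
            refresh_local_state()   # hypothetical: re-query or expire local caches
```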
As long as SQLAlchemy is concerned, it doesn't know about the other instances connected to your database.\nSo, you might want to start another server on the machine with the database running, and make all the other listening to it, and act accordingly.\nHope it helps.","Q_Score":2,"Tags":"python,concurrency,sqlalchemy","A_Id":10602194,"CreationDate":"2012-05-15T13:37:00.000","Title":"Concurrency in sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a couchdb instance with database a and database b. They should contain identical sets of documents, except that the _rev property will be different, which, AIUI, means I can't use replication.\nHow do I verify that the two databases really do contain the same documents which are all otherwise 'equal'?\nI've tried using the python-based couchdb-dump tool with a lot of sed magic to get rid of the _rev and MD5 and ETag headers, but then it still seems that property order in the JSON structure is slightly random, which means I still can't compare the output easily with something like diff.\nIs there a better approach here? Have other people wanted to solve a similar problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":685,"Q_Id":10615980,"Users Score":1,"Answer":"If you want to make sure they're exactly the same, write a map job that emits the document path as the key, and the documents hash (generated any way you like) as the value. Do not include the _rev field in the hash generation.\nYou cannot reduce to a single hash because order is not guaranteed, but you can feed the resultant JSON document to a good diff program.","Q_Score":1,"Tags":"couchdb,replication,couchdb-python","A_Id":10616421,"CreationDate":"2012-05-16T09:45:00.000","Title":"Compare two couchdb databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have made a python ladon webservice and I run is on Ubuntu with Apache2 and mod_wsgi. (I use Python 2.6).\nThe webservice connect to a postgreSQL database with psycopg2 python module.\nMy problem is that the psycopg2.connection is closed (or destroyed) automatically after a little time (after about 1 or 2 minutes). \nThe other hand if I run the server with\nladon2.6ctl testserve\ncommand (http:\/\/ladonize.org\/index.php\/Python_Configuration)\nthan the server is working and the connection is not closed automatically.\nI can't understand why the connection is closed with apache+mod_wsgi and in this case the webserver is very slowly.\nCan anyone help me?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":450,"Q_Id":10636409,"Users Score":1,"Answer":"If you are using mod_wsgi in embedded moe, especially with preform MPM for Apache, then likely that Apache is killing off the idle processes. 
Try using mod_wsgi daemon mode, which keeps process persistent and see if it makes a difference.","Q_Score":0,"Tags":"python,web-services,apache2,mod-wsgi,psycopg2","A_Id":10645670,"CreationDate":"2012-05-17T13:12:00.000","Title":"Python psycopg2 + mod_wsgi: connection is very slow and automatically close","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wish to consume a .net webservice containing the results of SQL Server query using a Python client. I have used the Python Suds library to interface to the same web service but not with a set of results. How should I structure the data so it is efficiently transmitted and consumed by a Python client. There should be a maximum of 40 rows of data, containing 60 bytes of data per row in 5 columns.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":175,"Q_Id":10638071,"Users Score":1,"Answer":"Suds is a library to connect via SOAP, so you may already have blown \"efficiently transmitted\" out of the window, as this is a particularly verbose format over the wire. Your maximum data size is relatively small, and so should almost certainly be transmitted back in a single message so the SOAP overhead is incurred only once. So you should create a web service that returns a list or array of results, and call it once. This should be straightforwardly serialised to a single XML body that Suds then gives you access to.","Q_Score":0,"Tags":".net,python,sql-server,web-services,suds","A_Id":10653866,"CreationDate":"2012-05-17T14:49:00.000","Title":"SQL Query result via .net webservice to a non .net- Python client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm seeing some unexpected behaviour with Flask-SQLAlchemy, and I don't understand what's going on:\nIf I make a change to a record using e.g. MySQL Workbench or Sequel Pro, the running app (whether running under WSGI on Apache, or from the command line) isn't picking up the change. If I reload the app by touching the WSGI file, or by reloading it (command line), I can see the changed record. I've verified this by running an all() query in the interactive shell, and it's the same \u2013 no change until I quit the shell, and start again. I get the feeling I'm missing something incredibly obvious here \u2013 it's a single table, no joins etc. \u2013 Running MySQL 5.5.19, and SQLA 0.7.7 on 2.7.3","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":326,"Q_Id":10645793,"Users Score":1,"Answer":"you app's SELECT is probably within its own transaction \/ session so changes submitted by another session (e.g. MySQL Workbench connection) are not yet visible for your SELECT. You can easily verify it by enabling mysql general log or by setting 'echo: false' in your create_engine(...) definition. Chances are you're starting your SQLAlchemy session in SET AUTOCOMMIT = 0 mode which requires explicit commit or rollback (when you restart \/ reload, Flask-SQLAlchemy does it for you automatically). 
Try either starting your session in autocommit=true mode or stick explicit commit\/rollback before calling your SELECT.","Q_Score":1,"Tags":"python,flask-sqlalchemy","A_Id":15194364,"CreationDate":"2012-05-18T01:57:00.000","Title":"Flask SQLAlchemy not picking up changed records","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Background:\nI'm trying to use a Google Map as an interface to mark out multiple polygons, that can be stored in a Postgres Database.\nThe Database will then be queried with a geocoded Longitude Latitude Point to determine which of the Drawn Polygons encompass the point.\nUsing Python and Django.\nQuestion\nHow do I configure the Google Map to allow a user to click around and specify multiple polygon areas?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":667,"Q_Id":10647482,"Users Score":1,"Answer":"\"Using Python and Django\" only, you're not going to do this. Obviously you're going to need Javascript.\nSo you may as well dump Google Maps and use an open-source web mapping framework. OpenLayers has a well-defined Javascript API which will let you do exactly what you want. Examples in the OpenLayers docs show how.\nYou'll thank me later - specifically when Google come asking for a fee for their map tiles and you can't switch your Google Maps widget to OpenStreetMap or some other tile provider. This Actually Happens.","Q_Score":0,"Tags":"python,django,postgresql,google-maps,postgis","A_Id":10648479,"CreationDate":"2012-05-18T06:01:00.000","Title":"Mark Out Multiple Delivery Zones on Google Map and Store in Database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There is a way to avoid duplicate files in mongo gridfs?\nOr I have to do that via application code (I am using pymongo)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2727,"Q_Id":10648729,"Users Score":1,"Answer":"You could use md5 hash and compare new hash with exists before saving file.","Q_Score":5,"Tags":"python,mongodb,gridfs","A_Id":10648760,"CreationDate":"2012-05-18T07:48:00.000","Title":"Mongo: avoid duplicate files in gridfs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a way to avoid duplicate files in mongo gridfs?\nOr I have to do that via application code (I am using pymongo)","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2727,"Q_Id":10648729,"Users Score":5,"Answer":"The MD5 sum is already part of Mongo's gridfs meta-data, so you could simply set a unique index on that column and the server will refuse to store the file. 
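A minimal pymongo sketch of the unique-md5-index approach described in that GridFS answer; the database name and file are assumptions, and GridFS's default fs.files collection is used.

```python
import gridfs
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

client = MongoClient()            # assumes a local mongod
db = client["mydb"]               # database name is an assumption
fs = gridfs.GridFS(db)

# GridFS records an md5 for every file in fs.files; a unique index on it
# makes the server itself reject duplicate content.
db.fs.files.create_index("md5", unique=True)

try:
    with open("photo.jpg", "rb") as f:
        fs.put(f, filename="photo.jpg")
except DuplicateKeyError:
    print("an identical file is already stored")
```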
No need to compare on the client side.","Q_Score":5,"Tags":"python,mongodb,gridfs","A_Id":10650262,"CreationDate":"2012-05-18T07:48:00.000","Title":"Mongo: avoid duplicate files in gridfs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any good example projects which uses SQLAlchemy (with Python Classes) that I can look into? (which has at least some basic database operations - CRUD)\nI believe that, it is a good way to learn any programming language by looking into someone's code.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12180,"Q_Id":10656426,"Users Score":0,"Answer":"What kind of environment are you looking to work with on top of SQLAlchemy?\nMost likely, if you are using a popular web framework like django, Flask or Pylons, you can find many examples and tutorials specific to that framework that include SQLAlchemy.\nThis will boost your knowledge both with SQLAlchemy and whatever else it is you are working with.\nChances are, you won't find any good project examples in 'just' SQLAlchemy as it essentially a tool.","Q_Score":18,"Tags":"python,sqlalchemy","A_Id":10778146,"CreationDate":"2012-05-18T16:32:00.000","Title":"SQLAlchemy Example Projects","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am confused about why python needs cursor object. I know jdbc and there the database connection is quite intuitive but in python I am confused with cursor object. Also I am doubtful about what is the difference between cursor.close() and connection.close() function in terms of resource release.","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":17423,"Q_Id":10660411,"Users Score":5,"Answer":"Connection object is your connection to the database, close that when you're done talking to the database all together. Cursor object is an iterator over a result set from a query. Close those when you're done with that result set.","Q_Score":41,"Tags":"python,python-db-api","A_Id":10660537,"CreationDate":"2012-05-18T22:16:00.000","Title":"difference between cursor and connection objects","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am thinking about creating an open source data management web application for various types of data. \nA privileged user must be able to \n\nadd new entity types (for example a 'user' or a 'family') \nadd new properties to entity types (for example 'gender' to 'user')\nremove\/modify entities and properties\n\nThese will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me:\na) How should the data be stored in the database? Should I dynamically add\/remove database tables and\/or columns during runtime?\nI am no database expert. 
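To make the cursor-versus-connection answer above concrete, a minimal DB-API sketch; sqlite3 and the user table are used purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # the connection: one per database session

cur = conn.cursor()                          # a cursor: one per statement/result set
cur.execute("CREATE TABLE user (name TEXT, gender TEXT)")
cur.execute("INSERT INTO user VALUES (?, ?)", ("alice", "f"))
cur.execute("SELECT name, gender FROM user WHERE gender = ?", ("f",))
for name, gender in cur:                     # iterate over this result set
    print(name, gender)
cur.close()                                  # done with this result set

conn.commit()                                # transaction control lives on the connection
conn.close()                                 # done talking to the database altogether
```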
I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add\/remove tables (entities) and\/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database.\nAnyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. What is the best approach for this kind of dynamic data management?\nb) How to implement this in Python using an ORM or NoSQL?\nIf you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables\/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy?\nIf you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis?\nThanks for your suggestions!\nEdit in response to some comments:\nThe idea is that all instances (\"rows\") of a certain entity (\"table\") share the same set of properties\/attributes (\"columns\"). However, it will be perfectly valid if certain instances have an empty value for certain properties\/attributes.\nBasically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property.\nThe datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a \"Pythonic\", state-of-the-art, scalable, and reliable web application.\nI see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document\/Collection model of Mongo\/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction?\nEdit 2 in response to some answers\/comments:\nFrom most of your answers, I conclude that the dynamic creation\/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design.\nAs exactly this dynamic nature should be the main purpose\/feature of the application, I don't give up on this. From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine.\nExpressed in an abstract way, the application needs to manage\n\nthe data layout, i.e. a \"dynamic list\" of valid entity types and a \"dynamic list\" of properties for each valid entity type\nthe data itself\n\nI am looking for an intelligent and efficient way to implement this. 
From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":4158,"Q_Id":10672939,"Users Score":6,"Answer":"What you're asking about is a common requirement in many systems -- how to extend a core data model to handle user-defined data. That's a popular requirement for packaged software (where it is typically handled one way) and open-source software (where it is handled another way).\nThe earlier advice to learn more about RDBMS design generally can't hurt. What I will add to that is, don't fall into the trap of re-implementing a relational database in your own application-specific data model! I have seen this done many times, usually in packaged software. Not wanting to expose the core data model (or permission to alter it) to end users, the developer creates a generic data structure and an app interface that allows the end user to define entities, fields etc. but not using the RDBMS facilities. That's usually a mistake because it's hard to be nearly as thorough or bug-free as what a seasoned RDBMS can just do for you, and it can take a lot of time. It's tempting but IMHO not a good idea.\nAssuming the data model changes are global (shared by all users once admin has made them), the way I would approach this problem would be to create an app interface to sit between the admin user and the RDBMS, and apply whatever rules you need to apply to the data model changes, but then pass the final changes to the RDBMS. So for example, you may have rules that say entity names need to follow a certain format, new entities are allowed to have foreign keys to existing tables but must always use the DELETE CASCADE rule, fields can only be of certain data types, all fields must have default values etc. You could have a very simple screen asking the user to provide entity name, field names & defaults etc. and then generate the SQL code (inclusive of all your rules) to make these changes to your database.\nSome common rules & how you would address them would be things like:\n-- if a field is not null and has a default value, and there are already existing records in the table before that field was added by the admin, update existing records to have the default value while creating the field (multiple steps -- add field allowing null; update all existing records; alter the table to enforce not null w\/ default) -- otherwise you wouldn't be able to use a field-level integrity rule)\n-- new tables must have a distinct naming pattern so you can continue to distinguish your core data model from the user-extended data model, i.e. core and user-defined have different RDBMS owners (dbo. vs. user.) 
or prefixes (none for core, __ for user-defined) or somesuch.\n-- it is OK to add fields to tables that are in the core data model (as long as they tolerate nulls or have a default), and it is OK for admin to delete fields that admin added to core data model tables, but admin cannot delete fields that were defined as part of the core data model.\nIn other words -- use the power of the RDBMS to define the tables and manage the data, but in order to ensure whatever conventions or rules you need will always be applied, do this by building an app-to-DB admin function, instead of giving the admin user direct DB access.\nIf you really wanted to do this via the DB layer only, you could probably achieve the same by creating a bunch of stored procedures and triggers that would implement the same logic (and who knows, maybe you would do that anyway for your app). That's probably more of a question of how comfortable are your admin users working in the DB tier vs. via an intermediary app.\n\nSo to answer your questions directly:\n(1) Yes, add tables and columns at run time, but think about the rules you will need to have to ensure your app can work even once user-defined data is added, and choose a way to enforce those rules (via app or via DB \/ stored procs or whatever) when you process the table & field changes.\n(2) This issue isn't strongly affected by your choice of SQL vs. NoSQL engine. In every case, you have a core data model and an extended data model. If you can design your app to respond to a dynamic data model (e.g. add new fields to screens when fields are added to a DB table or whatever) then your app will respond nicely to changes in both the core and user-defined data model. That's an interesting challenge but not much affected by choice of DB implementation style.\nGood luck!","Q_Score":20,"Tags":"python,database,dynamic,sqlalchemy,redis","A_Id":10792940,"CreationDate":"2012-05-20T11:16:00.000","Title":"Which database model should I use for dynamic modification of entities\/properties during runtime?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am thinking about creating an open source data management web application for various types of data. \nA privileged user must be able to \n\nadd new entity types (for example a 'user' or a 'family') \nadd new properties to entity types (for example 'gender' to 'user')\nremove\/modify entities and properties\n\nThese will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me:\na) How should the data be stored in the database? Should I dynamically add\/remove database tables and\/or columns during runtime?\nI am no database expert. I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add\/remove tables (entities) and\/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database.\nAnyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. 
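A rough sketch of the app-to-DB admin function suggested in the answer above: validate the admin's request against naming and type rules, then hand the resulting DDL to the RDBMS. The naming pattern, type whitelist and SQLAlchemy usage are assumptions.

```python
import re
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://localhost/appdb")   # connection URL is an assumption

ALLOWED_TYPES = {"text": "TEXT", "integer": "INTEGER", "date": "DATE"}
NAME_RE = re.compile(r"^[a-z][a-z0-9_]{0,30}$")

def add_user_field(entity, field, type_name):
    """Admin asked for a new property: enforce the rules, then run the DDL."""
    if not (NAME_RE.match(entity) and NAME_RE.match(field)):
        raise ValueError("bad identifier")
    if type_name not in ALLOWED_TYPES:
        raise ValueError("type not allowed")
    # user-defined tables get a prefix so the core model stays distinguishable;
    # new columns are nullable so existing rows stay valid.
    table = "user_%s" % entity
    ddl = "ALTER TABLE %s ADD COLUMN %s %s" % (table, field, ALLOWED_TYPES[type_name])
    # identifiers were validated above, so plain string formatting is acceptable here
    with engine.begin() as conn:
        conn.execute(text(ddl))
```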
What is the best approach for this kind of dynamic data management?\nb) How to implement this in Python using an ORM or NoSQL?\nIf you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables\/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy?\nIf you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis?\nThanks for your suggestions!\nEdit in response to some comments:\nThe idea is that all instances (\"rows\") of a certain entity (\"table\") share the same set of properties\/attributes (\"columns\"). However, it will be perfectly valid if certain instances have an empty value for certain properties\/attributes.\nBasically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property.\nThe datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a \"Pythonic\", state-of-the-art, scalable, and reliable web application.\nI see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document\/Collection model of Mongo\/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction?\nEdit 2 in response to some answers\/comments:\nFrom most of your answers, I conclude that the dynamic creation\/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design.\nAs exactly this dynamic nature should be the main purpose\/feature of the application, I don't give up on this. From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine.\nExpressed in an abstract way, the application needs to manage\n\nthe data layout, i.e. a \"dynamic list\" of valid entity types and a \"dynamic list\" of properties for each valid entity type\nthe data itself\n\nI am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":4158,"Q_Id":10672939,"Users Score":3,"Answer":"So, if you conceptualize your entities as \"documents,\" then this whole problem maps onto a no-sql solution pretty well. 
As commented, you'll need to have some kind of model layer that sits on top of your document store and performs tasks like validation, and perhaps enforces (or encourages) some kind of schema, because there's no implicit backend requirement that entities in the same collection (parallel to table) share schema.\nAllowing privileged users to change your schema concept (as opposed to just adding fields to individual documents - that's easy to support) will pose a little bit of a challenge - you'll have to handle migrating the existing data to match the new schema automatically.\nReading your edits, Mongo supports the kind of searching\/ordering you're looking for, and will give you the support for \"empty cells\" (documents lacking a particular key) that you need.\nIf I were you (and I happen to be working on a similar, but simpler, product at the moment), I'd stick with Mongo and look into a lightweight web framework like Flask to provide the front-end. You'll be on your own to provide the model, but you won't be fighting against a framework's implicit modeling choices.","Q_Score":20,"Tags":"python,database,dynamic,sqlalchemy,redis","A_Id":10707420,"CreationDate":"2012-05-20T11:16:00.000","Title":"Which database model should I use for dynamic modification of entities\/properties during runtime?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a large SQLServer database on my current hosting site...\nand\nI would like to import it into Google BigData.\nIs there a method for this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":108,"Q_Id":10705572,"Users Score":1,"Answer":"I think that the answer is that there is no general recipe for doing this. In fact, I don't even think it makes sense to have a general recipe ...\nWhat you need to do is to analyse the SQL schemas and work out an appropriate mapping to BigData schemas. Then you figure out how to migrate the data.","Q_Score":0,"Tags":"python,sql-server,bigdata","A_Id":10713425,"CreationDate":"2012-05-22T15:52:00.000","Title":"Porting data from SQLServer to BigData","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to copy an existing neo4j database in Python. I even do not need it for backup, just to play around with while keeping the original database untouched. However, there is nothing about copy\/backup operations in neo4j.py documentation (I am using python embedded binding). 
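A small pymongo sketch of the document-store approach from the accepted answer above: one collection per entity type, documents that may omit properties, and the "property P greater than T, sorted by any property" query. Database, collection and field names are assumptions.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient()
db = client["datamgr"]                       # database name is an assumption

# All instances ("rows") of entity type `user` live in their own collection;
# documents need not share an identical set of fields.
db.user.insert_one({"name": "alice", "gender": "f", "age": 34})
db.user.insert_one({"name": "bob"})          # missing properties are simply absent

# all instances of entity E with property P higher than T, sorted by any property
for doc in db.user.find({"age": {"$gt": 30}}).sort("name", ASCENDING):
    print(doc)
```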
\nCan I just copy the whole folder with the original neo4j database to a folder with a new name?\nOr is there any special method available in neo4j.py?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":310,"Q_Id":10724345,"Users Score":2,"Answer":"Yes,\nyou can copy the whole DB directory when you have cleanly shut down the DB for backup.","Q_Score":1,"Tags":"python,copy,backup,neo4j","A_Id":10736999,"CreationDate":"2012-05-23T16:44:00.000","Title":"Copy neo4j database from python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I get sales data every 15 minutes. The sales transactions are stored in a mysql database. I need to be able to graph this data, and allow the user to re-size the scale of time. The info would be graphed on a django website. \nHow would I go about doing this, and are there any open source tools that I could look into?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3292,"Q_Id":10779244,"Users Score":1,"Answer":"HighCharts have awesome features you can also build pivot charts using that one but they will charge you .You can look over Py Chart also","Q_Score":0,"Tags":"python,mysql,sql","A_Id":10779681,"CreationDate":"2012-05-28T04:10:00.000","Title":"How to graph mysql data in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In google app engine, can I call \"get_or_insert\" from inside a transaction?\nThe reason I ask is because I'm not sure if there is some conflict with having this run its own transaction inside an already running transaction.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":308,"Q_Id":10790381,"Users Score":2,"Answer":"No. get_or_insert is syntactic sugar for a transactional function that fetches or inserts a record. You can implement it yourself trivially, but that will only work if the record you're operating on is in the same entity group as the rest of the entities in the current transaction, or if you have cross-group transactions enabled.","Q_Score":2,"Tags":"python,google-app-engine","A_Id":10791742,"CreationDate":"2012-05-28T21:01:00.000","Title":"In app engine, can I call \"get_or_insert\" from inside a transaction?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"In db.py,I can use a function(func insert) insert data into sqlite correctly.\nNow I want to insert data into sqlite through python-fastcgi, in\n fastcgi (just named post.py ) I can get the request data correctly,but\n when I call db.insert,it gives me internal server error. \n\nI already did chmod 777 slqite.db. 
Anyone know whats problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1289,"Q_Id":10793042,"Users Score":4,"Answer":"Ffinally I found the answer: \nthe sqlite3 library needs write permissions also on the directory that contains it, probably because it needs to create a lockfile.\nTherefor when I use sql to insert data there is no problem, but when I do it through web cgi,fastcgi etc\uff09to insert data there would be an error. \nJust add write permission to the directory.","Q_Score":2,"Tags":"python,sqlite,fastcgi","A_Id":10796243,"CreationDate":"2012-05-29T04:23:00.000","Title":"sqlite3 insert using python and python cgi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have run a few trials and there seems to be some improvement in speed if I set autocommit to False.\nHowever, I am worried that doing one commit at the end of my code, the database rows will not be updated. So, for example, I do several updates to the database, none are committed, does querying the database then give me the old data? Or, does it know it should commit first?\nOr, am I completely mistaken as to what commit actually does?\nNote: I'm using pyodbc and MySQL. Also, the table I'm using are InnoDB, does that make a difference?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1170,"Q_Id":10803012,"Users Score":0,"Answer":"As long as you use the same connection, the database should show you a consistent view on the data, e.g. with all changes made so far in this transaction.\nOnce you commit, the changes will be written to disk and be visible to other (new) transactions and connections.","Q_Score":2,"Tags":"python,mysql,odbc,pyodbc","A_Id":10803049,"CreationDate":"2012-05-29T16:22:00.000","Title":"Does setting autocommit to true take longer than batch committing?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have run a few trials and there seems to be some improvement in speed if I set autocommit to False.\nHowever, I am worried that doing one commit at the end of my code, the database rows will not be updated. So, for example, I do several updates to the database, none are committed, does querying the database then give me the old data? Or, does it know it should commit first?\nOr, am I completely mistaken as to what commit actually does?\nNote: I'm using pyodbc and MySQL. Also, the table I'm using are InnoDB, does that make a difference?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1170,"Q_Id":10803012,"Users Score":1,"Answer":"The default transaction mode for InnoDB is REPEATABLE READ, all the read will be consistent within a transaction. If you insert rows and query them in the same transaction, you will not see the newly inserted row, but they will be stored when you commit the transaction. 
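A short pyodbc sketch of the batch-commit behaviour discussed in that answer; the DSN and table are assumptions.

```python
import pyodbc

# autocommit=False is the DB-API default: changes become visible to other
# connections only after commit(), but this same connection sees them immediately.
conn = pyodbc.connect("DSN=mydb", autocommit=False)   # DSN is an assumption
cur = conn.cursor()

for i in range(1000):
    cur.execute("INSERT INTO measurements (n) VALUES (?)", i)

conn.commit()          # one commit for the whole batch instead of 1000
cur.close()
conn.close()
```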
If you want to see the newly inserted row before you commit the transaction, you can set the isolation level to READ COMMITTED.","Q_Score":2,"Tags":"python,mysql,odbc,pyodbc","A_Id":10803230,"CreationDate":"2012-05-29T16:22:00.000","Title":"Does setting autocommit to true take longer than batch committing?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved. \nThe other two options (that I can think of or have used) is doing the changes on a test database and then (going offline with the app) and importing that sql export. Or, perhaps a riskier option, doing the necessary changes on the production database in real-time, and if anything goes wrong reverting to the back-up.\nHow do you usually handle your database migrations and schema changes?","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":11397,"Q_Id":10826266,"Users Score":1,"Answer":"South isnt used everywhere. Like in my orgainzation we have 3 levels of code testing. One is local dev environment, one is staging dev enviroment, and third is that of a production . \nLocal Dev is on the developers hands where he can play according to his needs. Then comes staging dev which is kept identical to production, ofcourse, until a db change has to be done on the live site, where we do the db changes on staging first, and check if everything is working fine and then we manually change the production db making it identical to staging again.","Q_Score":22,"Tags":"python,mysql,django,migration,django-south","A_Id":10872504,"CreationDate":"2012-05-31T01:12:00.000","Title":"Database migrations on django production","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved. \nThe other two options (that I can think of or have used) is doing the changes on a test database and then (going offline with the app) and importing that sql export. Or, perhaps a riskier option, doing the necessary changes on the production database in real-time, and if anything goes wrong reverting to the back-up.\nHow do you usually handle your database migrations and schema changes?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":11397,"Q_Id":10826266,"Users Score":0,"Answer":"If its not trivial, you should have pre-prod database\/ app that mimic the production one. 
To avoid downtime on production.","Q_Score":22,"Tags":"python,mysql,django,migration,django-south","A_Id":70559647,"CreationDate":"2012-05-31T01:12:00.000","Title":"Database migrations on django production","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Using python(fastcgi),lighttpd,sqlite3 for server\nUpdate data of sqlite3 every weekend.\nThats means, every user get the same data from server before weekend,and server query database for every user's request.\nMy question is:\nIs there any way to cache data for users,server using cache data to response all users before updating data,not query database every time.\nLike using a global variable for a week,until update it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":167,"Q_Id":10843191,"Users Score":1,"Answer":"You can use a cache such as memcached to store it once retrieved.","Q_Score":0,"Tags":"python,sqlite,fastcgi,lighttpd","A_Id":10843435,"CreationDate":"2012-06-01T01:04:00.000","Title":"cache data in python and sqlite3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to write a python program for appending live stock quotes from a csv file to an excel file (which is already open) using xlrd and xlwt.\nThe task is summarised below.\nFrom my stock-broker's application, a csv file is continually being updated on my hard disk.\nI wish to write a program which, when run, would append the new data from csv file to an excel file, which is kept open (I wonder whether it is possible to read & write an open file).\nI wish to keep the file open because I will be having stock-charts in it.\nIs it possible? If yes, how?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1299,"Q_Id":10851726,"Users Score":1,"Answer":"Not directly. xlutils can use xlrd and xlwt to copy a spreadsheet, and appending to a \"to be written\" worksheet is straightforward. I don't think reading the open spreadsheet is a problem -- but xlwt will not write to the open book\/sheet.\nYou might write an Excel VBA macro to draw the graphs. In principle, I think a macro from a command workbook could close your stock workbook, invoke your python code to copy and update, open the new spreadsheet, and maybe run the macro to re-draw the graphs.\nAnother approach is to use matplotlib for the graphs. I'd think a sleep loop could wake up every n seconds, grab the new csv data, append it to your \"big\" csv data, and re-draw the graph. Taking this approach keeps you in python and should make things a lot easier, imho. Disclosure: my Python is better than my VBA.","Q_Score":1,"Tags":"python,xlrd,xlwt","A_Id":10857757,"CreationDate":"2012-06-01T14:01:00.000","Title":"xlrd - append data to already opened workbook","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to save an array of dates. 
I am providing a list of date objects, yet psycopg2 is throwing the above error.\nAny thoughts on how I can work around this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1679,"Q_Id":10854532,"Users Score":1,"Answer":"This is a PostgreSQL error: you need an explicit cast. Add ::date[] after the value or the placeholder.","Q_Score":1,"Tags":"python,django,psycopg2","A_Id":10914900,"CreationDate":"2012-06-01T17:03:00.000","Title":"psycopg2 column is of type date[] but expression is of type text[]","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an open source PHP website and I intend to modify\/translate (mostly constant strings) it so it can be used by Japanese users.\nThe original code is PHP+MySQL+Apache and written in English with charset=utf-8\nI want to change, for example, the word \"login\" into Japanese counterpart \"\u30ed\u30b0\u30a4\u30f3\" etc\nI am not sure whether I have to save the PHP code in utf-8 format (just like Python)?\nI only have experience with Python, so what other issues I should take care of?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":148,"Q_Id":10868473,"Users Score":2,"Answer":"If it's in the file, then yes, you will need to save the file as UTF-8.\nIf it's is in the database, you do not need to save the PHP file as UTF-8.\nIn PHP, strings are basically just binary blobs. You will need to save the file as UTF-8 so the correct bytes are read in. In theory, if you saved the raw bytes in an ANSI file, it would still be output to the browser correctly, just your editor would not display it correctly, and you would run the risk of your editor manipulating it incorrectly.\nAlso, when handling non-ANSI strings, you'll need to be careful to use the multi-byte versions of string manipulation functions (str_replace will likely botch a utf-8 string for example).","Q_Score":2,"Tags":"php,python,mysql,apache,utf-8","A_Id":10868488,"CreationDate":"2012-06-03T06:52:00.000","Title":"PHP for Python Programmers: UTF-8 Issues","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an open source PHP website and I intend to modify\/translate (mostly constant strings) it so it can be used by Japanese users.\nThe original code is PHP+MySQL+Apache and written in English with charset=utf-8\nI want to change, for example, the word \"login\" into Japanese counterpart \"\u30ed\u30b0\u30a4\u30f3\" etc\nI am not sure whether I have to save the PHP code in utf-8 format (just like Python)?\nI only have experience with Python, so what other issues I should take care of?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":148,"Q_Id":10868473,"Users Score":0,"Answer":"If the file contains UTF-8 characters then save it with UTF-8. Otherwise you can save it in any format. 
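A minimal psycopg2 sketch of the explicit ::date[] cast suggested in that answer; the connection string, table and column are assumptions.

```python
import datetime
import psycopg2

conn = psycopg2.connect("dbname=test")        # connection string is an assumption
cur = conn.cursor()

dates = [datetime.date(2012, 6, 1), datetime.date(2012, 6, 2)]

# psycopg2 adapts the Python list to a PostgreSQL array; the ::date[] cast on
# the placeholder tells PostgreSQL the type the column expects.
cur.execute("UPDATE events SET dates = %s::date[] WHERE id = %s", (dates, 1))

conn.commit()
cur.close()
conn.close()
```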
One thing you should be aware of is that the PHP interpreter does not support the UTF-8 byte order mark so make sure you save it without that.","Q_Score":2,"Tags":"php,python,mysql,apache,utf-8","A_Id":10868497,"CreationDate":"2012-06-03T06:52:00.000","Title":"PHP for Python Programmers: UTF-8 Issues","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using CGI scripts, I can run single Python files on my server and then use their output on my website. \nHowever, I have a more complicated program on my computer that I would like to run on the server. It involves several modules I have written myself, and the SQLITE3 module built in Python. The program involves reading from a .db file and then using that data. \nOnce I run my main Python executable from a browser, I get a \"500: Internal server error\" error.\nI just wanted to know whether I need to change something in the permission settings or something for Python files to be allowed to import other Python files, or to read from a .db file. \nI appreciate any guidance, and sorry if I'm unclear about anything I'm new to this site and coding in general.\nFOLLOW UP: So, as I understand, there isn't anything inherently wrong with importing Python files on a server?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":10900319,"Users Score":0,"Answer":"I suggest you look in the log of your server to find out what caused the 500 error.","Q_Score":0,"Tags":"python,sqlite,web","A_Id":10900387,"CreationDate":"2012-06-05T15:34:00.000","Title":"Importing Python files into each other on a web server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It looks like this is what e.g. MongoEngine does. The goal is to have model files be able to access the db without having to explicitly pass around the context.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":877,"Q_Id":10906477,"Users Score":2,"Answer":"Pyramid has nothing to do with it. The global needs to handle whatever mechanism the WSGI server is using to serve your application.\nFor instance, most servers use a separate thread per request, so your global variable needs to be threadsafe. gunicorn and gevent are served using greenlets, which is a different mechanic.\nA lot of engines\/orms support a threadlocal connection. This will allow you to access your connection as if it were a global variable, but it is a different variable in each thread. You just have to make sure to close the connection when the request is complete to avoid that connection spilling over into the next request in the same thread. 
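One way to get the thread-local session plus per-request cleanup described above is SQLAlchemy's scoped_session combined with a Pyramid tween; a sketch, with the engine URL and module path as assumptions.

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite:///app.db")               # URL is an assumption
# Looks like a global, but scoped_session hands each thread its own Session.
DBSession = scoped_session(sessionmaker(bind=engine))

def db_cleanup_tween_factory(handler, registry):
    def tween(request):
        try:
            return handler(request)
        finally:
            DBSession.remove()   # don't leak this request's session into the next one
    return tween

# in your Pyramid configuration (config is a pyramid.config.Configurator):
# config.add_tween("myapp.db.db_cleanup_tween_factory")
```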
This can be done easily using a Pyramid tween or several other patterns illustrated in the cookbook.","Q_Score":2,"Tags":"python,pyramid","A_Id":10907158,"CreationDate":"2012-06-05T23:41:00.000","Title":"In Pyramid, is it safe to have a python global variable that stores the db connection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"is there a difference if i use \"\"\"..\"\"\" in the sql of cusror.execute. Even if there is any slight difference please tell","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":10910246,"Users Score":0,"Answer":"No, other than the string can contain newlines.","Q_Score":1,"Tags":"python,sql,string,mysql-python","A_Id":10910268,"CreationDate":"2012-06-06T07:55:00.000","Title":"What is the use of \"\"\"...\"\"\" in python instead of \"...\" or '...', especially in MySQLdb cursor.execute","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track. \nI am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?","AnswerCount":4,"Available Count":2,"Score":0.2449186624,"is_accepted":false,"ViewCount":8119,"Q_Id":10920423,"Users Score":5,"Answer":"Usually you are safe, because *.pyc are regenerated if the corresponding *.py changes its content.\nIt is problematic if you delete a *.py file and you are still importing from it in another file. In this case you are importing from the *.pyc file if it is existing. But this will be a bug in your code and is not really related to your mercurial workflow. \nConclusion: Every famous Python library is ignoring their *.pyc files, just do it ;)","Q_Score":7,"Tags":"python,django,mercurial,pyc","A_Id":10920888,"CreationDate":"2012-06-06T18:58:00.000","Title":"What to do with pyc files when Django or python is used with Mercurial?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track. \nI am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. 
What is the standard way around this?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":8119,"Q_Id":10920423,"Users Score":0,"Answer":"Sure if you have a .pyc file from an older version of the same module python will use that. Many times I have wondered why my program wasn't reflecting the changes I made, and realized it was because I had old pyc files.\nIf this means that .pyc are not reflecting your current version then yes you will have to delete all .pyc files.\nIf you are on linux you can find . -name *.pyc -delete","Q_Score":7,"Tags":"python,django,mercurial,pyc","A_Id":10920511,"CreationDate":"2012-06-06T18:58:00.000","Title":"What to do with pyc files when Django or python is used with Mercurial?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a sqlite3 database that I created from Python (2.7) on a local machine, and am trying to copy it to a remote location. I ran \"sqlite3 posts.db .backup posts.db.bak\" to create a copy (I can use the original and this new copy just fine). But when I move the copied file to the remote location, suddenly every command gives me: sqlite3.OperationalError: database is locked. How do I safely move a sqlite3 database so that I can use it after the move?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":624,"Q_Id":10922394,"Users Score":0,"Answer":"You did a .backup on the source system, but you don't mention doing a .restore on the target system. Please clarify.\nYou don't mention what versions of the sqlite3 executable you have on the source and target systems.\nYou don't mention how you transferred the .bak file from the source to the target.\nWas the source db being accessed by another process when you did the .backup?\nHow big is the file? Have you considered zip\/copy\/unzip instead of backup\/copy\/restore?","Q_Score":0,"Tags":"python,sqlite,copy","A_Id":10922927,"CreationDate":"2012-06-06T21:13:00.000","Title":"How to safely move an SQLite3 database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a desktop application that send POST requests to a server where a django app store the results. DB server and web server are not on the same machine and it happens that sometimes the connectivity is lost for a very short time but results in a connection error on some requests:\n\nOperationalError: (2003, \"Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (110)\")\n\nOn a \"normal\" website I guess you'd not worry too much: the browser display a 500 error page and the visitor tries again later. \nIn my case loosing info posted by a request is not an option and I am wondering how to handle this? I'd try to catch on this exception, wait for the connectivity to come back (lag is not a problem) and then continue the process. But as the exception can occur about anywhere in the code I'm a bit stuck on how to proceed.\nThanks for your advice.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2536,"Q_Id":10930459,"Users Score":1,"Answer":"You could use a middleware with a process_view method and a try \/ except wrapping your call. 
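A rough sketch of the middleware idea in that answer, not a drop-in fix: the class name and retry policy are invented, and OperationalError is imported from MySQLdb because that is the exception the question quotes (newer Django re-exports it as django.db.OperationalError).

import time

from MySQLdb import OperationalError

class RetryDbErrorMiddleware(object):
    """Wrap the view call and retry when the MySQL connection drops,
    instead of immediately answering the desktop client with a 500."""

    def process_view(self, request, view_func, view_args, view_kwargs):
        for attempt in range(4):
            try:
                return view_func(request, *view_args, **view_kwargs)
            except OperationalError:
                time.sleep(2 ** attempt)   # lag is not a problem, so back off and retry
        return None   # fall through; Django calls the view once more and fails normally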
\nOr you could decorate your views and wrap the call there.\nOr you could use class based views with a base class that has a method decorator on its dispatch method, or an overriden.dispatch.\nReally, you have plenty of solutions.\nNow, as said above, you might want to modify your Desktop application too!","Q_Score":1,"Tags":"python,mysql,django","A_Id":10935789,"CreationDate":"2012-06-07T11:00:00.000","Title":"Django: how to properly handle a database connection error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I cant find \"best\" solution for very simple problem(or not very)\nHave classical set of data: posts that attached to users, comments that attached to post and to user.\nNow i can't decide how to build scheme\/classes\nOn way is to store user_id inside comments and inside.\nBut what happens when i have 200 comments on page?\nOr when i have N posts on page?\nI mean it should be 200 additional requests to database to display user info(such as name,avatar)\nAnother solution is to embed user data into each comment and each post.\nBut first -> it is huge overhead, second -> model system is getting corrupted(using mongoalchemy), third-> user can change his info(like avatar). And what then? As i understand update operation on huge collections of comments or posts is not simple operation...\nWhat would you suggest? Is 200 requests per page to mongodb is OK(must aim for performance)?\nOr may be I am just missing something...","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":919,"Q_Id":10931889,"Users Score":1,"Answer":"What I would do with mongodb would be to embed the user id into the comments (which are part of the structure of the \"post\" document).\nThree simple hints for better performances:\n1) Make sure to ensure an index on the user_id\n2) Use comment pagination method to avoid querying 200 times the database\n3) Caching is your friend","Q_Score":3,"Tags":"python,mongodb,mongoalchemy,nosql","A_Id":10932004,"CreationDate":"2012-06-07T12:34:00.000","Title":"MongoDB: Embedded users into comments","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am making a little add-on for a game, and it needs to store information on a player:\n\nusername\nip-address\nlocation in game\na list of alternate user names that have came from that ip or alternate ip addresses that come from that user name\n\nI read an article a while ago that said that unless I am storing a large amount of information that can not be held in ram, that I should not use a database. So I tried using the shelve module in python, but I'm not sure if that is a good idea.\nWhen do you guys think it is a good idea to use a database, and when it better to store information in another way , also what are some other ways to store information besides databases and flat file databases.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5782,"Q_Id":10957877,"Users Score":7,"Answer":"Assuming by 'database' you mean 'relational database' - even the embedded databases like SQLite come with some overhead compared to a plain text file. But, sometimes that overhead is worth it compared to rolling your own. 
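To make the shelve-versus-database trade-off in this record concrete, a small sketch with made-up player data; both snippets store the same record, the second just pays a little SQL ceremony up front.

import shelve
import sqlite3

# Key/value style: fine while every lookup is by a single key.
db = shelve.open("players.shelf")
db["bob"] = {"ip": "10.0.0.5", "location": (12, 7), "aliases": ["bobby"]}
print(db["bob"]["ip"])
db.close()

# Relational style: worth the overhead once you need queries across fields.
conn = sqlite3.connect("players.db")
conn.execute("CREATE TABLE IF NOT EXISTS players (name TEXT PRIMARY KEY, ip TEXT)")
conn.execute("INSERT OR REPLACE INTO players VALUES (?, ?)", ("bob", "10.0.0.5"))
print(conn.execute("SELECT name FROM players WHERE ip = ?", ("10.0.0.5",)).fetchone())
conn.commit()
conn.close()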
\nThe biggest question you need to ask is whether you are storing relational data - whether things like normalisation and SQL queries make any sense at all. If you need to lookup data across multiple tables using joins, you should certainly use a relational database - that's what they're for. On the other hand, if all you need to do is lookup into one table based on its primary key, you probably want a CSV file. Pickle and shelve are useful if what you're persisting is the objects you use in your program - if you can just add the relevant magic methods to your existing classes and expect it all to make sense.\nCertainly \"you shouldn't use databases unless you have a lot of data\" isn't the best advice - the amount of data goes more to what database you might use if you are using one. SQLite, for example, wouldn't be suitable for something the size of Stackoverflow - but, MySQL or Postgres would almost certainly be overkill for something with five users.","Q_Score":14,"Tags":"python,database,flat-file","A_Id":10957953,"CreationDate":"2012-06-09T02:16:00.000","Title":"When is it appropriate to use a database , in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking around in order to get an answer what is the max limit of results I can have from a GQL query on Ndb on Google AppEngine. I am using an implementation with cursors but it will be much faster if I retrieve them all at once.","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1106,"Q_Id":10968439,"Users Score":9,"Answer":"This depends on lots of things like the size of the entities and the number of values that need to look up in the index, so it's best to benchmark it for your specific application. Also beware that if you find that on a sunny day it takes e.g. 10 seconds to load all your items, that probably means that some small fraction of your queries will run into a timeout due to natural variations in datastore performance, and occasionally your app will hit the timeout all the time when the datastore is having a bad day (it happens).","Q_Score":5,"Tags":"python,google-app-engine,gql,app-engine-ndb","A_Id":10974037,"CreationDate":"2012-06-10T11:51:00.000","Title":"What is the Google Appengine Ndb GQL query max limit?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am looking around in order to get an answer what is the max limit of results I can have from a GQL query on Ndb on Google AppEngine. 
I am using an implementation with cursors but it will be much faster if I retrieve them all at once.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1106,"Q_Id":10968439,"Users Score":7,"Answer":"Basically you don't have the old limit of 1000 entities per query anymore, but consider using a reasonable limit, because you can hit the time out error and it's better to get them in batches so users won't wait during load time.","Q_Score":5,"Tags":"python,google-app-engine,gql,app-engine-ndb","A_Id":10969575,"CreationDate":"2012-06-10T11:51:00.000","Title":"What is the Google Appengine Ndb GQL query max limit?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"When I say 'equivalent', I mean an ORM that allows for the same work-style. That is;\n\nSetting up a database\nDispensing and editing 'beans' (table rows) as if the table was already ready, while the table is being created behind the scenes\nReviewing, indexing and polishing the table structure before production\n\nThanks for any leads","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":700,"Q_Id":10987162,"Users Score":0,"Answer":"Short answer, there is a proof-of-concept called PyBean as answered by Gabor de Mooij, but it barely offers any features and cannot be used. There are no other Python libraries that work like PyBean.","Q_Score":1,"Tags":"php,python,mysql,orm,redbean","A_Id":13714374,"CreationDate":"2012-06-11T20:35:00.000","Title":"Is there a RedBeanPHP equivalent for Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here's the scenario:\nI have a url in a MySQL database that contains Unicode. The database uses the Latin-1 encoding. Now, when I read the record from MySQL using Python, it gets converted to Unicode because all strings follow the Unicode format in Python.\nI want to write the URL into a text file -- to do so, it needs to be converted to bytes (UTF-8). This was done successfully.\nNow, given the URLS that are in the text file, I want to query the db for these SAME urls in the database. I do so by calling the source command to execute a few select queries.\nResult: I get no matches.\nI suspect that the problem stems from my conversion to UTF-8, which somehow is messing up the symbols.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1752,"Q_Id":10990496,"Users Score":0,"Answer":"You most probably need to set your mysql shell client to use utf8.\nYou can set it either in mysql shell directly by running set character set utf8.\nOr by adding default-character-set=utf8 to your ~\/.my.cnf.","Q_Score":0,"Tags":"python,mysql,unicode,encoding,utf-8","A_Id":10992555,"CreationDate":"2012-06-12T04:22:00.000","Title":"Unicode to UTF-8 encoding issue when importing SQL text file into MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way of editing and save a specified cell in Excel 2010 .xlsx file from Node.JS. 
I realize, that maybe there are no production-ready solutions for NodeJS at this time. However, NodeJS supports C++ libraries, so could you suggest me any suitable lib compatible with Node?\nAlso, I had an idea to process this task via Python (xlrd, xlwt) and call it with NodeJS. What do you think of this? Are there any more efficient methods to edit XLSX from NodeJS? Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3505,"Q_Id":11007460,"Users Score":0,"Answer":"Basically you have 2 possibilities:\n\nnode.js does not support C++ libraries but it is possible to write bindings for node.js that interact with a C\/C++ library. So you need to get your feet wet on writing a C++ addon for the V8 (the JavaScript engine behind node.js)\nfind a command line program which does what you want to do. (It does not need to be Python.) You could call this from your JavaScript code by using a child-process.\n\nFirst option is more work, but would be result in faster executing time (when done right). Second possibility is easier to realise.\nP.S.: To many question for one question. I've no idea about the xls-whatever stuff, besides it's \"actually\" only XML.","Q_Score":0,"Tags":"c++,python,excel,node.js,read-write","A_Id":11008175,"CreationDate":"2012-06-13T02:27:00.000","Title":"Node.JS\/C++\/Python - edit Excel .xlsx file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Okay., We have Rails webapp which stores data in a mysql data base. The table design was not read efficient. So we resorted to creating a separate set of read only tables in mysql and made all our internal API calls use that tables for read. We used callbacks to keep the data in sync between both the set of tables. Now we have a another Python app which is going to mess with the same database - now how do we proceed maintaining the data integrity? \nActive record callbacks can't be used anymore. We know we can do it with triggers. But is there a any other elegant way to do this? How to people achieve to maintain the integrity of such derived data.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":296,"Q_Id":11013976,"Users Score":1,"Answer":"Yes, refactor the code to put a data web service in front of the database and let the Ruby and Python apps talk to the service. Let it maintain all integrity and business rules. \n\"Don't Repeat Yourself\" - it's a good rule.","Q_Score":2,"Tags":"python,mysql,ruby-on-rails,database,triggers","A_Id":11014025,"CreationDate":"2012-06-13T11:31:00.000","Title":"Maintaining data integrity in mysql when different applications are accessing it","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an excel spreadsheet (version 1997-2003) and another nonspecific database file (a .csy file, I am assuming it can be parsed as a text file as that is what it appears to be). I need to take information from both sheets, match them up, put them on one line, and print it to a text file. 
I was going to use python for this as usuing the python plugins for Visual Studio 2010 alongside the xlrd package seems to be the best way I could find for excel files, and I'd just use default packages in python for the other file. \nWould python be a good choice of language to both learn and program this script in? I am not familiar with scripting languages other then a little bit of VBS, so any language will be a learning experience for me. \nConverting the xls to csv is not an option, there are too many excel files, and the wonky formatting of them would make fishing through the csv more difficult then using xlrd.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":2962,"Q_Id":11020919,"Users Score":-1,"Answer":"Python is beginner-friendly and is good with string manipulation so it's a good choice. I have no idea how easy awk is to learn without programming experience but I would consider that as it's more or less optimized for processing csv's.","Q_Score":1,"Tags":"python,excel,file-io,scripting","A_Id":11020968,"CreationDate":"2012-06-13T18:16:00.000","Title":"First time writing a script, not sure what language to use (parsing excel and other files)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are writing an inventory system and I have some questions about sqlalchemy (postgresql) and transactions\/sessions. This is a web app using TG2, not sure this matters but to much info is never a bad. \n\nHow can make sure that when changing inventory qty's that i don't run into race conditions. If i understand it correctly if user on is going to decrement inventory on an item to say 0 and user two is also trying to decrement the inventory to 0 then if user 1s session hasn't been committed yet then user two starting inventory number is going to be the same as user one resulting in a race condition when both commit, one overwriting the other instead of having a compound effect.\nIf i wanted to use postgresql sequence for things like order\/invoice numbers how can I get\/set next values from sqlalchemy without running into race conditions?\n\nEDIT: I think i found the solution i need to use with_lockmode, using for update or for share. I am going to leave open for more answers or for others to correct me if I am mistaken.\nTIA","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2935,"Q_Id":11033892,"Users Score":3,"Answer":"If two transactions try to set the same value at the same time one of them will fail. The one that loses will need error handling. For your particular example you will want to query for the number of parts and update the number of parts in the same transaction.\nThere is no race condition on sequence numbers. 
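A hedged sketch of the SELECT ... FOR UPDATE locking the question's edit settles on, written with the 0.7-era with_lockmode API; the model, connection string, and quantities are invented.

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class InventoryItem(Base):                 # invented model for illustration
    __tablename__ = "inventory"
    id = Column(Integer, primary_key=True)
    sku = Column(String, unique=True)
    qty = Column(Integer, nullable=False)

engine = create_engine("postgresql://user:secret@localhost/shop")   # made-up DSN
Session = sessionmaker(bind=engine)
session = Session()

# with_lockmode("update") emits SELECT ... FOR UPDATE, so a second transaction
# decrementing the same row blocks until this one commits, instead of both
# reading the same starting quantity. (Newer SQLAlchemy spells it with_for_update().)
item = session.query(InventoryItem).filter_by(sku="ABC-123").with_lockmode("update").one()
item.qty -= 1
session.commit()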
Save a record that uses a sequence number the DB will automatically assign it.\nEdit:\nNote as Limscoder points out you need to set the isolation level to Repeatable Read.","Q_Score":2,"Tags":"python,postgresql,web-applications,sqlalchemy,turbogears2","A_Id":11034199,"CreationDate":"2012-06-14T13:15:00.000","Title":"SQLAlchemy(Postgresql) - Race Conditions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using mongoexport to export mongodb data which also has Image data in Binary format.\nExport is done in csv format.\nI tried to read image data from csv file into python and tried to store as in Image File in .jpg format on disk.\nBut it seems that, data is corrupt and image is not getting stored.\nHas anybody come across such situation or resolved similar thing ?\nThanks,","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":930,"Q_Id":11055921,"Users Score":-1,"Answer":"Depending how you stored the data, it may be prefixed with 4 bytes of size. Are the corrupt exports 4 bytes\/GridFS chunk longer than you'd expect?","Q_Score":1,"Tags":"python,image,mongodb,csv","A_Id":11058611,"CreationDate":"2012-06-15T18:01:00.000","Title":"Can mongoexport be used to export images stored in binary format in mongodb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using mongoexport to export mongodb data which also has Image data in Binary format.\nExport is done in csv format.\nI tried to read image data from csv file into python and tried to store as in Image File in .jpg format on disk.\nBut it seems that, data is corrupt and image is not getting stored.\nHas anybody come across such situation or resolved similar thing ?\nThanks,","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":930,"Q_Id":11055921,"Users Score":0,"Answer":"One thing to watch out for is an arbitrary 2MB BSON Object size limit in several of 10gen's implementations. You might have to denormalize your image data and store it across multiple objects.","Q_Score":1,"Tags":"python,image,mongodb,csv","A_Id":11056533,"CreationDate":"2012-06-15T18:01:00.000","Title":"Can mongoexport be used to export images stored in binary format in mongodb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to manipulate a large amount of numerical\/textual data, say total of 10 billion entries which can be theoretically organized as 1000 of 10000*1000 tables.\nMost calculations need to be performed on a small subset of data each time (specific rows or columns), such that I don't need all the data at once.\nTherefore, I am intersted to store the data in some kind of database so I can easily search the database, retrieve multiple rows\/columns matching defined criteria, make some calculations and update the database.The database should be accessible with both Python and Matlab, where I use Python mainly for creating raw data and putting it into database and Matlab for the data processing. \nThe whole project runs on Windows 7. 
What is the best and mainly the simplest database I can use for this purpose? I have no prior experience with databases at all.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3289,"Q_Id":11058409,"Users Score":3,"Answer":"IMO simply use the file system with a file format that can you read\/write in both MATLAB and Python. Databases usually imply a relational model (excluding the No-SQL ones), which would only add complexity here.\nBeing more MATLAB-inclined, you can directly manipulate MAT-files in SciPy with scipy.io.loadmat\/scipy.io.savemat functions. This is the native MATLAB format for storing data, with save\/load functions.\nUnless of course you really need databases, then ignore my answer :)","Q_Score":4,"Tags":"python,database,matlab","A_Id":11058566,"CreationDate":"2012-06-15T21:27:00.000","Title":"What the simplest database to use with both Python and Matlab?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a search engine that I can point to a column in my database that supports advanced functions like spelling correction and \"close to\" results. \nRight now I'm just using \nSELECT from where LIKE %% \nand I'm missing some results particularly when users misspell items.\nI've written some code to fix misspellings by running it through a spellchecker but thought there may be a better out-of-the box option to use. Google turns up lots of options for indexing and searching the entire site where I really just need to index and search this one table column.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":153,"Q_Id":11082229,"Users Score":3,"Answer":"Apache Solr is a great Search Engine that provides (1) N-Gram Indexing (search for not just complete strings but also for partial substrings, this helps greatly in getting similar results) (2) Provides an out of box Spell Corrector based on distance metric\/edit distance (which will help you in getting a \"did you mean chicago\" when the user types in chicaog) (3) It provides you with a Fuzzy Search option out of box (Fuzzy Searches helps you in getting close matches for your query, for an example if a user types in GA-123 he would obtain VMDEO-123 as a result) (4) Solr also provides you with \"More Like This\" component which would help you out like the above options.\nSolr (based on Lucene Search Library) is open source and is slowly rising to become the de-facto in the Search (Vertical) Industry and is excellent for database searches (As you spoke about indexing a database column, which is a cakewalk for Solr). Lucene and Solr are used by many Fortune 500 companies as well as Internet Giants.\nSphinx Search Engine is also great (I love it too as it has very low foot print for everything & is C++ based) but to put it simply Solr is much more popular.\nNow Python support and API's are available for both. However Sphinx is an exe and Solr is an HTTP. So for Solr you simply have to call the Solr URL from your python program which would return results that you can send to your front end for rendering, as simple as that)\nSo far so good. Coming to your question:\nFirst you should ask yourself that whether do you really require a Search Engine? 
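As a concrete version of the earlier remark that for Solr you simply call its URL from your Python program, a hedged sketch; the core URL, request handler, field name, and spellcheck setup are all assumptions about a local Solr install.

import json
import urllib
import urllib2

params = urllib.urlencode({
    "q": "name:chicago~",        # trailing ~ asks Lucene for a fuzzy match
    "spellcheck": "true",        # only useful if a spellcheck component is configured
    "wt": "json",
})
response = urllib2.urlopen("http://localhost:8983/solr/select?" + params)
results = json.loads(response.read())
for doc in results["response"]["docs"]:
    print(doc.get("name"))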
Search Engines are good for all use cases mentioned above but are really made for searching across huge amounts of full text data or million's of rows of tabular data. The Algorithms like Did you Mean, Similar Records, Spell Correctors etc. can be written on top. Before zero-ing on Solr please also search Google for (1) Peter Norvig Spell Corrector & (2) N-Gram Indexing. Possibility is that just by writing few lines of code you may get really the stuff that you were looking out for.\nI leave it up to you to decide :)","Q_Score":3,"Tags":"python,mysql,database,search","A_Id":11088110,"CreationDate":"2012-06-18T11:52:00.000","Title":"Search Engine for a single DB column","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a search engine that I can point to a column in my database that supports advanced functions like spelling correction and \"close to\" results. \nRight now I'm just using \nSELECT from
    where LIKE %% \nand I'm missing some results particularly when users misspell items.\nI've written some code to fix misspellings by running it through a spellchecker but thought there may be a better out-of-the box option to use. Google turns up lots of options for indexing and searching the entire site where I really just need to index and search this one table column.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":153,"Q_Id":11082229,"Users Score":1,"Answer":"I would suggest looking into open source technologies like Sphynx Search.","Q_Score":3,"Tags":"python,mysql,database,search","A_Id":11087295,"CreationDate":"2012-06-18T11:52:00.000","Title":"Search Engine for a single DB column","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto increments itself to the next number. My issue is when a record is deleted I would like the other records to shift a number up and cant find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. \nFor instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?\nThanks in Advance!","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2701,"Q_Id":11100997,"Users Score":0,"Answer":"Instead of deleting orders - you should create a field which is a boolean (call it whatever you like - for example, deleted) and set this field to 1 for \"deleted\" orders.\nMessing with a serial field (which is what your auto-increment field is called in postgres) will lead to problems later; especially if you have foreign keys and relationships with tables.\nNot only will it impact your database server's performance; it also will impact on your business as eventually you will have two orders floating around that have the same order number; even though you have \"deleted\" one from the database, the order number may already be referenced somewhere else - like in a receipt your printed for your customer.","Q_Score":1,"Tags":"python,django,postgresql","A_Id":11101114,"CreationDate":"2012-06-19T12:32:00.000","Title":"Auto Increment Field in Django\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto increments itself to the next number. My issue is when a record is deleted I would like the other records to shift a number up and cant find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. \nFor instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. 
Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?\nThanks in Advance!","AnswerCount":7,"Available Count":4,"Score":0.1137907297,"is_accepted":false,"ViewCount":2701,"Q_Id":11100997,"Users Score":4,"Answer":"You are going to have to implement that feature yourself, I doubt very much that a relational db will do that for you, and for good reason: it means updating a potentially large number of rows when one row is deleted.\nAre you sure you need this? It could become expensive.","Q_Score":1,"Tags":"python,django,postgresql","A_Id":11101064,"CreationDate":"2012-06-19T12:32:00.000","Title":"Auto Increment Field in Django\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto increments itself to the next number. My issue is when a record is deleted I would like the other records to shift a number up and cant find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. \nFor instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?\nThanks in Advance!","AnswerCount":7,"Available Count":4,"Score":-0.0285636566,"is_accepted":false,"ViewCount":2701,"Q_Id":11100997,"Users Score":-1,"Answer":"Try to set the value with type sequence in postgres using pgadmin.","Q_Score":1,"Tags":"python,django,postgresql","A_Id":11101032,"CreationDate":"2012-06-19T12:32:00.000","Title":"Auto Increment Field in Django\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto increments itself to the next number. My issue is when a record is deleted I would like the other records to shift a number up and cant find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. \nFor instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?\nThanks in Advance!","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2701,"Q_Id":11100997,"Users Score":0,"Answer":"I came across this looking for something else and wanted to point something out:\nBy storing the order in a field in the same table as your data, you lose data integrity, or if you index it things will get very complicated if you hit a conflict. 
In other words, it's very easy to have a bug (or something else) give you two 3's, a missing 4, and other weird things can happen. I inherited a project with a manual sort order that was critical to the application (there were other issues as well) and this was constantly an issue, with just 200-300 items.\nThe right way to handle a manual sort order is to have a separate table to manage it and sort with a join. This way your Order table will have exactly 10 entries with just it's PK (the order number) and a foreign key relationship to the ID of the items you want to sort. Deleted items just won't have a reference anymore.\nYou can continue to sort on delete similar to how you're doing it now, you'll just be updating the Order model's FK to list instead of iterating through and re-writing all your items. Much more efficient.\nThis will scale up to millions of manually sorted items easily. But rather than using auto-incremented ints, you would want to give each item a random order id in between the two items you want to place it between and keep plenty of space (few hundred thousand should do it) so you can arbitrarily re-sort them.\nI see you mentioned that you've only got 10 rows here, but designing your architecture to scale well the first time, as a practice, will save you headaches down the road, and once you're in the habit of it, it won't really take you any more time.","Q_Score":1,"Tags":"python,django,postgresql","A_Id":15074698,"CreationDate":"2012-06-19T12:32:00.000","Title":"Auto Increment Field in Django\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new with SQL\/Python.\nI was wondering if there is a way for me to sort or categorize expense items into three primary categories.\nThat is I have a 56,000 row list with about 100+ different expense categories. They vary from things like Payroll, Credit Card Pmt, telephone, etc.\nI would like to put them into three categories, for the sake of analysis.\nI know I could do a GIANT IF statement in Excel, but that would be really time consuming, based on the fact that there are 100+ sub categories.\nIs there any way to expedite the process with Python or even in Excel?\nAlso, I don't know if this is material or not, but I am preparing this file to be uploaded to a SQL database.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":11121395,"Users Score":0,"Answer":"You should create a table called something like ExpenseCategories, with the columns ExpenseCategory, PrimaryCategory.\nThis table would have one row for each expense category (which you can enforce with a constraint if you like). You would then join this table with your existing data in SQL.\nBy the way, in Excel, you could do this with a vlookup() rather than an if(). The vlookup() is analogous to using a lookup table in SQL. 
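A sqlite3 sketch of the lookup-table idea from that answer; every table, column, and category name below is invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE expenses (item TEXT, amount REAL, expense_category TEXT);
    CREATE TABLE expense_categories (expense_category TEXT PRIMARY KEY,
                                     primary_category TEXT NOT NULL);
    INSERT INTO expenses VALUES ('AT&T June bill', 120.0, 'Telephone');
    INSERT INTO expense_categories VALUES ('Telephone', 'Operating');
""")

# One row per sub-category in the lookup table; the join replaces the giant IF.
rows = conn.execute("""
    SELECT e.item, e.amount, c.primary_category
    FROM expenses e
    JOIN expense_categories c ON c.expense_category = e.expense_category
""").fetchall()
print(rows)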
The equivalent of an if() would be a giant case statement, which is another possibility.","Q_Score":0,"Tags":"python,sql","A_Id":11121498,"CreationDate":"2012-06-20T14:05:00.000","Title":"Method for Sorting a list of expense categories into specific categories","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability. \nAs an example, I've been working on text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms leads to massive blocks of object creation code.\nI think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could parsed into an object with relative ease. \nWhat formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML. \nFrom what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. \nJSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally.\nSpeed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use.\nI've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4619,"Q_Id":11129844,"Users Score":0,"Answer":"I would be tempted to research a little into some GUI that could output graphviz (DOT format) with annotations, so you could create the rooms and links between them (a sort of graph). 
Then later, you might want another format to support heftier info.\nBut should make it easy to create maps, links between rooms (containing items or traps etc..), and you could use common libraries to produce graphics of the maps in png or something.\nJust a random idea off the top of my head - feel free to ignore!","Q_Score":5,"Tags":"python","A_Id":11130087,"CreationDate":"2012-06-20T23:59:00.000","Title":"Optimal format for simple data storage in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability. \nAs an example, I've been working on text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms leads to massive blocks of object creation code.\nI think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could parsed into an object with relative ease. \nWhat formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML. \nFrom what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. \nJSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally.\nSpeed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use.\nI've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?","AnswerCount":8,"Available Count":3,"Score":0.1243530018,"is_accepted":false,"ViewCount":4619,"Q_Id":11129844,"Users Score":5,"Answer":"Though there are good answers here already, I would simply recommend JSON for your purposes for the sole reason that since you're a new programmer it will be the most straightforward to read and translate as it has the most direct mapping to native Python data types (lists [] and dictionaries {}). 
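A tiny illustration of that mapping, with an invented room layout kept inline so it runs as-is; in practice the string would live in a .json file next to the game.

import json

raw = """
{"rooms": [
    {"name": "Cellar", "exits": ["Stairs"], "items": ["lamp", "rope"]},
    {"name": "Stairs", "exits": ["Cellar", "Hall"], "items": []}
]}
"""

# Each room parses straight into a dict; index them by name for easy lookup.
rooms = dict((room["name"], room) for room in json.loads(raw)["rooms"])
print(rooms["Cellar"]["items"])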
Readability goes a long way and is one of the tenets of Python programming.","Q_Score":5,"Tags":"python","A_Id":11129974,"CreationDate":"2012-06-20T23:59:00.000","Title":"Optimal format for simple data storage in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability. \nAs an example, I've been working on text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms leads to massive blocks of object creation code.\nI think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could parsed into an object with relative ease. \nWhat formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML. \nFrom what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. \nJSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally.\nSpeed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use.\nI've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?","AnswerCount":8,"Available Count":3,"Score":0.024994793,"is_accepted":false,"ViewCount":4619,"Q_Id":11129844,"Users Score":1,"Answer":"If you want editability, YAML is the best option of the ones you've named, because it doesn't have <> or {} required delimiters.","Q_Score":5,"Tags":"python","A_Id":11129853,"CreationDate":"2012-06-20T23:59:00.000","Title":"Optimal format for simple data storage in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to get some understanding on the question that I was pretty sure was clear for me. 
Is there any way to create table using psycopg2 or any other python Postgres database adapter with the name corresponding to the .csv file and (probably the most important) with columns that are specified in the .csv file.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2067,"Q_Id":11130261,"Users Score":1,"Answer":"I'll leave you to look at the psycopg2 library properly - this is off the top of my head (not had to use it for a while, but IIRC the documentation is ample).\nThe steps are:\n\nRead column names from CSV file\nCreate \"CREATE TABLE whatever\" ( ... )\nMaybe INSERT data\nimport os.path\nmy_csv_file = '\/home\/somewhere\/file.csv'\ntable_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]\ncols = next(csv.reader(open(my_csv_file)))\n\nYou can go from there...\nCreate a SQL query (possibly using a templating engine for the fields and then issue the insert if needs be)","Q_Score":3,"Tags":"python,postgresql,psycopg2","A_Id":11130568,"CreationDate":"2012-06-21T00:59:00.000","Title":"Dynamically creating table from csv file using psycopg2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use PyODBC to connect to an Access database. It works fine on Windows, but running it under OS X I get\u2014\n\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"access.py\", line 10, in init\n self.connection = connect(driver='{Microsoft Access Driver (.mdb)}', dbq=path, pwd=password)\n pyodbc.Error: ('00000', '[00000] [iODBC][Driver Manager]dlopen({Microsoft Access Driver (.mdb)}, 6): image not found (0) (SQLDriverConnect)')\n\nDo I have to install something else? Have I installed PyODBC wrong?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":1896,"Q_Id":11154965,"Users Score":3,"Answer":"pyodbc allows connecting to ODBC data sources, but it does not actually implements drivers. \nI'm not familiar with OS X, but on Linux ODBC sources are typically described in odbcinst.ini file (location is determined by ODBCSYSINI variable). \nYou will need to install Microsoft Access ODBC driver for OS X.","Q_Score":1,"Tags":"python,ms-access,pyodbc","A_Id":11155551,"CreationDate":"2012-06-22T11:03:00.000","Title":"PyODBC \"Image not found (0) (SQLDriverConnect)\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to set up Flask and SQLAlchemy on Windows but I've been running into issues.\nI've been using Flask-SQLAlchemy along with PostgreSQL 9.1.4 (32 bit) and the Psycopg2 package. Here are the relevant bits of code, I created a basic User model just to test that my DB is connecting, and committing.\nThe three bits of code would come from the __init__.py file of my application, the models.py file and my settings.py file.\nWhen I try opening up my interactive prompt and try the code in the following link out I get a ProgrammingError exception (details in link).\nWhat could be causing this? 
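A fuller, runnable take on the CSV-to-CREATE-TABLE steps sketched in the answer above; the file path, connection string, and the decision to make every column TEXT are assumptions, and identifier quoting is deliberately naive.

import csv
import os.path
import psycopg2

my_csv_file = "/home/somewhere/file.csv"                       # made-up path
table_name = os.path.splitext(os.path.basename(my_csv_file))[0]

with open(my_csv_file) as f:
    reader = csv.reader(f)
    cols = next(reader)                                        # header row becomes column names
    conn = psycopg2.connect("dbname=test user=me")             # hypothetical connection string
    cur = conn.cursor()
    cur.execute("CREATE TABLE %s (%s)"
                % (table_name, ", ".join('"%s" TEXT' % c for c in cols)))
    placeholders = ", ".join(["%s"] * len(cols))
    for row in reader:
        # values go through psycopg2's own placeholders, not string formatting
        cur.execute("INSERT INTO %s VALUES (%s)" % (table_name, placeholders), row)
    conn.commit()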
I followed the documentation and I'm simply confused as to what I'm doing wrong especially considering that I've also used Django with psycopg2 and PostgreSQL on Windows.","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":7080,"Q_Id":11167518,"Users Score":4,"Answer":"At the time you execute create_all, models.py has never been imported, so no class is declared. Thus, create_all does not create any table.\nTo solve this problem, import models before running create_all or, even better, don't separate the db object from the model declaration.","Q_Score":1,"Tags":"python,sqlalchemy,flask,flask-sqlalchemy","A_Id":11210290,"CreationDate":"2012-06-23T06:47:00.000","Title":"Setting up Flask-SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to load fixtures into the system when a new VM is up. I have dumped MongoDB and Postgres. But I can't just sit in front of the PC whenever a new machine is up. I want to be able to just \"issue\" a command or the script automatically does it.\nBut a command like pg_dump to dump PostgreSQL will require a password. The problem is, the script that I uses to deploy these fixtures should be under version control. The file that contains this password (if that's the only way to do automation) will not be committed. If it needs to be committed, the deploy repository is restricted for internal developers only.\nMy question is... what do you consider a good practice in this situation? I am thinking of using Python's Popen to issue these commands. \nThanks.\nI also can put it in the cache server... but not sure if it's the only \"better\" way...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":11174324,"Users Score":0,"Answer":"You have to give the user that loads the fixture the privileges to write on the database regardless which way you are going to load the data.\nWith Postgres you can give login permission without password to specific users and eliminate the problem of a shared password or you can store the password in the pgpass file within the home directory.\nPersonally I find fabric a very nice tool to do deploys, in this specific case I will use it to connect to the remote machine and issue a psql -f 'dump_data.sql' -1 command.","Q_Score":1,"Tags":"python,deployment,fixtures","A_Id":11174357,"CreationDate":"2012-06-24T01:20:00.000","Title":"Security concerns while loading fixtures","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a desktop app that has 65 modules, about half of which read from or write to an SQLite database. I've found that there are 3 ways that the database can throw an SQliteDatabaseError: \n\nSQL logic error or missing database (happens unpredictably every now and then)\nDatabase is locked (if it's being edited by another program, like SQLite Database Browser)\nDisk I\/O error (also happens unpredictably)\n\nAlthough these errors don't happen often, when they do they lock up my application entirely, and so I can't just let them stand. \nAnd so I've started re-writing every single access of the database to be a pointer to a common \"database-access function\" in its own module. 
That function then can catch these three errors as exceptions and thereby not crash, and also alert the user accordingly. For example, if it is a \"database is locked error\", it will announce this and ask the user to close any program that is also using the database and then try again. (If it's the other errors, perhaps it will tell the user to try again later...not sure yet). Updating all the database accesses to do this is mostly a matter of copy\/pasting the redirect to the common function--easy work.\nThe problem is: it is not sufficient to just provide this database-access function and its announcements, because at all of the points of database access in the 65 modules there is code that follows the access that assumes the database will successfully return data or complete a write--and when it doesn't, that code has to have a condition for that. But writing those conditionals requires carefully going into each access point and seeing how best to handle it. This is laborious and difficult for the couple of hundred database accesses I'll need to patch in this way.\nI'm willing to do that, but I thought I'd inquire if there were a more efficient\/clever way or at least heuristics that would help in finishing this fix efficiently and well.\n(I should state that there is no particular \"architecture\" of this application...it's mostly what could be called \"ravioli code\", where the GUI and database calls and logic are all together in units that \"go together\". I am not willing to re-write the architecture of the whole project in MVC or something like this at this point, though I'd consider it for future projects.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":237,"Q_Id":11215535,"Users Score":1,"Answer":"Your gut feeling is right. There is no way to add robustness to the application without reviewing each database access point separately.\nYou still have a lot of important choice at how the application should react on errors that depends on factors like,\n\nIs it attended, or sometimes completely unattended?\nIs delay OK, or is it important to report database errors promptly?\nWhat are relative frequencies of the three types of failure that you describe?\n\nNow that you have a single wrapper, you can use it to do some common configuration and error handling, especially:\n\nset reasonable connect timeouts\nset reasonable busy timeouts\nenforce command timeouts on client side\nretry automatically on errors, especially on SQLITE_BUSY (insert large delays between retries, fail after a few retries)\nuse exceptions to reduce the number of application level handlers. You may be able to restart the whole application on database errors. 
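A hedged sketch of the busy-timeout and retry items from the list above; the function name, timeout values, and retry policy are arbitrary.

import sqlite3
import time

def run_query(db_path, sql, params=(), retries=3):
    """One choke point for all database access: set a busy timeout and retry
    'database is locked' errors instead of letting the whole GUI lock up."""
    for attempt in range(retries):
        try:
            conn = sqlite3.connect(db_path, timeout=10)   # busy timeout, in seconds
            try:
                cursor = conn.execute(sql, params)
                rows = cursor.fetchall()
                conn.commit()
                return rows
            finally:
                conn.close()
        except sqlite3.OperationalError as exc:
            if "locked" in str(exc) and attempt < retries - 1:
                time.sleep(2 * (attempt + 1))   # wait for the other program to finish
            else:
                raise   # surface it so the caller can tell the user what happened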
However, do that only if you have confidence as to in which state you are aborting the application; consistent use of transactions may ensure that the restart method does not leave inconsistent data behind.\nask a human for help when you detect a locking error\n\n...but there comes a moment where you need to bite the bullet and let the error out into the application, and see what all the particular callers are likely to do with it.","Q_Score":2,"Tags":"python,database,sqlite,error-handling","A_Id":11215911,"CreationDate":"2012-06-26T20:38:00.000","Title":"Efficient approach to catching database errors","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm just curious that there are modern systems out there that default to something other than UTF-8. I've had a person block for an entire day on the multiple locations that a mysql system can have different encoding. Very frustrating. \nIs there any good reason not to use utf-8 as a default (and storage space seems like not a good reason)? Not trying to be argumentitive, just curious.\nthx","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":519,"Q_Id":11219060,"Users Score":6,"Answer":"Once upon a time there was no unicode or UTF-8, and disparate encoding schemes were in use throughout the world. \nIt wasn't until back in 1988 that the initial unicode proposal was issued, with the goal of encoding all the worlds characters in a common encoding. \nThe first release in 1991 covered many character representations, however, it wasn't until 2006 that Balinese, Cuneiform, N'Ko, Phags-pa, and Phoenician were added. \nUntil then the Phoenicians, and the others, were unable to represent their language in UTF-8 pissing off many programmers who wondered why everything was not just defaulting to UTF-8.","Q_Score":8,"Tags":"python,mysql,ruby,utf-8","A_Id":11219610,"CreationDate":"2012-06-27T03:37:00.000","Title":"why doesn't EVERYTHING default to UTF-8?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm just curious that there are modern systems out there that default to something other than UTF-8. I've had a person block for an entire day on the multiple locations that a mysql system can have different encoding. Very frustrating. \nIs there any good reason not to use utf-8 as a default (and storage space seems like not a good reason)? Not trying to be argumentitive, just curious.\nthx","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":519,"Q_Id":11219060,"Users Score":-1,"Answer":"Some encodings have different byte orders (little and big endian)","Q_Score":8,"Tags":"python,mysql,ruby,utf-8","A_Id":11219088,"CreationDate":"2012-06-27T03:37:00.000","Title":"why doesn't EVERYTHING default to UTF-8?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. 
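A rough sketch of the central database-access wrapper discussed above: it sets a busy timeout and retries a few times on "database is locked" before letting the exception reach the caller. The function name, database path, and retry settings are made up for illustration.

```python
import sqlite3
import time

DB_PATH = "app.db"        # assumed path to the application's database file

def run_query(sql, params=(), retries=3, delay=1.0):
    """Execute one statement, retrying a few times if the database is locked."""
    for attempt in range(retries):
        conn = sqlite3.connect(DB_PATH, timeout=5.0)   # connect/busy timeout
        try:
            cur = conn.execute(sql, params)
            rows = cur.fetchall()
            conn.commit()
            return rows
        except sqlite3.OperationalError as exc:
            if "locked" in str(exc) and attempt < retries - 1:
                time.sleep(delay)          # back off, then retry
                continue
            raise                          # surface other errors to the caller
        finally:
            conn.close()
```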
I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1125,"Q_Id":11223147,"Users Score":1,"Answer":"If you're not after just parameter substitution, but full construction of the SQL, you have to do that using string operations on your end. The ? replacement always just stands for a value. Internally, the SQL string is compiled to SQLite's own bytecode (you can find out what it generates with EXPLAIN thesql) and ? replacements are done by just storing the value at the correct place in the value stack; varying the query structurally would require different bytecode, so just replacing a value wouldn't be enough.\nYes, this does mean you have to be ultra-careful. If you don't want to allow updates, try opening the DB connection in read-only mode.","Q_Score":0,"Tags":"python,sqlite","A_Id":11224222,"CreationDate":"2012-06-27T09:27:00.000","Title":"Python + Sqlite 3. How to construct queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1125,"Q_Id":11223147,"Users Score":1,"Answer":"If you're trying to transmit changes to the database to another computer, why do they have to be expressed as SQL strings? Why not pickle the query string and the parameters as a tuple, and have the other machine also use SQLite parameterization to query its database?","Q_Score":0,"Tags":"python,sqlite","A_Id":11224475,"CreationDate":"2012-06-27T09:27:00.000","Title":"Python + Sqlite 3. How to construct queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1125,"Q_Id":11223147,"Users Score":0,"Answer":"I want how to get the parsed 'sql param'.\n\nIt's all open source so you have full access to the code doing the parsing \/ sanitization. Why not just reading this code and find out how it works and if there's some (possibly undocumented) implementation that you can reuse ?","Q_Score":0,"Tags":"python,sqlite","A_Id":11224003,"CreationDate":"2012-06-27T09:27:00.000","Title":"Python + Sqlite 3. 
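A small sketch of the pickle suggestion above: ship the statement and its parameters together instead of rendering a final SQL string, and let sqlite3's own parameter binding run on the receiving side. The file name, table, and query are placeholders.

```python
import pickle
import sqlite3

# sending side: serialize the statement together with its parameters
payload = pickle.dumps(("INSERT INTO items (name, qty) VALUES (?, ?)",
                        ("widget", 5)))

# ...transmit `payload` to the other machine, then on that machine:
sql, params = pickle.loads(payload)
conn = sqlite3.connect("replica.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, qty INTEGER)")
conn.execute(sql, params)      # parameters are bound safely, no string pasting
conn.commit()
conn.close()
```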
How to construct queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a worksheet.title method but not workbook.title method. Looking in the documentation there is no explicit way to find it, I wasn't sure if anyone knew a workaround or trick to get it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10098,"Q_Id":11233140,"Users Score":2,"Answer":"A workbook doesn't really have a name - normally you'd just consider it to be the basename of the file it's saved as... slight update - yep, even in VB WorkBook.Name just returns \"file on disk.xls\"","Q_Score":3,"Tags":"python,excel,openpyxl","A_Id":11233362,"CreationDate":"2012-06-27T18:54:00.000","Title":"Is there a way to get the name of a workbook in openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running a webapp on google appengine with python and my app lets users post topics and respond to them and the website is basically a collection of these posts categorized onto different pages.\nNow I only have around 200 posts and 30 visitors a day right now but that is already taking up nearly 20% of my reads and 10% of my writes with the datastore. I am wondering if it is more efficient to use the google app engine's built in get_by_id() function to retrieve posts by their IDs or if it is better to build my own. For some of the queries I will simply have to use GQL or the built in query language because they are retrieved on more than just and ID but I wanted to see which was better.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":11270434,"Users Score":0,"Answer":"I'd suggest using pre-existing code and building around that in stead of re-inventing the wheel.","Q_Score":0,"Tags":"python,google-app-engine,indexing,google-cloud-datastore","A_Id":11270908,"CreationDate":"2012-06-30T00:18:00.000","Title":"use standard datastore index or build my own","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"We are using Python Pyramid with SQLAlchemy and MySQL to build a web application. We would like to have user-specific database connections, so every web application user has their own database credentials. This is primarily for security reasons, so each user only has privileges for their own database content. We would also like to maintain the performance advantage of connection pooling. Is there a way we can setup a new engine at login time based on the users credentials, and reuse that engine for requests made by the same user?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":343,"Q_Id":11299182,"Users Score":0,"Answer":"The best way to do this that I know is to use the same database with multiple schemas. Unfortunately I don't think this works with MySQL. 
The idea is that you connection pool engines to the same database and then when you know what user is associated with the request you can switch schemas for that connection.","Q_Score":0,"Tags":"python,sqlalchemy,pyramid","A_Id":11300227,"CreationDate":"2012-07-02T18:29:00.000","Title":"How to manage user-specific database connections in a Pyramid Web Application?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have successfully installed py27-mysql from MacPorts and MySQL-python-1.2.3c1 on a machine running Snow Leopard. Because I have MySQL 5.1.48 in an odd location (\/usr\/local\/mysql\/bin\/mysql\/), I had to edit the setup.cfg file when I installed mysql-python. However, now that it's installed, I'm still getting the error \"ImportError: No module named MySQLdb\" when I run \"import MySQLdb\" in python. What is left to install? Thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":11304019,"Users Score":0,"Answer":"MacPorts' py27-mysql, MySQL-python, and MySQLdb are all synonyms for the same thing. If you successfully installed py27-mysql, you should not need anything else, and it's possible you've messed up your python site-packages. Also, make sure you are invoking the right python binary, i.e. MacPorts' python27 and not the one that comes with Mac OS X.","Q_Score":0,"Tags":"mysql-python","A_Id":12535972,"CreationDate":"2012-07-03T03:19:00.000","Title":"setting up mysql-python on Snow Leopard","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Now on writing path as sys.path.insert(0,'\/home\/pooja\/Desktop\/mysite'), it ran fine asked me for the word tobe searched and gave this error: \n\nTraceback (most recent call last):\nFile \"call.py\", line 32, in \ns.save()\nFile\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/models\/base.py\",\n line 463, in save\nself.save_base(using=using, force_insert=force_insert,\n force_update=force_update)\nFile\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/models\/base.py\",\n line 524, in \nsave_base\nmanager.using(using).filter(pk=pk_val).exists())):\nFile\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/models\/query.py\",\n line 562, in exists\nreturn self.query.has_results(using=self.db)\nFile\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/models\/sql\/query.py\",\n line 441, in has_results\nreturn bool(compiler.execute_sql(SINGLE))\nFile\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/models\/sql\/compiler.py\",\n line 818, in execute_sql\ncursor.execute(sql, params)\nFile\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/backends\/util.py\",\n line 40, in execute\nreturn self.cursor.execute(sql, params) File\n \"\/usr\/local\/lib\/python2.6\/dist-packages\/django\/db\/backends\/sqlite3\/base.py\",\n line 337, in execute\n return Database.Cursor.execute(self, query, params)\ndjango.db.utils.DatabaseError: no such table: search_keywords\n\nPlease help!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":768,"Q_Id":11307928,"Users Score":1,"Answer":"The exception says: no such table: search_keywords, which is quite self-explanatory and 
means that there is no database table with such name. So:\n\nYou may be using relative path to db file in settings.py, which resolves to a different db depending on place where you execute the script. Try to use absolute path and see if it helps.\nYou have not synced your models with the database. Run manage.py syncdb to generate the database tables.","Q_Score":1,"Tags":"python,django,linux,sqlite,ubuntu-10.04","A_Id":11308029,"CreationDate":"2012-07-03T09:18:00.000","Title":"error in accessing table created in django in the python code","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"im running a multi tenant GAE app where each tenant could have from a few 1000 to 100k documents.\nat this moment im trying to make a MVC javascript client app (the admin part of my app with spine.js) and i need CRUD endpoints and the ability to get a big amount of serialized objects at once. for this specific job appengine is way to slow. i tried to store serialized objects in the blobstore but between reading\/writing and updating stuff to the blobstore it takes too much time and the app gets really slow. \ni thought of using a nosql db on an external machine to do these operations over appengine. \na few options would be mongodb, couchdb or redis. but i am not sure about how good they perform with that much data and concurrent requests\/inserts from different tenants. \nlets say i have 20 tenants and each tenant has 50k docs. are these dbs capable to handle this load?\nis this even the right way to go?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":239,"Q_Id":11319890,"Users Score":0,"Answer":"The overhead of making calls from appengine to these external machines is going to be worse than the performance you're seeing now (I would expect). why not just move everything to a non-appengine machine?\nI can't speak for couch, but mongo or redis are definitely capable of handling serious load as long as they are set up correctly and with enough horsepower for your needs.","Q_Score":1,"Tags":"javascript,python,google-app-engine,nosql,multi-tenant","A_Id":11319983,"CreationDate":"2012-07-03T22:08:00.000","Title":"key\/value store with good performance for multiple tenants","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"im running a multi tenant GAE app where each tenant could have from a few 1000 to 100k documents.\nat this moment im trying to make a MVC javascript client app (the admin part of my app with spine.js) and i need CRUD endpoints and the ability to get a big amount of serialized objects at once. for this specific job appengine is way to slow. i tried to store serialized objects in the blobstore but between reading\/writing and updating stuff to the blobstore it takes too much time and the app gets really slow. \ni thought of using a nosql db on an external machine to do these operations over appengine. \na few options would be mongodb, couchdb or redis. but i am not sure about how good they perform with that much data and concurrent requests\/inserts from different tenants. \nlets say i have 20 tenants and each tenant has 50k docs. 
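For the Django "no such table" answer a little earlier, a sketch of the first fix (an absolute SQLite path in settings.py) might look like the following; after that, manage.py syncdb creates the missing tables. Paths and names are assumptions.

```python
# settings.py sketch (Django 1.x era): build an absolute path to the SQLite
# file so the same database is found no matter where the script is run from.
import os

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.path.join(PROJECT_ROOT, "db.sqlite3"),
    }
}
```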
are these dbs capable to handle this load?\nis this even the right way to go?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":239,"Q_Id":11319890,"Users Score":2,"Answer":"Why not use the much faster regular appengine datastore instead of blobstore? Simply store your documents in regular entities as Blob property. Just make sure the entity size doesn't exceed 1 MB in which case you have to split up your data into more then one entity. I run an application whith millions of large Blobs that way.\nTo further speed up things use memcache or even in-memory cache. Consider fetching your entites with eventual consistency which is MUCH faster. Run as many database ops in parallel as possible using either bulk operations or the async API.","Q_Score":1,"Tags":"javascript,python,google-app-engine,nosql,multi-tenant","A_Id":11323377,"CreationDate":"2012-07-03T22:08:00.000","Title":"key\/value store with good performance for multiple tenants","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Not sure if the title is a great way to word my actual problem and I apologize if this is too general of a question but I'm having some trouble wrapping my head around how to do something. \nWhat I'm trying to do:\nThe idea is to create a MySQL database of 'outages' for the thousands of servers I'm responsible for monitoring. This would give a historical record of downtime and an easy way to retroactively tell what happened. The database will be queried by a fairly simple PHP form where one could browse these outages by date or server hostname etc. \nWhat I have so far:\nI have a python script that runs as a cron periodically to call the Pingdom API to get a list of current down alerts reported by the pingdom service. For each down alert, a row is inserted into a database containing a hostname, time stamp, pingdom check id, etc. I then have a simple php form that works fine to query for down alerts. \nThe problem:\nWhat I have now is missing some important features and isn't quite what I'm looking for. Currently, querying this database would give me a simple list of down alerts like this:\nPindom alerts for Test_Check from 2012-05-01 to 2012-06-30:\ntest_check was reported DOWN at 2012-05-24 00:11:11\ntest_check was reported DOWN at 2012-05-24 00:17:28\ntest_check was reported DOWN at 2012-05-24 00:25:24\ntest_check was reported DOWN at 2012-05-24 00:25:48\nWhat I would like instead is something like this:\ntest_check was reported down for 15 minutes (2012-05-24 00:11:11 to 2012-05-24 00:25:48)(link to comment on this outage)(link to info on this outage). \nIn this ideal end result, there would be one row containing a outage ID, hostname of the server pingdom is reporting down, the timestamp for when that box was reported down originally and the timestamp for when it was reported up again along with a 'comment' field I (and other admins) would use to add notes about this particular event after the fact. 
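A sketch of the datastore approach from the accepted answer above: store each serialized document as a Blob property on a regular entity and keep every entity under the roughly 1 MB limit. This assumes the legacy App Engine Python ndb API; the model and field names are invented.

```python
from google.appengine.ext import ndb

class SerializedDoc(ndb.Model):
    tenant = ndb.StringProperty(required=True)
    payload = ndb.BlobProperty()               # one serialized document, < 1 MB

def save_doc(tenant, data_bytes):
    SerializedDoc(tenant=tenant, payload=data_bytes).put()

def docs_for(tenant):
    return SerializedDoc.query(SerializedDoc.tenant == tenant).fetch()
```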
I'm not sure if I should try to do this when pulling the alerts from pingdom or if I should re-process the alerts after they're collected to populate the new table and I'm not quite sure how I would work out either of those options.\nI'm a little lost as to how I will go about combining several down alerts that occur within a short period of time into a single 'outage' that would be inserted into a separate table in the existing MySQL database where individual down alerts are currently being stored. This would allow me to comment and add specific details for future reference and would generally make this thing a lot more usable. I'm not sure if I should try to do this when pulling the alerts from pingdom or if I should re-process the alerts after they're collected to populate the new table and I'm not quite sure how I would work out either of those options.\nI've been wracking my brain trying to figure out how to do this. It seems like a simple concept but I'm a somewhat inexperienced programmer (I'm a Linux admin by profession) and I'm stumped at this point. \nI'm looking for any thoughts, advice, examples or even just a more technical explanation of what I'm trying to do here to help point me in the right direction. I hope this makes sense. Thanks in advance for any advice :)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":11329588,"Users Score":0,"Answer":"The most basic solution with the setup you have now would be to:\n\nGet a list of all events, ordered by server ID and then by time of the event\nLoop through that list and record the start of a new event \/ end of an old event for your new database when:\n\nthe server ID changes\nthe time between the current event and the previous event from the same server is bigger than a certain threshold you set.\nStore the old event you were monitoring in your new database\n\n\nThe only complication I see, is that the next time you run the script, you need to make sure that you continue monitoring events that were still taking place at the time you last ran the script.","Q_Score":0,"Tags":"php,python,mysql,json,pingdom","A_Id":11329769,"CreationDate":"2012-07-04T12:56:00.000","Title":"How can I combine rows of data into a new table based on similar timestamps? (python\/MySQL\/PHP)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a bit of Python code that watches a certain directory for new files, and inserts new files into a database using the cx_Oracle module. This program will be running as a service. At a given time there could be many files arriving at once, but there may also be periods of up to an hour where no files are received. Regarding good practice: is it bad to keep a database connection open indefinitely? On one hand something tells me that it's not a good idea, but on the other hand there is a lot of overhead in creating a new database object for every file received and closing it afterwards, especially when many files are received at once. 
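The grouping pass described in the answer above can be sketched roughly as below: walk the raw down-alerts ordered by host and timestamp, and open a new outage whenever the host changes or the gap since the previous alert exceeds a threshold. The input format and the ten-minute threshold are assumptions.

```python
from datetime import timedelta

GAP = timedelta(minutes=10)        # assumed "same outage" threshold

def group_outages(alerts):
    """alerts: list of (hostname, datetime) tuples, already sorted by host then time."""
    outages = []
    current = None
    for host, ts in alerts:
        if current and current["host"] == host and ts - current["end"] <= GAP:
            current["end"] = ts                   # extend the open outage
        else:
            if current:
                outages.append(current)           # close the previous outage
            current = {"host": host, "start": ts, "end": ts}
    if current:
        outages.append(current)
    return outages
```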
Any suggestions on how to approach this would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1153,"Q_Id":11346224,"Users Score":2,"Answer":"If you only need one or two connections, I see no harm in keeping them open indefinitely.\nWith Oracle, creating a new connection is an expensive operation, unlike in some other databases, such as MySQL where it is very cheap to create a new connection. Sometimes it can even take a few seconds to connect which can become a bit of a bottleneck for some applications if they close and open connections too frequently.\nAn idle connection on Oracle uses a small amount of memory, but aside from that, it doesn't consume any other resources while it sits there idle.\nTo keep your DBAs happy, you will want to make sure you don't have lots of idle connections left open, but I'd be happy with one or two.","Q_Score":3,"Tags":"python,oracle","A_Id":11347776,"CreationDate":"2012-07-05T14:17:00.000","Title":"Keeping database connection open - good practice?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to query ODBC compliant databases using pyodbc in ubuntu. For that, i have installed the driver (say mysql-odbc-driver). After installation the odbcinst.ini file with the configurations gets created in the location \/usr\/share\/libmyodbc\/odbcinst.ini\nWhen i try to connect to the database using my pyodbc connection code, i get a driver not found error message.\nNow when I copy the contents of the file to \/etc\/odbcinst.ini, it works!\nThis means pyodbc searches for the driver information in file \/etc\/odbcinst.ini.\nHow can I change the location where it searches the odbcinst.ini file for the driver information\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7504,"Q_Id":11393269,"Users Score":6,"Answer":"Assuming you are using unixODBC here was some possibilities:\n\nrebuild unixODBC from scratch and set --sysconfdir\nexport ODBCSYSINI env var pointing to a directory and unixODBC will look here for odbcinst.ini and odbc.ini system dsns\nexport ODBCINSTINI and point it at your odbcinst.ini file\n\nBTW, I doubt pyodbc looks anything up in the odbcinst.ini file but unixODBC will. There is a list of ODBC Driver manager APIs which can be used to examine ODBC ini files.","Q_Score":5,"Tags":"python,odbc,pyodbc","A_Id":11393468,"CreationDate":"2012-07-09T10:30:00.000","Title":"setting the location where pyodbc searches for odbcinst.ini file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency. 
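A sketch of keeping one long-lived Oracle connection around, as the accepted answer above suggests, with a cheap liveness check before reuse so a dropped session is transparently re-opened. The DSN and credentials are placeholders.

```python
import cx_Oracle

_conn = None

def get_connection():
    global _conn
    if _conn is not None:
        try:
            _conn.ping()              # raises if the session has gone away
            return _conn
        except cx_Oracle.Error:
            _conn = None              # stale connection: fall through and reconnect
    _conn = cx_Oracle.connect("user", "password", "dbhost/service")
    return _conn
```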
\nEach time each instance shall take out one item, delete it from the list and process it with some procedures.\nFirst I tried to store the list in a sqlite database, but sqlite allows multiple read-locks which means multiple instances might get the same item from the database.\nIs there any way that makes each instance will get an unique item to process?\nI could use other data structure (other database or just file) if needed.\nBy the way, is there a way to check whether a DELETE operation is successful or not, after executing cursor.execute(delete_query)?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":484,"Q_Id":11430276,"Users Score":0,"Answer":"Why not read in all the items from the database and put them in a queue? You can have a worker thread get at item, process it and move on to the next one.","Q_Score":0,"Tags":"python,database,sqlite,concurrency,locking","A_Id":20908479,"CreationDate":"2012-07-11T10:07:00.000","Title":"Concurrency on sqlite database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency. \nEach time each instance shall take out one item, delete it from the list and process it with some procedures.\nFirst I tried to store the list in a sqlite database, but sqlite allows multiple read-locks which means multiple instances might get the same item from the database.\nIs there any way that makes each instance will get an unique item to process?\nI could use other data structure (other database or just file) if needed.\nBy the way, is there a way to check whether a DELETE operation is successful or not, after executing cursor.execute(delete_query)?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":484,"Q_Id":11430276,"Users Score":0,"Answer":"How about another field in db as a flag (e.g. PROCESSING, UNPROCESSED, PROCESSED)?","Q_Score":0,"Tags":"python,database,sqlite,concurrency,locking","A_Id":11430479,"CreationDate":"2012-07-11T10:07:00.000","Title":"Concurrency on sqlite database using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script to be run as a cron and I was wondering, is there any difference in speed between the Ruby MySQL or Python MySQL in terms of speed\/efficiency? Would I be better of just using PHP for this task?\nThe script will get data from a mysql database with 20+ fields and store them in another table every X amount of minutes. Not much processing of the data will be necessary.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":254,"Q_Id":11431679,"Users Score":7,"Answer":"Just pick the language you feel most comfortable with. It shouldn't make a noticeable difference. 
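The status-flag idea in the accepted answer above might be sketched like this: each worker atomically claims one unprocessed row, and cursor.rowcount reveals whether the claim (or, equally, a DELETE) actually affected a row. The table and column names are assumptions.

```python
import sqlite3

def claim_item(conn, worker_id):
    """Mark one UNPROCESSED row as PROCESSING for this worker and return it."""
    cur = conn.execute(
        "UPDATE items SET status = 'PROCESSING', worker = ? "
        "WHERE id = (SELECT id FROM items WHERE status = 'UNPROCESSED' LIMIT 1)",
        (worker_id,))
    conn.commit()
    if cur.rowcount == 0:              # nothing left to claim (rowcount also works for DELETE)
        return None
    return conn.execute(
        "SELECT id, payload FROM items WHERE status = 'PROCESSING' AND worker = ?",
        (worker_id,)).fetchone()
```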
\nAfter writing the application, you can search for bottlenecks and optimize that","Q_Score":0,"Tags":"python,mysql,ruby","A_Id":11431795,"CreationDate":"2012-07-11T11:28:00.000","Title":"Python MySQL vs Ruby MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"since it is not possible to access mysql remotely on GAE, without the google cloud sql, \ncould I put a sqlite3 file on google cloud storage and access it through the GAE with django.db.backends.sqlite3?\nThanks.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2078,"Q_Id":11462291,"Users Score":0,"Answer":"No. SQLite requires native code libraries that aren't available on App Engine.","Q_Score":1,"Tags":"python,django,sqlite,google-app-engine,google-cloud-storage","A_Id":11498320,"CreationDate":"2012-07-12T23:44:00.000","Title":"Google App Engine + Google Cloud Storage + Sqlite3 + Django\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"since it is not possible to access mysql remotely on GAE, without the google cloud sql, \ncould I put a sqlite3 file on google cloud storage and access it through the GAE with django.db.backends.sqlite3?\nThanks.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2078,"Q_Id":11462291,"Users Score":0,"Answer":"Google Cloud SQL is meant for this, why don't you want to use it?\nIf you have every frontend instance load the DB file, you'll have a really hard time synchronizing them. It just doesn't make sense. Why would you want to do this?","Q_Score":1,"Tags":"python,django,sqlite,google-app-engine,google-cloud-storage","A_Id":11463047,"CreationDate":"2012-07-12T23:44:00.000","Title":"Google App Engine + Google Cloud Storage + Sqlite3 + Django\/Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is it possible to determine fields available in a table (MySQL DB) pragmatically at runtime using SQLAlchemy or any other python library ? Any help on this would be great.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":11500239,"Users Score":0,"Answer":"You can run the SHOW TABLE TABLENAME and get the columns of the tables.","Q_Score":4,"Tags":"python,sqlalchemy","A_Id":11500397,"CreationDate":"2012-07-16T08:00:00.000","Title":"How to determine fields in a table using SQLAlchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to know whether mysql query with browser is faster or python's MySQLdb is faster. I am using MysqlDb with PyQt4 for desktop ui and PHP for web ui.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":161,"Q_Id":11508670,"Users Score":1,"Answer":"I believe you're asking about whether Python or PHP (what I think you mean by browser?) is more efficient at making a database call.\nThe answer? 
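Besides issuing a SHOW/DESCRIBE statement as the column-listing answer above suggests, current SQLAlchemy versions can reflect the same information with the inspector. A small sketch; the connection URL and table name are placeholders.

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("mysql://user:password@localhost/mydb")
inspector = inspect(engine)

for column in inspector.get_columns("mytable"):
    print(column["name"], column["type"])      # field name and its SQL type
```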
It depends on the specific code and calls, but it's going to be largely the same. Both Python and PHP are interpreted languages and interpret the code at run time. If either of the languages you were using were compiled (say, like, if you used C), I'd say you might see a speed advantage of one over the other, but with the current information you've given us, I can't really judge that.\nI would use the language you are most comfortable in or feel would best fit the task - they're both going to connect to a MySQL database and do the same exact commands and queries, so just write the code in the easiest way possible for you to do it.\nAlso, your question as posed doesn't make much sense. Browsers don't interact with a MySQL database, PHP, which is executed by a server when you request a page, does.","Q_Score":0,"Tags":"php,python,mysql,pyqt4,mysql-python","A_Id":11510083,"CreationDate":"2012-07-16T16:38:00.000","Title":"browser query vs python MySQLdb query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to know whether mysql query with browser is faster or python's MySQLdb is faster. I am using MysqlDb with PyQt4 for desktop ui and PHP for web ui.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":11508670,"Users Score":0,"Answer":"Browsers don't perform database queries (unless you consider the embedded SQLite database), so not only is your question nonsensical, it is in fact completely irrelevant.","Q_Score":0,"Tags":"php,python,mysql,pyqt4,mysql-python","A_Id":11509874,"CreationDate":"2012-07-16T16:38:00.000","Title":"browser query vs python MySQLdb query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to do HelloWorld on GoogleAppEngine, but getting the following error.\nC:\\LearningGoogleAppEngine\\HelloWorld>dev_appserver.py helloworld\nWARNING 2012-07-17 10:21:37,250 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.\nTraceback (most recent call last):\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\dev_appserver.py\", line 133, in \nrun_file(file, globals())\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\dev_appserver.py\", line 129, in run_file\nexecfile(script_path, globals_)\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\google\\appengine\\tools\\dev_appserver_main.py\", line 694, in sys.exit(main(sys.argv))\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\google\\appengine\\tools\\dev_appserver_main.py\", line 582, in main root_path, {}, default_partition=default_partition)\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\google\\appengine\\tools\\dev_appserver.py\", line 3217, in LoadAppConfig raise AppConfigNotFoundError\ngoogle.appengine.tools.dev_appserver.AppConfigNotFoundError\nI've found posts on GoogleCode, StackO regarding this issue. 
But no matter what I try, I still can't overcome this error.\nPython version installed on Windows 7 machine is: 2.7.3\nGAE Launcher splash screen displays the following:\nRelease 1.7.0\nApi versions: ['1']\nPython: 2.5.2\nwxPython : 2.8.8.1(msw-unicode) \nCan someone help?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1146,"Q_Id":11520573,"Users Score":1,"Answer":"it's been a while, but I believe I've previously fixed this by adding import rdbms to dev_appserver.py\nhmm.. or was that import MySQLdb? (more likely)","Q_Score":3,"Tags":"google-app-engine,python-2.7","A_Id":11533684,"CreationDate":"2012-07-17T10:25:00.000","Title":"GoogleAppEngine error: rdbms_mysqldb.py:74","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Trying to do HelloWorld on GoogleAppEngine, but getting the following error.\nC:\\LearningGoogleAppEngine\\HelloWorld>dev_appserver.py helloworld\nWARNING 2012-07-17 10:21:37,250 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.\nTraceback (most recent call last):\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\dev_appserver.py\", line 133, in \nrun_file(file, globals())\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\dev_appserver.py\", line 129, in run_file\nexecfile(script_path, globals_)\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\google\\appengine\\tools\\dev_appserver_main.py\", line 694, in sys.exit(main(sys.argv))\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\google\\appengine\\tools\\dev_appserver_main.py\", line 582, in main root_path, {}, default_partition=default_partition)\nFile \"C:\\Program Files (x86)\\Google\\google_appengine\\google\\appengine\\tools\\dev_appserver.py\", line 3217, in LoadAppConfig raise AppConfigNotFoundError\ngoogle.appengine.tools.dev_appserver.AppConfigNotFoundError\nI've found posts on GoogleCode, StackO regarding this issue. But no matter what I try, I still can't overcome this error.\nPython version installed on Windows 7 machine is: 2.7.3\nGAE Launcher splash screen displays the following:\nRelease 1.7.0\nApi versions: ['1']\nPython: 2.5.2\nwxPython : 2.8.8.1(msw-unicode) \nCan someone help?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1146,"Q_Id":11520573,"Users Score":0,"Answer":"just had the exact same error messages: I found that restarting Windows fixed everything and I did not have to deviate from the YAML or py file given on the google helloworld python tutorial.","Q_Score":3,"Tags":"google-app-engine,python-2.7","A_Id":12513978,"CreationDate":"2012-07-17T10:25:00.000","Title":"GoogleAppEngine error: rdbms_mysqldb.py:74","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I would like to get the suggestion on using No-SQL datastore for my particular requirements.\nLet me explain:\n I have to process the five csv files. 
Each csv contains 5 million rows and also The common id field is presented in each csv.So, I need to merge all csv by iterating 5 million rows.So, I go with python dictionary to merge all files based on the common id field.But here the bottleneck is you can't store the 5 million keys in memory(< 1gig) with python-dictionary.\nSo, I decided to use No-Sql.I think It might be helpful to process the 5 million key value storage.Still I didn't have clear thoughts on this.\nAnyway we can't reduce the iteration since we have the five csvs each has to be iterated for updating the values.\nIs it there an simple steps to go with that?\n If this is the way Could you give me the No-Sql datastore to process the key-value pair?\nNote: We have the values as list type also.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":347,"Q_Id":11522232,"Users Score":0,"Answer":"If this is just a one-time process, you might want to just setup an EC2 node with more than 1G of memory and run the python scripts there. 5 million items isn't that much, and a Python dictionary should be fairly capable of handling it. I don't think you need Hadoop in this case.\nYou could also try to optimize your scripts by reordering the items in several runs, than running over the 5 files synchronized using iterators so that you don't have to keep everything in memory at the same time.","Q_Score":1,"Tags":"python,nosql","A_Id":11522576,"CreationDate":"2012-07-17T12:15:00.000","Title":"Process 5 million key-value data in python.Will NoSql solve?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on the XLWT XLRD XLUTIL packages. Whenever I write to a new sheet, all the formulas have been obliterated. \nI tried the following fixes, but they all failed:\n\nRe-write all the formulas in with a loop:\nFailure: XLWT Formula does not support advanced i.e. VLOOKUP Formulas\nDoing the calculations all in Python: this is ridiculous\n\nHow can I preserve the formulas using the above packages? Can I use some other packages to solve my problem? Or, do I need to code my own solution?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1819,"Q_Id":11527100,"Users Score":1,"Answer":"(a) xlrd does not currently support extracting formulas. \n(b) You say \"XLWT Formula does not support advanced i.e. VLOOKUP Formulas\". This is incorrect. If you are the same person that I seem to have convinced that xlwt supports VLOOKUP etc after a lengthy exchange of private emails over the last few days, please say so. Otherwise please supply a valid (i.e. Excel accepts it) formula that xlwt won't parse correctly.\n(c) Doing the calculations in Python is not ridiculous if the output is only for display.","Q_Score":1,"Tags":"python,excel,formula,xlrd,xlwt","A_Id":11596820,"CreationDate":"2012-07-17T16:44:00.000","Title":"Preserving Formula in Excel Python XLWT","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I specify the column that I want in my query using a model (it selects all columns by default)? I know how to do this with the sqlalchmey session: session.query(self.col1), but how do I do it with with models? 
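A tiny sketch of the plain-dictionary merge that the answer above says should cope with around 5 million keys given enough RAM. The file names, and the assumption that the common id is the first CSV column, are illustrative.

```python
import csv

merged = {}
for path in ["a.csv", "b.csv", "c.csv", "d.csv", "e.csv"]:
    with open(path) as fh:
        for row in csv.reader(fh):
            key, values = row[0], row[1:]
            merged.setdefault(key, []).extend(values)   # append this file's columns

print(len(merged), "distinct ids merged")
```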
I can't do SomeModel.query(). Is there a way?","AnswerCount":10,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":191024,"Q_Id":11530196,"Users Score":2,"Answer":"result = ModalName.query.add_columns(ModelName.colname, ModelName.colname)","Q_Score":183,"Tags":"python,sqlalchemy,flask-sqlalchemy","A_Id":68064416,"CreationDate":"2012-07-17T20:16:00.000","Title":"Flask SQLAlchemy query, specify column names","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a data organization issue. I'm working on a client\/server project where the server must maintain a copy of the client's filesystem structure inside of a database that resides on the server. The idea is to display the filesystem contents on the server side in an AJAX-ified web interface. Right now I'm simply uploading a list of files to the database where the files are dumped sequentially. The problem is how to recapture the filesystem structure on the server end once they're in the database. It doesn't seem feasible to reconstruct the parent->child structure on the server end by iterating through a huge list of files. However, when the file objects have no references to each other, that seems to be the only option.\nI'm not entirely sure how to handle this. As near as I can tell, I would need to duplicate some type of filesystem data structure on the server side (in a Btree perhaps?) with objects maintaining pointers to their parents and\/or children. I'm wondering if anyone has had any similar past experiences they could share, or maybe some helpful resources to point me in the right direction.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1195,"Q_Id":11554676,"Users Score":2,"Answer":"I suggest to follow the Unix way. Each file is considered a stream of bytes, nothing more, nothing less. Each file is technically represented by a single structure called i-node (index node) that keeps all information related to the physical stream of the data (including attributes, ownership,...). \nThe i-node does not contain anything about the readable name. Each i-node is given a unique number (forever) that acts for the file as its technical name. You can use similar number to give the stream of bytes in database its unique identification. The i-nodes are stored on the disk in a separate contiguous section -- think about the array of i-node structures (in the abstract sense), or about the separate table in the database.\nBack to the file. This way it is represented by unique number. For your database representation, the number will be the unique key. If you need the other i-node information (file attributes), you can add the other columns to the table. One column will be of the blob type, and it will represent the content of the file (the stream of bytes). For AJAX, I gues that the files will be rather small; so, you should not have a problem with the size limits of the blob.\nSo far, the files are stored in as a flat structure (as the physical disk is, and as the relational database is).\nThe structure of directory names and file names of the files are kept separately, in another files (kept in the same structure, together with the other files, represented also by their i-node). Basically, the directory file captures tuples (bare_name, i-node number). 
(This way the hard links are implemented in Unix -- two names are paired with the same i-none number.) The root directory file has to have a fixed technical identification -- i.e. the reserved i-node number.","Q_Score":4,"Tags":"python,database,data-structures,filesystems","A_Id":11554828,"CreationDate":"2012-07-19T05:50:00.000","Title":"Data structures in python: maintaining filesystem structure within a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a huge database in sqlite3 of 41 million rows in a table. However, it takes around 14 seconds to execute a single query. I need to significantly improve the access time! Is this a hard disk hardware limit or a processor limit? If it is a processor limit then I think I can use the 8 processors I have to parallelise the query. However I never found a way to parallelize queries in SQLite for python. Is there any way to do this? Can I have a coding example? Or are other database programs more efficient in access? If so then which ones?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":204,"Q_Id":11556783,"Users Score":1,"Answer":"Firstly, make sure any relevant indexes are in place to assist in efficient queries -- which may or may not help...\nOther than that, SQLite is meant to be a (strangely) lite embedded SQL DB engine - 41 million rows is probably pushing it depending on number and size of columns etc...\nYou could take your DB and import it to PostgreSQL or MySQL, which are both open-source RDMS's with Python bindings and extensive feature sets. They'll be able to handle queries, indexing, caching, memory management etc... on large data effectively. (Or at least, since they're designed for that purpose, more effectively than SQLite which wasn't...)","Q_Score":0,"Tags":"python,sqlite,parallel-processing","A_Id":11557147,"CreationDate":"2012-07-19T08:25:00.000","Title":"reducing SQLITE3 access time in Python (Parallelization?)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am parsing .xlsx files using openpyxl.While writing into the xlsx files i need to maintain the same font colour as well as cell colour as was present in the cells of my input .xlsx files.Any idea how to extract the colour coding from the cell and then implement the same in another excel file.Thanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":11562279,"Users Score":0,"Answer":"I believe you can access the font colour by:\n colour = ws.cell(row=id,column=id).style.font.color\nI am not sure how to access the cell colour though.","Q_Score":0,"Tags":"python,python-3.x,excel-2007,openpyxl","A_Id":11783592,"CreationDate":"2012-07-19T13:47:00.000","Title":"How to detect colours and then apply colours while working with .xlsx(excel-2007) files on python 3.2(windows 7)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a mysql table with coloumns of name, perf, date_time . 
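The i-node style layout described above could be sketched in SQLite as two tables: one holding the byte content keyed by an i-node number, and one holding directory entries that map (parent, name) to an i-node. The schema details below are assumptions, not part of the original answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inodes (
    ino     INTEGER PRIMARY KEY,
    is_dir  INTEGER NOT NULL,
    content BLOB
);
CREATE TABLE dirents (
    parent_ino INTEGER NOT NULL REFERENCES inodes(ino),
    name       TEXT    NOT NULL,
    ino        INTEGER NOT NULL REFERENCES inodes(ino),
    PRIMARY KEY (parent_ino, name)
);
""")

ROOT = 1                                                               # reserved root i-node
conn.execute("INSERT INTO inodes VALUES (?, 1, NULL)", (ROOT,))
conn.execute("INSERT INTO inodes VALUES (2, 0, ?)", (b"hello world",)) # a file's bytes
conn.execute("INSERT INTO dirents VALUES (?, 'hello.txt', 2)", (ROOT,))
conn.commit()
```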
How can i retrieve only the most recent MySQL row?","AnswerCount":6,"Available Count":2,"Score":-0.0333209931,"is_accepted":false,"ViewCount":1336,"Q_Id":11566537,"Users Score":-1,"Answer":"select top 1 * from tablename order by date_and_time DESC (for sql server)\nselect * from taablename order by date_and_time DESC limit 1(for mysql)","Q_Score":3,"Tags":"python,mysql","A_Id":11566922,"CreationDate":"2012-07-19T17:50:00.000","Title":"Retrieving only the most recent row in MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a mysql table with coloumns of name, perf, date_time . How can i retrieve only the most recent MySQL row?","AnswerCount":6,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1336,"Q_Id":11566537,"Users Score":3,"Answer":"SELECT * FROM table ORDER BY date, time LIMIT 1","Q_Score":3,"Tags":"python,mysql","A_Id":11566549,"CreationDate":"2012-07-19T17:50:00.000","Title":"Retrieving only the most recent row in MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a GUI that is dependent on information from MySQL table, what i want to be able to do is to display a message every time the table is updated with new data. I am not sure how to do this or even if it is possible. I have codes that retrieve the newest MySQL update but I don't know how to have a message every time new data comes into a table. Thanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1004,"Q_Id":11567357,"Users Score":3,"Answer":"Quite simple and straightforward solution will be just to poll the latest autoincrement id from your table, and compare it with what you've seen at the previous poll. If it is greater -- you have new data. This is called 'active polling', it's simple to implement and will suffice if you do this not too often. So you have to store the last id value somewhere in your GUI. And note that this stored value will reset when you restart your GUI application -- be sure to think what to do at the start of the GUI. 
Probably you will need to track only insertions that occur while GUI is running -- then, at the GUI startup you need just to poll and store current id value, and then poll peroidically and react on its changes.","Q_Score":1,"Tags":"python,mysql","A_Id":11567806,"CreationDate":"2012-07-19T18:48:00.000","Title":"Scanning MySQL table for updates Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Database A resides on server server1, while database B resides on server server2.\nBoth servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).\nIn such a case, is it possible to perform a join between a table that is in database A, to a table that is in database B?\nIf so, how do I go about it, programatically,","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":214,"Q_Id":11585494,"Users Score":0,"Answer":"Without doing something like replicating database A onto the same server as database B and then doing the JOIN, this would not be possible.","Q_Score":0,"Tags":"mysql,python-2.7","A_Id":11585571,"CreationDate":"2012-07-20T19:04:00.000","Title":"MySQL Joins Between Databases On Different Servers Using Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Database A resides on server server1, while database B resides on server server2.\nBoth servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).\nIn such a case, is it possible to perform a join between a table that is in database A, to a table that is in database B?\nIf so, how do I go about it, programatically,","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":214,"Q_Id":11585494,"Users Score":0,"Answer":"I don't know python, so I'm going to assume that when you do a query it comes back to python as an array of rows.\nYou could query table A and after applying whatever filters you can, return that result to the application. Same to table B. Create a 3rd Array, loop through A, and if there is a joining row in B, add that joined row to the 3rd array. In the end the 3rd array would have the equivalent of a join of the two tables. It's not going to be very efficient, but might work okay for small recordsets.","Q_Score":0,"Tags":"mysql,python-2.7","A_Id":11585697,"CreationDate":"2012-07-20T19:04:00.000","Title":"MySQL Joins Between Databases On Different Servers Using Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on my Thesis where the Python application that connects to other linux servers over SSH is implemented. The question is about storing the passwords in the database (whatever kind, let's say MySQL for now). For sure keeping them not encrypted is a bad idea. But what can I do to feel comfortable with storing this kind of confidential data and use them later to connect to other servers? 
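A rough sketch of the active polling loop from the accepted answer above: remember the highest auto-increment id seen so far and periodically check whether a larger one has appeared. Connection details, the table name, and the poll interval are placeholders.

```python
import time
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="mydb")
last_seen = None

while True:
    cur = conn.cursor()
    cur.execute("SELECT MAX(id) FROM readings")
    (max_id,) = cur.fetchone()
    if last_seen is None:
        last_seen = max_id                     # first pass: just record the baseline
    elif max_id is not None and max_id > last_seen:
        print("new rows arrived:", last_seen, "->", max_id)   # notify the GUI here
        last_seen = max_id
    cur.close()
    time.sleep(5)                              # poll every few seconds
```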
When I encrypt the password I'll not be able to use it to login the other machine.\nIs the public\/private keys set the only option in this case?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":454,"Q_Id":11587845,"Users Score":3,"Answer":"In my opinion using key authentication is the best and safest in my opinion for the SSH part and is easy to implement.\nNow to the meat of your question. You want to store these keys, or passwords, into a database and still be able to use them. This requires you to have a master password that can decrypt them from said database. This points a point of failure into a single password which is not ideal. You could come up with any number of fancy schemes to encrypt and store these master passwords, but they are still on the machine that is used to log into the other servers and thus still a weak point.\nInstead of looking at this from the password storage point of view, look at it from a server security point of view. If someone has access to the server with the python daemon running on it then they can log into any other server thus this is more of a environment security issue than a password one.\nIf you can think of a way to get rid of this singular point of failure then encrypting and storing the passwords in a remote database will be fine as long as the key\/s used to encrypt them are secure and unavailable to anyone else which is outside the realm of the python\/database relationship.","Q_Score":1,"Tags":"python,mysql,ssh,password-protection","A_Id":11588240,"CreationDate":"2012-07-20T22:40:00.000","Title":"Storing encrypted passwords storage for remote linux servers","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python and sqlite3 to handle a website. I need all timezones to be in localtime, and I need daylight savings to be accounted for. The ideal method to do this would be to use sqlite to set a global datetime('now') to be +10 hours.\nIf I can work out how to change sqlite's 'now' with a command, then I was going to use a cronjob to adjust it (I would happily go with an easier method if anyone has one, but cronjob isn't too hard)","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3357,"Q_Id":11590082,"Users Score":2,"Answer":"you can try this code, I am in Taiwan , so I add 8 hours:\nDateTime('now','+8 hours')","Q_Score":1,"Tags":"python,sqlite,timezone,dst","A_Id":21014456,"CreationDate":"2012-07-21T06:48:00.000","Title":"sqlite timezone now","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a method of checking all the fields in a MySQL table. Let's say I have a MySQL table with the fields One Two Three Four Five and Big One. These are fields that contains numbers that people enter in, sort of like the Mega Millions. Users enter numbers and it inserts the numbers they picked from least to greatest.\nNumbers would be drawn and I need a way of checking if any of the numbers that each user picked matched the winning numbers drawn, same for the Big One. If any matched, I would have it do something specific, like if one number or all numbers matched.\nI hope you understand what I am saying. 
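For the sqlite timezone answer above, the same datetime('now') modifiers can be exercised directly from Python; the 'localtime' modifier is also worth knowing, since it follows the operating system's DST rules rather than a fixed offset. A quick sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
print(conn.execute("SELECT datetime('now')").fetchone()[0])                # UTC
print(conn.execute("SELECT datetime('now', '+10 hours')").fetchone()[0])   # fixed offset
print(conn.execute("SELECT datetime('now', 'localtime')").fetchone()[0])   # OS local time, DST-aware
```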
Thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":11597835,"Users Score":0,"Answer":"I would imagine MySQL has some sort of 'set' logic in it, but if it's lacking, I know Python has sets, so I'll use an example of those in my solution:\n\nCreate a set with the numbers of the winning ticket:\nwinners = set({11, 22, 33, 44, 55})\nFor each query, jam all it's numbers into a set too:\ncurrent_user = set({$query[0], $query[1], $query[2]...$query[4]})\nPrint out how many overlapping numbers there are:\nprint winners.intersection(current_user)\n\nAnd finally, for the 'big one', use an if statement.\nLet me know if this helps.","Q_Score":1,"Tags":"python,mysql","A_Id":11919262,"CreationDate":"2012-07-22T04:49:00.000","Title":"Python MySQL Number Matching","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a company back-end that should include a password-safe type feature. Obviously the passwords needs to be plain text so the users can read them, or at least \"reversible\" to plain text somehow, so I can't use hashes.\nIs there anything more secure I can do than just placing the passwords in plain-text into the database?\n\nNote: These are (mostly) auto-generated passwords that is never re-used for anything except the purpose they are saved for, which is mostly FTP server credentials.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":105,"Q_Id":11603136,"Users Score":2,"Answer":"You can use MySQL's ENCODE(), DES_ENCRYPT() or AES_ENCRYPT() functions, and store the keys used to encrypt in a secure location.","Q_Score":0,"Tags":"python,mysql,hash,passwords","A_Id":11603255,"CreationDate":"2012-07-22T18:59:00.000","Title":"What security measures can I take to secure passwords that can't be hashed in a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am wondering what the most reliable way to generate a timestamp is using Python. I want this value to be put into a MySQL database, and for other programming languages and programs to be able to parse this information and use it.\nI imagine it is either datetime, or the time module, but I can't figure out which I'd use in this circumstance, nor the method.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":390,"Q_Id":11642105,"Users Score":0,"Answer":"For a database, your best bet is to store it in the database-native format, assuming its precision matches your needs. For a SQL database, the DATETIME type is appropriate.\nEDIT: Or TIMESTAMP.","Q_Score":0,"Tags":"python,time","A_Id":11642138,"CreationDate":"2012-07-25T03:04:00.000","Title":"Most reliable way to generate a timestamp with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am wondering what the most reliable way to generate a timestamp is using Python. 
I want this value to be put into a MySQL database, and for other programming languages and programs to be able to parse this information and use it.\nI imagine it is either datetime, or the time module, but I can't figure out which I'd use in this circumstance, nor the method.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":390,"Q_Id":11642105,"Users Score":0,"Answer":"if it's just a simple timestamp that needs to be read by multiple programs, but which doesn't need to \"mean\" anything in sql, and you don't care about different timezones for different users or anything like that, then seconds from the unix epoch (start of 1970) is a simple, common standard, and is returned by time.time().\npython actually returns a float (at least on linux), but if you only need accuracy to the second store it as an integer.\nif you want something that is more meaningful in sql then use a sql type like datetime or timestamp. that lets you do more \"meaningful\" queries (like query for a particular day) more easily (you can do them with seconds from epoch too, but it requires messing around with conversions), but it also gets more complicated with timezones and converting into different formats in different languages.","Q_Score":0,"Tags":"python,time","A_Id":11642253,"CreationDate":"2012-07-25T03:04:00.000","Title":"Most reliable way to generate a timestamp with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a system where entries are added to a SQL database sporadically throughout the day. I am trying to create a system which imports these entries to SOLR each time. \nI cant seem to find any infomation about adding individual records to SOLR from SQL. Can anyone point me in the right direction or give me a bit more information to get me going?\nAny help would be much appreciated,\nJames","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":606,"Q_Id":11647112,"Users Score":0,"Answer":"Besides DIH, you could setup a trigger in your db to fire Solr's REST service that would update changed docs for all inserted\/updated\/deleted documents.\nAlso, you could setup a Filter (javax.servlet spec) in your application to intercept server requests and push them to Solr before they even reach database (it can even be done in the same transaction, but there's rarely a real need for that, eventual consistency is usually fine for search engines).","Q_Score":1,"Tags":"python,search,solr","A_Id":11679439,"CreationDate":"2012-07-25T09:52:00.000","Title":"SOLR - Adding a single entry at a time","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am trying to connect to mysql in django. it asked me to install the module. the module prerequisites are \"MySQL 3.23.32 or higher\" etc. do i really need to install mysql, cant i just connect to remote one??","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":62,"Q_Id":11653040,"Users Score":4,"Answer":"You need to install the client libraries. The Python module is a wrapper around the client libraries. 
You don't need to install the server.","Q_Score":0,"Tags":"python,mysql,django","A_Id":11653215,"CreationDate":"2012-07-25T15:19:00.000","Title":"Not able to install python mysql module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"when I try to install the pyodbc by using \"python setup.py build install\", it shows up with some errors like the following:\ngcc -pthread -fno-strict-aliasing -DNDEBUG -march=i586 -mtune=i686 -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fwrapv -fPIC -DPYODBC_VERSION=3.0.3 -I\/usr\/include\/python2.6 -c \/root\/Desktop\/pyodbc-3.0.3\/src\/sqlwchar.cpp -o build\/temp.linux-i686-2.6\/root\/Desktop\/pyodbc-3.0.3\/src\/sqlwchar.o -Wno-write-strings\nIn file included from \/root\/Desktop\/pyodbc-3.0.3\/src\/sqlwchar.cpp:2:\n\/root\/Desktop\/pyodbc-3.0.3\/src\/pyodbc.h:41:20: error: Python.h: No such file or directory\n\/root\/Desktop\/pyodbc-3.0.3\/src\/pyodbc.h:42:25: error: floatobject.h: No such file or directory\n\/root\/Desktop\/pyodbc-3.0.3\/src\/pyodbc.h:43:24: error: longobject.h: No such file or directory\n\/root\/Desktop\/pyodbc-3.0.3\/src\/pyodbc.h:44:24: error: boolobject.h: No such file or directory\nand few more lines with similar feedback, in the end of the reply is like:\n\/root\/Desktop\/pyodbc-3.0.3\/src\/pyodbccompat.h:106: error: expected \u2018,\u2019 or \u2018;\u2019 before \u2018{\u2019 token\nerror: command 'gcc' failed with exit status 1\nand I have searched around for the solutions, everyone says to install python-devel and it will be fine, but I got this working on a 64bit opensuse without the python-devel,but it doesn't work on the 32bit one, and I couldn't found the right version for python2.6.0-8.12.2 anywhere on the internet... so I'm quite confused, please help! thanks in advance.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4644,"Q_Id":11691039,"Users Score":2,"Answer":"I don't see a way around having the Python header files (which are part of python-devel package). They are required to compile the package.\nMaybe there was a pre-compiled egg for the 64bit version somewhere, and this is how it got installed.\nWhy are you reluctant to install python-devel?","Q_Score":1,"Tags":"python,pyodbc,opensuse","A_Id":11691895,"CreationDate":"2012-07-27T15:32:00.000","Title":"Error when installing pyodbc on opensuse","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing an application that makes heavy use of geodjango (on PostGis) and spatial lookups. Distance queries on database side work great, but now I have to calculate distance between two points on python side of application (these points come from models obtained using separate queries). \nI can think of many ways that would calculate this distance, but I want to know do it in manner that is consistent with what the database will output.\nIs there any magic python function that calculates distance between two points given in which SRID they are measured? 
If not what other approach could you propose.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1898,"Q_Id":11703407,"Users Score":0,"Answer":"Use the appropriate data connection to execute the SQL function that you're already using, then retrieve that... Keeps everything consistent.","Q_Score":2,"Tags":"python,django,gis,postgis,geodjango","A_Id":11703980,"CreationDate":"2012-07-28T17:57:00.000","Title":"How to calculate distance between points on python side of my application in way that is consistent in what database does","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like <92>,<89>, <94> etc. \nAny thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1903,"Q_Id":11705114,"Users Score":0,"Answer":"Are all these \"junk\" characters in the range <80> to <9F>? If so, it's highly likely that they're Microsoft \"Smart Quotes\" (Windows-125x encodings). Someone wrote up the text in Word or Outlook, and copy\/pasted it into a Web application. Both Latin-1 and UTF-8 regard these characters as control characters, and the usual effect is that the text display gets cut off (Latin-1) or you see a ?-in-black-diamond-invalid-character (UTF-8).\nNote that Word and Outlook, and some other MS products, provide a UTF-8 version of the text for clipboard use. Instead of <80> to <9F> codes, Smart Quotes characters will be proper multibyte UTF-8 sequences. If your Web page is in UTF-8, you should normally get a proper UTF-8 character instead of the Smart Quote in Windows-125x encoding. Also note that this is not guaranteed behavior, but \"seems to work pretty consistently\". It all depends on a UTF-8 version of the text being available, and properly handled (i.e., you didn't paste into, say, gvim on the PC, and then copy\/paste into a Web text form). This may well also work for various PC applications, so long as they are looking for UTF-8-encoded text.","Q_Score":1,"Tags":"python,mysql,vim,encoding,smart-quotes","A_Id":18619898,"CreationDate":"2012-07-28T22:20:00.000","Title":"Junk characters (smart quotes, etc.) in output file","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an items table that is related to an item_tiers table. The second table consists of inventory receipts for an item in the items table. There can be 0 or more records in the item_tiers table related to a single record in the items table. 
How can I, using query, get only records that have 1 or more records in item tiers....\nresults = session.query(Item).filter(???).join(ItemTier)\nWhere the filter piece, in pseudo code, would be something like ...\nif the item_tiers table has one or more records related to item.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":144,"Q_Id":11746610,"Users Score":1,"Answer":"If there is a foreign key defined between tables, SA will figure the join condition for you, no need for additional filters.\n\nThere is, and i was really over thinking this. Thanks for the fast response. \u2013 Ominus","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":11747157,"CreationDate":"2012-07-31T18:24:00.000","Title":"SQLAlchemy - Query show results where records exist in both table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I currently run my own server \"in the cloud\" with PHP using mod_fastcgi and mod_vhost_alias. My mod_vhost_alias config uses a VirtualDocumentRoot of \/var\/www\/%0\/htdocs so that I can serve any domain that routes to my server's IP address out of a directory with that name.\nI'd like to begin writing and serving some Python projects from my server, but I'm unsure how to configure things so that each site has access to the appropriate script processor.\nFor example, for my blog, dead-parrot.com, I'm running a PHP blog platform (Habari, not WordPress). But I'd like to run an app I've written in Flask on not-dead-yet.com. \nI would like to enable Python execution with as little disruption to my mod_vhost_alias configuration as possible, so that I can continue to host new domains on this server simply by adding an appropriate directory. I'm willing to alter the directory structure, if necessary, but would prefer not to add additional, specific vhost config files for every new Python-running domain, since apart from being less convenient than my current setup with just PHP, it seems kind of hacky to have to name these earlier alphabetically to get Apache to pick them up before the single mod_vhost_alias vhost config.\nDo you know of a way that I can set this up to run Python and PHP side-by-side as conveniently as I do just PHP? Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6266,"Q_Id":11796126,"Users Score":0,"Answer":"Even I faced the same situation, and initially I was wondering in google but later realised and fixed it, I'm using EC2 service in aws with ubuntu and I created alias to php and python individually and now I can access both.","Q_Score":3,"Tags":"php,python,apache,mod-vhost-alias","A_Id":36646397,"CreationDate":"2012-08-03T12:53:00.000","Title":"Can I run PHP and Python on the same Apache server using mod_vhost_alias and mod_wsgi?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have really big collection of files, and my task is to open a couple of random files from this collection treat their content as a sets of integers and make an intersection of it. 
\nThis process is quite slow due to long times of reading files from disk into memory so I'm wondering whether this process of reading from file can be speed up by rewriting my program in some \"quick\" language. Currently I'm using python which could be inefficient for this kind of job. (I could implement tests myself if I knew some other languages beside python and javascript...)\nAlso will putting all the date into database help? Files wont fit the RAM anyway so it will be reading from disk again only with database related overhead. \nThe content of files is the list of long integers. 90% of the files are quite small, less than a 10-20MB, but 10% left are around 100-200mb. As input a have filenames and I need read each of the files and output integers present in every file given. \nI've tried to put this data in mongodb but that was as slow as plain files based approach because I tried to use mongo index capabilities and mongo does not store indexes in RAM. \nNow I just cut the 10% of the biggest files and store rest in the redis, sometimes accessing those big files. This is, obviously temporary solution because my data grows and amount of RAM available does not.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":213,"Q_Id":11805309,"Users Score":3,"Answer":"One thing you could try is calculating intersections of the files on a chunk-by-chunk basis (i.e., read x-bytes into memory from each, calculate their intersections, and continue, finally calculating the intersection of all intersections).\nOr, you might consider using some \"heavy-duty\" libraries to help you. Consider looking into PyTables (with HDF storage)\/using numpy for calculating intersections. The benefit there is that the HDF layer should help deal with not keeping the entire array structure in memory all at once---though I haven't tried any of these tools before, it seems like they offer what you need.","Q_Score":4,"Tags":"python,file,file-io,io,filesystems","A_Id":11805422,"CreationDate":"2012-08-04T01:52:00.000","Title":"Is speed of file opening\/reading language dependent?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some SQL Server tables that contain Image data types.\nI want to make it somehow usable in PostgreSQL. I'm a python programmer, so I have a lot of learn about this topic. Help?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":297,"Q_Id":11805709,"Users Score":0,"Answer":"What you need to understand first is that the interfaces at the db level are likely to be different. Your best option is to write an abstraction layer for the blobs (and maybe publish it open source for the dbs you want to support).\nOn the PostgreSQL side you need to figure out whether you want to bo with bytea or lob. These are very different and have different features and limitations. If you are enterprising you might build in at least support in the spec for selecting them. 
In general bytea is better for smaller files while lob has more management overhead but it can both support larger files and supports chunking, seeking etc.","Q_Score":0,"Tags":"python,sql-server,postgresql,blob","A_Id":15846639,"CreationDate":"2012-08-04T03:36:00.000","Title":"How can I select and insert BLOB between different databases using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the High-Replication Datastore (I'm using NDB), the consistency is eventual. In order to get a guaranteed complete set, ancestor queries can be used. Ancestor queries also provide a great way to get all the \"children\" of a particular ancestor with kindless queries. In short, being able to leverage the ancestor model is hugely useful in GAE.\nThe problem I seem to have is rather simplistic. Let's say I have a contact record and a message record. A given contact record is being treated as the ancestor for each message. However, it is possible that two contacts are created for the same person (user error, different data points, whatever). This situation produces two contact records, which have messages related to them.\nI need to be able to \"merge\" the two records, and bring put all the messages into one big pile. Ideally, I'd be able to modify ancestor for one of the record's children.\nThe only way I can think of doing this, is to create a mapping and make my app check to see if record has been merged. If it has, look at the mappings to find one or more related records, and perform queries against those. This seems hugely inefficient. Is there more of \"by the book\" way of handling this use case?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1422,"Q_Id":11854137,"Users Score":9,"Answer":"The only way to change the ancestor of an entity is to delete the old one and create a new one with a new key. This must be done for all child (and grand child, etc) entities in the ancestor path. If this isn't possible, then your listed solution works.\nThis is required because the ancestor path of an entity is part of its unique key. Parents of entities (i.e., entities in the ancestor path) need not exist, so changing a parent's key will leave the children in the datastore with no parent.","Q_Score":5,"Tags":"python,google-app-engine,google-cloud-datastore","A_Id":11855209,"CreationDate":"2012-08-07T21:04:00.000","Title":"How to change ancestor of an NDB record?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am looking for a pure-python SQL library that would give access to both MySQL and PostgreSQL.\nThe only requirement is to run on Python 2.5+ and be pure-python, so it can be included with the script and still run on most platforms (no-install).\nIn fact I am looking for a simple solution that would allow me to write SQL and export the results as CSV files.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2182,"Q_Id":11868582,"Users Score":1,"Answer":"Use SQL-Alchemy. 
It will work with most database types, and certainly does work with postgres and MySQL.","Q_Score":3,"Tags":"python,mysql,postgresql","A_Id":11870176,"CreationDate":"2012-08-08T16:03:00.000","Title":"Pure python SQL solution that works with PostgreSQL and MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am relatively new to Django and one thing that has been on my mind is changing the database that will be used when running the project.\nBy default, the DATABASES 'default' is used to run my test project. But in the future, I want to be able to define a 'production' DATABASES configuration and have it use that instead.\nIn a production environment, I won't be able to \"manage.py runserver\" so I can't really set the settings.\nI read a little bit about \"routing\" the database to use another database, but is there an easier way so that I won't need to create a new router every time I have another database I want to use (e.g. I can have test database, production database, and development database)?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":304,"Q_Id":11878454,"Users Score":1,"Answer":"You can just use a different settings.py in your production environment.\nOr - which is a bit cleaner - you might want to create a file settings_local.py next to settings.py where you define a couple of settings that are specific for the current machine (like DEBUG, DATABASES, MEDIA_ROOT etc.) and do a from settings_local import * at the beginning of your generic settings.py file. Of course settings.py must not overwrite these imported settings.","Q_Score":0,"Tags":"python,database,django,configuration","A_Id":11878547,"CreationDate":"2012-08-09T07:14:00.000","Title":"How do I make Django use a different database besides the 'default'?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly.\nShould I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":9082,"Q_Id":11889104,"Users Score":1,"Answer":"I think connection pooling is the best thing to do if this application is to serve multiple clients and concurrently.","Q_Score":8,"Tags":"python,postgresql,web-applications,flask,psycopg2","A_Id":11889137,"CreationDate":"2012-08-09T17:48:00.000","Title":"Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly.\nShould I open a new database connection (and subsequently close it) for each new request? 
Or should I use something to pool these connections?","AnswerCount":5,"Available Count":3,"Score":0.1194272985,"is_accepted":false,"ViewCount":9082,"Q_Id":11889104,"Users Score":3,"Answer":"The answer depends on how many such requests will happen and how many concurrently in your web app ? Connection pooling is usually a better idea if you expect your web app to be busy with 100s or even 1000s of user concurrently logged in. If you are only doing this as a side project and expect less than few hundred users, you can probably get away without pooling.","Q_Score":8,"Tags":"python,postgresql,web-applications,flask,psycopg2","A_Id":11889659,"CreationDate":"2012-08-09T17:48:00.000","Title":"Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly.\nShould I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":9082,"Q_Id":11889104,"Users Score":0,"Answer":"Pooling seems to be totally impossible in context of Flask, FastAPI and everything relying on wsgi\/asgi dedicated servers with multiple workers.\nReason for this behaviour is simple: you have no control about the pooling and master thread\/process.\nA pooling instance is only usable for a single thread serving a set of clients - so for just one worker. Any other worker will get it's own pool and therefore there cannot be any sharing of established connections.\nLogically it's also impossible, because you cannot share these object states across threads\/processes in multi core env with python (2.x - 3.8).","Q_Score":8,"Tags":"python,postgresql,web-applications,flask,psycopg2","A_Id":61078209,"CreationDate":"2012-08-09T17:48:00.000","Title":"Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've searched and I can't seem to find anything.\nHere is the situation:\n\nt1 = table 1\nt2 = table 2\nv = view of table 1 and table 2 joined\n\n1.) User 1 is logged into database. Does SELECT * FROM v;\n2.) User 2 is logged into same database and does INSERT INTO t1 VALUES(1, 2, 3);\n3.) User 1 does another SELECT * FROM v; User 1 can't see the inserted row from User 2 until logging out and logging back in.\nSeems like views don't get sync'd across \"sessions\"? How can I make it so User 1 can see the INSERT?\nFYI I'm using python and mysqldb.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":542,"Q_Id":11979276,"Users Score":1,"Answer":"Instead of logging out and logging back in, user 2 could simply commit their transaction. \nMySQL InnoDB tables use transactions, requiring a BEGIN before one or more SQL statements, and either COMMIT or ROLLBACK afterwards, resulting in all your updates\/inserts\/deletes either happening or not. 
But there's a \"feature\" that results in an automatic BEGIN if not explicitly issued, and an automatic COMMIT when the connection is closed. This is why you see the changes after the other user closes the connection. \nYou should really get into the habit of explicitly beginning and committing your transactions, but there's also another way: set connection.autocommit = True, which will result in every sql update\/insert\/delete being wrapped in its own implicit transaction, resulting in the behavior you originally expected.\nDon't take what I said above to be entirely factually correct, but it suffices to explain the fundamentals of what's going on and how to control it.","Q_Score":2,"Tags":"python,mysql,mysql-python","A_Id":11979334,"CreationDate":"2012-08-16T00:51:00.000","Title":"MySQL view doesn't update when underlaying table changes across different users","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am fairly new to databases and have just figured out how to use MongoDB in python2.7 on Ubuntu 12.04. An application I'm writing uses multiple python modules (imported into a main module) that connect to the database. Basically, each module starts by opening a connection to the DB, a connection which is then used for various operations.\nHowever, when the program exits, the main module is the only one that 'knows' about the exiting, and closes its connection to MongoDB. The other modules do not know this and have no chance of closing their connections. Since I have little experience with databases, I wonder if there are any problems leaving connections open when exiting.\nShould I:\n\nLeave it like this?\nInstead open the connection before and close it after each operation?\nChange my application structure completely?\nSolve this in a different way?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1207,"Q_Id":11989408,"Users Score":3,"Answer":"You can use one pymongo connection across different modules. You can open it in a separate module and import it to other modules on demand. After program finished working, you are able to close it. This will be the best option.\nAbout other questions:\n\nYou can leave like this (all connections will be closed when script finishes execution), but leaving something unclosed is a bad form.\nYou can open\/close connection for each operation (but establishing connection is a time-expensive operation.\nThat what I'd advice you (see this answer's first paragraph)\nI think this point can be merged with 3.","Q_Score":3,"Tags":"python,mongodb,pymongo","A_Id":11989459,"CreationDate":"2012-08-16T14:29:00.000","Title":"When to disconnect from mongodb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could any one shed some light on how to migrate my MongoDB to PostgreSQL? 
What tools do I need, what about handling primary keys and foreign key relationships, etc?\nI had MongoDB set up with Django, but would like to convert it back to PostgreSQL.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1475,"Q_Id":12034390,"Users Score":1,"Answer":"Whether the migration is easy or hard depends on a very large number of things including how many different versions of data structures you have to accommodate. In general you will find it a lot easier if you approach this in stages:\n\nEnsure that all the Mongo data is consistent in structure with your RDBMS model and that the data structure versions are all the same.\nMove your data. Expect that problems will be found and you will have to go back to step 1.\n\nThe primary problems you can expect are data validation problems because you are moving from a less structured data platform to a more structured one.\nDepending on what you are doing regarding MapReduce you may have some work there as well.","Q_Score":2,"Tags":"python,django,mongodb,database-migration,django-postgresql","A_Id":15858338,"CreationDate":"2012-08-20T08:25:00.000","Title":"From MongoDB to PostgreSQL - Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two programs: the first only write to sqlite db, and the second only read. May I be sure that there are never be some errors? Or how to avoid from it (in python)?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":383,"Q_Id":12046760,"Users Score":1,"Answer":"generally, it is safe if there is only one program writing the sqlite db at one time.\n(If not, it will raise exception like \"database is locked.\" while two write operations want to write at the same time.)\nBy the way, it is no way to guarantee the program will never have errors. using Try ... catch to handle exception will make the program much safer.","Q_Score":3,"Tags":"python,concurrency,sqlite","A_Id":12047988,"CreationDate":"2012-08-20T23:51:00.000","Title":"sqlite3: safe multitask read & write - how to?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm using xlrd to pull data from an Excel sheet. I get it open and it pulls the data perfectly fine.\nMy problem is the sheet updates automatically with data from another program. It is updating stock information using an rtd pull.\nHas anyone ever figured out any way to pull data from a sheet like this that is up-to-date?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":364,"Q_Id":12049067,"Users Score":1,"Answer":"Since all that xlrd can do is read a file, I'm assuming that the excel file is saved after each update.\nIf so, use os.stat() on the file before reading it with xlrd and save the results (or at least those of os.stat().st_mtime). Then periodically use os.stat() again, and check if the file modification time (os.stat().st_mtime) has changed, indicating that the file has been changed. 
If so, re-read the file with xlrd.","Q_Score":0,"Tags":"python,excel","A_Id":12049844,"CreationDate":"2012-08-21T05:48:00.000","Title":"Pulling from an auto-updating Excel sheet","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently sitting in front of a more specific problem which has to do with fail-over support \/ redundancy for a specific web site which will be hosted over @ WebFaction. Unfortunately replication at the DB level is not an option as I would have to install my own local PostgreSQL instances for every account and I am worried about performance amongst other things. So I am thinking about using Django's multi-db feature and routing all writes to all (shared) databases and the balance the reads to the nearest db.\nMy problem is now that all docs I read seem to indicate that this would most likely not be possible. To be more precise what I would need:\n\nroute all writes to a specific set of dbs (same type, version, ...)\nif one write fails, all the others will be rolled back (transactions)\nroute all reads to the nearest db (could be statically configured)\n\nIs this currently possible with Django's multi-db support?\nThanks a lot in advance for any help\/hints...","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":345,"Q_Id":12070031,"Users Score":1,"Answer":"I was looking for something similar. What I found is: \n1) Try something like Xeround cloud DB - it's built on MySQL and is compatible but doesn't support savepoints. You have to disable this in (a custom) DB engine. The good thing is that they replicate at the DB level and provide automatic scalability and failover. Your app works as if there's a single DB. They are having some connectivity issues at the moment though which are blocking my migration.\n2) django-synchro package - looks promissing for replications at the app layer but I have some concerns about it. It doesn't work on objects.update() which I use a lot in my code.","Q_Score":1,"Tags":"python,django,redundancy,webfaction,django-orm","A_Id":12934130,"CreationDate":"2012-08-22T09:24:00.000","Title":"Django multi-db: Route all writes to multiple databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo?\nI've read all about both but there's a lot I don't understand. Mongo sounds better\/easier but at the same time it sounds better\/easier for those that already know Relational Databases very well and are looking for something more agile.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":5188,"Q_Id":12078928,"Users Score":1,"Answer":"Postgres is a great database for Django in production. sqlite is amazing to develop with. You will be doing a lot of work to try to not use a RDBMS on your first Django site.\nOne of the greatest strengths of Django is the smooth full-stack integration, great docs, contrib apps, app ecosystem. 
Choosing Mongo, you lose a lot of this. GeoDjango also assumes SQL and really loves postgres\/postgis above others - and GeoDjango is really awesome.\nIf you want to use Mongo, I might recommend that you start with something like bottle, flask, tornado, cyclone, or other that are less about the full-stack integration and less assuming about you using a certain ORM. The Django tutorial, for instance, assumes that you are using the ORM with a SQL DB.","Q_Score":3,"Tags":"python,sql,django,nosql","A_Id":12078992,"CreationDate":"2012-08-22T18:07:00.000","Title":"First time Django database SQL or NoSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo?\nI've read all about both but there's a lot I don't understand. Mongo sounds better\/easier but at the same time it sounds better\/easier for those that already know Relational Databases very well and are looking for something more agile.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5188,"Q_Id":12078928,"Users Score":0,"Answer":"sqlite is the simplest to start with. If you already know SQL toss a coin to choose between MySQL and Postgres for your first project!","Q_Score":3,"Tags":"python,sql,django,nosql","A_Id":12079233,"CreationDate":"2012-08-22T18:07:00.000","Title":"First time Django database SQL or NoSQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need python and php support. I am currently using mongodb and it is great for my data (test results), but I need to store results of a different type of test which are over 32 MB and exceed mongo limit of 16 MB.\nCurrently each test is a big python dictionary and I retrieve and represent them with php.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":128,"Q_Id":12090204,"Users Score":0,"Answer":"You can store up to 16MB of data per MongoDB BSON document (e.g. using the pymongo Binary datatype). For arbitrary large data you want to use GridFS which basically stored your data as chunks + extra metadata. When you using MongoDB with its replication features (replica sets) you will have kind of a distributed binary store (don't mix this up with a distributed filesystem (no integration with local filesystem).","Q_Score":2,"Tags":"php,python,mongodb,size,limit","A_Id":12090898,"CreationDate":"2012-08-23T11:07:00.000","Title":"no-sql database for document sizes over 32 MB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using MySQLdb. I am developing a simple GUI application using Rpy2. 
What my program does?\n- User can input the static data and mathematical operations will be computed using those data.\n- Another thing where I am lost is, user will give the location of their database and the program will computer maths using the data from the remote database.\nI have accomplished the result using the localhost. \nHow can I do it from the remote database? Any idea? \nThanx in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":187,"Q_Id":12091413,"Users Score":0,"Answer":"When you establish the MySQL connection, use the remote machines IP address \/ hostname and corresponding credentials (username, password).","Q_Score":0,"Tags":"python,database","A_Id":12091455,"CreationDate":"2012-08-23T12:17:00.000","Title":"How to take extract data from the remote database in Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use\n\npython 2.7 \npyodbc module\ngoogle app engine 1.7.1\n\nI can use pydobc with python but the Google App Engine can't load the module. I get a no module named pydobc error.\nHow can I fix this error or how can use MS-SQL database with my local Google App Engine.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2793,"Q_Id":12108816,"Users Score":0,"Answer":"You could, at least in theory, replicate your data from the MS-SQL to the Google Cloud SQL database. It is possible create triggers in the MS-SQL database so that every transaction is reflected on your App Engine application via a REST API you will have to build.","Q_Score":3,"Tags":"python,sql-server,google-app-engine","A_Id":12116542,"CreationDate":"2012-08-24T11:45:00.000","Title":"How can use Google App Engine with MS-SQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to copy and use the example 'User Authentication with PostgreSQL database' from the web.py cookbook. I can not figure out why I am getting the following errors.\n\n at \/login\n'ThreadedDict' object has no attribute 'login'\n at \/login\n'ThreadedDict' object has no attribute 'privilege'\n\nHere is the error output to the terminal for the second error. 
(the first is almost identical)\n\nTraceback (most recent call last):\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/web.py-0.37-py2.7.egg\/web\/application.py\", line 239, in process\n return self.handle()\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/web.py-0.37-py2.7.egg\/web\/application.py\", line 230, in handle\n return self._delegate(fn, self.fvars, args)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/web.py-0.37-py2.7.egg\/web\/application.py\", line 420, in _delegate\n return handle_class(cls)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/web.py-0.37-py2.7.egg\/web\/application.py\", line 396, in handle_class\n return tocall(*args)\n File \"\/home\/erik\/Dropbox\/Python\/Web.py\/Code.py\", line 44, in GET\n render = create_render(session.privilege)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/web.py-0.37-py2.7.egg\/web\/session.py\", line 71, in __getattr__\n return getattr(self._data, name)\nAttributeError: 'ThreadedDict' object has no attribute 'privilege'\n\n127.0.0.1:36420 - - [25\/Aug\/2012 01:12:38] \"HTTP\/1.1 GET \/login\" - 500 Internal Server Error\n\n\nHere is my code.py file. Pretty much cut-n-paste from the cookbook. I tried putting all of the class and def on top of the main code. I have also tried launching python with sudo as mentioned in another post.\n\nimport web\n\nclass index:\n def GET(self):\n todos = db.select('todo')\n return render.index(todos)\n\nclass add:\n def POST(self):\n i = web.input()\n n = db.insert('todo', title=i.title)\n raise web.seeother('\/')\n\ndef logged():\n return False #I added this to test error #1, Now I get error #2\n #if session.login==1:\n # return True\n #else:\n # return False\n\ndef create_render(privilege):\n if logged():\n if privilege == 0:\n render = web.template.render('templates\/reader')\n elif privilege == 1:\n render = web.template.render('templates\/user')\n elif privilege == 2:\n render = web.template.render('templates\/admin')\n else:\n render = web.template.render('templates\/communs')\n else:\n render = web.template.render('templates\/communs')\n return render\n\n\n\nclass Login:\n\n def GET(self):\n if logged():\n render = create_render(session.privilege)\n return '%s' % render.login_double()\n else:\n # This is where error #2 is\n render = create_render(session.privilege)\n return '%s' % render.login()\n\n def POST(self):\n name, passwd = web.input().name, web.input().passwd\n ident = db.select('users', where='name=$name', vars=locals())[0]\n try:\n if hashlib.sha1(\"sAlT754-\"+passwd).hexdigest() == ident['pass']:\n session.login = 1\n session.privilege = ident['privilege']\n render = create_render(session.privilege)\n return render.login_ok()\n else:\n session.login = 0\n session.privilege = 0\n render = create_render(session.privilege)\n return render.login_error()\n except:\n session.login = 0\n session.privilege = 0\n render = create_render(session.privilege)\n return render.login_error()\n\n\nclass Reset:\n\n def GET(self):\n session.login = 0\n session.kill()\n render = create_render(session.privilege)\n return render.logout()\n\n\n\n\n#web.config.debug = False\n\nrender = web.template.render('templates\/', base='layout')\nurls = (\n '\/', 'index',\n '\/add', 'add',\n '\/login', 'Login',\n '\/reset', 'Reset'\n )\n\napp = web.application(urls, globals())\ndb = web.database(dbn='postgres', user='hdsfgsdfgsd', pw='dfgsdfgsdfg', db='postgres', host='fdfgdfgd.com')\n\nstore = web.session.DiskStore('sessions')\n\n# Too me, it seems this is being ignored, at least the 
'initializer' part\nsession = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0})\n\n\n\nif __name__ == \"__main__\": app.run()","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2157,"Q_Id":12120539,"Users Score":0,"Answer":"Okay, I was able to figure out what I did wrong. Total newbie stuff and all part of the learning process. This code now works, well mostly. The part that I was stuck on is now working. See my comments in the code\nThanks\n\nimport web\n\nweb.config.debug = False\n\nrender = web.template.render('templates\/', base='layout')\nurls = (\n '\/', 'index',\n '\/add', 'add',\n '\/login', 'Login',\n '\/reset', 'Reset'\n )\n\napp = web.application(urls, globals())\ndb = web.database(blah, blah, blah)\n\nstore = web.session.DiskStore('sessions')\nsession = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0})\n\n\nclass index:\n def GET(self):\n todos = db.select('todo')\n return render.index(todos)\n\nclass add:\n def POST(self):\n i = web.input()\n n = db.insert('todo', title=i.title)\n raise web.seeother('\/')\n\ndef logged():\n if session.get('login', False):\n return True\n else:\n return False\n\ndef create_render(privilege):\n if logged():\n if privilege == 0:\n render = web.template.render('templates\/reader')\n elif privilege == 1:\n render = web.template.render('templates\/user')\n elif privilege == 2:\n render = web.template.render('templates\/admin')\n else:\n render = web.template.render('templates\/communs')\n else:\n ## This line is key, i do not have a communs folder, thus returning an unusable object\n #render = web.template.render('templates\/communs') #Original code from example\n\n render = web.template.render('templates\/', base='layout')\n return render\n\n\n\nclass Login:\n\n def GET(self):\n if logged():\n ## Using session.get('something') instead of session.something does not blow up when it does not exit \n render = create_render(session.get('privilege'))\n return '%s' % render.login_double()\n else:\n render = create_render(session.get('privilege'))\n return '%s' % render.login()\n\n def POST(self):\n name, passwd = web.input().name, web.input().passwd\n ident = db.select('users', where='name=$name', vars=locals())[0]\n try:\n if hashlib.sha1(\"sAlT754-\"+passwd).hexdigest() == ident['pass']:\n session.login = 1\n session.privilege = ident['privilege']\n render = create_render(session.get('privilege'))\n return render.login_ok()\n else:\n session.login = 0\n session.privilege = 0\n render = create_render(session.get('privilege'))\n return render.login_error()\n except:\n session.login = 0\n session.privilege = 0\n render = create_render(session.get('privilege'))\n return render.login_error()\n\n\nclass Reset:\n\n def GET(self):\n session.login = 0\n session.kill()\n render = create_render(session.get('privilege'))\n return render.logout()\n\n\nif __name__ == \"__main__\": app.run()","Q_Score":0,"Tags":"python,session,login,web.py","A_Id":12137859,"CreationDate":"2012-08-25T08:47:00.000","Title":"web.py User Authentication with PostgreSQL database example","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using PyMongo and gevent together, from a Django application. In production, it is hosted on Gunicorn.\nI am creating a single Connection object at startup of my application. 
I have some background task running continuously and performing a database operation every few seconds.\nThe application also serves HTTP requests as any Django app.\nThe problem I have is the following. It only happens in production, I have not been able to reproduce it on my dev environment. When I let the application idle for a little while (although the background task is still running), on the first HTTP request (actually the first few), the first \"find\" operation I perform never completes. The greenlet actually never resumes. This causes the first few HTTP requests to time-out.\nHow can I fix that? Is that a bug in gevent and\/or PyMongo?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":862,"Q_Id":12157350,"Users Score":4,"Answer":"I found what the problem is. By default PyMongo has no network timeout defined on the connections, so what was happening is that the connections in the pool got disconnected (because they aren't used for a while). Then when I try to reuse a connection and perform a \"find\", it takes a very long time for the connection be detected as dead (something like 15 minutes). When the connection is detected as dead, the \"find\" call finally throws an AutoReconnectError, and a new connection is spawned up to replace to stale one.\nThe solution is to set a small network timeout (15 seconds), so that the call to \"find\" blocks the greenlet for 15 seconds, raises an AutoReconnectError, and when the \"find\" is retried, it gets a new connection, and the operation succeeds.","Q_Score":3,"Tags":"python,mongodb,pymongo,gevent,greenlets","A_Id":12163744,"CreationDate":"2012-08-28T10:26:00.000","Title":"Deadlock with PyMongo and gevent","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to install mysql-python package on a machine with Centos 6.2 with Percona Server.\nHowever I'm running into EnvironmentError: mysql_config not found error. \nI've carefully searched information regarding this error but all I found is that one needs to add path to mysql_config binary to the PATH system variable.\nBut it looks like, with my percona installation, a don't have mysql_config file at all\nfind \/ -type f -name mysql_config returns nothing.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1025,"Q_Id":12202303,"Users Score":0,"Answer":"mysql_config is a part of mysql-devel package.","Q_Score":1,"Tags":"python,percona","A_Id":12202936,"CreationDate":"2012-08-30T17:27:00.000","Title":"mysql-python with Percona Server installation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a collection that is potentially going to be very large. Now I know MongoDB doesn't really have a problem with this, but I don't really know how to go about designing a schema that can handle a very large dataset comfortably. So I'm going to give an outline of the problem.\nWe are collecting large amounts of data for our customers. Basically, when we gather this data it is represented as a 3-tuple, lets say (a, b, c), where b and c are members of sets B and C respectively. In this particular case we know that the B and C sets will not grow very much over time. 
For our current customers we are talking about ~200,000 members. However, the A set is the one that keeps growing over time. Currently we are at about ~2,000,000 members per customer, but this is going to grow (possibly rapidly.) Also, there are 1->n relations between b->a and c->a.\nThe workload on this data set is basically split up into 3 use cases. The collections will be periodically updated, where A will get the most writes, and B and C will get some, but not many. The second use case is random access into B, then aggregating over some number of documents in C that pertain to b \\in B. And the last usecase is basically streaming a large subset from A and B to generate some new data.\nThe problem that we are facing is that the indexes are getting quite big. Currently we have a test setup with about 8 small customers, the total dataset is about 15GB in size at the moment, and indexes are running at about 3GB to 4GB. The problem here is that we don't really have any hot zones in our dataset. It's basically going to get an evenly distributed load amongst all documents.\nBasically we've come up with 2 options to do this. The one that I described above, where all data for all customers is piled into one collection. This means we'd have to create an index om some field that links the documents in that collection to a particular customer.\nThe other options is to throw all b's and c's together (these sets are relatively small) but divide up the C collection, one per customer. I can imangine this last solution being a bit harder to manage, but since we rarely access data for multiple customers at the same time, it would prevent memory problems. MongoDB would be able to load the customers index into memory and just run from there.\nWhat are your thoughts on this?\nP.S.: I hope this wasn't too vague, if anything is unclear I'll go into some more details.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":239,"Q_Id":12210307,"Users Score":1,"Answer":"It sounds like the larger set (A if I followed along correctly), could reasonably be put into its own database. I say database rather than collection, because now that 2.2 is released you would want to minimize lock contention between the busier database and the others, and to do that a separate database would be best (2.2 introduced database level locking). That is looking at this from a single replica set model, of course.\nAlso the index sizes sound a bit out of proportion to your data size - are you sure they are all necessary? Pruning unneeded indexes, combining and using compound indexes may well significantly reduce the pain you are hitting in terms of index growth (it would potentially make updates and inserts more efficient too). This really does need specifics and probably belongs in another question, or possibly a thread in the mongodb-user group so multiple eyes can take a look and make suggestions.\nIf we look at it with the possibility of sharding thrown in, then the truly important piece is to pick a shard key that allows you to make sure locality is preserved on the shards for the pieces you will frequently need to access together. That would lend itself more toward a single sharded collection (preserving locality across multiple related sharded collections is going to be very tricky unless you manually split and balance the chunks in some way). Sharding gives you the ability to scale out horizontally as your indexes hit the single instance limit etc. 
but it is going to make the shard key decision very important.\nAgain, specifics for picking that shard key are beyond the scope of this more general discussion, similar to the potential index review I mentioned above.","Q_Score":2,"Tags":"python,mongodb","A_Id":12216914,"CreationDate":"2012-08-31T06:56:00.000","Title":"Split large collection into smaller ones?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any generally accepted practices to get around this? Specifically, for user-submitted images uploaded to a web service. My application is running in Python.\nSome hacked solutions that came to mind:\n\nDisplay the uploaded image from a local directory until the S3 image is ready, then \"hand it off\" and update the database to reflect the change.\nDisplay a \"waiting\" progress indicator as a background gif and the image will just appear when it's ready (w\/ JavaScript)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":323,"Q_Id":12241945,"Users Score":1,"Answer":"I'd save time and not do anything. The wait times are pretty fast. \nIf you wanted to stall the end-user, you could just show a 'success' page without the image. If the image isn't available, most regular users will just hit reload.\nIf you really felt like you had to... I'd probably go with a javascript solution like this:\n\nhave a 'timestamp uploaded' column in your data store\nif the upload time is under 1 minute, instead of rendering an img=src tag... render some javascript that polls the s3 bucket in 15s intervals\n\nAgain, chances are most users will never experience this - and if they do, they won't really care. The UX expectations of user generated content are pretty low ( just look at Facebook ); if this is an admin backend for an 'enterprise' service that would make workflow better, you may want to invest time on the 'optimal' solution. For a public facing website though, i'd just forget about it.","Q_Score":0,"Tags":"python,amazon-s3,amazon-web-services","A_Id":12242133,"CreationDate":"2012-09-03T04:22:00.000","Title":"What are some ways to work with Amazon S3 not offering read-after-write consistency in US Standard?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating a game mod for Counter-Strike in python, and it's basically all done. The only thing left is to code a REAL database, and I don't have any experience on sqlite, so I need quite a lot of help.\nI have a Player class with attribute self.steamid, which is unique for every Counter-Strike player (received from the game engine), and self.entity, which holds in an \"Entity\" for player, and Entity-class has lots and lots of more attributes, such as level, name and loads of methods. 
And Entity is a self-made Python class).\nWhat would be the best way to implement a database, first of all, how can I save instances of Player with an other instance of Entity as it's attribute into a database, powerfully?\nAlso, I will need to get that users data every time he connects to the game server, (I have player_connect event), so how would I receive the data back?\nAll the tutorials I found only taught about saving strings or integers, but nothing about whole instances. Will I have to save every attribute on all instances (Entity instance has few more instances as it's attributes, and all of them have huge amounts of attributes...), or is there a faster, easier way?\nAlso, it's going to be a locally saved database, so I can't really use any other languages than sql.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":356,"Q_Id":12266016,"Users Score":0,"Answer":"You need an ORM. Either you roll your own (which I never suggest), or you use one that exists already. Probably the two most popular in Python are sqlalchemy, and the ORM bundled with Django.","Q_Score":1,"Tags":"python,database,sqlite,instance","A_Id":12268131,"CreationDate":"2012-09-04T14:47:00.000","Title":"Python sqlite3, saving instance of a class with an other instance as it's attribute?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a few a few model classes such as a user class which is passed a dictionary, and wraps it providing various methods, some of which communicate with the database when a value needs to be changed. The dictionary itself is made from an sqlalchemy RowProxy, so all its keys are actually attribute names taken directly from the sql user table. (attributes include user_id, username, email, passwd, etc)\nIf a user is logged in, should I simply save this dictionary to a redis key value store, and simply call a new user object when needed and pass it this dictionary from redis(which should be faster than only saving a user id in a session and loading the values again from the db based on that user_id)? \nOr should I somehow serialize the entire object and save it in redis? I'd appreciate any alternate methods of managing model and session objects that any of you feel would be better as well.\nIn case anyone is wondering I'm only using the sqlalchemy expression language, and not the orm. I'm using the model classes as interfaces, and coding against those.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":241,"Q_Id":12292277,"Users Score":4,"Answer":"Unless you're being really careful, serializing the entire object into redis is going to cause problems. You're effectively treating it like a cache, so you have to be careful that those values are expired if the user changes something about themselves. You also have to make sure that all of the values are serializable (likely via pickle). 
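A rough sketch of that cache-style variant, purely for illustration (assumes the redis-py client; the key name, the user_row dict and the load_from_db() helper are hypothetical placeholders):\nimport pickle\nimport redis\n\nr = redis.Redis(host='localhost', port=6379, db=0)\nkey = 'session:user:%d' % user_row['user_id']\nr.set(key, pickle.dumps(user_row))\nr.expire(key, 3600)  # expire so a stale copy cannot outlive profile edits\n\ncached = r.get(key)\nuser_row = pickle.loads(cached) if cached is not None else load_from_db(user_id)\n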
You didn't specify whether this is a premature optimization so I'm going to say that it probably is and recommend that you just track the user id and reload his information when you need it from your database.","Q_Score":1,"Tags":"python,session,sqlalchemy,session-state,pyramid","A_Id":12320928,"CreationDate":"2012-09-06T02:42:00.000","Title":"How do I go about storing session objects?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi I intend to draw a chart with data in an xlsx file.\nIn order to keep the style, I HAVE TO draw it within excel.\nI found a package named win32com, which can give a support to manipulate excel file with python on win32 platform, but I don't know where is the doc.....\nAnother similar question is how to change the style of cells, such as font, back-color ?\nSo maybe all I wanna know is the doc, you know how to fish is more useful than fishes.... and an example is better.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":2868,"Q_Id":12296563,"Users Score":1,"Answer":"Documentation for win32com is next to non-existent as far I know. However, I use the following method to understand the commands.\n\nMS-Excel\nIn Excel, record a macro of whatever action you intend to, say plotting a chart. Then go to the Macro menu and use View Macro to get the underlying commands. More often than not, the commands used would guide you to the corresponding commands in python that you need to use.\n\nPythonwin\nYou can use pythonwin to browse the underlying win32com defined objects (in your case Microsoft Excel Objects). In pythonwin (which can be found at \\Lib\\site-packages\\pythonwin\\ in your python installation), go to Tools -> COM Makepy Utility, select your required Library (in this case, Microsoft Excel 14.0 Object Library) and press Ok. Then when the process is complete, go to Tools -> COM Browser and open the required library under Registered Libraries. Note the ID no. as this would correspond to the source file. You can browse the various components of the library in the COM Browser.\n\nSource\nGo to \\Lib\\site-packages\\win32com\\ in your python installation folder. Run makepy.py and choose the required library. After this, the source file of the library can be found at \\Lib\\site-packages\\win32com\\gen_py . It is one of those files with the wacky name. The name corresponds to that found in Pythonwin. Open the file, and search for the commands you saw in the Excel Macro. (#2 and #3 maybe redundant, I am not sure)","Q_Score":0,"Tags":"python,excel,win32com","A_Id":13086152,"CreationDate":"2012-09-06T09:03:00.000","Title":"How to draw a chart with excel using python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use Python with SQLAlchemy for some relational tables. For the storage of some larger data-structures I use Cassandra. I'd prefer to use just one technology (cassandra) instead of two (cassandra and PostgreSQL). 
Is it possible to store the relational data in cassandra as well?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":8349,"Q_Id":12297847,"Users Score":3,"Answer":"playOrm supports JOIN on noSQL so that you CAN put relational data into noSQL but it is currently in java. We have been thinking of exposing a S-SQL language from a server for programs like yours. Would that be of interest to you?\nThe S-SQL would look like this(if you don't use partitions, you don't even need anything before the SELECT statement piece)...\nPARTITIONS t(:partId) SELECT t FROM TABLE as t INNER JOIN t.security as s WHERE s.securityType = :type and t.numShares = :shares\")\nThis allows relational data in a noSQL environment AND IF you partition your data, you can scale as well very nicely with fast queries and fast joins.\nIf you like, we can quickly code up a prototype server that exposes an interface where you send in S-SQL requests and we return some form of json back to you. We would like it to be different than SQL result sets which was a very bad idea when left joins and inner joins are in the picture.\nie. we would return results on a join like so (so that you can set a max results that actually works)...\ntableA row A - tableB row45\n - tableB row65\n - tableB row 78\ntableA row C - tableB row46\n - tableB row93\nNOTICE that we do not return multiple row A's so that if you have max results 2 you get row A and row C where as in ODBC\/JDBC, you would get ONLY rowA two times with row45 and row 65 because that is what the table looks like when it is returned (which is kind of stupid when you are in an OO language of any kind).\njust let playOrm team know if you need anything on the playOrm github website.\nDean","Q_Score":6,"Tags":"python,sqlalchemy,cassandra","A_Id":12302894,"CreationDate":"2012-09-06T10:17:00.000","Title":"Can I use SQLAlchemy with Cassandra CQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to expand my recommender system to include other features (dimensions). So far, I'm tracking how a user rates some document, and using that to do the recommendations. I'm interested in adding more features, such as user location, age, gender, and so on.\nSo far, a few mysql tables have been enough to handle this, but i fear it will quickly become messy as i add more features.\nMy question: how can i best represent and persist this kind of multi dimensional data?\nPython specific tips would be helpful.\nThank you","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":289,"Q_Id":12355416,"Users Score":0,"Answer":"An SQL database should work fine in your case. In fact, you can store all the training examples in just one database, each row representing a particular training set and each column representing a feature. You can add features by adding collumns as and when required. In a relational database, you might come across access errors when querying for your data for various inconsistency reasons. Try using a NoSQL database. I personally user MongoDB and Pymongo on python to store the training examples as dicts in JSON format. 
(Easier for web apps this way).","Q_Score":1,"Tags":"python,numpy,scipy,data-mining","A_Id":12369285,"CreationDate":"2012-09-10T16:05:00.000","Title":"Multi feature recommender system representation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to expand my recommender system to include other features (dimensions). So far, I'm tracking how a user rates some document, and using that to do the recommendations. I'm interested in adding more features, such as user location, age, gender, and so on.\nSo far, a few mysql tables have been enough to handle this, but i fear it will quickly become messy as i add more features.\nMy question: how can i best represent and persist this kind of multi dimensional data?\nPython specific tips would be helpful.\nThank you","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":289,"Q_Id":12355416,"Users Score":0,"Answer":"I recommend using tensors, which is multidimensional arrays. You can use any data table or simply text files to store a tensor. Each line or row is a record \/ transaction with different features all listed.","Q_Score":1,"Tags":"python,numpy,scipy,data-mining","A_Id":24491488,"CreationDate":"2012-09-10T16:05:00.000","Title":"Multi feature recommender system representation","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to set up some basic data I\/O scripts in python that read and write from a local sqlite db. I'd like to use the command line to verify that my scripts work as expected, but they don't pick up on any of the databases or tables I'm creating.\nMy first script writes some data from a dict into the table, and the second script reads it and prints it.\nWrite:\n# first part of script creates a dict called 'totals'\n\nimport sqlite3 as lite\n\ncon = lite.connect('test.db')\n\nwith con:\n cur = con.cursor()\n\n cur.execute(\"DROP TABLE IF EXISTS testtbl\") \n\n cur.execute(\"CREATE TABLE testtbl(Date TEXT PRIMARY KEY, Count INT, AverageServerTime REAL, TotalServerTime REAL, AverageClientTime REAL, TotalClientTime REAL)\")\n cur.execute('INSERT INTO testtbl VALUES(\"2012-09-08\", %s, %s, %s, %s, %s)' % (float(totals['count()']), float(totals['serverTime\/count()']), float(totals['serverTime']), float(totals['totalLoadTime\/count()']), float(totals['totalLoadTime'])))\n\nRead:\n\nimport sqlite3 as lite\n\ncon = lite.connect('test.db')\n\nwith con: \n\n cur = con.cursor() \n cur.execute(\"SELECT * FROM testtbl\")\n\n rows = cur.fetchall()\n\n for row in rows:\n print row\n\nThese scripts are separate and both work fine. However, if I navigate to the directory in the command line and activate sqlite3, nothing further works. I've tried '.databases', '.tables', '.schema' commands and can't get it to respond to this particular db. I can create dbs within the command line and view them, but not the ones created by my script. How do I link these up?\nRunning stock Ubuntu 12.04, Python 2.7.3, SQLite 3.7.9. I also installed libsqlite3-dev but that hasn't helped.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1252,"Q_Id":12360279,"Users Score":2,"Answer":"Are you putting the DB file name in the command ? 
\n$ sqlite3 test.db","Q_Score":1,"Tags":"python,linux,sqlite,ubuntu","A_Id":12360397,"CreationDate":"2012-09-10T22:23:00.000","Title":"sqlite3 command line tools don't work in Ubuntu","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Okay, \nI kinda asked this question already, but noticed that i might have not been as clear as i could have been, and might have made some errors myself.\nI have also noticed many people having the same or similar problems with sqlite3 in python. So i thought i would ask this as clearly as i could, so it could possibly help others with the same issues aswell.\nWhat does python need to find when compiling, so the module is enabled and working?\n(In detail, i mean exact files, not just \"sqlite dev-files\")?\nAnd if it needs a library, it propably needs to be compiled with the right architecture?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3032,"Q_Id":12420338,"Users Score":0,"Answer":"As I understand you would like to install python form sources. To make sqlite module available you have to install sqlite package and its dev files (for example sqlite-devel for CentOS). That's it. YOu have to re-configure your sources after installing the required packages.\nBtw you will face up the same problem with some other modules.","Q_Score":0,"Tags":"python,sqlite","A_Id":12420541,"CreationDate":"2012-09-14T07:59:00.000","Title":"What does Python need to install sqlite3 module?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For my database project, I am using SQL Alchemy. I have a unit test that adds the object to the table, finds it, updates it, and deletes it. After it goes through that, I assumed I would call the session.rollback method in order to revert the database changes. It does not work because my sequences are not reverted. My plan for the project is to have one database, I do not want to create a test database.\nI could not find in the documentation on SQL Alchemy on how to properly rollback the database changes. Does anyone know how to rollback the database transaction?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3017,"Q_Id":12440044,"Users Score":-3,"Answer":"Postgres does not rollback advances in a sequence even if the sequence is used in a transaction which is rolled back. (To see why, consider what should happen if, before one transaction is rolled back, another using the same sequence is committed.)\nBut in any case, an in-memory database (SQLite makes this easy) is the best choice for unit tests.","Q_Score":4,"Tags":"python,unit-testing,sqlalchemy,rollback","A_Id":12443800,"CreationDate":"2012-09-15T18:39:00.000","Title":"How to rollback the database in SQL Alchemy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am provided with text files containing data that I need to load into a postgres database.\nThe files are structured in records (one per line) with fields separated by a tilde (~). 
Unfortunately it happens that every now and then a field content will include a tilde.\nAs the files are not tidy CSV, and the tilde's not escaped, this results in records containing too many fields, which cause the database to throw an exception and stop loading.\nI know what the record should look like (text, integer, float fields).\nDoes anyone have suggestions on how to fix the overlong records? I code in per but I am happy with suggestions in python, javascript, plain english.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":111,"Q_Id":12553197,"Users Score":0,"Answer":"If you know what each field is supposed to be, perhaps you could write a regular expression which would match that field type only (ignoring tildes) and capture the match, then replace the original string in the file?","Q_Score":1,"Tags":"python,perl,language-agnostic","A_Id":12553211,"CreationDate":"2012-09-23T14:36:00.000","Title":"Messed up records - separator inside field content","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks","AnswerCount":12,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":52339,"Q_Id":12570465,"Users Score":0,"Answer":"Given that encryption at rest is a much desired data standard now, smart_open does not support this afaik","Q_Score":38,"Tags":"python,amazon-s3,amazon","A_Id":56126467,"CreationDate":"2012-09-24T18:09:00.000","Title":"How to upload a file to S3 without creating a temporary local file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks","AnswerCount":12,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":52339,"Q_Id":12570465,"Users Score":2,"Answer":"I assume you're using boto. boto's Bucket.set_contents_from_file() will accept a StringIO object, and any code you have written to write data to a file should be easily adaptable to write to a StringIO object. 
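Roughly like this (a sketch only; the bucket name, key name and generate_report() helper are invented for the example):\nimport boto\nfrom StringIO import StringIO\n\nconn = boto.connect_s3()  # picks up credentials from the usual boto config\nbucket = conn.get_bucket('my-bucket')\nkey = bucket.new_key('reports\/latest.csv')\n\nbuf = StringIO()\nbuf.write(generate_report())  # anything that writes to a file-like object works\nbuf.seek(0)\nkey.set_contents_from_file(buf)\n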
Or if you generate a string, you can use set_contents_from_string().","Q_Score":38,"Tags":"python,amazon-s3,amazon","A_Id":12570568,"CreationDate":"2012-09-24T18:09:00.000","Title":"How to upload a file to S3 without creating a temporary local file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have the below setup\n\n2 node hadoop\/hbase cluster with thirft server running on hbase.\nHbase has a table with 10 million rows.\n\nI need to run aggregate queries like sum() on the hbase table\nto show it on the web(charting purpose).\nFor now I am using python(thrift client) to get the dataset and display.\nI am looking for database(hbase) level aggregation function to use in the web.\nAny thoughts?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1019,"Q_Id":12585286,"Users Score":0,"Answer":"Phoenix is a good solution for low latency result from Hbase tables than Hive.\nIt is good for range scans than Hbase scanners because they use secondary indexes and SkipScan.\nAs in your case , you use Python and phoenix API have only JDBC connectors.\nElse Try Hbase Coprocessors. Which do SUM, MAX, COUNT,AVG functions.\nyou can enable coprocessors while creating table and can USE the Coprocessor functions\nYou can try Impala, which provide an ODBC connector, JDBC connector. Impala uses hive metatable for executing massively parallel batch execution.\nYou need to create a Hive metatable for your Hbase Table.","Q_Score":0,"Tags":"java,python,hadoop,hbase,thrift","A_Id":21502085,"CreationDate":"2012-09-25T14:35:00.000","Title":"Hadoop Hbase query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am finding Neo4j slow to add nodes and relationships\/arcs\/edges when using the REST API via py2neo for Python. I understand that this is due to each REST API call executing as a single self-contained transaction.\nSpecifically, adding a few hundred pairs of nodes with relationships between them takes a number of seconds, running on localhost.\nWhat is the best approach to significantly improve performance whilst staying with Python?\nWould using bulbflow and Gremlin be a way of constructing a bulk insert transaction?\nThanks!","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":12651,"Q_Id":12643662,"Users Score":2,"Answer":"Well, I myself had need for massive performance from neo4j. I end up doing following things to improve graph performance.\n\nDitched py2neo, since there were lot of issues with it. Besides it is very convenient to use REST endpoint provided by neo4j, just make sure to use request sessions.\nUse raw cypher queries for bulk insert, instead of any OGM(Object-Graph Mapper). That is very crucial if you need an high-performant system.\nPerformance was not still enough for my needs, so I ended writing a custom system that merges 6-10 queries together using WITH * AND UNION clauses. 
That improved performance by a factor of 3 to 5 times.\nUse larger transaction size with atleast 1000 queries.","Q_Score":18,"Tags":"python,neo4j,py2neo","A_Id":31026259,"CreationDate":"2012-09-28T16:15:00.000","Title":"Fastest way to perform bulk add\/insert in Neo4j with Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see plenty of examples of importing a CSV into a PostgreSQL db, but what I need is an efficient way to import 500,000 CSV's into a single PostgreSQL db. Each CSV is a bit over 500KB (so grand total of approx 272GB of data).\nThe CSV's are identically formatted and there are no duplicate records (the data was generated programatically from a raw data source). I have been searching and will continue to search online for options, but I would appreciate any direction on getting this done in the most efficient manner possible. I do have some experience with Python, but will dig into any other solution that seems appropriate.\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10104,"Q_Id":12646305,"Users Score":0,"Answer":"Nice chunk of data you have there. I'm not 100% sure about Postgre, but at least MySQL provides some SQL commands, to feed a csv directly into a table. This bypasses any insert checks and so on and is thatswhy more than a order of magnitude faster than any ordinary insert operations.\nSo the probably fastest way to go is create some simple python script, telling your postgre server, which csv files in which order to hungrily devour into it's endless tables.","Q_Score":9,"Tags":"python,csv,import,postgresql-9.1","A_Id":12646923,"CreationDate":"2012-09-28T19:38:00.000","Title":"Efficient way to import a lot of csv files into PostgreSQL db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded mysql-connector-python-1.0.7-py2.7.msi from MySQL site \nand try to install but it gives error that\nPython v2.7 not found. We only support Microsoft Windows Installer(MSI) from python.org.\nI am using Official Python v 2.7.3 on windows XP SP3 with MySQL esssential5.1.66\nNeed Help ???","AnswerCount":8,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":19218,"Q_Id":12702146,"Users Score":10,"Answer":"I met the similar problem under Windows 7 when installing mysql-connector-python-1.0.7-py2.7.msi and mysql-connector-python-1.0.7-py3.2.msi.\nAfter changing from \"Install only for yourself\" to \"Install for all users\" when installing Python for windows, the \"python 3.2 not found\" problem disappear and mysql-connector-python-1.0.7-py3.2.msi was successfully installed.\nI guess the problem is that mysql connector installer only looks for HKEY_LOCAL_MACHINE entries, and the things it looks for might be under HKEY_CURRENT_USER etc. So the solution that change the reg table directly also works.","Q_Score":12,"Tags":"python,mysql,python-2.7,mysql-connector-python","A_Id":13899478,"CreationDate":"2012-10-03T04:57:00.000","Title":"mysql for python 2. 
7 says Python v2.7 not found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded mysql-connector-python-1.0.7-py2.7.msi from MySQL site \nand try to install but it gives error that\nPython v2.7 not found. We only support Microsoft Windows Installer(MSI) from python.org.\nI am using Official Python v 2.7.3 on windows XP SP3 with MySQL esssential5.1.66\nNeed Help ???","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":19218,"Q_Id":12702146,"Users Score":0,"Answer":"I solved this problem by using 32bit python","Q_Score":12,"Tags":"python,mysql,python-2.7,mysql-connector-python","A_Id":19051115,"CreationDate":"2012-10-03T04:57:00.000","Title":"mysql for python 2. 7 says Python v2.7 not found","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a server which files get uploaded to, I want to be able to forward these on to s3 using boto, I have to do some processing on the data basically as it gets uploaded to s3.\nThe problem I have is the way they get uploaded I need to provide a writable stream that incoming data gets written to and to upload to boto I need a readable stream. So it's like I have two ends that don't connect. Is there a way to upload to s3 with a writable stream? If so it would be easy and I could pass upload stream to s3 and it the execution would chain along.\nIf there isn't I have two loose ends which I need something in between with a sort of buffer, that can read from the upload to keep that moving, and expose a read method that I can give to boto so that can read. But doing this I'm sure I'd need to thread the s3 upload part which I'd rather avoid as I'm using twisted.\nI have a feeling I'm way over complicating things but I can't come up with a simple solution. This has to be a common-ish problem, I'm just not sure how to put it into words very well to search it","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":636,"Q_Id":12714965,"Users Score":3,"Answer":"boto is a Python library with a blocking API. This means you'll have to use threads to use it while maintaining the concurrence operation that Twisted provides you with (just as you would have to use threads to have any concurrency when using boto ''without'' Twisted; ie, Twisted does not help make boto non-blocking or concurrent).\nInstead, you could use txAWS, a Twisted-oriented library for interacting with AWS. txaws.s3.client provides methods for interacting with S3. If you're familiar with boto or AWS, some of these should already look familiar. For example, create_bucket or put_object.\ntxAWS would be better if it provided a streaming API so you could upload to S3 as the file is being uploaded to you. 
I think that this is currently in development (based on the new HTTP client in Twisted, twisted.web.client.Agent) but perhaps not yet available in a release.","Q_Score":4,"Tags":"python,stream,twisted,boto","A_Id":12716129,"CreationDate":"2012-10-03T18:54:00.000","Title":"Boto reverse the stream","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to select data from multiple tables, so i just want to know that can i used simple SQL queries for that, If yes then please give me an example(means where to use these queries and how).\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":77,"Q_Id":12740424,"Users Score":1,"Answer":"Try this.\nhttps:\/\/docs.djangoproject.com\/en\/dev\/topics\/db\/sql\/","Q_Score":0,"Tags":"python,sql,django,django-queryset","A_Id":12740533,"CreationDate":"2012-10-05T06:05:00.000","Title":"Can I used simple sql commands in django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Background:\nI'm working on dataview, and many of the reports are generated by very long running queries. I've written a small query caching daemon in python that accepts a query, spawns a thread to run it, and stores the result when done as a pickled string. The results are generally various aggregations broken down by month, or other factors, and the result sets are consequently not large. So my caching daemon can check whether it has the result already, and return it immediately, otherwise it sends back a 'pending' message (or 'error' or 'failed' or various other messages). The point being, that the client, which is a django web server would get back 'pending' and query again in 5~10 seconds, in the meanwhile putting up a message for the user saying 'your report is being built, please be patient'. \nThe problem:\nI would like to add the ability for the user to cancel a long running query, assuming it hasn't been cached already. I know I can kill a query thread in MySQL using KILL, but is there a way to get the thread\/query\/process id of the query in a manner similar to getting the id of the last inserted row? I'm doing this through the python MySQLdb module, and I can't see any properties\/methods of the cursor object that would return this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1016,"Q_Id":12743436,"Users Score":2,"Answer":"There is a property of the connection object called thread_id, which returns an id to be passed to KILL. MySQL has a thread for each connection, not for each cursor, so you are not killing queries, but are instead killing connection. 
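Something along these lines, as a sketch (connection parameters are made up; MySQLdb exposes the id via conn.thread_id()):\nimport MySQLdb\n\nconn = MySQLdb.connect(host='localhost', user='report', passwd='secret', db='dataview')\ntid = conn.thread_id()  # note this down before starting the long query\n# ... the slow report query runs on conn ...\n\n# from a second connection, cancel it:\nadmin = MySQLdb.connect(host='localhost', user='report', passwd='secret')\nadmin.cursor().execute('KILL %d' % tid)\n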
To kill an individual query you must run each query in it's own connection, and then kill the connection using the result from thread_id","Q_Score":0,"Tags":"python,mysql","A_Id":12743439,"CreationDate":"2012-10-05T09:32:00.000","Title":"Get process id (of query\/thread) of most recently run query in mysql using python mysqldb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm reading conflicting reports about using PostgreSQL on Amazon's Elastic Beanstalk for python (Django). \nSome sources say it isn't possible: (http:\/\/www.forbes.com\/sites\/netapp\/2012\/08\/20\/amazon-cloud-elastic-beanstalk-paas-python\/). I've been through a dummy app setup, and it does seem that MySQL is the only option (amongst other ones that aren't Postgres).\nHowever, I've found fragments around the place mentioning that it is possible - even if they're very light on detail.\nI need to know the following:\n\nIs it possible to run a PostgreSQL database with a Django app on Elastic Beanstalk?\nIf it's possible, is it worth the trouble?\nIf it's possible, how would you set it up?","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":2422,"Q_Id":12850550,"Users Score":5,"Answer":"Postgre is now selectable from the AWS RDS configurations. Validated through Elastic Beanstalk application setup 2014-01-27.","Q_Score":5,"Tags":"python,django,postgresql,amazon-elastic-beanstalk","A_Id":21391684,"CreationDate":"2012-10-12T00:21:00.000","Title":"PostgreSQL for Django on Elastic Beanstalk","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have 10000 files in a s3 bucket.When I list all the files it takes 10 minutes. I want to implement a search module using BOTO (Python interface to AWS) which searches files based on user input. Is there a way I can search specific files with less time?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":5534,"Q_Id":12904326,"Users Score":3,"Answer":"There are two ways to implement the search...\nCase 1. As suggested by john - you can specify the prefix of the s3 key file in your list method. that will return you result of S3 key files which starts with the given prefix.\nCase 2. If you want to search the S3 key which are end with specific suffix or we can say extension then you can specify the suffix in delimiter. Remember it will give you correct result only in the case if you are giving suffix for the search item which is end with that string. \nElse delimiter is used for path separator.\nI will suggest you Case 1 but if you want to faster search with specific suffix then you can try case 2","Q_Score":2,"Tags":"python,amazon-s3,boto","A_Id":12907767,"CreationDate":"2012-10-15T21:29:00.000","Title":"Search files(key) in s3 bucket takes longer time","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"A user accesses his contacts on his mobile device. 
I want to send back to the server all the phone numbers (say 250), and then query for any User entities that have matching phone numbers. \nA user has a phone field which is indexed. So I do User.query(User.phone.IN(phone_list)), but I just looked at AppStats, and is this damn expensive. It cost me 250 reads for this one operation, and this is something I expect a user to do often. \nWhat are some alternatives? I suppose I can set the User entity's id value to be his phone number (i.e when creating a user I'd do user = User(id = phone_number)), and then get directly by keys via ndb.get_multi(phones), but I also want to perform this same query with emails too.\nAny ideas?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":176,"Q_Id":12976652,"Users Score":0,"Answer":"I misunderstood part of your problem, I thought you were issuing a query that was giving you 250 entities.\nI see what the problem is now, you're issuing an IN query with a list of 250 phone numbers, behind the scenes, the datastore is actually doing 250 individual queries, which is why you're getting 250 read ops.\nI can't think of a way to avoid this. I'd recommend avoiding searching on long lists of phone numbers. This seems like something you'd need to do only once, the first time the user logs in using that phone. Try to find some way to store the results and avoid the query again.","Q_Score":3,"Tags":"python,google-app-engine","A_Id":12980347,"CreationDate":"2012-10-19T14:43:00.000","Title":"Efficient way to do large IN query in Google App Engine?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Php or python\nUse and connect to our existing postgres databases\nopen source \/ or very low license fees\nCommon features of cms, with admin tools to help manage \/ moderate community\nhave a large member base on very basic site where members provide us contact info and info about their professional characteristics. About to expand to build new community site (to migrate our member base to) where the users will be able to msg each other, post to forums, blog, share private group discussions, and members will be sent inivitations to earn compensation for their expertise. Profile pages, job postings, and video chat would be plus.\nAlready have a team of admins savvy with web apps to help manage it but our developer resources are limited (3-4 programmers) and looking to save time in development as opposed to building our new site from scratch.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5703,"Q_Id":13000007,"Users Score":1,"Answer":"Have you tried Drupal. Drupal supports PostgreSQL and is written in PHP and is open source.","Q_Score":5,"Tags":"postgresql,content-management-system,python-2.7","A_Id":13003890,"CreationDate":"2012-10-21T16:56:00.000","Title":"What is a good cms that is postgres compatible, open source and either php or python based?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I cannot get a connection to a MySQL database if my password contains punctuation characters in particular $ or @. I have tried to escape the characters, by doubling the $$ etc. 
but no joy.\nI have tried the pymysql library and the _mssql library.\nthe code... \nself.dbConn = _mysql.connect(host=self.dbDetails['site'], port=self.dbDetails['port'], user=self.dbDetails['user'], passwd=self.dbDetails['passwd'], db=self.dbDetails['db'])\nwhere self.dbDetails['passwd'] = \"$abcdef\". \nI have tried '$$abcdef', and re.escape(self.dbDetails['passwd']), and '\\$abcdef' but nothing works until I change the users password to remove the \"$\". Then it connects just fine. The only error I am getting is a failure to connect. I guess I will have to figure out how to print the actual exception message.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":432,"Q_Id":13004789,"Users Score":0,"Answer":"Try to MySQLdb package, you can punctuation in password to connect database through this package.","Q_Score":2,"Tags":"python,mysql,passwords","A_Id":16186975,"CreationDate":"2012-10-22T03:56:00.000","Title":"How to connect with passwords that contains characters like \"$\" or \"@\"?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Django non-rel version with mongodb backends. I am interested in tracking the changes that occur on model instances e.g if someone creates\/edits or deletes a model instance. Backend db is mongo hence models have an associated \"_id\" fields with them in the respective collections\/dbs.\nNow i want to extract this \"_id\" field on which this modif operation took place. The idea is to write this \"_id\" field to another db so someone can pick it up from there and know what object was updated.\nI thought about overriding the save() method from Django \"models.Model\" since all my models are derived from that. However the mongo \"_id\" field is obviously not present there since the mongo-insert has not taken place yet. \nIs there any possibility of a pseudo post-save() method that can be called after the save operation has taken place into mongo? Can django\/django-toolbox\/pymongo provide such a combination?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":132,"Q_Id":13024361,"Users Score":0,"Answer":"After some deep digging into the Django Models i was able to solve the problem. The save() method inturn call the save_base() method. This method saves the returned results, ids in case of mongo, into self.id. This _id field can then be picked by by over riding the save() method for the model","Q_Score":0,"Tags":"python,django,mongodb,django-models,django-nonrel","A_Id":13031452,"CreationDate":"2012-10-23T06:01:00.000","Title":"Django-Nonrel(mongo-backend):Model instance modification tracking","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Which of these two languages interfaces better and delivers a better performance\/toolset for working with sqlite database? I am familiar with both languages but need to choose one for a project I'm developing and so I thought I would ask here. 
I don't believe this to be opinionated as performance of a language is pretty objective.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":13059142,"Users Score":5,"Answer":"There is no good reason to choose one over the other as far as sqlite performance or usability.\nBoth languages have perfectly usable (and pythonic\/rubyriffic) sqlite3 bindings.\nIn both languages, unless you do something stupid, the performance is bounded by the sqlite3 performance, not by the bindings.\nNeither language's bindings are missing any uncommon but sometimes performance-critical functions (like an \"exec many\", manual transaction management, etc.).\nThere may be language-specific frameworks that are better or worse in how well they integrate with sqlite3, but at that point you're choosing between frameworks, not languages.","Q_Score":0,"Tags":"python,ruby,sqlite","A_Id":13059204,"CreationDate":"2012-10-24T23:00:00.000","Title":"ruby or python for use with sqlite database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have huge tables of data that I need to manipulate (sort, calculate new quantities, select specific rows according to some conditions and so on...). So far I have been using a spreadsheet software to do the job but this is really time consuming and I am trying to find a more efficient way to do the job. \nI use python but I could not figure out how to use it for such things. I am wondering if anybody can suggest something to use. SQL?!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":97,"Q_Id":13060427,"Users Score":1,"Answer":"This is a very general question, but there are multiple things that you can do to possibly make your life easier.\n1.CSV These are very useful if you are storing data that is ordered in columns, and if you are looking for easy to read text files.\n2.Sqlite3 Sqlite3 is a database system that does not require a server to use (it uses a file instead), and is interacted with just like any other database system. However, for very large scale projects that are handling massive amounts of data, it is not recommended.\n3.MySql MySql is a database system that requires a server to interact with, but can be tweaked for very large scale projects, as well as small scale projects.\nThere are many other different types of systems though, so I suggest you search around and find that perfect fit. However, if you want to mess around with Sqlite3 or CSV, both Sqlite3 and CSV modules are supplied in the standard library with python 2.7 and 3.x I believe.","Q_Score":0,"Tags":"python,sql,sorting,select","A_Id":13060535,"CreationDate":"2012-10-25T01:40:00.000","Title":"sorting and selecting data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience.\nI want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then populate around 6-7 stagings tables in Oracle using this XML. 
\nThe data needs to be populated such that the elements of the XML and eventually the rows of the table come from multiple CSV files. So for e.g. an element A would have sub-elements coming data from CSV file 1, file 2 and file 3 etc.\nI have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server.\nOptions I am considering,\n\nSmooks - However the problem is that Smooks serializes one CSV at a time and I cant afford to hold on to the half baked java beans til the other CSV files are read since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated written to disk as XML.\nSQLLoader - I could skip the XML creation all together and load the CSV directly to the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables updating the records after the first file. b. Apply some translation rules while loading the staging tables.\nPython script to convert the CSV to XML.\nSQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need).\n\nThanks in advance. If someone can point me in the right direction or give me some insights from his\/her personal experience it will help me make an informed decision.\nregards,\n-v-\nPS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2011,"Q_Id":13061800,"Users Score":1,"Answer":"Create a process \/ script that will call a procedure to load csv files to external Oracle table and another script to load it to the destination table.\nYou can also add cron jobs to call these scripts that will keep track of incoming csv files into the directory, process it and move the csv file to an output\/processed folder.\nExceptions also can be handled accordingly by logging it or sending out an email. Good Luck.","Q_Score":3,"Tags":"python,csv,etl,sql-loader,smooks","A_Id":14449025,"CreationDate":"2012-10-25T04:54:00.000","Title":"Choice of technology for loading large CSV files to Oracle tables","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience.\nI want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then populate around 6-7 stagings tables in Oracle using this XML. \nThe data needs to be populated such that the elements of the XML and eventually the rows of the table come from multiple CSV files. So for e.g. an element A would have sub-elements coming data from CSV file 1, file 2 and file 3 etc.\nI have a framework built on Top of Apache Camel, Jboss on Linux. 
Oracle 10G is the database server.\nOptions I am considering,\n\nSmooks - However the problem is that Smooks serializes one CSV at a time and I cant afford to hold on to the half baked java beans til the other CSV files are read since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated written to disk as XML.\nSQLLoader - I could skip the XML creation all together and load the CSV directly to the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables updating the records after the first file. b. Apply some translation rules while loading the staging tables.\nPython script to convert the CSV to XML.\nSQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need).\n\nThanks in advance. If someone can point me in the right direction or give me some insights from his\/her personal experience it will help me make an informed decision.\nregards,\n-v-\nPS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2011,"Q_Id":13061800,"Users Score":2,"Answer":"Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.","Q_Score":3,"Tags":"python,csv,etl,sql-loader,smooks","A_Id":13062737,"CreationDate":"2012-10-25T04:54:00.000","Title":"Choice of technology for loading large CSV files to Oracle tables","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using python's MySQLdb to fetch rows from a MySQL 5.6.7 db, that supports microsecond precision datetime columns. When I read a row with MySQLdb I get \"None\" for the time field. Is there are way to read such time fields with python?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":310,"Q_Id":13068227,"Users Score":1,"Answer":"MySQLdb-1.2.4 (to be released within the next week) and the current release candidate has support for MySQL-5.5 and newer and should solve your problem. 
Please try 1.2.4c1 from PyPi (pip install MySQL-python)","Q_Score":2,"Tags":"python,mysql,mysql-python","A_Id":13299592,"CreationDate":"2012-10-25T12:08:00.000","Title":"How to read microsecond-precision mysql datetime fields with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something.\nWhat I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null\/default error.\nIs it normal?\nWhy is it so? And why is South checking this null\/default thing anyway?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":13085658,"Users Score":1,"Answer":"If you add a column to a table, which already has some rows populated, then either:\n\nthe column is nullable, and the existing rows simply get a null value for the column\nthe column is not nullable but has a default value, and the existing rows are updated to have that default value for the column\n\nTo produce a non-nullable column without a default, you need to add the column in multiple steps. Either:\n\nadd the column as nullable, populate the defaults manually, and then mark the column as not-nullable\nadd the column with a default value, and then remove the default value\n\nThese are effectively the same, they both will go through updating each row.\nI don't know South, but from what you're describing, it is aiming to produce a single DDL statement to add the column, and doesn't have the capability to add it in multiple steps like this. Maybe you can override that behaviour, or maybe you can use two migrations?\nBy contrast, when you are creating a table, there clearly is no existing data, so you can create non-nullable columns without defaults freely.","Q_Score":0,"Tags":"python,django,postgresql,django-south","A_Id":13085822,"CreationDate":"2012-10-26T11:00:00.000","Title":"South initial migrations are not forced to have a default value?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something.\nWhat I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null\/default error.\nIs it normal?\nWhy is it so? And why is South checking this null\/default thing anyway?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":13085658,"Users Score":0,"Answer":"When you have existing records in your database and you add a column to one of your tables, you will have to tell the database what to put in there, south can't read your mind :-)\nSo unless you mark the new field null=True or opt in a default value it will raise an error. 
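For example, either of these added fields will migrate cleanly against a populated table (field names invented for illustration):\nis_active = models.BooleanField(default=False)\nnotes = models.TextField(null=True, blank=True)\n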
If you had an empty database, there are no values to be set, but a model field would still require basic properties. If you look deeper at the field class you're using you will see django sets some default values, like max_length and null (depending on the field).","Q_Score":0,"Tags":"python,django,postgresql,django-south","A_Id":13085826,"CreationDate":"2012-10-26T11:00:00.000","Title":"South initial migrations are not forced to have a default value?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using python and excel with office 2010 and have no problems there.\nI used python's makepy module in order to bind to the txcel com objects.\nHowever, on a different computer I've installed office 2013 and when I launched makepy no excel option was listed (as opposed to office 2010 where 'Microsoft Excel 14.0 Object Library' is listed by makepy).\nI've searched for 'Microsoft Excel 15.0 Object Library' in the registry and it is there.\nI tried to use : makepy -d 'Microsoft Excel 15.0 Object Library' \nbut that didn't work.\nHelp will be much appreciated.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2318,"Q_Id":13121529,"Users Score":0,"Answer":"wilywampa's answer corrects the problem. However, the combrowse.py at win32com\\client\\combrowse.py can also be used to get the IID (Interface Identifier) from the registered type libraries folder and subsequently integrate it with code as suggested by @cool_n_curious. But as stated before, wilywampa's answer does correct the problem and you can just use the makepy.py utility as usual.","Q_Score":4,"Tags":"python,excel,win32com,office-2013","A_Id":42290194,"CreationDate":"2012-10-29T12:20:00.000","Title":"Python Makepy with Office 2013 (office 15)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a class that can interface with either Oracle or MySQL. The class is initialized with a keyword of either \"Oracle\" or \"MySQL\" and a few other parameters that are standard for both database types (what to print, whether or not to stop on an exception, etc.).\nIt was easy enough to add if Oracle do A, elif MySQL do B as necessary when I began, but as I add more specialized code that only applies to one database type, this is becoming ugly. I've split the class into two, one for Oracle and one for MySQL, with some shared functions to avoid duplicate code.\nWhat is the most Pythonic way to handle calling these new classes? Do I create a wrapper function\/class that uses this same keyword and returns the correct class? Do I change all of my code that calls the old generic class to call the correct DB-specific class?\nI'll gladly mock up some example code if needed, but I didn't think it was necessary. Thanks in advance for any help!","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":126,"Q_Id":13125271,"Users Score":3,"Answer":"Create a factory class which returns an implementation based on the parameter. 
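A minimal sketch of the factory suggestion for the Oracle/MySQL class-splitting question above, assuming invented class and function names; the common base class it hands out is what the rest of the answer elaborates on.

```python
# Sketch only: OracleDB/MySQLDB stand in for the two split classes.
class BaseDB(object):
    """Shared behaviour for both database back ends."""
    def __init__(self, **options):
        self.options = options          # e.g. verbosity, stop-on-exception

    def connect(self):
        raise NotImplementedError


class OracleDB(BaseDB):
    def connect(self):
        print("connecting via cx_Oracle")


class MySQLDB(BaseDB):
    def connect(self):
        print("connecting via MySQLdb")


def make_db(kind, **options):
    """Return the implementation matching the 'Oracle'/'MySQL' keyword."""
    implementations = {"Oracle": OracleDB, "MySQL": MySQLDB}
    try:
        return implementations[kind](**options)
    except KeyError:
        raise ValueError("unsupported database type: %r" % kind)
```

Calling code keeps passing the same keyword it always did (for example, make_db("Oracle", verbose=True)) and never needs to know which concrete class it received.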
You can then have a common base class for both DB types, one implementation for each and let the factory create, configure and return the correct implementation to the user based on a parameter.\nThis works well when the two classes behave very similarly; but as soon as you want to use DB specific features, it gets ugly because you need methods like isFeatureXSupported() (good approach) or isOracle() (more simple but bad since it moves knowledge of which DB has which feature from the helper class into the app code).\nAlternatively, you can implement all features for both and throw an exception when one isn't supported. In your code, you can then look for the exception to check this. This makes the code more clean but now, you can really check whether a feature is available without actually using it. That can cause problems in the app code (when you want to disable menus, for example, or when the app could do it some other way).","Q_Score":5,"Tags":"python","A_Id":13125435,"CreationDate":"2012-10-29T16:03:00.000","Title":"Most Pythonic way to handle splitting a class into multiple classes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the modules xlwd, xlwt and xlutil to do some Excel manipulations in Python. I am not able to figure out how to copy the value of cell (X,Y) to cell (A,B) in the same sheet of an Excel file in Python. Could someone let me know how to do that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":472,"Q_Id":13156730,"Users Score":0,"Answer":"Work on 2 cells among tens of thousands...quite meager.\nNormally,one should present an iteration over rows x columns.","Q_Score":1,"Tags":"python,excel,xlrd,xlwt","A_Id":13998563,"CreationDate":"2012-10-31T11:21:00.000","Title":"Copying value of cell (X,Y) to cell (A,B) in same sheet of an Excel file using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have few things to ask for custom queries in Django\n\nDO i need to use the DB table name in the query or just the Model name\nif i need to join the various tables in raw sql. do i need to use db field name or model field name like\n\nPerson.objects.raw('SELECT id, first_name, last_name, birth_date FROM Person A\ninner join Address B on A.address = B.id\n')\nor B.id = A.address_id","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":408,"Q_Id":13172331,"Users Score":3,"Answer":"You need to use the database's table and field names in the raw query--the string you provide will be passed to the database, not interpreted by the Django ORM.","Q_Score":0,"Tags":"python,django","A_Id":13172382,"CreationDate":"2012-11-01T06:54:00.000","Title":"Using raw sql in django python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the most efficient way to delete orphan blobs from a Blobstore?\nApp functionality & scope:\n\nA (logged-in) user wants to create a post containing some normal\ndatastore fields (e.g. 
name, surname, comments) and blobs (images).\nIn addition, the blobs are uploaded asynchronously before the resto\nof the data is sent via a POST\n\nThis leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale.\n\n\nPossible, yet inefficient solutions:\n\nWhenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of \"used blobs\". Then, you can run a cron every so often and compare all of the blobs with the table of \"used blobs\". Those that have been uploaded over an hour ago yet are still \"not used\" are deleted.\n\nMy understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of \"used blob keys\" is very inefficient.\n\n\nIs there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions.\nThanks in advance!","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1014,"Q_Id":13186494,"Users Score":1,"Answer":"You can create an entity that links blobs to users. When a user uploads a blob, you immediately create a new record with the blob id, user id (or post id), and time created. When a user submits a post, you add a flag to this entity, indicating that a blob is used. \nNow your cron job needs to fetch all entities of this kind where a flag is not equal to \"true\" and time created is more one hour ago. Moreover, you can fetch keys only, which is a more efficient operation that fetching full entities.","Q_Score":2,"Tags":"google-app-engine,python-2.7,google-cloud-datastore,blobstore","A_Id":13187373,"CreationDate":"2012-11-01T22:29:00.000","Title":"Deleting Blobstore orphans","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the most efficient way to delete orphan blobs from a Blobstore?\nApp functionality & scope:\n\nA (logged-in) user wants to create a post containing some normal\ndatastore fields (e.g. name, surname, comments) and blobs (images).\nIn addition, the blobs are uploaded asynchronously before the resto\nof the data is sent via a POST\n\nThis leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale.\n\n\nPossible, yet inefficient solutions:\n\nWhenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of \"used blobs\". Then, you can run a cron every so often and compare all of the blobs with the table of \"used blobs\". Those that have been uploaded over an hour ago yet are still \"not used\" are deleted.\n\nMy understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of \"used blob keys\" is very inefficient.\n\n\nIs there any better way of doing this? 
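A rough sketch of the tracking-entity idea from the Blobstore answer above, assuming the old google.appengine.ext.db API; the model, property and function names are all invented for illustration, not a drop-in implementation.

```python
# Sketch under the stated assumptions.
from datetime import datetime, timedelta

from google.appengine.ext import blobstore, db


class BlobTracker(db.Model):
    blob_key = db.StringProperty(required=True)
    used = db.BooleanProperty(default=False)       # flipped when the post is saved
    created = db.DateTimeProperty(auto_now_add=True)


def cleanup_orphans():
    """Cron handler body: delete blobs uploaded over an hour ago and never used."""
    cutoff = datetime.utcnow() - timedelta(hours=1)
    query = BlobTracker.all().filter('used =', False).filter('created <', cutoff)
    for tracker in query:
        blobstore.delete(blobstore.BlobKey(tracker.blob_key))
        tracker.delete()
```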
I've searched for similar posts yet I couldn't find any mentioning efficient solutions.\nThanks in advance!","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1014,"Q_Id":13186494,"Users Score":3,"Answer":"Thank for the comments. However, I understood those solutions well, I find them too inefficient. Querying thousands of entries for those that are flagged as \"unused\" is not ideal.\nI believe I have come up with a better way and would like to hear your thoughts on it:\nWhen a blob is saved, immediately a deferred task is created to delete the same blob in an hour\u2019s time. If the post is created and saved, the deferred task is deleted, thus the blob will not be deleted in an hour\u2019s time.\nI believe this saves you from having to query thousands of entries every single hour.\nWhat are your thoughts on this solution?","Q_Score":2,"Tags":"google-app-engine,python-2.7,google-cloud-datastore,blobstore","A_Id":13247039,"CreationDate":"2012-11-01T22:29:00.000","Title":"Deleting Blobstore orphans","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the most efficient way to delete orphan blobs from a Blobstore?\nApp functionality & scope:\n\nA (logged-in) user wants to create a post containing some normal\ndatastore fields (e.g. name, surname, comments) and blobs (images).\nIn addition, the blobs are uploaded asynchronously before the resto\nof the data is sent via a POST\n\nThis leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale.\n\n\nPossible, yet inefficient solutions:\n\nWhenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of \"used blobs\". Then, you can run a cron every so often and compare all of the blobs with the table of \"used blobs\". Those that have been uploaded over an hour ago yet are still \"not used\" are deleted.\n\nMy understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of \"used blob keys\" is very inefficient.\n\n\nIs there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions.\nThanks in advance!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1014,"Q_Id":13186494,"Users Score":0,"Answer":"Use Drafts! Save as draft after each upload. Then dont do the cleaning! Let the user for himself chose to wipe out.\nIf you're planning on posts in a Facebook style use drafts either or make it private. Why bother deleting users' data?","Q_Score":2,"Tags":"google-app-engine,python-2.7,google-cloud-datastore,blobstore","A_Id":16378785,"CreationDate":"2012-11-01T22:29:00.000","Title":"Deleting Blobstore orphans","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This issue has been occurring on and off for a few weeks now, and it's unlike any that has come up with my project. 
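The accepted answer above schedules a deletion task at upload time instead of scanning on a cron. A hedged sketch using App Engine's deferred library; blob_is_attached_to_post is a hypothetical helper the application would have to supply, since the answer's "delete the deferred task" step is sketched here as a guard check instead.

```python
# Sketch only: the guard function is an assumption, not an existing API.
from google.appengine.ext import blobstore, deferred


def delete_if_orphaned(blob_key):
    if not blob_is_attached_to_post(blob_key):     # hypothetical app-level check
        blobstore.delete(blob_key)


def on_blob_uploaded(blob_key):
    # Run the check roughly an hour after upload; if the post was completed in
    # the meantime, the task becomes a no-op.
    deferred.defer(delete_if_orphaned, blob_key, _countdown=3600)
```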
\nTwo of the models that are used have a timestamp field, which is by default set to timezone.now(). \nThis is the sequence that raises error flags:\n\n\nModel one is created at time 7:30 PM\nModel two is created at time 10:00 PM, but in the \n MySQL database it's stored as 7:30 PM! \n\nEvery model that is created \n has its time stamp saved under 7:30 PM, not the actual time, until a certain\n duration passes. Then a new time is set and all the following models\n have that new time... Bizzare\n\nSome extra details which may help in discovering the issue:\nI have a bunch of methods that I use to strip my timezones of their tzinfo's and replace them with UTC. \nThis is because I'm doing a timezone.now() - creationTime calculation to create a: \"model was posted this long ago\" feature\nin the project. However, this really should not be the cause of the problem.\nI don't think using datetime.datetime.now() will make any difference either.\nAnyway, thanks for the help!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23029,"Q_Id":13225890,"Users Score":66,"Answer":"Just ran into this last week for a field that had default=date.today(). If you remove the parentheses (in this case, try default=timezone.now) then you're passing a callable to the model and it will be called each time a new instance is saved. With the parentheses, it's only being called once when models.py loads.","Q_Score":28,"Tags":"python,django,django-timezone","A_Id":13226368,"CreationDate":"2012-11-05T04:23:00.000","Title":"Django default=timezone.now() saves records using \"old\" time","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently building a web service using python \/ flask and would like to build my data layer on top of neo4j, since my core data structure is inherently a graph.\nI'm a bit confused by the different technologies offered by neo4j for that case. Especially : \n\ni originally planned on using the REST Api through py2neo , but the lack of transaction is a bit of a problem.\nThe \"embedded database\" neo4j doesn't seem to suit my case very well. I guess it's useful when you're working with batch and one-time analytics, and don't need to store the database on a different server from the web server.\nI've stumbled upon the neo4django project, but i'm not sure this one offers transaction support (since there are no native client to neo4j for python), and if it would be a problem to use it outside django itself. In fact, after having looked at the project's documentation, i feel like it has exactly the same limitations, aka no transaction (but then, how can you build a real-world service when you can corrupt your model upon a single connection timeout ?). I don't even understand what is the use for that project.\n\nCould anyone could recommend anything ? I feel completely stuck.\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1075,"Q_Id":13233107,"Users Score":5,"Answer":"None of the REST API clients will be able to explicitly support (proper) transactions since that functionality is not available through the Neo4j REST API interface. 
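For the Django "old time" question answered above, a minimal before/after showing why the parentheses matter; the model and field names are invented.

```python
from django.db import models
from django.utils import timezone

class Post(models.Model):
    # Broken: timezone.now() runs once, when models.py is imported, so every
    # new row silently reuses that stale timestamp as its default.
    # created_at = models.DateTimeField(default=timezone.now())

    # Correct: pass the callable itself; Django calls it for each saved instance.
    created_at = models.DateTimeField(default=timezone.now)
```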
There are a few alternatives such as Cypher queries and batched execution which all operate within a single atomic transaction on the server side; however, my general approach for client applications is to try to build code which can gracefully handle partially complete data, removing the need for explicit transaction control.\nOften, this approach will make heavy use of unique indexing and this is one reason that I have provided a large number of \"get_or_create\" type methods within py2neo. Cypher itself is incredibly powerful and also provides uniqueness capabilities, in particular through the CREATE UNIQUE clause. Using these, you can make your writes idempotent and you can err on the side of \"doing it more than once\" safe in the knowledge that you won't end up with duplicate data.\nAgreed, this approach doesn't give you transactions per se but in most cases it can give you an equivalent end result. It's certainly worth challenging yourself as to where in your application transactions are truly necessary.\nHope this helps\nNigel","Q_Score":2,"Tags":"python,flask,neo4j,py2neo","A_Id":13234558,"CreationDate":"2012-11-05T13:27:00.000","Title":"using neo4J (server) from python with transaction","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Newbie here trying to use python to do some database analysis. I keep getting the error:\n\"error: cannot locate an Oracle software installation\" When installing CX_oracle (via easy_install).\nThe problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have have setup another program to do this(visualdb) and I had a .jar file I used as the driver but I'm not sure how to use it in this case.\nAny suggestions?","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":25818,"Q_Id":13234196,"Users Score":1,"Answer":"Tip for Ubuntu users\nAfter configuring .bashrc environment variables, like it was explained in other answers, don't forget to reload your terminal window, typing $SHELL.","Q_Score":12,"Tags":"python,oracle,cx-oracle","A_Id":58120873,"CreationDate":"2012-11-05T14:32:00.000","Title":"\"error: cannot locate an Oracle software installation\" When trying to install cx_Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie here trying to use python to do some database analysis. I keep getting the error:\n\"error: cannot locate an Oracle software installation\" When installing CX_oracle (via easy_install).\nThe problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. 
I have have setup another program to do this(visualdb) and I had a .jar file I used as the driver but I'm not sure how to use it in this case.\nAny suggestions?","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":25818,"Q_Id":13234196,"Users Score":2,"Answer":"I got this message when I was trying to install the 32 bit version while having the 64bit Oracle client installed.\nWhat worked for me: reinstalled python with 64 bit (had 32 for some reason), installed cx_Oracle (64bit version) with the Windows installer and it worked perfectly.","Q_Score":12,"Tags":"python,oracle,cx-oracle","A_Id":28741244,"CreationDate":"2012-11-05T14:32:00.000","Title":"\"error: cannot locate an Oracle software installation\" When trying to install cx_Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie here trying to use python to do some database analysis. I keep getting the error:\n\"error: cannot locate an Oracle software installation\" When installing CX_oracle (via easy_install).\nThe problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have have setup another program to do this(visualdb) and I had a .jar file I used as the driver but I'm not sure how to use it in this case.\nAny suggestions?","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":25818,"Q_Id":13234196,"Users Score":2,"Answer":"I installed cx_Oracle, but I also had to install an Oracle client to use it (the cx_Oracle module is just a common and pythonic way to interface with the Oracle client in Python).\nSo you have to set the variable ORACLE_HOME to your Oracle client folder (on Unix: via a shell, for instance; on Windows: create a new variable if it does not exist in the Environment variables of the Configuration Panel). Your folder $ORACLE_HOME\/network\/admin (%ORACLE_HOME%\\network\\admin on Windows) is the place where you would place your tnsnames.ora file.","Q_Score":12,"Tags":"python,oracle,cx-oracle","A_Id":13234377,"CreationDate":"2012-11-05T14:32:00.000","Title":"\"error: cannot locate an Oracle software installation\" When trying to install cx_Oracle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've looked a a number of questions on this site and cannot find an answer to the question: How to create multiple NEW tables in a database (in my case I am using PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns?\nI can write the CREATE TABLE syntax just fine, and I can read the rows\/values of a CSV file(s), but does a method already exist to inspect the CSV file(s) and accurately determine the column type? Before I build my own, I wanted to check if this already existed.\nIf it doesn't exist already, my idea would be to use Python, CSV module, and psycopg2 module to build a python script that would:\n\nRead the CSV file(s).\nBased upon a subset of records (10-100 rows?), iteratively inspect each column of each row to automatically determine the right column type of the data in the CSV. 
Therefore, if row 1, column A had a value of 12345 (int), but row 2 of column A had a value of ABC (varchar), the system would automatically determine it should be a format varchar(5) based upon the combination of the data it found in the first two passes. This process could go on as many times as the user felt necessary to determine the likely type and size of the column.\nBuild the CREATE TABLE query as defined by the column inspection of the CSV.\nExecute the create table query.\nLoad the data into the new table.\n\nDoes a tool like this already exist within either SQL, PostgreSQL, Python, or is there another application I should be be using to accomplish this (similar to pgAdmin3)?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7124,"Q_Id":13239004,"Users Score":0,"Answer":"Although this is quite an old question, it doesn't seem to have a satisfying answer and I was struggling with the exact samen issue. With the arrival of SQL Server Management Studio 2018 edition - and probably somewhat before that - a pretty good solution was offered by Microsoft.\n\nIn SSMS on a database node in the object explorer, right-click, select 'Tasks' and choose 'Import data';\nChoose 'Flat file' as source and, in the General section, browse to your .csv file. An important note here: make sure there's no table in your target SQL server matching the files name;\nIn the Advanced section, click on 'Suggest types' and in the next dialog, enter preferrably the total number of rows in your file or, if that's too much, a large enough number to cover all possible values (this takes a while);\nClick next, and in the subsquent step, connect to your SQL server. Now, every brand has their own flavour of data types, but you should get a nice set of relevant pointers for your taste later on. I've tested this using the SQL Server Native Client 11.0. Please leave your comments for other providers as a reply to this solution;\nHere it comes... click 'Edit Mappings'...;\nclick 'Edit SQL' et voila, a nice SQL statement with all the discovered data types;\nClick through to the end, selecting 'Run immediately' to see all of your .csv columns created with appopriate types in your SQL server.\n\nExtra:\nIf you run the above steps twice, exactly the same way with the same file, the first loop will use the 'CREATE TABLE...' statement, but the second go will skip table creation. If you save the second run as an SSIS (Integration Services) file, you can later re-run the entire setup without scanning the .csv file.","Q_Score":11,"Tags":"python,sql,postgresql,pgadmin","A_Id":52581750,"CreationDate":"2012-11-05T19:30:00.000","Title":"Create SQL table with correct column types from CSV","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've looked a a number of questions on this site and cannot find an answer to the question: How to create multiple NEW tables in a database (in my case I am using PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns?\nI can write the CREATE TABLE syntax just fine, and I can read the rows\/values of a CSV file(s), but does a method already exist to inspect the CSV file(s) and accurately determine the column type? 
Before I build my own, I wanted to check if this already existed.\nIf it doesn't exist already, my idea would be to use Python, CSV module, and psycopg2 module to build a python script that would:\n\nRead the CSV file(s).\nBased upon a subset of records (10-100 rows?), iteratively inspect each column of each row to automatically determine the right column type of the data in the CSV. Therefore, if row 1, column A had a value of 12345 (int), but row 2 of column A had a value of ABC (varchar), the system would automatically determine it should be a format varchar(5) based upon the combination of the data it found in the first two passes. This process could go on as many times as the user felt necessary to determine the likely type and size of the column.\nBuild the CREATE TABLE query as defined by the column inspection of the CSV.\nExecute the create table query.\nLoad the data into the new table.\n\nDoes a tool like this already exist within either SQL, PostgreSQL, Python, or is there another application I should be be using to accomplish this (similar to pgAdmin3)?","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":7124,"Q_Id":13239004,"Users Score":7,"Answer":"I have been dealing with something similar, and ended up writing my own module to sniff datatypes by inspecting the source file. There is some wisdom among all the naysayers, but there can also be reasons this is worth doing, particularly when we don't have any control of the input data format (e.g. working with government open data), so here are some things I learned in the process:\n\nEven though it's very time consuming, it's worth running through the entire file rather than a small sample of rows. More time is wasted by having a column flagged as numeric that turns out to have text every few thousand rows and therefore fails to import.\nIf in doubt, fail over to a text type, because it's easier to cast those to numeric or date\/times later than to try and infer the data that was lost in a bad import.\nCheck for leading zeroes in what appear otherwise to be integer columns, and import them as text if there are any - this is a common issue with ID \/ account numbers.\nGive yourself some way of manually overriding the automatically detected types for some columns, so that you can blend some semantic awareness with the benefits of automatically typing most of them.\nDate\/time fields are a nightmare, and in my experience generally require manual processing.\nIf you ever add data to this table later, don't attempt to repeat the type detection - get the types from the database to ensure consistency.\n\nIf you can avoid having to do automatic type detection it's worth avoiding it, but that's not always practical so I hope these tips are of some help.","Q_Score":11,"Tags":"python,sql,postgresql,pgadmin","A_Id":21917162,"CreationDate":"2012-11-05T19:30:00.000","Title":"Create SQL table with correct column types from CSV","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build an online Python Shell. I execute commands by creating an instance of InteractiveInterpreter and use the command runcode. For that I need to store the interpreter state in the database so that variables, functions, definitions and other values in the global and local namespaces can be used across commands. 
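A toy sketch of the column-type sniffing described in the answer above, written under its own caveats (scan as much of the file as you can, fall back to text when in doubt, keep values with leading zeroes as text). The PostgreSQL type names and the function itself are illustrative, not production code, and dates are deliberately left out since the answer notes they usually need manual handling.

```python
import csv

PG_TYPES = (None, "integer", "double precision", "text")   # widening order
RANK = {t: i for i, t in enumerate(PG_TYPES)}


def classify(value):
    if value == "":
        return None                       # empty cells don't narrow the type
    try:
        as_int = int(value)
        # Preserve leading zeroes (IDs, account numbers) as text.
        return "integer" if str(as_int) == value else "text"
    except ValueError:
        pass
    try:
        float(value)
        return "double precision"
    except ValueError:
        return "text"


def infer_column_types(path):
    with open(path, "rb") as handle:      # Python 2-style csv reading
        reader = csv.reader(handle)
        header = next(reader)
        guesses = [None] * len(header)
        for row in reader:                # whole file, per the first tip above
            for i, value in enumerate(row[:len(header)]):
                candidate = classify(value)
                if RANK[candidate] > RANK[guesses[i]]:
                    guesses[i] = candidate
    return dict(zip(header, [g or "text" for g in guesses]))
```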
Is there a way to store the current state of the object InteractiveInterpreter that could be retrieved later and passed as an argument local to InteractiveInterpreter constructor or If I can't do this, what alternatives do I have to achieve the mentioned functionality?\nBelow is the pseudo code of what I am trying to achieve\n\n\ndef fun(code, sessionID):\n session = Session()\n # get the latest state of the interpreter object corresponding to SessionID\n vars = session.getvars(sessionID)\n it = InteractiveInterpreter(vars)\n it.runcode(code)\n #save back the new state of the interpreter object\n session.setvars(it.getState(),sessionID)\n\n\nHere, session is an instance of table containing all the necessary information.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":13254044,"Users Score":0,"Answer":"I believe the pickle package should work for you. You can use pickle.dump or pickle.dumps to save the state of most objects. (then pickle.load or pickle.loads to get it back)","Q_Score":0,"Tags":"python,interactive-shell,python-interactive","A_Id":13254202,"CreationDate":"2012-11-06T15:19:00.000","Title":"How to store the current state of InteractiveInterpreter Object in a database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have started a retrival job for an archive stored in one of my vaults on \nGlacier AWS.\nIt turns out that I do not need to resurrect and download that archive any more.\nIs there a way to stop and\/or delete my Glacier job?\nI am using boto and I cannot seem to find a suitable function.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1164,"Q_Id":13274197,"Users Score":9,"Answer":"The AWS Glacier service does not provide a way to delete a job. You can:\n\nInitiate a job\nDescribe a job\nGet the output of a job\nList all of your jobs\n\nThe Glacier service manages the jobs associated with an vault.","Q_Score":7,"Tags":"python,amazon-web-services,boto,amazon-glacier","A_Id":13275014,"CreationDate":"2012-11-07T16:42:00.000","Title":"AWS glacier delete job","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run \"import MySQLdb\" in the python prompt from outside of the virtualenv, it works, inside it says \"ImportError: No module named MySQLdb\".\nI'm trying to learn Python and Linux web development. I know that it's easiest to use SQLLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow and they all suggest that I run \"sudo pip install mysql-python\" but it just says \"Requirement already satisfied: mysql-python in \/usr\/lib\/pymodules\/python2.7\"\nAny help would be appreciated! 
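A hedged sketch of the pickle suggestion for the interpreter-state question above. The session storage calls mirror the question's pseudo code and are assumptions; note also that many objects users may create (open files, sockets, functions) are not picklable, so this only covers plain-data namespaces.

```python
import pickle
from code import InteractiveInterpreter


def run_in_session(source, session_id, session):
    saved = session.getvars(session_id)                 # hypothetical storage API
    namespace = pickle.loads(saved) if saved else {}
    interp = InteractiveInterpreter(locals=namespace)
    # runsource() compiles the string; runcode() would expect a code object.
    interp.runsource(source)
    namespace.pop("__builtins__", None)                 # modules can't be pickled
    session.setvars(pickle.dumps(namespace), session_id)
```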
I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":6817,"Q_Id":13288013,"Users Score":1,"Answer":"source $ENV_PATH\/bin\/activate\npip uninstall MySQL-python\npip install MySQL-python\n\nthis worked for me.","Q_Score":9,"Tags":"python,virtualenv,mysql-python","A_Id":43866023,"CreationDate":"2012-11-08T11:17:00.000","Title":"Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run \"import MySQLdb\" in the python prompt from outside of the virtualenv, it works, inside it says \"ImportError: No module named MySQLdb\".\nI'm trying to learn Python and Linux web development. I know that it's easiest to use SQLLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow and they all suggest that I run \"sudo pip install mysql-python\" but it just says \"Requirement already satisfied: mysql-python in \/usr\/lib\/pymodules\/python2.7\"\nAny help would be appreciated! I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6817,"Q_Id":13288013,"Users Score":14,"Answer":"If you have created the virtualenv with the --no-site-packages switch (the default), then system-wide installed additions such as MySQLdb are not included in the virtual environment packages.\nYou need to install MySQLdb with the pip command installed with the virtualenv. Either activate the virtualenv with the bin\/activate script, or use bin\/pip from within the virtualenv to install the MySQLdb library locally as well.\nAlternatively, create a new virtualenv with system site-packages included by using the --system-site-package switch.","Q_Score":9,"Tags":"python,virtualenv,mysql-python","A_Id":13288095,"CreationDate":"2012-11-08T11:17:00.000","Title":"Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are developing application for which we going to use a NoSql database. We have evaluated couchdb and mongodb. Our application is in python and read-speed is most critical for our application. And application is reading a large number of documents. 
\nI want ask:\n\nIs reading large number of documents is faster in bson than json?\nWhich is better when we want to read say 100 documents, parse them & print result: python+mongodb+pymongo or python+couchdb+couchdbkit (database going to be on ec2 & accessible over internet)?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":468,"Q_Id":13298480,"Users Score":-1,"Answer":"bson\nTry LogoDb from 1985 logo programming language for trs-80","Q_Score":0,"Tags":"python-2.7,pymongo,couchdbkit","A_Id":13641512,"CreationDate":"2012-11-08T21:54:00.000","Title":"CouchDB vs mongodb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm filtering the twitter streaming API by tracking for several keywords. \nIf for example I only want to query and return from my database tweet information that was filtered by tracking for the keyword = 'BBC' how could this be done?\nDo the tweet information collected have a key:value relating to that keyword by which it was filtered?\nI'm using python, tweepy and MongoDB.\nWould an option be to search for the keyword in the returned json 'text' field? Thus generate a query where it searches for that keyword = 'BBC' in the text field of the returned json data?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":374,"Q_Id":13352796,"Users Score":0,"Answer":"Unfortunately, the Twitter API doesn't provide a way to do this. You can try searching through receive tweets for the keywords you specified, but it might not match exactly.","Q_Score":1,"Tags":"python,mongodb,twitter,tweepy","A_Id":22388827,"CreationDate":"2012-11-12T22:42:00.000","Title":"Querying twitter streaming api keywords from a database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"so I discovered Sets in Python a few days ago and am surprised that they never crossed my mind before even though they make a lot of things really simple. I give an example later.\nSome things are still unclear to me. The docs say that Sets can be created from iterables and that the operators always return new Sets but do they always copy all data from one set to another and from the iterable? I work with a lot of data and would love to have Sets and set operators that behave much like itertools. So Sets([iterable]) would be more like a wrapper and the operators union, intersection and so on would return \"iSets\" and would not copy any data. They all would evaluate once I iter my final Set. In the end I really much would like to have \"iSet\" operators.\nPurpose:\nI work with MongoDB using mongoengine. I have articles saved. Some are associated with a user, some are marked as read others were shown to the user and so on. Wrapping them in Sets that do not load all data would be a great way to combine, intersect etc. them. Obviously I could make special queries but not always since MongoDB does not support joins. So I end up doing joins in Python. I know I could use a relational database then, however, I don't need joins that often and the advantages of MongoDB outweigh them in my case.\nSo what do you think? Is there already a third party module? 
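For the tweet-filtering question above, a small pymongo sketch of the suggested fallback: querying the stored documents for the keyword in their text field, since the streaming API does not tag which track term matched. The database, collection and field names are assumptions; pymongo accepts a compiled regular expression as a query value.

```python
import re
from pymongo import MongoClient   # newer pymongo; older releases used Connection

tweets = MongoClient().twitterdb.tweets            # assumed db/collection names
keyword = re.compile(r"\bBBC\b", re.IGNORECASE)    # word-boundary match on "BBC"
for tweet in tweets.find({"text": keyword}):
    print(tweet["text"])
```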
Would a few lines combining itertools and Sets do?\nEDIT:\nI accepted the answer by Martijn Pieters because it is obviously right. I ended up loading only IDs into sets to work with them. Also, the sets in Python have a pretty good running time.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":707,"Q_Id":13358955,"Users Score":4,"Answer":"Sets are just like dict and list; on creation they copy the references from the seeding iterable.\nIterators cannot be sets, because you cannot enforce the uniqueness requirement of a set. You cannot know if a future value yielded by an iterator has already been seen before.\nMoreover, in order for you to determine what the intersection is between two iterables, you have to load all data from at least one of these iterables to see if there are any matches. For each item in the second iterable, you need to test if that item has been seen in the first iterable. To do so efficiently, you need to have loaded all the items from the first iterable into a set. The alternative would be to loop through the first iterable from start to finish for each item from the second iterable, leading to exponential performance degradation.","Q_Score":2,"Tags":"python,memory-management,set,itertools","A_Id":13358975,"CreationDate":"2012-11-13T10:20:00.000","Title":"Python: Combining itertools and sets to save memory","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":17118,"Q_Id":13369795,"Users Score":1,"Answer":"Databases are, by default, stored in \/data\/db (some environments override this and use \/var\/lib\/mongodb, however). You can see the total db size by looking at db.stats() (specifically fileSize) in the MongoDB shell.","Q_Score":7,"Tags":"python,macos,mongodb,amazon-ec2","A_Id":13369827,"CreationDate":"2012-11-13T22:13:00.000","Title":"where is mongo db database stored on local hard drive?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":17118,"Q_Id":13369795,"Users Score":4,"Answer":"I believe on OSX the default location would be \/data\/db. 
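The set-semantics answer above boils down to this: one side of an intersection can stay lazy, but the other side has to be materialised as a set to get constant-time membership tests. A tiny illustration of that trade-off, just a sketch rather than a library API.

```python
def lazy_intersection(lazy_iterable, other_iterable):
    """Yield items of lazy_iterable that also occur in other_iterable."""
    seen = set(other_iterable)      # this side must be loaded fully
    for item in lazy_iterable:      # this side can stay an iterator
        if item in seen:
            yield item
```

This avoids re-scanning one iterable for every element of the other, though unlike a true set intersection it will yield duplicates from the lazy side as-is.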
But you can check your config file for the dbpath value to verify.","Q_Score":7,"Tags":"python,macos,mongodb,amazon-ec2","A_Id":13369857,"CreationDate":"2012-11-13T22:13:00.000","Title":"where is mongo db database stored on local hard drive?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in learning more about node.js and utilizing it in a new project. The problem I am having is envisioning where I could enhance my web stack with it and what role it would play. All I have really done with it is followed a tutorial or two where you make something like a todo app in all JS. That is all fine and dandy but where do I leverage this is in a more complex web architecture.\nso here is an example of how I plan on setting up my application\nweb server for serving views:\n\nPython (flask\/werkzeug) \nJinja\nnginx\nhtml\/css\/js\n\nAPI sever:\n\nPython (flask\/werkzeug)\nSQLAlchemy (ORM)\nnginx\nsupervisor + gunicorn\n\nDB Server\n\nPostgres\n\nSo is there any part of this stack that could be replaced or enhanced by introducing nodeJS I would assume it would be best used on the API server but not exactly sure how.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":228,"Q_Id":13382262,"Users Score":0,"Answer":"It would replace Python (flask\/werkzeug) in both your view server and your API server.","Q_Score":0,"Tags":"python,node.js,web-applications","A_Id":13384050,"CreationDate":"2012-11-14T15:51:00.000","Title":"Where does node.js fit in a stack or enhance it","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm new to web development and I'm trying to get my mac set up for doing Django tutorials and helping some developers with a project that uses postgres. I will try to specify my questions as much as possible. However, it seems that there are lots of floating parts to this question and I'm not quite understanding some parts of the connection between an SQL Shell, virtual environments, paths, databases, terminals (which seem to be necessary to get running on this web development project). I will detail what I did and the error messages that appear. If you could help me with the error messages or simply post links to tutorials that help me better understand how these floating parts work together, I would very much appreciate it. \nI installed postgres and pgAdmin III and set it up on the default port. I created a test database. Now when I try to open it on the local server, I get an error message: 'ERROR: column \"datconfig\" does not exist LINE1:...b.dattablespace AS spcoid, spcname, datallowconn, dataconfig,...\nHere is what I did before I closed pgAdmin and then reopened it:\nInstallation: The Setup told me that an existing data directory was found at \/Library\/PostgreSQL\/9.2\/data set to use port 5433.\nI loaded an .sql file that I wanted to test (I saved it on my desktop and loaded it into the database from there).\nI'm not sure whether this is related to the problem or not, but I also have virtual environments in a folder ~\/Sites\/django_test (i.e. when I tell the bash Terminal to \u201cactivate\u201d this folder, it puts me in a an (env)). 
\nI read in a forum that I need to do the Django tutorials by running \u201cpython manage.py runserver\" at the bash Terminal command line. When I do this, I get an error message saying \u201ccan't open file 'manage.py': [Errno 2] No such file or directory\u201d. \nEven when I run the command in the (env), I get the error message: \/Library\/Frameworks\/Python.framework\/Versions\/3.2\/Resources\/Python.app\/Contents\/MacOS\/Python: can't open file 'manage.py': [Errno 2] No such file or directory (Which I presume is telling me that the path is still set on an incorrect version of Python (3.2), even though I want to use version 2.7 and trashed the 3.2 version from my system. )\nI think that there are a few gaps in my understanding here:\n\nI don\u2019t understand the difference between typing in commands into my bash Terminal versus my SQL shell\nIs running \u201cpython manage.py runserver\u201d the same as running Python\nprograms with an IDE like IDLE?\nHow and where do I adjust your $PATH environment variable so that the\ncorrect python occurs first on the path?\nI think that I installed the correct Python version into the virtual\nenvironment using pip install. Why am I still receiving a \u201cNo such\nfile or directory\u201d error?\nWhy does Python version 3.2 still appear in the path indicated by my\nerror message is I trashed it?\n\nIf you could help me with these questions, or simply list links with any tutorials that explain this, that would be much appreciated. And again, sorry for not being more specific. But I thought that it would be more helpful to list the problems that I have with these different pieces rather than just one, since its their interrelatedness that seems to be causing the error messages. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":291,"Q_Id":13495135,"Users Score":1,"Answer":"Er, not sure how we can help you with that. One is for bash, one is for SQL. \nNo, that's for running the development webserver, as the tutorial explains.\nThere's no need to do that, that's what the virtualenv is for.\nThis has nothing to do with Python versions, you simply don't seem to be in the right directory. Note that, again as the tutorial explains, manage.py isn't created until you've run django-admin.py startproject myprojectname. Have you done that?\nYou presumably created the virtualenv using 3.2. Delete it and recreate it with 2.7.\n\nYou shouldn't be \"reading in a forum\" about how to do the Django tutorial. You should just be following the tutorial.","Q_Score":1,"Tags":"python,django,postgresql","A_Id":13495557,"CreationDate":"2012-11-21T14:12:00.000","Title":"postgres installation error on Mac 10.6.8","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a library or open source utility available to search all the tables and columns of an Sqlite database? 
The only input would be the name of the sqlite DB file.\nI am trying to write a forensics tool and want to search sqlite files for a specific string.","AnswerCount":4,"Available Count":2,"Score":0.2449186624,"is_accepted":false,"ViewCount":15981,"Q_Id":13514509,"Users Score":5,"Answer":"Just dump the db and search it.\n% sqlite3 file_name .dump | grep 'my_search_string'\nYou could instead pipe through less, and then use \/ to search:\n% sqlite3 file_name .dump | less","Q_Score":13,"Tags":"python,sqlite,search","A_Id":65373519,"CreationDate":"2012-11-22T14:11:00.000","Title":"Search Sqlite Database - All Tables and Columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file.\nI am trying to write a forensics tool and want to search sqlite files for a specific string.","AnswerCount":4,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":15981,"Q_Id":13514509,"Users Score":4,"Answer":"@MrWorf's answer didn't work for my sqlite file (an .exb file from Evernote) but this similar method worked:\n\nOpen the file with DB Browser for SQLite sqlitebrowser mynotes.exb\nFile \/ Export to SQL file (will create mynotes.exb.sql)\ngrep 'STRING I WANT\" mynotes.exb.sql","Q_Score":13,"Tags":"python,sqlite,search","A_Id":59407127,"CreationDate":"2012-11-22T14:11:00.000","Title":"Search Sqlite Database - All Tables and Columns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created an app using web2py and have declared certain new table in it using the syntax\ndb.define_table() but the tables created are not visible when I run the app in Google App Engine even on my local server. The tables that web2py creates by itself like auth_user and others in auth are available.\nWhat am I missing here?\nI have declared the new table in db.py in my application.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":100,"Q_Id":13548590,"Users Score":0,"Answer":"App Engine datastore doesn't really have tables. That said, if web2py is able to make use of the datastore (I'm not familiar with it), then Kinds (a bit like tables) will only show up in the admin-console (\/_ah\/admin locally) once an entity has been created (i.e. tables only show up once one row has been inserted, you'll never see empty tables).","Q_Score":1,"Tags":"python,google-app-engine,web2py","A_Id":13551914,"CreationDate":"2012-11-25T05:29:00.000","Title":"New tables created in web2py not seen when running in Google app Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error:\n\n... 
File \"\/****\/****\/.local\/lib\/python\/django\/utils\/importlib.py\", line 35, in import_module\n __import__(name)\n\nFile \"\/****\/****\/.local\/lib\/python\/django\/db\/backends\/mysql\/base.py\", line 14, in \n raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\n\nImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory\n\nOf course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python(dont know if this can even be done). I am concerned that if I use the default install I wont have permission to install my other packages.\nDoes anybody have any ideas how to get back to a working state, with as little infrastructure changes as possible?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2446,"Q_Id":13573359,"Users Score":2,"Answer":"I think you upgraded your OS installation which in turn upgraded libmysqlclient and broke native extension. What you can do is reinstall libmysqlclient16 again (how to do it depends your particular OS) and that should fix your issue.\nOther approach would be to uninstall MySQLdb module and reinstall it again, forcing python to compile it against a newer library.","Q_Score":2,"Tags":"python,linux,mysql-python,bluehost","A_Id":13573647,"CreationDate":"2012-11-26T21:20:00.000","Title":"Python module issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error:\n\n... File \"\/****\/****\/.local\/lib\/python\/django\/utils\/importlib.py\", line 35, in import_module\n __import__(name)\n\nFile \"\/****\/****\/.local\/lib\/python\/django\/db\/backends\/mysql\/base.py\", line 14, in \n raise ImproperlyConfigured(\"Error loading MySQLdb module: %s\" % e)\n\nImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory\n\nOf course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python(dont know if this can even be done). I am concerned that if I use the default install I wont have permission to install my other packages.\nDoes anybody have any ideas how to get back to a working state, with as little infrastructure changes as possible?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2446,"Q_Id":13573359,"Users Score":0,"Answer":"You were right. Bluehost upgraded MySQL. 
Here is what I did:\n1) remove the \"build\" directory in the \"MySQL-python-1.2.3\" directory\n2) remove the egg\n3) build the module again \"python setup.py build\"\n4) install the module again \"python setup.py install --prefix=$HOME\/.local\"\nMorale of the story for me is to remove the old stuff when reinstalling module","Q_Score":2,"Tags":"python,linux,mysql-python,bluehost","A_Id":13591200,"CreationDate":"2012-11-26T21:20:00.000","Title":"Python module issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm starting a Django project and need to shard multiple tables that are likely to all be of too many rows. I've looked through threads here and elsewhere, and followed the Django multi-db documentation, but am still not sure how that all stitches together. My models have relationships that would be broken by sharding, so it seems like the options are to either drop the foreign keys of forgo sharding the respective models.\nFor argument's sake, consider the classic Authot, Publisher and Book scenario, but throw in book copies and users that can own them. Say books and users had to be sharded. How would you approach that? A user may own a copy of a book that's not in the same database.\nIn general, what are the best practices you have used for routing and the sharding itself? Did you use Django database routers, manually selected a database inside commands based on your sharding logic, or overridden some parts of the ORM to achive that?\nI'm using PostgreSQL on Ubuntu, if it matters.\nMany thanks.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1300,"Q_Id":13620867,"Users Score":1,"Answer":"I agree with @DanielRoseman. Also, how many is too many rows. If you are careful with indexing, you can handle a lot of rows with no performance problems. Keep your indexed values small (ints). I've got tables in excess of 400 million rows that produce sub-second responses even when joining with other many million row tables. \nIt might make more sense to break user up into multiple tables so that the user object has a core of commonly used things and then the \"profile\" info lives elsewhere (std Django setup). Copies would be a small table referencing books which has the bulk of the data. Considering how much ram you can put into a DB server these days, sharding before you have too seems wrong.","Q_Score":3,"Tags":"python,django,postgresql,sharding","A_Id":13639532,"CreationDate":"2012-11-29T07:32:00.000","Title":"Sharding a Django Project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am connecting my python software to a remote msql server.\ni have had to add an access host on cPanel just for my computer but the problem is the access host, which is my IP, is dynamic.\nHow can i connect to the remote server without having to change the access host everytime?\nthanks guys, networking is my weakness.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1475,"Q_Id":13657404,"Users Score":0,"Answer":"Your best option is probably to find a [dynamic DNS] provider. The idea is to have a client running on your machine which updates a DNS entry on a remote server. 
Then you can use the hostname provided instead of your IP address in cPanel.","Q_Score":0,"Tags":"python,networking,cpanel","A_Id":13657435,"CreationDate":"2012-12-01T07:31:00.000","Title":"Configuring Remote MYSQL with a Dynamic IP","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit.\nWe're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to linux.\n\nThe reason that we want to use OpenBSD is that it's well known for\nit's security.\nThe reason we chose Python is that it's fast.\nThe reason we want to use Nginx is that it's known to be able to\nhandle more http request when compared to Apache.\nThe reason we want to use NoSQL is that MySQL is known to have\nproblems in scalability when the databases grows.\n\nWe want the web pages to load as fast as possible (caching and cdn's will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD,Nginx,Python,Nosql) instead of the traditional LAMP (Linux,Apache,Mysql,PHP).\nWe're not a very big company so we're using opensource technologies. Any suggestion is appreciated on how to use these software as a platform and giving hardware suggestions is also appreciated. Any criticism is also welcomed.","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":1447,"Q_Id":13675440,"Users Score":4,"Answer":"My advice - if you don't know how to use these technologies - don't do it. Few servers will cost you less than the time spent mastering technologies you don't know. If you want to try them out - do it. One by one, not everything at once. There is no magic solution on how to use them.","Q_Score":1,"Tags":"python,nginx,nosql,openbsd","A_Id":13675611,"CreationDate":"2012-12-03T00:05:00.000","Title":"How to utilize OpenBSD, Nginx, Python and NoSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit.\nWe're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to linux.\n\nThe reason that we want to use OpenBSD is that it's well known for\nit's security.\nThe reason we chose Python is that it's fast.\nThe reason we want to use Nginx is that it's known to be able to\nhandle more http request when compared to Apache.\nThe reason we want to use NoSQL is that MySQL is known to have\nproblems in scalability when the databases grows.\n\nWe want the web pages to load as fast as possible (caching and cdn's will be used) using the minimum amount of hardware possible. 
That's why we want to use ONPN (OpenBSD,Nginx,Python,Nosql) instead of the traditional LAMP (Linux,Apache,Mysql,PHP).\nWe're not a very big company so we're using opensource technologies. Any suggestion is appreciated on how to use these software as a platform and giving hardware suggestions is also appreciated. Any criticism is also welcomed.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1447,"Q_Id":13675440,"Users Score":1,"Answer":"I agree with wdev, the time it takes to learn this is not worth the money you will save. First of all, MySQL databases are not hard to scale. WordPress utilizes MySQL databases, and some of the world's largest websites use MySQL (google for a list). I can also say the same of linux and PHP. \nIf you design your site using best practices (CSS sprites) Apache versus Nginx will not make a considerable difference in load times if you utilize a CDN and best practices (caching, gzip, etc).\nI strongly urge you to reconsider your decisions. They seem very ill-advised.","Q_Score":1,"Tags":"python,nginx,nosql,openbsd","A_Id":13676002,"CreationDate":"2012-12-03T00:05:00.000","Title":"How to utilize OpenBSD, Nginx, Python and NoSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of times in h:m format in an Excel spreadsheet, and I'm trying to do some manipulation with DataNitro but it doesn't seem to like the way Excel formats times.\nFor example, in Excel the time 8:32 is actually just the decimal number .355556 formatted to appear as 8:32. When I access that time with DataNitro it sees it as the decimal, not the string 8:32. If I change the format in Excel from Time to General or Number, it converts it to the decimal (which I don't want). The only thing I've found that works is manually going through each cell and placing ' in front of each one, then going through and changing the format type to General. \nIs there any way to convert these times in Excel into strings so I can extract the info with DataNitro (which is only viewing it as a decimal)?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":488,"Q_Id":13725567,"Users Score":3,"Answer":"If .355556 (represented as 8:32) is in A1 then =HOUR(A1)&\":\"&MINUTE(A1) and Copy\/Paste Special Values should get you to a string.","Q_Score":1,"Tags":"python,excel,time,number-formatting,datanitro","A_Id":13725706,"CreationDate":"2012-12-05T14:35:00.000","Title":"Converting time with Python and DataNitro in Excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've implemented a breadth first search with a PyMongo social network. It's breadth first to reduce the number of connections. Now I get queries like coll.find({\"_id\":{\"$in\":[\"id1\", \"id2\", ...]}} with a huge number of ids. PyMongo does not process some of these big queries due to their size.\nIs there a technical solution around it? 
Or do you suggest another approach to such kind of queries where I need to select all docs with one of a huge set of ids?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":13728955,"Users Score":0,"Answer":"If this is an inescapable problem, you could split the array of ids across multiple queries and then merge the results client-side.","Q_Score":0,"Tags":"python,mongodb","A_Id":13729295,"CreationDate":"2012-12-05T17:27:00.000","Title":"Large size query with PyMongo?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows:\nDatabase: testDB\nTable: sensors\nFirst record contains 3 records. The first column is an auto incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor data. It is in the following format:\n23432.32 112343.3 53454.322 34563.32 76653.44 000.000 333.2123\nI am completely lost on how to complete this project. I have read many pages showing examples dont really understand them. They provide source code, but I am not sure where that code goes. I installed httpd on my server and that is where I stand. Does anyone know of a good how-to from beginning to end that I could follow? Or could someone post a good step by step for me to follow?\nThanks for your help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":428,"Q_Id":13772857,"Users Score":0,"Answer":"Install a httpd server\nInstall php\nWrite a php script to fetch the data from the database and render it\nas a webpage.\n\nThis is fairly elaborate request, with relatively little details given. More information will allow us to give better answers.","Q_Score":0,"Tags":"python,mysql,flot","A_Id":13774224,"CreationDate":"2012-12-07T23:52:00.000","Title":"Plotting data using Flot and MySQL","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to obtain path from a FileField, in order to check it against a given file system path, to know if the file I am inserting into mongo database is already present.\nIs it possible?\nAll I get is a GridFSProxy, but I am unable to understand how to handle it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":430,"Q_Id":13791542,"Users Score":1,"Answer":"You can't since it stores the data into database. If you need to store the original path then you can create an EmbeddedDocument which contains a FileField and a StringField with the path string. 
But remember that the stored file and the file you might find on that path are not the same","Q_Score":0,"Tags":"python,mongodb,path,mongoengine,filefield","A_Id":13962502,"CreationDate":"2012-12-09T20:34:00.000","Title":"How to get filesystem path from mongoengine FileField","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using psycopg2 in python, but my question is DBMS agnostic (as long as the DBMS supports transactions):\nI am writing a python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. Hence, I altered my code to run a commit every 5000 records and the difference in speed was tremendous.\nMy problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception to happen, and continue with the rest of my insert statements.\nI haven't found a way to achieve this; the only thing I've achieved was to capture the exception, rollback my transaction and keep on from that point, where I loose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object without any luck, either.\nIs there a way to achieve this functionality, either directly or indirectly, without having to rollback and recreate\/re-run my statements?\nThank you all in advance,\nGeorge.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1119,"Q_Id":13838231,"Users Score":2,"Answer":"If you are committing your transactions after every 5000 record interval, it seems like you could do a little bit of preprocessing of your input data and actually break it out into a list of 5000 record chunks, i.e. [[[row1_data],[row2_data]...[row4999_data]],[[row5000_data],[row5001_data],...],[[....[row1000000_data]]]\nThen run your inserts, and keep track of which chunk you are processing as well as which record you are currently inserting. When you get the error, you rerun the chunk, but skip the the offending record.","Q_Score":1,"Tags":"python,postgresql,transactions,commit,psycopg2","A_Id":13849917,"CreationDate":"2012-12-12T10:58:00.000","Title":"How can I commit all pending queries until an exception occurs in a python connection object","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using psycopg2 in python, but my question is DBMS agnostic (as long as the DBMS supports transactions):\nI am writing a python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. 
Hence, I altered my code to run a commit every 5000 records and the difference in speed was tremendous.\nMy problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception to happen, and continue with the rest of my insert statements.\nI haven't found a way to achieve this; the only thing I've achieved was to capture the exception, rollback my transaction and keep on from that point, where I loose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object without any luck, either.\nIs there a way to achieve this functionality, either directly or indirectly, without having to rollback and recreate\/re-run my statements?\nThank you all in advance,\nGeorge.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1119,"Q_Id":13838231,"Users Score":3,"Answer":"I doubt you'll find a fast cross-database way to do this. You just have to optimize the balance between the speed gains from batch size and the speed costs of repeating work when an entry causes a batch to fail.\nSome DBs can continue with a transaction after an error, but PostgreSQL can't. However, it does allow you to create subtransactions with the SAVEPOINT command. These are far from free, but they're lower cost than a full transaction. So what you can do is every (say) 100 rows, issue a SAVEPOINT and then release the prior savepoint. If you hit an error, ROLLBACK TO SAVEPOINT, commit, then pick up where you left off.","Q_Score":1,"Tags":"python,postgresql,transactions,commit,psycopg2","A_Id":13838751,"CreationDate":"2012-12-12T10:58:00.000","Title":"How can I commit all pending queries until an exception occurs in a python connection object","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a file hosting app that will store all client files within a folder on an S3 bucket. I then want to track the amount of usage on S3 recursively per top folder to charge back the cost of storage and bandwidth to each corresponding client. \nFront-end is django but the solution can be python for obvious reasons.\nIs it better to create a bucket per client programmatically?\nIf I do go with the approach of creating a bucket per client, is it then possible to get the cost of cloudfront exposure of the bucket if enabled?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":818,"Q_Id":13873119,"Users Score":0,"Answer":"No its not possible to create a bucket for each user as Amazon allows only 100 buckets per account. So unless you are sure not to have more than 100 users, it will be a very bad idea.\nThe ideal solution will be to remember each user's storage in you Django app itself in database. I guess you would be using S3 boto library for storing the files, than it returns the byte size after each upload. You can use that to store that.\nThere is also another way out, you could create many folders inside a bucket with each folder specific to an user. 
But still the best way to remember the storage usage in your app","Q_Score":0,"Tags":"python,django,amazon-s3","A_Id":13892252,"CreationDate":"2012-12-14T05:20:00.000","Title":"How can I track s3 bucket folder usage with python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How would I extend the sqlite3 module so if I import Database I can do Database.connect() as an alias to sqlite3.connect(), but define extra non standard methods?","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":206,"Q_Id":13881533,"Users Score":4,"Answer":"You can create a class which wraps sqlite3. It takes its .connect() method and maybe others and exposes it to the outside, and then you add your own stuff.\nAnother option would be subclassing - if that works.","Q_Score":1,"Tags":"python,sqlite","A_Id":13881814,"CreationDate":"2012-12-14T15:24:00.000","Title":"How do I extend a python module to include extra functionality? (sqlite3)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I`m trying to write loader to sqlite that will load as fast as possible simple rows in DB.\nInput data looks like rows retrieved from postgres DB. Approximated amount of rows that will go to sqlite: from 20mil to 100mil.\nI cannot use other DB except sqlite due to project restrictions.\nMy question is :\nwhat is a proper logic to write such loader?\nAt first try I`ve tried to write set of encapsulated generators, that will take one row from Postgres, slightly ammend it and put it into sqlite. I ended up with the fact that for each row, i create separate sqlite connection and cursor. And that looks awfull.\nAt second try , i moved sqlite connection and cursor out of the generator , to the body of the script and it became clear that i do not commit data to sqlite untill i fetch and process all 20mils records. And this possibly could crash all my hardware.\nAt third try I strated to consider to keep Sqlite connection away from the loops , but create\/close cursor each time i process and push one row to Sqlite. This is better but i think also have some overhead.\nI also considered to play with transactions : One connection, one cursor, one transaction and commit called in generator each time row is being pushed to Sqlite. Is this i right way i`m going? \nIs there some widely-used pattern to write such a component in python? Because I feel as if I am inventing a bicycle.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":293,"Q_Id":13919448,"Users Score":1,"Answer":"SQLite can handle huge transactions with ease, so why not commit at the end? Have you tried this at all?\nIf you do feel one transaction is a problem, why not commit ever n transactions? 
Process rows one by one, insert as needed, but after every n executed insertions add a connection.commit() to spread the load.","Q_Score":0,"Tags":"python,sqlite,python-2.7","A_Id":13919496,"CreationDate":"2012-12-17T17:56:00.000","Title":"How to write proper big data loader to sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a loader to sqlite that will load simple rows into the DB as fast as possible.\nInput data looks like rows retrieved from a postgres DB. Approximate amount of rows that will go to sqlite: from 20mil to 100mil.\nI cannot use any other DB except sqlite due to project restrictions.\nMy question is:\nwhat is a proper logic to write such a loader?\nAt first try I've tried to write a set of encapsulated generators that will take one row from Postgres, slightly amend it and put it into sqlite. I ended up with the fact that for each row, I create a separate sqlite connection and cursor. And that looks awful.\nAt second try, I moved the sqlite connection and cursor out of the generator, to the body of the script, and it became clear that I do not commit data to sqlite until I fetch and process all 20mils records. And this possibly could crash all my hardware.\nAt third try I started to consider keeping the Sqlite connection away from the loops, but creating\/closing the cursor each time I process and push one row to Sqlite. This is better but I think it also has some overhead.\nI also considered playing with transactions: one connection, one cursor, one transaction and commit called in the generator each time a row is being pushed to Sqlite. Is this the right way I'm going? \nIs there some widely-used pattern to write such a component in python? Because I feel as if I am reinventing the wheel.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":293,"Q_Id":13919448,"Users Score":0,"Answer":"Finally I managed to resolve my problem. The main issue was the excessive amount of insertions into sqlite. After I started to load all the data from postgres into memory and aggregate it in a proper way to reduce the number of rows, I was able to decrease the processing time from 60 hrs to 16 hrs.","Q_Score":0,"Tags":"python,sqlite,python-2.7","A_Id":13976529,"CreationDate":"2012-12-17T17:56:00.000","Title":"How to write proper big data loader to sqlite","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm attempting to install MySQL-python on a machine running CentOS 5.5 and python 2.7. This machine isn't running a mysql server; the mysql instance this box will be using is hosted on a separate server. I do have a working mysql client. On attempting sudo pip install MySQL-python, I get an error of EnvironmentError: mysql_config not found, which as far as I can tell is a command that just references \/etc\/my.cnf, which also isn't present. Before I go on some wild goose chase creating spurious my.cnf files, is there an easy way to get MySQL-python installed?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":27898,"Q_Id":13922955,"Users Score":21,"Answer":"So it transpires that mysql_config is part of mysql-devel. mysql-devel is for compiling the mysql client, not the server. 
Installing mysql-devel allows the installation of MySQL-python.","Q_Score":13,"Tags":"centos,mysql-python","A_Id":13932070,"CreationDate":"2012-12-17T22:01:00.000","Title":"Installing MySQL-python without mysql-server on CentOS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"trying to figure out whether this is a bug or by design. when no query_string is specified for a query, the SearchResults object is NOT sorted by the requested column. for example, here is some logging to show the problem:\nResults are returned unsorted on return index.search(query):\nquery_string = ''\nsort_options string: search.SortOptions(expressions=[search.SortExpression(expression=u'firstname', direction='ASCENDING', default_value=u'')], limit=36)\nResults are returned sorted on return index.search(query):\nquery_string = 'test'\nsort_options string: search.SortOptions(expressions=[search.SortExpression(expression=u'firstname', direction='ASCENDING', default_value=u'')], limit=36)\nThis is how I'm constructing my query for both cases (options has limit, offset and sort_options parameters):\nquery = search.Query(query_string=query_string, options=options)","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":103,"Q_Id":13953039,"Users Score":-2,"Answer":"Could be a bug in the way you build your query, since it's not shown.\nCould be that you don't have an index for the case that isn't working.","Q_Score":8,"Tags":"python,google-app-engine,gae-search","A_Id":13954922,"CreationDate":"2012-12-19T13:02:00.000","Title":"sort_options only applied when query_string is not empty?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Im getting this error when trying to run python \/ django after installing psycopg2:\nError: dlopen(\/Users\/macbook\/Envs\/medint\/lib\/python2.7\/site-packages\/psycopg2\/_psycopg.so, 2): Symbol not found: _PQbackendPID\n Referenced from: \/Users\/macbook\/Envs\/medint\/lib\/python2.7\/site-packages\/psycopg2\/_psycopg.so\n Expected in: dynamic lookup\nAnyone?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3182,"Q_Id":14001116,"Users Score":6,"Answer":"on Mojave macOS, I solved it by running below steps:\n\npip uninstall psycopg2 \npip install psycopg2-binary","Q_Score":1,"Tags":"python,django,postgresql,heroku,psycopg2","A_Id":59063813,"CreationDate":"2012-12-22T08:02:00.000","Title":"Psycopg2 Symbol not found: _PQbackendPID Expected in: dynamic lookup","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. 
Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that).\nNote: the processing is sort of your basic ETL type of stuff - nothing terrible fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts onto of it. But, again, from my initial testing, this would be way to slow.\nNote: A brand new PostgreSQL 9.1 database will be it's final destination.\nSo, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. \nAgain, I'm a python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but never any experience with Fabric.\nI know from reading\/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the time line doesn't allow for a learning curve or hiring someone. I should also not, that it's kind of a one time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year.\nAlso, I know this is not a simple stackoverflow-kind of question (something like \"how can I reverse a list in python\"). But, what I'm hoping for is someone to read this and \"say, I do something similar and use XYZ... it's great!\"\nI guess what I'm asking is does anybody know of any thing out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me learning a new technology like Hadoop or learning a new language) \nThanks for reading. I look forward to any suggestions.","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1819,"Q_Id":14006363,"Users Score":2,"Answer":"I often use a combination of SQS\/S3\/EC2 for this type of batch work. Queue up messages in SQS for all of the work that needs to be performed (chunked into some reasonably small chunks). Spin up N EC2 instances that are configured to start reading messages from SQS, performing the work and putting results into S3, and then, and only then, delete the message from SQS.\nYou can scale this to crazy levels and it has always worked really well for me. In your case, I don't know if you would store results in S3 or go right to PostgreSQL.","Q_Score":5,"Tags":"python,fabric,boto,data-processing","A_Id":14012685,"CreationDate":"2012-12-22T20:30:00.000","Title":"Processing a large amount of data in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that).\nNote: the processing is sort of your basic ETL type of stuff - nothing terrible fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts onto of it. 
But, again, from my initial testing, this would be way to slow.\nNote: A brand new PostgreSQL 9.1 database will be it's final destination.\nSo, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. \nAgain, I'm a python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but never any experience with Fabric.\nI know from reading\/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the time line doesn't allow for a learning curve or hiring someone. I should also not, that it's kind of a one time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year.\nAlso, I know this is not a simple stackoverflow-kind of question (something like \"how can I reverse a list in python\"). But, what I'm hoping for is someone to read this and \"say, I do something similar and use XYZ... it's great!\"\nI guess what I'm asking is does anybody know of any thing out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me learning a new technology like Hadoop or learning a new language) \nThanks for reading. I look forward to any suggestions.","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":1819,"Q_Id":14006363,"Users Score":1,"Answer":"You might benefit from hadoop in form of Amazon Elastic Map Reduce. Without getting too deep it can be seen as a way to apply some logic to massive data volumes in parralel (Map stage).\nThere is also hadoop technology called hadoop streaming - which enables to use scripts \/ executables in any languages (like python). \nAnother hadoop technology you can find useful is sqoop - which moves data between HDFS and RDBMS.","Q_Score":5,"Tags":"python,fabric,boto,data-processing","A_Id":14009860,"CreationDate":"2012-12-22T20:30:00.000","Title":"Processing a large amount of data in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that).\nNote: the processing is sort of your basic ETL type of stuff - nothing terrible fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts onto of it. But, again, from my initial testing, this would be way to slow.\nNote: A brand new PostgreSQL 9.1 database will be it's final destination.\nSo, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. \nAgain, I'm a python developer, so it seems like Fabric + boto might be promising. 
I have used boto from time to time, but never any experience with Fabric.\nI know from reading\/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the time line doesn't allow for a learning curve or hiring someone. I should also not, that it's kind of a one time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year.\nAlso, I know this is not a simple stackoverflow-kind of question (something like \"how can I reverse a list in python\"). But, what I'm hoping for is someone to read this and \"say, I do something similar and use XYZ... it's great!\"\nI guess what I'm asking is does anybody know of any thing out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me learning a new technology like Hadoop or learning a new language) \nThanks for reading. I look forward to any suggestions.","AnswerCount":5,"Available Count":4,"Score":0.1194272985,"is_accepted":false,"ViewCount":1819,"Q_Id":14006363,"Users Score":3,"Answer":"Did you do some performance measurements: Where are the bottlenecks? Is it CPU bound, IO bound, DB bound?\nWhen it is CPU bound, you can try a python JIT like pypy.\nWhen it is IO bound, you need more HDs (and put some striping md on them).\nWhen it is DB bound, you can try to drop all the indexes and keys first.\nLast week I imported the Openstreetmap DB into a postgres instance on my server. The input data were about 450G. The preprocessing (which was done in JAVA here) just created the raw data files which could be imported with postgres 'copy' command. After importing the keys and indices were generated.\nImporting all the raw data took about one day - and then it took several days to build keys and indices.","Q_Score":5,"Tags":"python,fabric,boto,data-processing","A_Id":14006535,"CreationDate":"2012-12-22T20:30:00.000","Title":"Processing a large amount of data in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that).\nNote: the processing is sort of your basic ETL type of stuff - nothing terrible fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts onto of it. But, again, from my initial testing, this would be way to slow.\nNote: A brand new PostgreSQL 9.1 database will be it's final destination.\nSo, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. \nAgain, I'm a python developer, so it seems like Fabric + boto might be promising. 
I have used boto from time to time, but never any experience with Fabric.\nI know from reading\/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the time line doesn't allow for a learning curve or hiring someone. I should also not, that it's kind of a one time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year.\nAlso, I know this is not a simple stackoverflow-kind of question (something like \"how can I reverse a list in python\"). But, what I'm hoping for is someone to read this and \"say, I do something similar and use XYZ... it's great!\"\nI guess what I'm asking is does anybody know of any thing out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me learning a new technology like Hadoop or learning a new language) \nThanks for reading. I look forward to any suggestions.","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":1819,"Q_Id":14006363,"Users Score":2,"Answer":"I did something like this some time ago, and my setup was like\n\none multicore instance (x-large or more), that converts raw source files (xml\/csv) into an intermediate format. You can run (num-of-cores) copies of the convertor script on it in parallel. Since my target was mongo, I used json as an intermediate format, in your case it will be sql.\nthis instance has N volumes attached to it. Once a volume becomes full, it gets detached and attached to the second instance (via boto).\nthe second instance runs a DBMS server and a script which imports prepared (sql) data into the db. I don't know anything about postgres, but I guess it does have a tool like mysql or mongoimport. If yes, use that to make bulk inserts instead of making queries via a python script.","Q_Score":5,"Tags":"python,fabric,boto,data-processing","A_Id":14006466,"CreationDate":"2012-12-22T20:30:00.000","Title":"Processing a large amount of data in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, a friend and I are currently writing a panel (in python\/django) for managing gameservers.\nEach client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'.\nThe passwords would be generated randomly and presented to the user in the panel, however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a a client forgets their password.\nResetting the password is something we would try to avoid as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes.\nWhat would be a secure (but without sacrificing ease of use for the clients) way to go about this?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":408,"Q_Id":14008232,"Users Score":1,"Answer":"Your question embodies a contradiction in terms. Either you don't want reversibility or you do. 
You will have to choose.\nThe usual technique is to hash the passwords and to provide a way for the user to reset his own password on sufficient alternative proof of identity. You should never display a password to anybody, for legal non-repudiability reasons. If you don't know what that means, ask a lawyer.","Q_Score":2,"Tags":"python,mysql,django,security,encryption","A_Id":14008320,"CreationDate":"2012-12-23T02:46:00.000","Title":"Storing MySQL Passwords","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So, a friend and I are currently writing a panel (in python\/django) for managing gameservers.\nEach client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'.\nThe passwords would be generated randomly and presented to the user in the panel, however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a a client forgets their password.\nResetting the password is something we would try to avoid as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes.\nWhat would be a secure (but without sacrificing ease of use for the clients) way to go about this?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":408,"Q_Id":14008232,"Users Score":4,"Answer":"Though this is not the answer you were looking for, you only have three possibilities\n\nstore the passwords plaintext (ugh!)\nstore with a reversible encryption, e.g. RSA (http:\/\/stackoverflow.com\/questions\/4484246\/encrypt-and-decrypt-text-with-rsa-in-php)\ndo not store it; clients can only reset password, not view it\n\nThe second choice is a secure way, as RSA is also used for TLS encryption within the HTTPS protocol used by your bank of choice ;)","Q_Score":2,"Tags":"python,mysql,django,security,encryption","A_Id":14008264,"CreationDate":"2012-12-23T02:46:00.000","Title":"Storing MySQL Passwords","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing myself a blog in python, and am to put it up to GitHub.\nOne of the file in this project will be a script that create the required tables in DB at the very beginning. Since I've gonna put this file on a public repository, I expose all DB structure.\nIs it dangerous if I do so? \nIf yes, I am thinking of an alternative to put column names in a separate config file and not upload column names of my blog. What are others ways of avoiding exposing schemas?","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":599,"Q_Id":14039877,"Users Score":3,"Answer":"It's not dangerous if you secure access to database. You are exposing only your know-how. 
Once somebody gains access to database, it's easy to list database structure.","Q_Score":2,"Tags":"python,database,open-source,schema,database-schema","A_Id":14039904,"CreationDate":"2012-12-26T11:20:00.000","Title":"Is it dangerous if I expose my database schema in an open source project?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing myself a blog in python, and am to put it up to GitHub.\nOne of the file in this project will be a script that create the required tables in DB at the very beginning. Since I've gonna put this file on a public repository, I expose all DB structure.\nIs it dangerous if I do so? \nIf yes, I am thinking of an alternative to put column names in a separate config file and not upload column names of my blog. What are others ways of avoiding exposing schemas?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":599,"Q_Id":14039877,"Users Score":0,"Answer":"There is a difference between sharing database and database schema.\nYou can comment the values of database machine\/username\/password in your code and publish the code on github.\nAs a proof of concept, you can host your application on cloud(without disclosing its database credentials) and add its link to your github readme file.","Q_Score":2,"Tags":"python,database,open-source,schema,database-schema","A_Id":14039945,"CreationDate":"2012-12-26T11:20:00.000","Title":"Is it dangerous if I expose my database schema in an open source project?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing myself a blog in python, and am to put it up to GitHub.\nOne of the file in this project will be a script that create the required tables in DB at the very beginning. Since I've gonna put this file on a public repository, I expose all DB structure.\nIs it dangerous if I do so? \nIf yes, I am thinking of an alternative to put column names in a separate config file and not upload column names of my blog. What are others ways of avoiding exposing schemas?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":599,"Q_Id":14039877,"Users Score":0,"Answer":"I think it is dangerous, as if a SQL injection vulnerability exists in your website, the scheme will help the attacker to retrieve all important data easier.","Q_Score":2,"Tags":"python,database,open-source,schema,database-schema","A_Id":21087156,"CreationDate":"2012-12-26T11:20:00.000","Title":"Is it dangerous if I expose my database schema in an open source project?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Need a way to improve performance on my website's SQL based Activity Feed. We are using Django on Heroku.\nRight now we are using actstream, which is a Django App that implements an activity feed using Generic Foreign Keys in the Django ORM. 
Basically, every action has generic foreign keys to its actor and to any objects that it might be acting on, like this:\nAction:\n (Clay - actor) wrote a (comment - action object) on (Andrew's review of Starbucks - target)\nAs we've scaled, its become way too slow, which is understandable because it relies on big, expensive SQL joins.\nI see at least two options: \n\nPut a Redis layer on top of the SQL database and get activity feeds from there. \nTry to circumvent the Django ORM and do all the queries in raw SQL, which I understand can improve performance.\n\nAny one have thoughts on either of these two, or other ideas, I'd love to hear them.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":499,"Q_Id":14073030,"Users Score":1,"Answer":"You said redis? Everything is better with redis.\nCaching is one of the best ideas in software development, no mather if you use Materialized Views you should also consider trying to cache those, believe me your users will notice the difference.","Q_Score":4,"Tags":"python,sql,django,redis,feed","A_Id":14074169,"CreationDate":"2012-12-28T17:04:00.000","Title":"Good way to make a SQL based activity feed faster","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Need a way to improve performance on my website's SQL based Activity Feed. We are using Django on Heroku.\nRight now we are using actstream, which is a Django App that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any objects that it might be acting on, like this:\nAction:\n (Clay - actor) wrote a (comment - action object) on (Andrew's review of Starbucks - target)\nAs we've scaled, its become way too slow, which is understandable because it relies on big, expensive SQL joins.\nI see at least two options: \n\nPut a Redis layer on top of the SQL database and get activity feeds from there. \nTry to circumvent the Django ORM and do all the queries in raw SQL, which I understand can improve performance.\n\nAny one have thoughts on either of these two, or other ideas, I'd love to hear them.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":499,"Q_Id":14073030,"Users Score":1,"Answer":"Went with an approach that sort of combined the two suggestions.\nWe created a master list of every action in the database, which included all the information we needed about the actions, and stuck it in Redis. Given an action ID, we can now do a Redis look up on it and get a dictionary object that is ready to be returned to the front end.\nWe also created action id lists that correspond to all the different types of activity streams that are available to a user. So given a user id, we have his friends' activity, his own activity, favorite places activity, etc, available for look up. (These I guess correspond somewhat to materialized views, although they are in Redis, not in PSQL.)\nSo we get a user's feed as a list of action ids. Then we get the details of those actions by look ups on the ids in the master action list. 
Then we return the feed to the front end.\nThanks for the suggestions, guys.","Q_Score":4,"Tags":"python,sql,django,redis,feed","A_Id":14201647,"CreationDate":"2012-12-28T17:04:00.000","Title":"Good way to make a SQL based activity feed faster","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to know as to where is the value stored for a one2many table initially in OpenERP6.1?\ni.e if we create a record for a one2many table,this record will be actually\nsaved to the database table only after saving the record of the main table\nassociated with this, even though we can create many records(rows) for one2many\ntable.\nWhere are these rows stored?\nAre they stored in any OpenERP memory variable? if so which is that variable\nor function with which we can access those..\nPlease help me out on this.\nThanks in Advance!!!","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1493,"Q_Id":14119208,"Users Score":2,"Answer":"When saving a new record in openerp, a dictionary will be generated with all the fields having data as keys and its data as values. If the field is a one2many and have many lines, then a list of dictionaries will be the value for the one2many field. You can modify it by overriding the create and write functions in openerp.","Q_Score":2,"Tags":"python,openerp","A_Id":14119351,"CreationDate":"2013-01-02T08:57:00.000","Title":"Where is the value stored for a one2many table initially in OpenERP6.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to know as to where is the value stored for a one2many table initially in OpenERP6.1?\ni.e if we create a record for a one2many table,this record will be actually\nsaved to the database table only after saving the record of the main table\nassociated with this, even though we can create many records(rows) for one2many\ntable.\nWhere are these rows stored?\nAre they stored in any OpenERP memory variable? if so which is that variable\nor function with which we can access those..\nPlease help me out on this.\nThanks in Advance!!!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1493,"Q_Id":14119208,"Users Score":0,"Answer":"One2Many field is child parent relation in OpenERP. One2Many is just logical field there is no effect in database for that.\nIf you are creating Sale order then Sale order line is One2Many in Sale order model. 
But if you will not put Many2One in Sale order line then One2Many in Sale order will not work.\nMany2One field put foreign key for the related model in the current table.","Q_Score":2,"Tags":"python,openerp","A_Id":14120545,"CreationDate":"2013-01-02T08:57:00.000","Title":"Where is the value stored for a one2many table initially in OpenERP6.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to compare the value of a given column at each row against another value, and if the values are equal, I want to copy the whole row to another spreadsheet.\nHow can I do this using Python?\nTHANKS!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14135,"Q_Id":14188923,"Users Score":0,"Answer":"For \"xls\" files it's possible to use the xlutils package. It's currently not possible to copy objects between workbooks in openpyxl due to the structure of the Excel format: there are lots of dependencies all over the place that need to be managed. It is, therefore, the responsibility of client code to copy everything required manually. If time permits we might try and port some of the xlutils functionality to openpyxl.","Q_Score":3,"Tags":"python,excel,xlrd,xlwt,openpyxl","A_Id":30048138,"CreationDate":"2013-01-07T02:04:00.000","Title":"How to copy a row of Excel sheet to another sheet using Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had GAE 1.4 installed in my local UBUNTU system and everything was working fine. Only warning I was getting at that time was something like \"You are using old GAE SDK 1.4.\" So, to get rid of that I have done following things:\n\nI removed old version of GAE and installed GAE 1.7. Along with that I have\nalso changed my djangoappengine folder with latest version.\nI have copied new version of GAE to \/usr\/local directory since my ~\/bashrc file PATH variable pointing to GAE to this directory.\n\nNow, I am getting error\ndjango.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend.\n Try using django.db.backends.XXX, where XXX is one of:\n 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'\nError was: No module named utils\nI don't think there is any problem of directory structure since earlier it was running fine.\nDoes anyone has any idea ? \nYour help will be highly appreciated.\n-Sunil\n.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":191,"Q_Id":14307581,"Users Score":1,"Answer":"Did you update djangoappengine without updating django-nonrel and djangotoolbox?\nWhile I haven't upgraded to GAE 1.7.4 yet, I'm running 1.7.2 with no problems. 
I suspect your problem is not related to the GAE SDK but rather your django-nonrel installation has mismatching pieces.","Q_Score":0,"Tags":"python,google-app-engine,django-nonrel","A_Id":14368275,"CreationDate":"2013-01-13T20:03:00.000","Title":"Django-nonrel broke after installing new version of Google App Engine SDK","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I had GAE 1.4 installed in my local UBUNTU system and everything was working fine. Only warning I was getting at that time was something like \"You are using old GAE SDK 1.4.\" So, to get rid of that I have done following things:\n\nI removed old version of GAE and installed GAE 1.7. Along with that I have\nalso changed my djangoappengine folder with latest version.\nI have copied new version of GAE to \/usr\/local directory since my ~\/bashrc file PATH variable pointing to GAE to this directory.\n\nNow, I am getting error\ndjango.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend.\n Try using django.db.backends.XXX, where XXX is one of:\n 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'\nError was: No module named utils\nI don't think there is any problem of directory structure since earlier it was running fine.\nDoes anyone has any idea ? \nYour help will be highly appreciated.\n-Sunil\n.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":191,"Q_Id":14307581,"Users Score":0,"Answer":"Actually I changed the google app engine path in \/.bashrc file and restarted the system. It solved the issue. I think since I was not restarting the system after .bashrc changes, hence it was creating problem.","Q_Score":0,"Tags":"python,google-app-engine,django-nonrel","A_Id":14382654,"CreationDate":"2013-01-13T20:03:00.000","Title":"Django-nonrel broke after installing new version of Google App Engine SDK","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm working on an NDB based Google App Engine application that needs to keep track of the day\/night cycle of a large number (~2000) fixed locations. Because the latitude and longitude don't ever change, I can precompute them ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are:\n\nTo precompute a year's worth of sunrises into datetime objects, put\nthem into a list, pickle the list and put it into a PickleProperty\n, but put the list into a JsonProperty\nGo with DateTimeProperty and set repeated=True\n\nNow, I'd like the very next sunrise\/sunset property to be indexed, but that can be popped from the list and places into it's own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed. \nDoes anyone know the relative effort -in terms of indexing and CPU load for these three approaches? 
Does repeated=True have an effect on the indexing?\nThanks,\nDave","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":537,"Q_Id":14343871,"Users Score":0,"Answer":"I would say precompute those structures and output them into hardcoded python structures that you save in a generated python file.\nJust read those structures into memory as part of your instance startup.\nFrom your description, there's no reason to compute these values at runtime, and there's no reason to store it in the datastore since that has a cost associated with it, as well as some latency for the RPC.","Q_Score":1,"Tags":"python,google-app-engine,python-2.7","A_Id":14365980,"CreationDate":"2013-01-15T17:59:00.000","Title":"Best strategy for storing precomputed sunrise\/sunset data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm working on an NDB based Google App Engine application that needs to keep track of the day\/night cycle of a large number (~2000) fixed locations. Because the latitude and longitude don't ever change, I can precompute them ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are:\n\nTo precompute a year's worth of sunrises into datetime objects, put\nthem into a list, pickle the list and put it into a PickleProperty\n, but put the list into a JsonProperty\nGo with DateTimeProperty and set repeated=True\n\nNow, I'd like the very next sunrise\/sunset property to be indexed, but that can be popped from the list and places into it's own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed. \nDoes anyone know the relative effort -in terms of indexing and CPU load for these three approaches? Does repeated=True have an effect on the indexing?\nThanks,\nDave","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":537,"Q_Id":14343871,"Users Score":1,"Answer":"For 2000 immutable data points - just calculate them when instance starts or on first use, then keep it in memory. This will be the cheapest and fastest.","Q_Score":1,"Tags":"python,google-app-engine,python-2.7","A_Id":14345283,"CreationDate":"2013-01-15T17:59:00.000","Title":"Best strategy for storing precomputed sunrise\/sunset data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have income table which contain recurrence field. Now if user select recurrence_type as \"Monthly\" or \"Daily\" then I have to add row into income table \"daily\" or \"monthly\" . Is there any way in Mysql which will add data periodically into table ? 
I am using Django Framework for developing web application.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":214,"Q_Id":14344473,"Users Score":0,"Answer":"Used django-celery package and created job in it to update the data periodically","Q_Score":1,"Tags":"python,mysql,django","A_Id":27122957,"CreationDate":"2013-01-15T18:33:00.000","Title":"add data to table periodically in mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have income table which contain recurrence field. Now if user select recurrence_type as \"Monthly\" or \"Daily\" then I have to add row into income table \"daily\" or \"monthly\" . Is there any way in Mysql which will add data periodically into table ? I am using Django Framework for developing web application.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":214,"Q_Id":14344473,"Users Score":1,"Answer":"As I know there is no such function in MySQL. Even if MySQL could do it, this should not be its job. Such functions should be part of the business logic in your application.\nThe normal way is to setup the cron job in server. The cron job will wake up at the time you set, and then call your python script or SQL to fulfil the adding data work. And scripts are much better than direct SQL.","Q_Score":1,"Tags":"python,mysql,django","A_Id":14344610,"CreationDate":"2013-01-15T18:33:00.000","Title":"add data to table periodically in mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have written a simple blog using Python in google app engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for no of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. \nHow can I make the voting secure? I was thinking of sending a POST\/GET when someone clicks on the vote button which my python script will then read and update the database accordingly. But then I realized that this was insecure. All suggestions are welcome.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1105,"Q_Id":14347244,"Users Score":1,"Answer":"If voting is only for subscribed users, then enable voting after members log in to your site. \nIf not, then you can track users' IP addresses so one IP address can vote once for a single article in a day.\nBy the way, what kind of security do you need?","Q_Score":4,"Tags":"python,mysql,google-app-engine,jinja2","A_Id":14347324,"CreationDate":"2013-01-15T21:27:00.000","Title":"How to implement a 'Vote up' System for posts in my blog?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have written a simple blog using Python in google app engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for no of votes received. Can somebody help me set up voting buttons for individual posts? 
I am using Jinja2 as the templating engine. \nHow can I make the voting secure? I was thinking of sending a POST\/GET when someone clicks on the vote button which my python script will then read and update the database accordingly. But then I realized that this was insecure. All suggestions are welcome.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1105,"Q_Id":14347244,"Users Score":4,"Answer":"First, keep in mind that there is no such thing as \"secure\", just \"secure enough for X\". There's always a tradeoff\u2014more secure means more annoying for your legitimate users and more expensive for you.\nGetting past these generalities, think about your specific case. There is nothing that has a 1-to-1 relationship with users. IP addresses or computers are often shared by multiple people, and at the same time, people often have multiple addresses or computers. Sometimes, something like this is \"good enough\", but from your question, it doesn't sound like it would be.\nHowever, with user accounts, the only false negatives come from people intentionally creating multiple accounts or hacking others' accounts, and there are no false positives. And there's a pretty linear curve in the annoyance\/cost vs. security tradeoff, all the way from \"\"Please don't create sock puppets\" to CAPTCHA to credit card checks to web of trust\/reputation score to asking for real-life info and hiring an investigator to check it out.\nIn real life, there's often a tradeoff between more than just these two things. For example, if you're willing to accept more cheating if it directly means more money for you, you can just charge people real money to vote (as with those 1-900 lines that many TV shows use).\n\n\nHow do Reddit and Digg check multiple voting from a single registered user?\n\nI don't know exactly how Reddit or Digg does things, but the general idea is simple: Keep track of individual votes.\nNormally, you've got your users stored in a SQL RDBMS of some kind. So, you just add a Votes table with columns for user ID, question ID, and answer. (If you're using some kind of NoSQL solution, it should be easy to translate appropriately. For example, maybe there's a document for each question, and the document is a dictionary mapping user IDs to answers.) When a user votes, just INSERT a row into the database.\nWhen putting together the voting interface, whether via server-side template or client-side AJAX, call a function that checks for an existing vote. If there is one, instead of showing the vote controls, show some representation of \"You already voted Yes.\" You also want to check again at vote-recording time, to make sure someone doesn't hack the system by opening 200 copies of the page, all of which allow voting (because the user hasn't voted yet), and then submitting 200 Yes votes, but with a SQL database, this is as simple as making Question, User into a multi-column unique key.\nIf you want to allow vote changing or undoing, just add more controls to the interface, and handle them with UPDATE and DELETE calls. 
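To make the vote-recording scheme described above concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are hypothetical, and a real deployment would of course target the blog's own database and user system.

import sqlite3

conn = sqlite3.connect("blog.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS votes (
        user_id INTEGER NOT NULL,
        post_id INTEGER NOT NULL,
        answer  INTEGER NOT NULL,        -- e.g. +1 for an upvote
        UNIQUE (user_id, post_id)        -- one vote per user per post
    )""")

def record_vote(user_id, post_id, answer=1):
    # The UNIQUE constraint is what prevents double voting: a second
    # INSERT for the same (user, post) pair raises IntegrityError.
    try:
        with conn:
            conn.execute("INSERT INTO votes (user_id, post_id, answer) VALUES (?, ?, ?)",
                         (user_id, post_id, answer))
        return True
    except sqlite3.IntegrityError:
        return False                     # this user already voted on this post

def vote_tally(post_id):
    # Generate the tally on the fly, as suggested above.
    cur = conn.execute("SELECT answer, COUNT(*) FROM votes WHERE post_id = ? GROUP BY answer",
                       (post_id,))
    return dict(cur.fetchall())

The same idea carries over directly to MySQL or the App Engine datastore; only the duplicate-key handling differs.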
If you want to get really fancy\u2014like this site, which allows undoing if you have enough rep and if either your original vote was in the past 5 minutes or the answer has been edited since your vote (or something like that)\u2014you may have to keep some extra info, like record a row for each voting action, with a timestamp, instead of just a single answer for each user.\nThis design also means that, instead of keeping a count somewhere, you generate the vote tally on the fly by, e.g., SELECT COUNT(*) FROM Votes WHERE Question=? GROUP BY Answer. But, as usual, if this is too slow, you can always optimize-by-denormalizing and keep the totals along with the actual votes. Similarly, if your user base is huge, you may want to archive votes on old questions and get them out of the operational database. And so on.","Q_Score":4,"Tags":"python,mysql,google-app-engine,jinja2","A_Id":14349144,"CreationDate":"2013-01-15T21:27:00.000","Title":"How to implement a 'Vote up' System for posts in my blog?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My question is a bit complex and I am new to OpenERP.\nI have an external database and an OpenERP database; the external one isn't PostgreSQL. \nMy job is that I need to synchronize the partners in the two databases, \nthe external one being the more important. This means that if the external one's data changes, so does OpenERP's, but if OpenERP's data changes, nothing changes on the external one.\n\nI can access the external database, and using XML-RPC I have access\nto OpenERP's as well. \nI can import data from the external database simply with XML-RPC, but \nthe problem is the sync.\nI can't just INSERT the modified partner and delete the old one,\nbecause I have no way to identify the old one.\nI need to UPDATE it. But then I need an ID that says which is which: \nan external ID.\nTo my knowledge OpenERP can handle external IDs.\n\nHow does this work? And how can I add an external ID to my res.partner using this?\nI was told that I can't create a new module for this alone; I need to use the internal IDs to make it work.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4853,"Q_Id":14356218,"Users Score":0,"Answer":"Add an integer field to the res.partner table for storing the external ID in both databases. When data is retrieved from the external server and added to your OpenERP database, store the external ID in the res.partner record on the local server, and also save the ID of the newly created partner record in the external server's partner record. Then, the next time the external partner record is updated, you can search for the external ID on your local server and update that record. \nPlease check the OpenERP module base_synchronization and read its code, which will be helpful for you.","Q_Score":6,"Tags":"python,xml-rpc,openerp","A_Id":14356856,"CreationDate":"2013-01-16T10:27:00.000","Title":"Adding external Ids to Partners in OpenERP without a new module","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on a web-app that's very heavily database driven. 
I'm nearing the initial release and so I've locked down the features for this version, but there are going to be lots of other features implemented after release. These features will inevitably require some modification to the database models, so I'm concerned about the complexity of migrating the database on each release. What I'd like to know is how much should I concern myself with locking down a solid database design now so that I can release quickly, against trying to anticipate certain features now so that I can build it into the database before release? I'm also anticipating finding flaws with my current model and would probably then want to make changes to it, but if I release the app and then data starts coming in, migrating the data would be a difficult task I imagine. Are there conventional methods to tackle this type of problem? A point in the right direction would be very useful.\nFor a bit of background I'm developing an asset management system for a CG production pipeline. So lots of pieces of data with lots of connections between them. It's web-based, written entirely in Python and it uses SQLAlchemy with a SQLite engine.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":120,"Q_Id":14364214,"Users Score":2,"Answer":"Some thoughts for managing databases for a production application:\n\nMake backups nightly. This is crucial because if you try to do an update (to the data or the schema), and you mess up, you'll need to be able to revert to something more stable.\nCreate environments. You should have something like a local copy of the database for development, a staging database for other people to see and test before going live and of course a production database that your live system points to.\nMake sure all three environments are in sync before you start development locally. This way you can track changes over time.\nStart writing scripts and version them for releases. Make sure you store these in a source control system (SVN, Git, etc.) You just want a historical record of what has changed and also a small set of scripts that need to be run with a given release. Just helps you stay organized.\nDo your changes to your local database and test it. Make sure you have scripts that do two things, 1) Scripts that modify the data, or the schema, 2) Scripts that undo what you've done in case things go wrong. Test these over and over locally. Run the scripts, test and then rollback. Are things still ok?\nRun the scripts on staging and see if everything is still ok. Just another chance to prove your work is good and that if needed you can undo your changes.\nOnce staging is good and you feel confident, run your scripts on the production database. Remember you have scripts to change data (update, delete statements) and scripts to change schema (add fields, rename fields, add tables).\n\nIn general take your time and be very deliberate in your actions. The more disciplined you are the more confident you'll be. 
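As a rough illustration of the "scripts that modify, scripts that undo" point above, here is a minimal sketch assuming the SQLite engine mentioned in the question; the table and column names are made up, and a real release script would be versioned in source control as described.

import sqlite3

def upgrade(db_path):
    # Forward change for this release: add a new column.
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute("ALTER TABLE asset ADD COLUMN review_status TEXT DEFAULT 'pending'")
    conn.close()

def downgrade(db_path):
    # Undo script: older versions of SQLite cannot drop a column directly,
    # so rebuild the table without the new column instead.
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute("CREATE TABLE asset_backup AS SELECT id, name, path FROM asset")
        conn.execute("DROP TABLE asset")
        conn.execute("ALTER TABLE asset_backup RENAME TO asset")
    conn.close()

if __name__ == "__main__":
    upgrade("staging_copy.db")   # run against a copy first, as described above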
Updating the database can be scary, so don't rush things, write out your plan of action, and test, test, test!","Q_Score":1,"Tags":"python,database,migration,sqlalchemy","A_Id":14364804,"CreationDate":"2013-01-16T17:29:00.000","Title":"How to approach updating a database-driven application after release?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have been trying to get my head around Django over the last week or two. It's slowly starting to make some sense and I am really liking it. \nMy goal is to replace a fairly messy excel spreadsheet with a database and frontend for my users. This would involve pulling the data out of a table, presenting it in a web tabular format, and allowing changes to be made through text fields and drop down menus, with a simple update button that will update all changes to the DB.\nMy question is, will the built in Django Forms functionality be the best solution? Or would I create some sort of for loop for my objects and wrap them around html form syntax in my template? I'm just not too sure how to approach the solution.\nApologies if this seems like a simple question; I just feel like there are maybe a few ways to do it, but maybe there is one perfect way.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2045,"Q_Id":14370576,"Users Score":1,"Answer":"Exporting the Excel sheet into Django and having it rendered as text fields is not a simple two-step process.\nYou need to know how Django works.\nFirst, you need to export the data into a MySQL database using either some language or some ready-made tools.\nThen you need to make a Model for that table, and then you can use the Django admin to edit the data.","Q_Score":3,"Tags":"python,database,django,frontend","A_Id":14371043,"CreationDate":"2013-01-17T01:00:00.000","Title":"Custom Django Database Frontend","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a webapp in bottle.\nI have a small interface that lets users run SQL statements.\nSometimes it takes about 5 seconds until the user gets a result because the DB is quite big and old. 
\nWhat I want to do is the following:\n1.Starte the query in a thread\n2.Give the user a response right away and have ajax poll for the result \nThere is one thing that I'm not sure of....Where do I store the result of the query?\nShould I store it in a DB ?\nShould I store it in a variable inside my webapp ?\nWhat do you guys think would be best ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":105,"Q_Id":14377250,"Users Score":0,"Answer":"This would be a good use for something like memcached.","Q_Score":1,"Tags":"python,database,multithreading","A_Id":14377893,"CreationDate":"2013-01-17T10:43:00.000","Title":"Python 3 - SQL Result - where to store it","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I try installing mysql-python using below command,\nmacbook-user$ sudo pip install MYSQL-python\nI get these messages:\n\n\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/include\/python2.7\/pyconfig.h:891:1: warning: this is the location of the previous definition\n\/usr\/bin\/lipo: \/tmp\/_mysql-LtlmLe.o and \/tmp\/_mysql-thwkfu.o have the same architectures (i386) and can't be in the same fat output file\nclang: error: lipo command failed with exit code 1 (use -v to see invocation)\nerror: command 'clang' failed with exit status 1\n\nDoes anyone know how to solve this problem? Help me please!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":506,"Q_Id":14399223,"Users Score":0,"Answer":"At first glance it looks like damaged pip package. Have you tried easy_install instead with the same package?","Q_Score":1,"Tags":"python,mysql,django,pip,mysql-python","A_Id":14399388,"CreationDate":"2013-01-18T12:41:00.000","Title":"clang error when installing MYSQL-python on Lion-mountain (Mac OS X 10.8)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I fired redis-py's bgsave() command, the return value was False, but I'm pretty sure the execution was successful because I've checked with lastsave().\nHowever, if I use save() the return value would be True after successful execution.\nCould anyone please explain what False indicates for bgsave()? 
Not sure if it has anything to do with bgsave() being executed in the background.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":778,"Q_Id":14417846,"Users Score":2,"Answer":"Thanks to Pavel Anossov, after reading the code of client.py, I found out that responses from 2 commands (BGSAVE and BGREWRITEAOF) were not converted from bytes to str, and this caused the problem in Python 3.\nTo fix this issue, just change lambda r: r == to lambda r: nativestr(r) == for these two commands in RESPONSE_CALLBACKS.","Q_Score":1,"Tags":"python,redis","A_Id":14418853,"CreationDate":"2013-01-19T19:10:00.000","Title":"Why does redis-py's bgsave() command return False after successful execution?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a chat bot that uses past conversations to generate its responses. Currently I use text files to store all the data but I want to use a database instead so that multiple instances of the bot can use it at the same time.\nHow should I structure this database?\nMy first idea was to keep a main table like create table Sessions (startTime INT,ip INT, botVersion REAL, length INT, tableName TEXT). Then for each conversation I create table (timestamp INT, message TEXT) with all the messages that were sent or received during that conversation. When the conversation is over, I insert the name of the new table into Sessions(tableName). Is it ok to programmatically create tables in this manner? I am asking because most SQL tutorials seem to suggest that tables are created when the program is initialized.\nAnother way to do this is to have a huge create table Messages(id INT, message TEXT) table that stores every message that was sent or received. When a conversation is over, I can add a new entry to Sessions that includes the id used during that conversation so that I can look up all the messages sent during a certain conversation. I guess one advantage of this is that I don't need to have hundreds or thousands of tables. \nI am planning on using SQLite despite its low concurrency since each instance of the bot may make thousands of reads before generating a response (which will result in one write). Still, if another relational database is better suited for this task, please comment.\nNote: There are other questions on SO about storing chat logs in databases but I am specifically looking for how it should be structured and feedback on the above ideas.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1198,"Q_Id":14430856,"Users Score":1,"Answer":"Don't use a different table for each conversation. 
Instead add a \"conversation\" column to your single table.","Q_Score":1,"Tags":"python,sql,database,sqlite,database-design","A_Id":14430911,"CreationDate":"2013-01-21T00:13:00.000","Title":"Storing chat logs in relational database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is my program\n\nimport MySQLdb as mdb\nfrom MySQLdb import IntegrityError\nconn = mdb.connect(\"localhost\", \"asdf\", \"asdf\", \"asdf\")\n\nWhen the connect function is called, Python prints some text (\"h\" in the shell).\nThis happens only if I execute the script file from a particular folder.\nIf I copy the same script file to some other folder, \"h\" is not printed.\nActually, I had this line previously in the same script for testing\n\nprint \"h\"\n\nbut now I have removed the line from the script, yet it is still printed. What happened to my folder?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":14434712,"Users Score":1,"Answer":"Try deleting the *.pyc files. Secondly, run the script with the -v option so that you can see from where the file is being imported.","Q_Score":0,"Tags":"python,mysql","A_Id":14434772,"CreationDate":"2013-01-21T08:18:00.000","Title":"python mysqldb printing text even if no print statement in the code","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have populated a combobox with a QSqlQueryModel. It's all working fine as it is, but I would like to add an extra item to the combobox that could say \"ALL_RECORDS\". This way I could use the combobox as a filtering device. \nI obviously don't want to add this extra item to the database; how can I add it to the combobox after it's been populated by a model?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":243,"Q_Id":14455871,"Users Score":1,"Answer":"You could use a proxy model that gets its data from two models, one for your default values, the other for your database, and use it to populate your QComboBox.","Q_Score":1,"Tags":"python,qt,pyqt,pyqt4,pyside","A_Id":14540595,"CreationDate":"2013-01-22T10:02:00.000","Title":"Adding an item to an already populated combobox","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a finance application in Python to do time series analysis on security prices (among other things). The heavy lifting will be done in Python mainly using Numpy, SciPy, and pandas (pandas has an interface for SQLite and MySQL). With a web interface to present results. There will be a few hundred GB of data.\nI'm curious what is the better option for database in terms of performance, ease of accessing the data (queries), and interface with Python. I've seen the posts about the general pros and cons of SQLite v. 
MySQL but I'm looking for feedback that's more specific to a Python application.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1382,"Q_Id":14509517,"Users Score":0,"Answer":"SQLite is great for embedded databases, but it's not really great for anything that requires access by more than one process at a time. For this reason it cannot be taken seriously for your application.\nMySQL is a much better alternative. I'm also in agreement that Postgres would be an even better option.","Q_Score":0,"Tags":"python,mysql,sqlite,pandas","A_Id":14509945,"CreationDate":"2013-01-24T19:49:00.000","Title":"MySQL v. SQLite for Python based financial web app","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a finance application in Python to do time series analysis on security prices (among other things). The heavy lifting will be done in Python mainly using Numpy, SciPy, and pandas (pandas has an interface for SQLite and MySQL). With a web interface to present results. There will be a few hundred GB of data.\nI'm curious what is the better option for database in terms of performance, ease of accessing the data (queries), and interface with Python. I've seen the posts about the general pros and cons of SQLite v. MySQL but I'm looking for feedback that's more specific to a Python application.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1382,"Q_Id":14509517,"Users Score":0,"Answer":"For many 'research' oriented time series database loads, it is far faster to do as much analysis in the database than to copy the data to a client and analyze it using a regular programming language. Copying 10G across the network is far slower than reading it from disk.\nRelational databases do not natively support time series operations, so generating something as simple as security returns from security prices is either impossible or very difficult in both MySQL and SQLite.\nPostgres has windowing operations, as do several other relational-like databases; the trade-off is that that they don't do as many transactions per second. Many others use K or Q.\nThe financial services web apps that I've seen used multiple databases; the raw data was stored in 'research' databases that were multiply indexed and designed for flexibility, while the web-apps interacted directly with in-memory caches and higher-speed RDBs; the tradeoff was that data had to be copied from the 'research' databases to the 'production' databases.","Q_Score":0,"Tags":"python,mysql,sqlite,pandas","A_Id":14514661,"CreationDate":"2013-01-24T19:49:00.000","Title":"MySQL v. SQLite for Python based financial web app","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Celery standalone (not within Django). I am planning to have one worker task type running on multiple physical machines. The task does the following\n\nAccept an XML document.\nTransform it.\nMake multiple database reads and writes.\n\nI'm using PostgreSQL, but this would apply equally to other store types that use connections. 
In the past, I've used a database connection pool to avoid creating a new database connection on every request or avoid keeping the connection open too long. However, since each Celery worker runs in a separate process, I'm not sure how they would actually be able to share the pool. Am I missing something? I know that Celery allows you to persist a result returned from a Celery worker, but that is not what I'm trying to do here. Each task can do several different updates or inserts depending on the data processed.\nWhat is the right way to access a database from within a Celery worker?\nIs it possible to share a pool across multiple workers\/tasks or is there some other way to do this?","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":24474,"Q_Id":14526249,"Users Score":2,"Answer":"You can override the default behavior to have threaded workers instead of a worker per process in your celery config:\nCELERYD_POOL = \"celery.concurrency.threads.TaskPool\"\nThen you can store the shared pool instance on your task instance and reference it from each threaded task invocation.","Q_Score":47,"Tags":"python,postgresql,connection-pooling,celery","A_Id":14526700,"CreationDate":"2013-01-25T16:38:00.000","Title":"Celery Worker Database Connection Pooling","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Celery standalone (not within Django). I am planning to have one worker task type running on multiple physical machines. The task does the following\n\nAccept an XML document.\nTransform it.\nMake multiple database reads and writes.\n\nI'm using PostgreSQL, but this would apply equally to other store types that use connections. In the past, I've used a database connection pool to avoid creating a new database connection on every request or avoid keeping the connection open too long. However, since each Celery worker runs in a separate process, I'm not sure how they would actually be able to share the pool. Am I missing something? I know that Celery allows you to persist a result returned from a Celery worker, but that is not what I'm trying to do here. Each task can do several different updates or inserts depending on the data processed.\nWhat is the right way to access a database from within a Celery worker?\nIs it possible to share a pool across multiple workers\/tasks or is there some other way to do this?","AnswerCount":6,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":24474,"Q_Id":14526249,"Users Score":3,"Answer":"Have one DB connection per worker process. Since celery itself maintains a pool of worker processes, your db connections will always be equal to the number of celery workers. \nFlip side, sort of, it will tie up db connection pooling to celery worker process management. 
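A rough sketch of the one-connection-per-worker-process idea, using Celery's worker_process_init signal so that each forked worker opens its own connection after the fork; the broker URL, connection parameters and table name are placeholders.

import psycopg2
from celery import Celery
from celery.signals import worker_process_init

app = Celery("tasks", broker="amqp://localhost")
db_conn = None

@worker_process_init.connect
def open_worker_connection(**kwargs):
    # Runs once in each worker process, so every worker gets its own connection.
    global db_conn
    db_conn = psycopg2.connect(host="localhost", dbname="mydb", user="worker")

@app.task
def process_document(xml_blob):
    cur = db_conn.cursor()
    cur.execute("INSERT INTO documents (payload) VALUES (%s)", (xml_blob,))
    db_conn.commit()
    cur.close()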
But that should be fine given that GIL allows only one thread at a time in a process.","Q_Score":47,"Tags":"python,postgresql,connection-pooling,celery","A_Id":14549811,"CreationDate":"2013-01-25T16:38:00.000","Title":"Celery Worker Database Connection Pooling","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"There is a sqlite3 library that comes with python 2.7.3, but it is hardly the latest version.\nI would like to upgrade it within a virtualenv environment. In other words, the upgrade only applies to the version of python installed within this virtualenv.\nWhat is the correct way to do so?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":9594,"Q_Id":14541869,"Users Score":1,"Answer":"I was stuck in the same problem once. This solved it for me:\n\nDownload and untar the python version required\nmkdir local\nuntar sqlite after downloading its package\n.\/configure --prefix=\/home\/aanuj\/local\nmake\nmake install\n.\/configure --prefix=\/home\/anauj\/local LDFLAGS='-L\/home\/aaanuj\/local\/lib' CPPFLAGS='-I\/home\/aanuj\/local\/include'\nmake\nFind the sqlite3.so and copy to home\/desired loc\nExtract beaver \nSetup the virtual env with the python version needed\nActivate the env\nunalias python \nexport PYTHONPATH=\/home\/aanuj(location of _sqlite3.so)\nEnjoy","Q_Score":4,"Tags":"python,sqlite,virtualenv","A_Id":17417792,"CreationDate":"2013-01-26T21:42:00.000","Title":"How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a sqlite3 library that comes with python 2.7.3, but it is hardly the latest version.\nI would like to upgrade it within a virtualenv environment. 
In other words, the upgrade only applies to the version of python installed within this virtualenv.\nWhat is the correct way to do so?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":9594,"Q_Id":14541869,"Users Score":4,"Answer":"The below works for me, but please comment if there is any room for improvement:\n\nActivate the virtualenv into which you are going to install the latest sqlite3\nGet the latest source of the pysqlite package from google code: wget http:\/\/pysqlite.googlecode.com\/files\/pysqlite-2.6.3.tar.gz\nCompile pysqlite from source together with the latest sqlite database: python setup.py build_static\nInstall it to the site-packages directory of the virtualenv: python setup.py install\nThe above will actually install pysqlite into path-to-virtualenv\/lib\/python2.7\/site-packages, which is where all other pip-installed libraries are.\n\nNow, I have the latest version of sqlite (compiled into pysqlite) installed within a virtualenv, so I can do: from pysqlite2 import dbapi2 as sqlite","Q_Score":4,"Tags":"python,sqlite,virtualenv","A_Id":14550136,"CreationDate":"2013-01-26T21:42:00.000","Title":"How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a couple of OpenERP modules implemented for OpenERP version 6.1. When I installed OpenERP 7.0, I copied these modules into the addons folder for OpenERP 7. After that, I tried to update the modules list through the web interface, but nothing changed. Also, I started the server again with the options --database=mydb --update=all, but the modules list didn't change. Did I miss something? Is it possible to use modules from version 6.1 in OpenERP version 7? \nThanks for advice. \nUPDATE:\nI already exported my database from version 6.1 to a *.sql file. Will OpenERP 7 work if I just import this data into a new database, which I created with OpenERP 7?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3217,"Q_Id":14563801,"Users Score":6,"Answer":"OpenERP 6.1 modules cannot be used directly in OpenERP 7. You have to make some basic changes \nin the OpenERP 6.1 modules: for example, the tree and form tags require a string attribute, and version=\"7\" must be included in the form tag. If you have inherited some basic modules like sale or purchase, then you have to change the inherit xpath expressions, etc. Some objects such as res.partner.address were removed, so you have to take care of this and replace them with res.partner. \nThanks","Q_Score":2,"Tags":"python,openerp,erp","A_Id":14564692,"CreationDate":"2013-01-28T14:06:00.000","Title":"OpenERP 7 with modules from OpenERP 6.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends and set DEBUG = True; this allows me to see all the database queries that are being executed.\nHowever the SQL is not valid SQL! The actual query is select * from app_model where name = %s with some parameters passed in (e.g. \"admin\"), however the logging message doesn't quote the params, so the sql is select * from app_model where name = admin, which is wrong. 
This also happens using django.db.connection.queries. AFAIK the django debug toolbar has a complex custom cursor to handle this.\nUpdate For those suggesting the Django debug toolbar: I am aware of that tool, it is great. However it does not do what I need. I want to run a sample interaction of our application, and aggregate the SQL that's used. DjDT is great for showing and shallow learning. But not great for aggregating and summarazing the interaction of dozens of pages.\nIs there any easy way to get the real, legit, SQL that is run?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":189,"Q_Id":14567172,"Users Score":0,"Answer":"select * from app_model where name = %s is a prepared statement. I would recommend you to log the statement and the parameters separately. In order to get a wellformed query you need to do something like \"select * from app_model where name = %s\" % quote_string(\"user\") or more general query % map(quote_string, params). \nPlease note that quote_string is DB specific and the DB 2.0 API does not define a quote_string method. So you need to write one yourself. For logging purposes I'd recommend keeping the queries and parameters separate as it allows for far better profiling as you can easily group the queries without taking the actual values into account.","Q_Score":0,"Tags":"python,sql,django,django-database","A_Id":14567526,"CreationDate":"2013-01-28T17:00:00.000","Title":"How to retrieve the real SQL from the Django logger?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to use a python set as a filter for ids from a mysql table.\nThe python set stores all the ids to filter (about 30 000 right now) this number will grow slowly over time and I am concerned about the maximum capacity of a python set. Is there a limit to the number of elements it can contain?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2460,"Q_Id":14577790,"Users Score":0,"Answer":"I don't know if there is an arbitrary limit for the number of items in a set. More than likely the limit is tied to the available memory.","Q_Score":2,"Tags":"python,set","A_Id":14577827,"CreationDate":"2013-01-29T07:31:00.000","Title":"Is there a limit to the number of values that a python set can contain?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"No code examples here. Just running into an issue with Microsoft Excel 2010 where I have a python script on linux that pulls data from csv files, pushes it into excel, and emails that file to a certain email address as an attachment.\nMy problem is that I'm using formulas in my excel file, and when it first opens up it goes into \"Protected View\". My formulas don't load until after I click \"Enable Editing\". Is there anyway to get my numbers to show up even if Protected Mode is on?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":853,"Q_Id":14592328,"Users Score":0,"Answer":"Figured this out. Just used the for loop to keep a running total. 
Sorry for the wasted question.","Q_Score":0,"Tags":"python,linux,excel,view,protected","A_Id":14592481,"CreationDate":"2013-01-29T21:08:00.000","Title":"Protected View in Microsoft Excel 2010 and Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For 100k+ entities in the Google datastore, ndb.query().count() is going to be cancelled by the deadline, even with an index. I've tried the produce_cursors option, but only iter() or fetch_page() will return a cursor; count() doesn't. \nHow can I count large numbers of entities?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":2669,"Q_Id":14673642,"Users Score":2,"Answer":"This is indeed a frustrating issue. I've been doing some work in this area lately to get some general count stats - basically, the number of entities that satisfy some query. count() is a great idea, but it is hobbled by the datastore RPC timeout.\nIt would be nice if count() supported cursors somehow so that you could cursor across the result set and simply add up the resulting integers rather than returning a large list of keys only to throw them away. With cursors, you could continue across all 1-minute \/ 10-minute boundaries, using the \"pass the baton\" deferred approach. With count() (as opposed to fetch(keys_only=True)) you can greatly reduce the waste and hopefully increase the speed of the RPC calls, e.g., it takes a shocking amount of time to count to 1,000,000 using the fetch(keys_only=True) approach - an expensive proposition on backends.\nSharded counters are a lot of overhead if you only need\/want periodic count statistics (e.g., a daily count of all my accounts in the system by, e.g., country).","Q_Score":4,"Tags":"python,google-app-engine,app-engine-ndb,bigtable","A_Id":14713169,"CreationDate":"2013-02-03T14:41:00.000","Title":"ndb.query.count() failed with 60s query deadline on large entities","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I need some help with d3 and MySQL. Below is my question: \nI have data stored in MySQL (eg: keywords with their frequencies). I now want to visualize it using d3. As far as my knowledge of d3 goes, it requires a JSON file as input. My question is: How do I access this MySQL database from the d3 script? One way which I could think of is: \n\nUsing Python, connect to the database and convert the data to JSON format. Save this in some .json file. \nIn d3, read this json file as input and use it in visualization.\n\nIs there any other way to convert the data in MySQL into .json format directly using d3? Can we connect to MySQL from d3 and read the data?\nThanks a lot!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":8185,"Q_Id":14679610,"Users Score":1,"Answer":"d3 is a JavaScript library that runs on the client side, while the MySQL database runs on the server side.\nd3 can't connect to the MySQL database, let alone convert its data to JSON format. 
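For illustration, a minimal sketch of the server-side step the questioner outlined: Python pulls the rows out of MySQL and writes a JSON file that d3 can then load with d3.json(). The connection details and column names are assumptions.

import json
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="keywords_db")
cur = conn.cursor()
cur.execute("SELECT keyword, frequency FROM keywords")
rows = [{"keyword": k, "frequency": int(f)} for (k, f) in cur.fetchall()]
cur.close()
conn.close()

# Write the file that the d3 script will request, e.g. d3.json("keywords.json", callback)
with open("keywords.json", "w") as fh:
    json.dump(rows, fh)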
The way you thought it was possible (steps 1 and 2) is what you should do.","Q_Score":4,"Tags":"javascript,python,mysql,d3.js,data-visualization","A_Id":14679748,"CreationDate":"2013-02-04T02:22:00.000","Title":"Accessing MySQL database in d3 visualization","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many))\nIn my understanding, there are three ways to implement it.\n\nUse 'parent' argument\nUse 'repeated' Structured property\nUse 'repeated' Key property\n\nI choose a way based on the logic below usually, but does it make sense to you? \nIf you have better logic, please teach me.\n\nUse 'parent' argument\n\nTransactional operation is required between these entities\nBidirectional reference is required between these entities\nStrongly intend 'Parent-Child' relationship\n\nUse 'repeated' Structured property\n\nDon't need to use 'many' entity individually (Always, used with 'one' entity)\n'many' entity is only referred by 'one' entity\nNumber of 'repeated' is less than 100\n\nUse 'repeated' Key property\n\nNeed to use 'many' entity individually\n'many' entity can be referred by other entities\nNumber of 'repeated' is more than 100\n\n\nNo.2 increases the size of entity, but we can save the datastore operations. (We need to use projection query to reduce CPU time for the deserialization though). Therefore, I use this way as much as I can.\nI really appreciate your opinion.","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1389,"Q_Id":14739044,"Users Score":6,"Answer":"One thing that most GAE users will come to realize (sooner or later) is that the datastore does not encourage design according to the formal normalization principles that would be considered a good idea in relational databases. Instead it often seems to encourage design that is unintuitive and anathema to established norms. Although relational database design principles have their place, they just don't work here. \nI think the basis for the datastore design instead falls into two questions:\n\nHow am I going to read this data and how do I read it with the minimum number of read operations?\nIs storing it that way going to lead to an explosion in the number of write and indexing operations?\n\nIf you answer these two questions with as much foresight and actual tests as you can, I think you're doing pretty well. You could formalize other rules and specific cases, but these questions will work most of the time.","Q_Score":11,"Tags":"python,google-app-engine,app-engine-ndb","A_Id":14749034,"CreationDate":"2013-02-06T21:22:00.000","Title":"Effective implementation of one-to-many relationship with Python NDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. 
Person(one)-to-Tasks(many))\nIn my understanding, there are three ways to implement it.\n\nUse 'parent' argument\nUse 'repeated' Structured property\nUse 'repeated' Key property\n\nI choose a way based on the logic below usually, but does it make sense to you? \nIf you have better logic, please teach me.\n\nUse 'parent' argument\n\nTransactional operation is required between these entities\nBidirectional reference is required between these entities\nStrongly intend 'Parent-Child' relationship\n\nUse 'repeated' Structured property\n\nDon't need to use 'many' entity individually (Always, used with 'one' entity)\n'many' entity is only referred by 'one' entity\nNumber of 'repeated' is less than 100\n\nUse 'repeated' Key property\n\nNeed to use 'many' entity individually\n'many' entity can be referred by other entities\nNumber of 'repeated' is more than 100\n\n\nNo.2 increases the size of entity, but we can save the datastore operations. (We need to use projection query to reduce CPU time for the deserialization though). Therefore, I use this way as much as I can.\nI really appreciate your opinion.","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1389,"Q_Id":14739044,"Users Score":7,"Answer":"A key thing you are missing: How are you reading the data?\nIf you are displaying all the tasks for a given person on a request, 2 makes sense: you can query the person and show all his tasks.\nHowever, if you need to query say a list of all tasks say due at a certain time, querying for repeated structured properties is terrible. You will want individual entities for your Tasks.\nThere's a fourth option, which is to use a KeyProperty in your Task that points to your Person. When you need a list of Tasks for a person you can issue a query.\nIf you need to search for individual Tasks, then you probably want to go with #4. You can use it in combination with #3 as well.\nAlso, the number of repeated properties has nothing to do with 100. It has everything to do with the size of your Person and Task entities, and how much will fit into 1MB. This is potentially dangerous, because if your Task entity can potentially be large, you might run out of space in your Person entity faster than you expect.","Q_Score":11,"Tags":"python,google-app-engine,app-engine-ndb","A_Id":14740062,"CreationDate":"2013-02-06T21:22:00.000","Title":"Effective implementation of one-to-many relationship with Python NDB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Pretty simple question but haven't been able to find a good answer.\nIn Excel, I am generating files that need to be automatically read. They are read by an ID number, but the format I get is setting it as text. When using xlrd, I get this format:\n\n5.5112E+12\n\nWhen I need it in this format:\n\n5511195414392\n\nWhat is the best way to achieve this? 
I would like to avoid using xlwt but if it is necessary I could use help on getting started in that process too","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1158,"Q_Id":14751806,"Users Score":1,"Answer":"I used the CSV module to figure this out, as it read the cells correctly.","Q_Score":2,"Tags":"python,xlrd","A_Id":14854783,"CreationDate":"2013-02-07T13:05:00.000","Title":"Reading scientific numbers in xlrd","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have really big database which I want write to xlsx\/xls file. I already tried to use xlwt, but it allows to write only 65536 rows (some of my tables have more than 72k rows). I also found openpyxl, but it works too slow, and use huge amount of memory for big spreadsheets. Are there any other possibilities to write excel files?\nedit:\nFollowing kennym's advice i used Optimised Reader and Writer. It is less memory consuming now, but still time consuming. Exporting takes more than hour now (for really big tables- up to 10^6 rows). Are there any other possibilities? Maybe it is possible to export whole table from HDF5 database file to excel, instead of doing it row after row- like it is now in my code?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":5375,"Q_Id":14754090,"Users Score":1,"Answer":"XlsxWriter work for me. I try openpyxl but it error. 22k*400 r*c","Q_Score":8,"Tags":"python,excel,hdf5","A_Id":31982266,"CreationDate":"2013-02-07T14:56:00.000","Title":"How to write big set of data to xls file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would one go about connecting to a different database based on which module is being used? Our scenario is as follows:\nWe have a standalone application with its own database on a certain server and OpenERP running on different server. We want to create a module in OpenERP which can utilise entities on the standalone application server rather than creating its own entities in its own database, is this possible? How can we change the connection parameters that the ORM uses to connect to its own database to point to a different database?\nOfcourse, one way is to use the base_synchro module to synchronise the required entities between both database but considering the large amount of data, we don't want duplication. Another way is to use xmlrpc to get data into OpenERP but that still requires entities to be present in OpenERP database.\nHow can we solve this problem without data duplication? How can a module in OpenERP be created based on a different database?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1877,"Q_Id":14756365,"Users Score":1,"Answer":"One way to connect to an external application is to create a connector module. There are already several connector modules that you can take a look at:\n\nthe thunderbird and outlook plugins\nthe joomla and magento modules\nthe 'event moodle' module\n\nFor example, the joomla connector uses a joomla plugin to handle the communication between OpenERP and joomla. The communication protocol used is XML-RPC but you can choose any protocol you want. 
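As a rough sketch of what such a connector's XML-RPC side can look like from an external Python script, using OpenERP's standard \/xmlrpc\/common and \/xmlrpc\/object endpoints; the host, database name and credentials below are placeholders.

import xmlrpclib   # xmlrpc.client on Python 3

url = "http://localhost:8069/xmlrpc"
db, user, password = "mydb", "admin", "admin"

# Authenticate against the common endpoint to get a user id.
common = xmlrpclib.ServerProxy(url + "/common")
uid = common.login(db, user, password)

# Then call model methods through the object endpoint.
models = xmlrpclib.ServerProxy(url + "/object")
partner_ids = models.execute(db, uid, password, "res.partner", "search", [])
partners = models.execute(db, uid, password, "res.partner", "read", partner_ids, ["name"])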
You can even choose to connect directly to the external database using the psycopg2 modules (if the external database is using Postgresql) but this is not recommended. But perhaps you don't have the choice if this external application has no connection API.\nYou need to know what are the available ways to connect to this external application and choose one of these. Once you have chosen the right protocol, you can create your OpenERP module.\nYou can map entities stored on the external application using osv.TransientModel objects (formerly known as osv memory). The tables related to these objects will still be created in the OpenERP database but the data is volatile (deleted after some time).","Q_Score":1,"Tags":"python,xml-rpc,openerp","A_Id":14796657,"CreationDate":"2013-02-07T16:45:00.000","Title":"How to connect to a different database in OpenERP?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have pip installed psycopg2, but when I try to runserver or syncdb in my Django project, it raises an error saying there is \"no module named _psycopg\".\nEDIT: the \"syncdb\" command now raises:\ndjango.core.exceptions.ImproperlyConfigured: ImportError django.contrib.admin: No module named _psycopg\nThanks for your help","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1586,"Q_Id":14758024,"Users Score":1,"Answer":"This was solved by performing a clean reinstall of django. There was apparently some dependecies missing that the recursive pip install did not seem to be able to solve.","Q_Score":2,"Tags":"python,django,pip,psycopg2,psycopg","A_Id":15337328,"CreationDate":"2013-02-07T18:10:00.000","Title":"Psycopg missing module in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"TLDR; Are there drawbacks to putting two different types of documents into the same collection to save a round-trip to the database?\nSo I have documents with children, and a list of keys in the parent referencing the children, and almost whenever we want a parent, we also want the children to come along. The naive way to do this is to fetch the parent, and then get the children using the list of child keys with $IN (in SQL, we would use a join). However, this means making 2 round trips for a fairly frequent operation. We have a few options to improve this, especially since we can retrieve the child keys at the same time as the parent keys:\n\nPut the children in the parent document\nWhile this would play to mongo's strength, we also want to keep this data normalized\nPipeline database requests in threads\nWhich may or may not improve performance once we factor in the connection pool. It also means dealing with threading in a python app, which isn't terrible, but isn't great.\nKeep the parent\/child documents in the same collection (not embedded)\nThis way we can do one query for all the keys at once; this does mean some conceptual overhead in the wrapper for accessing the database, and forcing all indexes to be sparse, but otherwise seems straightforward.\n\nWe could profile all these options, but it does feel like someone out there should already have experience with this despite not finding anything online. 
So, is there something I am missing in my analysis?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":339,"Q_Id":14780381,"Users Score":1,"Answer":"I'll address the three points separately. You should know that it absolutely depends on the situation on what works best. There is no \"theoretically correct\" answer as it depends on your data store\/access patterns.\n\nIt is always a fairly complex decision on how you store your data. I think the main rule should be \"How do I query my data?\", and not \"We want to have all data normalised\". Data normalisation is something you do for a relational database, not for MongoDB. If you almost always query the children with the parent, and you don't have an unbound list of children, then that is how you should store them. Just be aware that a document in MongoDB is limited to 16MB (which is a lot more than you think). \nAvoid threading. You will just be better off running two queries in sequence, from two different collections. Less complex is a good thing!\nThis works, but it is a fairly ugly way. But then again, ugly isn't always a bad thing if it makes things go a lot faster. I don't quite know how distinct your parent and child documents are of course, so it's a difficult to say whether this is a good solution. A sparse index, which I assume you will do on a specific field depending on whether it is a parent or child, is a good idea. But perhaps you can get away with one index as well. I'd be happy to update your answer after you've shown your suggested schemas.\n\nI would recommend you do some benchmarking, but forget about option 2.","Q_Score":0,"Tags":"python,performance,mongodb","A_Id":14780990,"CreationDate":"2013-02-08T19:58:00.000","Title":"Put different \"schemas\" into same MongoDB collection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My python project involves an externally provided database: A text file of approximately 100K lines.\nThis file will be updated daily.\nShould I load it into an SQL database, and deal with the diff daily? Or is there an effective way to \"query\" this text file?\nADDITIONAL INFO:\n\nEach \"entry\", or line, contains three fields - any one of which can be used as an index.\nThe update is is the form of the entire database - I would have to manually generate a diff\nThe queries are just looking up records and displaying the text.\nQuerying the database will be a fundamental task of the application.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":336,"Q_Id":14795810,"Users Score":0,"Answer":"What I've done before is create SQLite databases from txt files which were created from database extracts, one SQLite db for each day.\nOne can query across SQLite db to check the values etc and create additional tables of data.\nI added an additional column of data that was the SHA1 of the text line so that I could easily identify lines that were different. 
\nIt worked in my situation and hopefully may form the barest sniff of an acorn of an idea for you.","Q_Score":2,"Tags":"python,sql,database,text","A_Id":14797390,"CreationDate":"2013-02-10T07:53:00.000","Title":"Large text database: Convert to SQL or use as is","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My python project involves an externally provided database: A text file of approximately 100K lines.\nThis file will be updated daily.\nShould I load it into an SQL database, and deal with the diff daily? Or is there an effective way to \"query\" this text file?\nADDITIONAL INFO:\n\nEach \"entry\", or line, contains three fields - any one of which can be used as an index.\nThe update is is the form of the entire database - I would have to manually generate a diff\nThe queries are just looking up records and displaying the text.\nQuerying the database will be a fundamental task of the application.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":336,"Q_Id":14795810,"Users Score":1,"Answer":"How often will the data be queried? On the one extreme, if once per day, you might use a sequential search more efficiently than maintaining a database or index.\nFor more queries and a daily update, you could build and maintain your own index for more efficient queries. Most likely, it would be worth a negligible (if any) sacrifice in speed to use an SQL database (or other database, depending on your needs) in return for simpler and more maintainable code.","Q_Score":2,"Tags":"python,sql,database,text","A_Id":14795870,"CreationDate":"2013-02-10T07:53:00.000","Title":"Large text database: Convert to SQL or use as is","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I dropped my database that I had previously created for django using :\ndropdb \nbut when I go to the psql prompt and say \\d, I still see the relations there :\nHow do I remove everything from postgres so that I can do everything from scratch ?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":88,"Q_Id":14869718,"Users Score":1,"Answer":"Most likely somewhere along the line, you created your objects in the template1 database (or in older versions the postgres database) and every time you create a new db i thas all those objects in it. 
You can either drop the template1 \/ postgres database and recreate it or connect to it and drop all those objects by hand.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":14880796,"CreationDate":"2013-02-14T07:23:00.000","Title":"postgres : relation there even after dropping the database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I dropped my database that I had previously created for django using :\ndropdb \nbut when I go to the psql prompt and say \\d, I still see the relations there :\nHow do I remove everything from postgres so that I can do everything from scratch ?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":14869718,"Users Score":0,"Answer":"Chances are that you never created the tables in the correct schema in the first place. Either that or your dropdb failed to complete.\nTry to drop the database again and see what it says. If that appears to work then go in to postgres and type \\l, putting the output here.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":14870374,"CreationDate":"2013-02-14T07:23:00.000","Title":"postgres : relation there even after dropping the database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We have a Python application with over twenty modules, most of which are shared by several web and console applications.\nI've never had a clear understanding of the best practice for establishing and managing database connection in multi module Python apps. Consider this example:\nI have a module defining an object class for Users. It has many defs for creating\/deleting\/updating users in the database. The users.py module is imported into a) a console based utility, 2) a web.py based web application and 3) a constantly running daemon process.\nEach of these three application have different life cycles. The daemon can open a connection and keep it open. The console utility connects, does work, then dies. Of course the http requests are atomic, however the web server is a daemon.\nI am currently opening, using then closing a connection inside each function in the Users class. This seems the most inefficient, but it works in all examples. An alternative used as a test is to declare and open a global connection for the entire module. Another option would be to create the connection at the top application layer and pass references when instantiating classes, but this seems the worst idea to me.\nI know every application architecture is different. I'm just wondering if there's a best practice, and what it would be?","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":6022,"Q_Id":14883346,"Users Score":4,"Answer":"MySQL connections are relatively fast, so this might not be a problem (i.e. you should measure). Most other databases take much more resources to create a connection.\nCreating a new connection when you need one is always the safest, and is a good first choice. Some db libraries, e.g. 
SqlAlchemy, have connection pools built in that transparently will re-use connections for you correctly.\nIf you decide you want to keep a connection alive so that you can re-use it, there are a few points to be aware of:\n\nConnections that are only used for reading are easier to re-use than connections that that you've used to modify database data.\nWhen you start a transaction on a connection, be careful that nothing else can use that connection for something else while you're using it.\nConnections that sit around for a long time get stale and can be closed from underneath you, so if you're re-using a connection you'll need to check if it is still \"alive\", e.g. by sending \"select 1\" and verifying that you get a result.\n\nI would personally recommend against implementing your own connection pooling algorithm. It's really hard to debug when things go wrong. Instead choose a db library that does it for you.","Q_Score":17,"Tags":"python,mysql","A_Id":14883719,"CreationDate":"2013-02-14T20:20:00.000","Title":"How should I establish and manage database connections in a multi-module Python app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We have a Python application with over twenty modules, most of which are shared by several web and console applications.\nI've never had a clear understanding of the best practice for establishing and managing database connection in multi module Python apps. Consider this example:\nI have a module defining an object class for Users. It has many defs for creating\/deleting\/updating users in the database. The users.py module is imported into a) a console based utility, 2) a web.py based web application and 3) a constantly running daemon process.\nEach of these three application have different life cycles. The daemon can open a connection and keep it open. The console utility connects, does work, then dies. Of course the http requests are atomic, however the web server is a daemon.\nI am currently opening, using then closing a connection inside each function in the Users class. This seems the most inefficient, but it works in all examples. An alternative used as a test is to declare and open a global connection for the entire module. Another option would be to create the connection at the top application layer and pass references when instantiating classes, but this seems the worst idea to me.\nI know every application architecture is different. I'm just wondering if there's a best practice, and what it would be?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6022,"Q_Id":14883346,"Users Score":16,"Answer":"The best method is to open a connection when you need to do some operations (like getting and\/or updating data); manipulate the data; write it back to the database in one query (very important for performance), and then close the connection. Opening a connection is a fairly light process. \nSome pitfalls for performance include\n\nopening the database when you won't definitely interact with it\nusing selectors that take more data than you need (e.g., getting data about all users and filtering it in Python, instead of asking MySQL to filter out the useless data)\nwriting values that haven't changed (e.g. 
updating all values of a user profile, when just their email has changed)\nhaving each field update the server individually (e.g., open the db, update the user email, close the db, open the db, update the user password, close the db, open th... you get the idea)\n\nThe bottom line is that it doesn't matter how many times you open the database, it's how many queries you run. If you can get your code to join related queries, you've won the battle.","Q_Score":17,"Tags":"python,mysql","A_Id":14883590,"CreationDate":"2013-02-14T20:20:00.000","Title":"How should I establish and manage database connections in a multi-module Python app?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a MySQL database with around 10,000 articles in it, but that number will probably go up with time. I want to be able to search through these articles and pull out the most relevent results based on some keywords. I know there are a number of projects that I can plug into that can essentially do this for me. However, the application for this is very simple, and it would be nice to have direct control and working knowledge of how the whole thing operates. Therefore, I would like to look into building a very simple search engine from scratch in Python.\nI'm not even sure where to start, really. I could just dump everything from the MySQL DB into a list and try to sort that list based on relevance, however that seems like it would be slow, and get slower as the amount of database items increase. I could use some basic MySQL search to get the top 100 most relevant results from what MySQL thinks, then sort those 100. But that is a two step process which may be less efficient, and I might risk missing an article if it is just out of range.\nWhat are the best approaches I can take to this?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":643,"Q_Id":14889206,"Users Score":3,"Answer":"The best bet for you to do \"Search Engine\" for the 10,000 Articles is to read \"Programming Collective Intelligence\" by Toby Segaran. Wonderful read and to save your time go to Chapter 4 of August 2007 issue.","Q_Score":0,"Tags":"python,mysql,search,search-engine","A_Id":14889522,"CreationDate":"2013-02-15T06:10:00.000","Title":"Search engine from scratch","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a situation where my script parse approx 20000 entries and save them to db. I have used transaction which takes around 35 seconds to save and also consume high memory since until committed queries are saved in memory.\nI have Found another way to write CSV then load into postgres using \"copy_from\" which is very fast. 
Can anyone suggest whether I should open the file once at the start and close it when loading to postgres, or open the file each time a single entry is ready to write and then close it?\nWhat will be the best approach to save memory utilization?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":14890211,"Users Score":1,"Answer":"Reduce the size of your transactions?","Q_Score":0,"Tags":"python,file,postgresql,csv","A_Id":14890240,"CreationDate":"2013-02-15T07:45:00.000","Title":"File writing in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If we have a json format data file which stores all of our database data content, such as table name, row, and column content, how can we use a DB-API object to insert\/update\/delete data from the json file into a database, such as sqlite, mysql, etc.? Or please share if you have a better idea to handle it. People said it is good to save database data information in json format, which makes it more convenient to work with the database in python. \nThanks so much! Please give advice!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":454,"Q_Id":14942462,"Users Score":1,"Answer":"There's no magic way, you'll have to write a Python program to load your JSON data in a database. SQLAlchemy is a good tool to make it easier.","Q_Score":0,"Tags":"python,database,json,sqlalchemy,python-db-api","A_Id":14951638,"CreationDate":"2013-02-18T17:57:00.000","Title":"how will Python DB-API read json format data into an existing database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Example scenario:\nMySQL running a single server -> HOSTNAME\nTwo MySQL databases on that server -> USERS , GAMES .\nTask -> Fetch 10 newest games from GAMES.my_games_table , and fetch users playing those games from USERS.my_users_table ( assume no joins )\nIn Django as well as Python MySQLdb , why is having one cursor for each database more preferable ?\nWhat is the disadvantage of an extended cursor which is single per MySQL server and can switch databases ( eg by querying \"use USERS;\" ), and then work on corresponding database \nMySQL connections are cheap, but isn't single connection better than many , if there is a linear flow and no complex tranasactions which might need two cursors ?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1351,"Q_Id":14986129,"Users Score":10,"Answer":"A shorter answer would be, \"MySQL doesn't support that type of cursor\", so neither does Python-MySQL, so the reason one connection command is preferred is because that's the way MySQL works. 
Which is sort of a tautology.\nHowever, the longer answer is:\n\nA 'cursor', by your definition, would be some type of object accessing tables and indexes within an RDMS, capable of maintaining its state.\nA 'connection', by your definition, would accept commands, and either allocate or reuse a cursor to perform the action of the command, returning its results to the connection.\nBy your definition, a 'connection' would\/could manage multiple cursors.\nYou believe this would be the preferred\/performant way to access a database as 'connections' are expensive, and 'cursors' are cheap.\n\nHowever:\n\nA cursor in MySQL (and other RDMS) is not a the user-accessible mechanism for performing operations. MySQL (and other's) perform operations in as \"set\", or rather, they compile your SQL command into an internal list of commands, and do numerous, complex bits depending on the nature of your SQL command and your table structure.\nA cursor is a specific mechanism, utilized within stored procedures (and there only), giving the developer a way to work with data in a procedural way.\nA 'connection' in MySQL is what you think of as a 'cursor', sort of. MySQL does not expose it's internals for you as an iterator, or pointer, that is merely moving over tables. It exposes it's internals as a 'connection' which accepts SQL and other commands, translates those commands into an internal action, performs that action, and returns it's result to you.\nThis is the difference between a 'set' and a 'procedural' execution style (which is really about the granularity of control you, the user, is given access to, or at least, the granularity inherent in how the RDMS abstracts away its internals when it exposes them via an API).","Q_Score":6,"Tags":"python,mysql,django,mysql-python","A_Id":15328753,"CreationDate":"2013-02-20T17:27:00.000","Title":"Why django and python MySQLdb have one cursor per database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Example scenario:\nMySQL running a single server -> HOSTNAME\nTwo MySQL databases on that server -> USERS , GAMES .\nTask -> Fetch 10 newest games from GAMES.my_games_table , and fetch users playing those games from USERS.my_users_table ( assume no joins )\nIn Django as well as Python MySQLdb , why is having one cursor for each database more preferable ?\nWhat is the disadvantage of an extended cursor which is single per MySQL server and can switch databases ( eg by querying \"use USERS;\" ), and then work on corresponding database \nMySQL connections are cheap, but isn't single connection better than many , if there is a linear flow and no complex tranasactions which might need two cursors ?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":1351,"Q_Id":14986129,"Users Score":2,"Answer":"As you say, MySQL connections are cheap, so for your case, I'm not sure there is a technical advantage either way, outside of code organization and flow. It might be easier to manage two cursors than to keep track of which database a single cursor is currently talking to by painstakingly tracking SQL 'USE' statements. Mileage with other databases may vary -- remember that Django strives to be database-agnostic.\nAlso, consider the case where two different databases, even on the same server, require different access credentials. 
In such a case, two connections will be necessary, so that each connection can successfully authenticate.","Q_Score":6,"Tags":"python,mysql,django,mysql-python","A_Id":15302237,"CreationDate":"2013-02-20T17:27:00.000","Title":"Why django and python MySQLdb have one cursor per database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Example scenario:\nMySQL running a single server -> HOSTNAME\nTwo MySQL databases on that server -> USERS , GAMES .\nTask -> Fetch 10 newest games from GAMES.my_games_table , and fetch users playing those games from USERS.my_users_table ( assume no joins )\nIn Django as well as Python MySQLdb , why is having one cursor for each database more preferable ?\nWhat is the disadvantage of an extended cursor which is single per MySQL server and can switch databases ( eg by querying \"use USERS;\" ), and then work on corresponding database \nMySQL connections are cheap, but isn't single connection better than many , if there is a linear flow and no complex tranasactions which might need two cursors ?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1351,"Q_Id":14986129,"Users Score":0,"Answer":"One cursor per database is not necessarily preferable, it's just the default behavior.\nThe rationale is that different databases are more often than not on different servers, use different engines, and\/or need different initialization options. (Otherwise, why should you be using different \"databases\" in the first place?)\nIn your case, if your two databases are just namespaces of tables (what should be called \"schemas\" in SQL jargon) but reside on the same MySQL instance, then by all means use a single connection. (How to configure Django to do so is actually an altogether different question.)\nYou are also right that a single connection is better than two, if you only have a single thread and don't actually need two database workers at the same time.","Q_Score":6,"Tags":"python,mysql,django,mysql-python","A_Id":15421235,"CreationDate":"2013-02-20T17:27:00.000","Title":"Why django and python MySQLdb have one cursor per database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm in the process of building a Django powered site that is backed by a MySQL server. This MySQL server is going to be accessed from additional sources, other than the website, to read and write table data; such as a program that users run locally which connects to the database.\nCurrently the program running locally is using the MySQL\/C Connector library to connect directly to the sql server and execute queries. In a final release to the public this seems insecure, since I would be exposing the connection string to the database in the code or in a configuration file.\nOne alternative I'm considering is having all queries be sent to the Django website (authenticated with a user's login and password) and then the site will sanitize and execute the queries on the user's behalf and return the results to them.\nThis has a number of downsides that I can think of. The webserver will be under a much larger load by processing all the SQL queries and this could potentially exceed the limit of my host. 
Additionally, I would have to figure out some way of serializing and transmitting the sql results in Python and then unserializing them in C\/C++ on the client side. This would be a decent amount of custom code to write and maintain.\nAny other downsides to this approach people can think of?\nDoes this sound reasonable and if it does, anything that could ease working on it; such as Python or C libraries to help develop the proxy interface?\nIf it sounds like a bad idea, any suggestions for alternative solutions i.e. a Python library that specializes in this type of proxy sql server logic, a method of encrypting sql connection strings so I can securely use my current solution, etc...?\nLastly, is this a valid concern? The database currently doesn't hold any terribly sensitive information about users (most sensitive would be their email and their site password which they may have reused from another source) but it could in the future which is my cause for concern if it's not secure.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":377,"Q_Id":14991783,"Users Score":1,"Answer":"This is a completely valid concern and a very common problem. You have described creating a RESTful API. I guess it could be considered a proxy to a database but is not usually referred to as a proxy.\nDjango is a great tool to use to accomplish this. Django even has a couple of packages that will assist in speedy development: Django REST Framework, Tastypie, and django-piston are the most popular. Of course you could just use plain old Django. \nYour Django project would be the only thing that interfaces with the database and clients can send authenticated requests to Django; so clients will never connect directly to your database. This will give you fine-grained permission control on a per client, per resource basis. \n\nThe webserver will be under a much larger load by processing all the\n SQL queries and this could potentially exceed the limit of my host\n\nI believe scaling a webservice is going to be a lot easier than scaling direct connections from your clients to your database. There are many tried and true methods for scaling apps that have hundreds of requests per second to their databases. Because you have Django between you and the webserver you can implement caching for frequently requested resources.\n\nAdditionally, I would have to figure out some way of serializing and\n transmitting the SQL results in Python and then unserializing them in\n C\/C++ on the client side\n\nThis should be a moot issue. There are lots of extremely popular data interchange formats. I have never used C\/C++, but a quick search turned up a couple of C\/C++ JSON serializers. Python has JSON built in for free, so there shouldn't be any custom code to maintain regarding this if you use a premade C\/C++ JSON library.\n\nAny other downsides to this approach people can think of?\n\nI don't think there are any downsides; it is a tried and true method. 
It has been proven for a decade and the most popular sites in the world expose themselves through restful apis\n\nDoes this sound reasonable and if it does, anything that could ease\n working on it; such as Python or C libraries to help develop the proxy\n interface?\n\nIt sounds very reasonable, the Django apps I mentioned at the beginning of the answer should provide some boiler plate to allow you to get started on your API quicker.","Q_Score":3,"Tags":"c++,python,mysql,c,django","A_Id":14992070,"CreationDate":"2013-02-20T23:21:00.000","Title":"Django as a mysql proxy server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a postgre database with a timestamp column and I have a REST service in Python that executes a query in the database and returns data to a JavaScript front-end to plot a graph using flot.\nNow the problem I have is that flot can automatically handle the date using JavaScript's TIMESTAMP, but I don't know how to convert the Postgre timestamps to JavaScript TIMESTAMP (YES a timestamp, not a date stop editing if you don't know the answer) in Python. I don't know if this is the best approach (maybe the conversion can be done in JavaScript?). Is there a way to do this?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":4296,"Q_Id":15031856,"Users Score":3,"Answer":"You can't send a Python or Javascript \"datetime\" object over JSON. JSON only accepts more basic data types like Strings, Ints, and Floats.\nThe way I usually do it is send it as text, using Python's datetime.isoformat() then parse it on the Javascript side.","Q_Score":8,"Tags":"javascript,python,postgresql,flot","A_Id":15032100,"CreationDate":"2013-02-22T19:33:00.000","Title":"Converting postgresql timestamp to JavaScript timestamp in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The first element of arrays (in most programming languages) has an id (index) of 0. The first element (row) of MySQL tables has an (auto incremented) id of 1. The latter seems to be the exception.","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":2295,"Q_Id":15055175,"Users Score":4,"Answer":"The better question to ask is \"why are arrays zero-indexed?\" The reason has to do with pointer arithmetic. The index of an array is an offset relative to the pointer address. In C++, given array char x[5], the expressions x[1] and *(x + 1) are equivalent, given that sizeof(char) == 1.\nSo auto increment fields starting at 1 make sense. There is no real correlation between arrays and these fields.","Q_Score":5,"Tags":"php,python,mysql,ruby","A_Id":15056205,"CreationDate":"2013-02-24T18:40:00.000","Title":"Why does MySQL count from 1 and not 0?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The first element of arrays (in most programming languages) has an id (index) of 0. The first element (row) of MySQL tables has an (auto incremented) id of 1. 
The latter seems to be the exception.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2295,"Q_Id":15055175,"Users Score":0,"Answer":"The main reason, I suppose, is that a row in a database isn't an array and the autoincrement value isn't an index in the sense that an array index is. The primary key id can be any value; to a great extent it is simply essential that it is unique, and it is not guaranteed to be anything else (for example you can delete a row and it won't renumber).\nThis is a little like comparing apples and oranges!\nArrays start at 0 because that's the first number. Autoinc fields start at whatever number you want them to, and in that case we would all rather it was 1.","Q_Score":5,"Tags":"php,python,mysql,ruby","A_Id":15055977,"CreationDate":"2013-02-24T18:40:00.000","Title":"Why does MySQL count from 1 and not 0?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The DBF files are updated every few hours. We need to import new records into MySQL and skip duplicates. I don't have any experience with DBF files but as far as I can tell a handful of the ones we're working with don't have unique IDs. \nI plan to use Python if there are no ready-made utilities that do this.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2974,"Q_Id":15059749,"Users Score":-1,"Answer":"When you say you are using dBase, I presume you have access to the (.) dot prompt.\nAt the dot prompt, convert the .dbf file into a delimited text file.\nReconvert the delimited text file into a MySQL data file with the necessary command in MySQL. I do not know the actual command for it. All DBMS will have commands to do that work.\nFor eliminating the duplicates you will have to do it at the time of populating the data to the .dbf file through a programme written in dBase.","Q_Score":0,"Tags":"python,mysql,dbf,dbase","A_Id":16302184,"CreationDate":"2013-02-25T03:45:00.000","Title":"What's the best way to routinely import DBase (dbf) files into MySQL tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a small web application which might need to eventually scale somewhat, and am curious about Google App Engine. However, I am experiencing a problem with the development server (dev_appserver.py):\nAt seemingly random, requests will take 20-30 seconds to complete, even if there is no hard computation or data usage. One request might be really quick, even after changing a script of static file, but the next might be very slow. It seems to occur more systematically if the box has been left for a while without activity, but not always.\nCPU and disk access is low during the period. There is not allot of data in my application either. \nDoes anyone know what could cause such random slowdowns? I've Google'd and searched here, but need some pointers.. \/: I've also tried --clear_datastore and --use_sqlite, but the latter gives an error: DatabaseError('file is encrypted or is not a database',). 
Looking for the file, it does not seem to exist.\nI am on Windows 8, python 2.7 and the most recent version of the App Engine SDK.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":237,"Q_Id":15098051,"Users Score":2,"Answer":"Don't worry about it. It (IIRC) keeps the whole DB (datastore) in memory using a \"emulation\" of the real thing. There are lots of other issues that you won't see when deployed. \nI'd suggest that your hard drive is spinning down and the delay you see is it taking a few seconds to wake back up. \nIf this becomes a problem, develop using the deployed version. It's not so different.","Q_Score":2,"Tags":"python,google-app-engine","A_Id":15098634,"CreationDate":"2013-02-26T19:54:00.000","Title":"Google App Engine development server random (?) slowdowns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm doing a small web application which might need to eventually scale somewhat, and am curious about Google App Engine. However, I am experiencing a problem with the development server (dev_appserver.py):\nAt seemingly random, requests will take 20-30 seconds to complete, even if there is no hard computation or data usage. One request might be really quick, even after changing a script of static file, but the next might be very slow. It seems to occur more systematically if the box has been left for a while without activity, but not always.\nCPU and disk access is low during the period. There is not allot of data in my application either. \nDoes anyone know what could cause such random slowdowns? I've Google'd and searched here, but need some pointers.. \/: I've also tried --clear_datastore and --use_sqlite, but the latter gives an error: DatabaseError('file is encrypted or is not a database',). Looking for the file, it does not seem to exist.\nI am on Windows 8, python 2.7 and the most recent version of the App Engine SDK.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":237,"Q_Id":15098051,"Users Score":0,"Answer":"Does this happen in all web browsers? I had issues like this when viewing a local app engine dev site in several browsers at the same time for cross-browser testing. IE would then struggle, with requests taking about as long as you describe.\nIf this is the issue, I found the problems didn't occur with IETester.\nSorry if it's not related, but I thought this was worth mentioning just in case.","Q_Score":2,"Tags":"python,google-app-engine","A_Id":15106246,"CreationDate":"2013-02-26T19:54:00.000","Title":"Google App Engine development server random (?) slowdowns","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How do I save an open excel file using python= I currently read the excel workbook using XLRD but I need to save the excel file so any changes the user inputs are read. \nI have done this using a VBA script from within excel which saves the workbook every x seconds, but this is not ideal.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":758,"Q_Id":15114329,"Users Score":0,"Answer":"It looks like XLRD is used for reading the data, not interfacing with excel. 
So no, unless you use a different library, using Python is not the best way to do this. What is wrong with the VBA script?","Q_Score":0,"Tags":"python,excel,xlrd","A_Id":15114556,"CreationDate":"2013-02-27T14:17:00.000","Title":"Save open excel file using python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Django needs the MySQL-python package to manipulate MySQL, but MySQL-python doesn't support Python 3.3. I have tried MySQL-for-Python-3, but it doesn't work.\nPlease help! Thanks a lot!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":713,"Q_Id":15202503,"Users Score":0,"Answer":"As others have noted, Python 3 support in Django 1.5 is \"experimental\" and, as such, not everything should be expected to work. \nThat being said, if you absolutely need to get this working, you may be able to run the 2to3 tool on a source version of MySQL-python to translate it to Python 3 (and build against Python 3 headers if required).","Q_Score":1,"Tags":"python,mysql,django","A_Id":15203056,"CreationDate":"2013-03-04T13:18:00.000","Title":"How can I use MySQL with Python 3.3 and Django 1.5?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So, in order to avoid the \"no one best answer\" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we have want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached.\nThe application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something \"weird\" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.\nThe ideal answer would point to example code, of course.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":14722,"Q_Id":15254538,"Users Score":14,"Answer":"Tornado is designed to be stateless and doesn't have session support out of the box. \nUse secure cookies to store sensitive information like user_id. \nUse standard cookies to store non-critical information. 
\nFor storing large objects - use standard scheme - MySQL + memcache.","Q_Score":14,"Tags":"python,tornado","A_Id":15265556,"CreationDate":"2013-03-06T17:55:00.000","Title":"standard way to handle user session in tornado","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"So, in order to avoid the \"no one best answer\" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we have want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached.\nThe application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something \"weird\" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.\nThe ideal answer would point to example code, of course.","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":14722,"Q_Id":15254538,"Users Score":17,"Answer":"Here's how it seems other micro frameworks handle sessions (CherryPy, Flask for example):\n\nCreate a table holding session_id and whatever other fields you'll want to track on a per session basis. Some frameworks will allow you to just store this info in a file on a per user basis, or will just store things directly in memory. If your application is small enough, you may consider those options as well, but a database should be simpler to implement on your own.\nWhen a request is received (RequestHandler initialize() function I think?) and there is no session_id cookie, set a secure session-id using a random generator. I don't have much experience with Tornado, but it looks like setting a secure cookie should be useful for this. Store that session_id and associated info in your session table. Note that EVERY user will have a session, even those not logged in. When a user logs in, you'll want to attach their status as logged in (and their username\/user_id, etc) to their session.\nIn your RequestHandler initialize function, if there is a session_id cookie, read in what ever session info you need from the DB and perhaps create your own Session object to populate and store as a member variable of that request handler.\n\nKeep in mind sessions should expire after a certain amount of inactivity, so you'll want to check for that as well. 
If you want a \"remember me\" type log in situation, you'll have to use a secure cookie to signal that (read up on this at OWASP to make sure it's as secure as possible, thought again it looks like Tornado's secure_cookie might help with that), and upon receiving a timed out session you can re-authenticate a new user by creating a new session and transferring whatever associated info into it from the old one.","Q_Score":14,"Tags":"python,tornado","A_Id":16320593,"CreationDate":"2013-03-06T17:55:00.000","Title":"standard way to handle user session in tornado","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"So, in order to avoid the \"no one best answer\" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we have want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached.\nThe application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something \"weird\" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.\nThe ideal answer would point to example code, of course.","AnswerCount":4,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":14722,"Q_Id":15254538,"Users Score":4,"Answer":"The key issue with sessions is not where to store them, is to how to expire them intelligently. Regardless of where sessions are stored, as long as the number of stored sessions is reasonable (i.e. only active sessions plus some surplus are stored), all this data is going to fit in RAM and be served fast. If there is a lot of old junk you may expect unpredictable delays (the need to hit the disk to load the session).","Q_Score":14,"Tags":"python,tornado","A_Id":16346968,"CreationDate":"2013-03-06T17:55:00.000","Title":"standard way to handle user session in tornado","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a problem in kettle connecting python. In kettle, I only find the js script module.\nDoes kettle support python directly? I mean, can I call a python script in kettle without using js or others?\nBy the way, I want to move data from Oracle to Mongo regularly. I choose to use python to implement the transformation. 
So without external files, does it have some easy methods to keep the synchronization between a relational db and a non-relational db?\nThanks a lot.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6043,"Q_Id":15263196,"Users Score":2,"Answer":"It doesn't support it directly from what I've seen.\nHowever there is a mongodb input step, and a lot of work has been done on it recently (and is still ongoing). \nSo given there is a mongodb input step, if you're using an ETL tool already then why would you want to make it execute a python script to do the job?","Q_Score":3,"Tags":"python,kettle","A_Id":15274794,"CreationDate":"2013-03-07T04:39:00.000","Title":"how to call python script in kettle","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a Django application that needs to interact with a mongoDB instance (preferably through django's ORM). The meat of the application still uses a relational database - but I just need to interact with mongo for a single specific model. \nWhich mongo driver\/subdriver for python will suit my needs best?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":258,"Q_Id":15314025,"Users Score":0,"Answer":"You could use django-nonrel, which is a fork of Django and will let you use the same ORM.\nIf you don't want a forked Django you could use MongoEngine, which has a similar syntax, or otherwise just raw pymongo.","Q_Score":0,"Tags":"python,django,mongodb","A_Id":15498874,"CreationDate":"2013-03-09T18:00:00.000","Title":"Use MongoDB with Django but also use relational database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have created a cronjob in Python. The purpose is to insert data into a table from another one based on certain conditions. There are more than 65000 records to be inserted. \nI have executed the cronjob and have seen more than 25000 records inserted. But after that the records are getting automatically deleted from that table. Even the records that had already been inserted into the table that day, before executing the cronjob, are getting deleted.\n \"The current database is hosted in Xeround cloud.\"\nIs MySQL doing so, i.e. some kind of rollback or something?\nDoes anybody have any idea about this? Please give me a solution.\nThanks in advance..","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":637,"Q_Id":15332618,"Users Score":1,"Answer":"Run your Django ORM statement in the Django shell and print the traceback. Look for delete statements in the Django traceback SQL.","Q_Score":4,"Tags":"mysql,django,python-2.7,xeround","A_Id":15388969,"CreationDate":"2013-03-11T06:40:00.000","Title":"Records getting deleted from Mysql table automatically","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently exploring using python to develop my server-side implementation. 
I've decided to use SQLAlchemy for database stuff.\nWhat I'm not currently to sure about is how it should be set up so that more than one developer can work on the project. For the code it is not a problem but how do I handle the database modifications? How do the users sync databases and how should potential data be set up? Should\/can each developer use their own sqlite db for development?\nFor production postgresql will be used but the developers must be able to work offline.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":15345864,"Users Score":0,"Answer":"Make sure you have a python programs or programs to fill databases with test data from scratch. It allows each developer to work from different starting points, but also test with the same environment.","Q_Score":0,"Tags":"python,database,development-environment","A_Id":15346132,"CreationDate":"2013-03-11T18:28:00.000","Title":"Multi developer environment python and sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I know how to download Excel files from Google Drive in .csv format. However, since .csv files do not support multiple sheets, I have developed a system in a for loop to add the '&grid=tab_number' to the file download url so that I can download each sheet as its own .csv file. The problem I have run into is finding out how many sheets are in the excel workbook on the Google Drive so I know how many times to set the for loop for.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":15456709,"Users Score":0,"Answer":"Ended up just downloading with xlrd and using that. Thanks for the link Rob.","Q_Score":0,"Tags":"python,excel,google-drive-api","A_Id":15505507,"CreationDate":"2013-03-17T01:39:00.000","Title":"Complicated Excel Issue with Google API and Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge csv file which contains millions of records and I want to load it into Netezza DB using python script I have tried simple insert query but it is very very slow. \nCan point me some example python script or some idea how can I do the same?\nThank you","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4583,"Q_Id":15592980,"Users Score":0,"Answer":"You need to get the nzcli installed on the machine that you want to run nzload from - your sysadmin should be able to put it on your unix\/linux application server. There's a detailed process to setting it all up, caching the passwords, etc - the sysadmin should be able to do that to.\nOnce it is set up, you can create NZ control files to point to your data files and execute a load. The Netezza Data Loading guide has detailed instructions on how to do all of this (it can be obtained through IBM).\nYou can do it through aginity as well if you have the CREATE EXTERNAL TABLE privledge - you can do a INSERT INTO FROM EXTERNAL ... 
REMOTESOURCE ODBC to load the file from an ODBC connection.","Q_Score":2,"Tags":"python,netezza","A_Id":15643468,"CreationDate":"2013-03-23T22:45:00.000","Title":"How to use NZ Loader (Netezza Loader) through Python Script?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge csv file which contains millions of records and I want to load it into Netezza DB using python script I have tried simple insert query but it is very very slow. \nCan point me some example python script or some idea how can I do the same?\nThank you","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4583,"Q_Id":15592980,"Users Score":1,"Answer":"You can use nz_load4 to load the data. This is the support utility in \/nz\/support\/contrib\/bin.\nThe syntax is the same as nzload; by default nz_load4 will load the data using 4 threads and you can go up to 32 threads by using the -tread option.\nFor more details use nz_load4 -h. \nThis will create the log files based on the number of threads, like if","Q_Score":2,"Tags":"python,netezza","A_Id":17522337,"CreationDate":"2013-03-23T22:45:00.000","Title":"How to use NZ Loader (Netezza Loader) through Python Script?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For my app, I need to determine the nearest points to some other point and I am looking for a simple but relatively fast (in terms of performance) solution. I was thinking about using PostGIS and GeoDjango but I think my app is not really that \"geographic\" (I still don't really know what that means though). The geographic part (around 5 percent of the whole) is that I need to keep coordinates of objects (people and places) and then there is this task to find the nearest points. To put it simply, PostGIS and GeoDjango seem to be overkill here. \nI was also thinking of django-haystack with SOLR or Elasticsearch because I am going to need strong, strong text search capabilities and these engines also have these \"geographic\" features. But I am not sure about it either as I am afraid of core db <-> search engine db synchronisation and hardware requirements for these engines. At the moment I am more inclined to use postgreSQL trigrams and some custom way to do that \"find near points\" problem. Is there any good one?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":689,"Q_Id":15593572,"Users Score":0,"Answer":"You're probably right, PostGIS\/GeoDjango is probably overkill, but making your own Django app would not be too much trouble for your simple task. Django offers a lot in terms of templating, etc. and with the built in admin makes it pretty easy to enter single records. 
And GeoDjango is part of contrib, so you can always use it later if your project needs it.","Q_Score":0,"Tags":"python,django,postgresql,postgis,geodjango","A_Id":15593621,"CreationDate":"2013-03-24T00:09:00.000","Title":"Django + postgreSQL: find near points","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In my company we want to build an application in Google app engine which will manage user provisioning to Google apps. But we do not really know what data source to use?\nWe made two propositions :\n\nspreadsheet which will contains users' data and we will use spreadsheet API to get this data and use it for user provisioning\nDatastore which will contains also users' data and this time we will use Datastore API.\n\nPlease note that my company has 3493 users and we do not know too many advantages and disadvantages of each solution.\nAny suggestions please?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":248,"Q_Id":15671591,"Users Score":0,"Answer":"If you use the Datastore API, you will also need to build out a way to manage users data in the system.\nIf you use Spreadsheets, that will serve as your way to manage users data, so in that way managing the data would be taken care of for you.\nThe benefits to use the Datastore API would be if you'd like to have a seamless integration of managing the user data into your application. Spreadsheet integration would remain separate from your main application.","Q_Score":0,"Tags":"python,google-app-engine,google-sheets,google-cloud-datastore","A_Id":15671792,"CreationDate":"2013-03-27T23:37:00.000","Title":"Datastore vs spreadsheet for provisioning Google apps","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"If I've been given a Query object that I didn't construct, is there a way to directly modify its WHERE clause? I'm really hoping to be able remove some AND statements or replace the whole FROM clause of a query instead of starting from scratch. \nI'm aware of the following methods to modify the SELECT clause:\nQuery.with_entities(), Query.add_entities(), Query.add_columns(), Query.select_from()\nwhich I think will also modify the FROM. And I see that I can view the WHERE clause with Query.whereclause, but the docs say that it's read-only. \nI realize I'm thinking in SQL terms, but I'm more familiar with those concepts than the ORM, at this point. 
Any help is very appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":489,"Q_Id":15705511,"Users Score":2,"Answer":"you can modify query._whereclause directly, but I'd seek to find a way to not have this issue in the first place - whereever it is that the Query is generated should be factored out so that the non-whereclause version is made available.","Q_Score":2,"Tags":"python,orm,sqlalchemy,where-clause","A_Id":15707037,"CreationDate":"2013-03-29T14:41:00.000","Title":"SQLAlchemy ORM: modify WHERE clause","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installd Python 2.7.3 on Linux 64 bit machine. I have Oracle 11g client(64bit) as well installed. And I set ORACLE_HOME, PATH, LD_LIBRARY_PATH, and installed cx_oracle 5.1.2 version for Python 2.7 & Oracle 11g. But ldd command on cx_oracle is unable to find libclntsh.so.11.1.\nI tried creating symlinks to libclntsh.so.11.1 under \/usr\/lib64, updated oracle.conf file under \/etc\/ld.so.conf.d\/. Tried all possible solutions that have been discussed on this issue on the forums, but no luck.\nPlease let me know what am missing.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":412,"Q_Id":15740464,"Users Score":0,"Answer":"The issue with me was that I installed python, cx_oracle as root but Oracle client installation was done by \"oracle\" user. I got my own oracle installation and that fixed the issue. \nLater I ran into PyUnicodeUCS4_DecodeUTF16 issues with Python and for that I had to install python with \u2014enable-unicode=ucs4 option","Q_Score":0,"Tags":"python,cx-oracle","A_Id":15745441,"CreationDate":"2013-04-01T09:00:00.000","Title":"cx_oracle unable to find Oracle Client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I try to connect to a remote oracle server by cx_Oracle:\ndb = cx_Oracle.connect('username', 'password', dsn_tns)\nbut it says databaseError: ORA-12541 tns no listener","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":17494,"Q_Id":15772351,"Users Score":1,"Answer":"In my case it was due to the fact that my server port was wrong:\n\n.\/install_database_new.sh localhost:1511 XE full \n\nI changed the port to \"1521\" and I could connect.","Q_Score":6,"Tags":"python,cx-oracle","A_Id":46728202,"CreationDate":"2013-04-02T19:10:00.000","Title":"ocx_Oracle ORA-12541 tns no listener","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Sqlalchemy in a multitenant Flask application and need to create tables on the fly when a new tenant is added. I've been using Table.create to create individual tables within a new Postgres schema (along with search_path modifications) and this works quite well.\nThe limitation I've found is that the Table.create method blocks if there is anything pending in the current transaction. I have to commit the transaction right before the .create call or it will block. It doesn't appear to be blocked in Sqlalchemy because you can't Ctrl-C it. 
You have to kill the process. So, I'm assuming it's something further down in Postgres.\nI've read in other answers that CREATE TABLE is transactional and can be rolled back, so I'm presuming this should be working. I've tried starting a new transaction with the current engine and using that for the table create (vs. the current Flask one) but that hasn't helped either.\nDoes anybody know how to get this to work without an early commit (and risking partial dangling data)?\nThis is Python 2.7, Postgres 9.1 and Sqlalchemy 0.8.0b2.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1872,"Q_Id":15774899,"Users Score":3,"Answer":"(Copy from comment)\nAssuming sess is the session, you can do sess.execute(CreateTable(tenantX_tableY)) instead.\nEDIT: CreateTable is only one of the things being done when calling table.create(). Use table.create(sess.connection()) instead.","Q_Score":0,"Tags":"python,postgresql,sqlalchemy,ddl,flask-sqlalchemy","A_Id":15775816,"CreationDate":"2013-04-02T21:38:00.000","Title":"How do you create a table with Sqlalchemy within a transaction in Postgres?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a python app that connects to perforce on a daily basis. The app gets the contents of an excel file on perfoce, parses it, and copies some data to a database. The file is rather big, so I would like to keep track of which revision of the file the app last read on the database, this way i can check to see if the revision number is higher and avoid reading the file if it has not changed.\nI could make do with getting the revision number, or the changelist number when the file was last checked in \/ changed. Or if you have any other suggestion on how to accomplish my goal of avoiding doing an unnecessary read of the file.\nI'm using python 2.7 and the perforce-python API","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1797,"Q_Id":15795038,"Users Score":2,"Answer":"Several options come to mind.\n\nThe simplest approach would be to always let your program use the same client and let it sync the file. You could let your program call p4 sync and see if you get a new version or not. Let it continue if you get a new version. This approach has the advantage that you don't need to remember any states\/version from the previous run of your program.\nIf you don't like using a fixed client you could let your program always check the current head revision of the file in question: \np4 fstat \/\/depot\/path\/yourfile |grep headRev | sed 's\/.*headRev \\(.*\\)\/\\1\/'\nYou could store that version for the next run of your program in some temp file and compare versions each time.\nIf you run your program at fixed times (e.g. via cron) you could check the last modification time (either with p4 filelog or with p4 fstat) and if the time is between the time of the last run and the current time then you need to process the file. 
This option is a bit intricate since you need to parse those different time formats.","Q_Score":0,"Tags":"python,python-2.7,perforce","A_Id":15806216,"CreationDate":"2013-04-03T18:23:00.000","Title":"How to get head revision number of a file, or the changelist number when it was checked in \/ changed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am searching for a persistent data storage solution that can handle heterogenous data stored on disk. PyTables seems like an obvious choice, but the only information I can find on how to append new columns is a tutorial example. The tutorial has the user create a new table with added column, copy the old table into the new table, and finally delete the old table. This seems like a huge pain. Is this how it has to be done?\nIf so, what are better alternatives for storing mixed data on disk that can accommodate new columns with relative ease? I have looked at sqlite3 as well and the column options seem rather limited there, too.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2350,"Q_Id":15797163,"Users Score":5,"Answer":"Yes, you must create a new table and copy the original data. This is because Tables are a dense format. This gives it a huge performance benefits but one of the costs is that adding new columns is somewhat expensive.","Q_Score":5,"Tags":"python,pytables","A_Id":19470951,"CreationDate":"2013-04-03T20:13:00.000","Title":"Is the only way to add a column in PyTables to create a new table and copy?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a web application in Python (on Apache server on a Linux system) that needs to connect to a Postgres database. It therefore needs a valid password for the database server. It seems rather unsatisfactory to hard code the password in my Python files.\nI did wonder about using a .pgpass file, but it would need to belong to the www-data user, right? By default, there is no \/home\/www-data directory, which is where I would have expected to store the .pgpass file. Can I just create such a directory and store the .pgpass file there? And if not, then what is the \"correct\" way to enable my Python scripts to connect to the database?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1239,"Q_Id":15895788,"Users Score":1,"Answer":"No matter what approach you use, other apps running as www-data will be able to read your password and log in as you to the database. Using peer auth won't help you out, it'll still trust all apps running under www-data.\nIf you want your application to be able to isolate its data from other databases you'll need to run it as a separate user ID. The main approaches with this are:\n\nUse the apache suexec module to run scripts as a separate user;\nUse fast-cgi (fcgi) or scgi to run the cgi as a different user; or\nHave the app run its own minimal HTTP server and have Apache reverse proxy for it\n\nOf these, by far the best option is usually to use scgi\/fcgi. 
It lets you easily run your app as a different unix user but avoids the complexity and overhead of reverse proxying.","Q_Score":3,"Tags":"python,apache,postgresql,mod-wsgi","A_Id":15897981,"CreationDate":"2013-04-09T07:23:00.000","Title":"\"Correct\" way to store postgres password in python website","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having trouble finding this answer anywhere on the internet. I want to be able to monitor a row in a MySQL table for changes and when this occurs, run a Python function. This Python function I want to run has nothing to do with MySQL; it just enables a pin on a Raspberry Pi. I have tried looking into SQLAlchemy; however, I can't tell if it is a trigger or a data mapping. Is something like this even possible?\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":6558,"Q_Id":15903357,"Users Score":4,"Answer":"What about a cron job instead of create a loop? I think it's a bit nicer.","Q_Score":1,"Tags":"python,sql,sqlalchemy,raspberry-pi","A_Id":15904750,"CreationDate":"2013-04-09T13:31:00.000","Title":"How to execute Python function when value in SQL table changes?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a couple of python scripts which I plan to put up on a server and run them repeatedly once a day. This python script does some calculation and finally uploads the data to a central database. Of course to connect to the database a password and username is required. Is it safe to input this username and password on my python script. If not is there any better way to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":284,"Q_Id":15905113,"Users Score":0,"Answer":"Create a DB user with limited access rights, for example, to that only table where it uploads data to. Hardcode that user in your script or pass it as command line arguments. There is little else you can do for a automated script because it has to use some username and password to connect to the DB somehow.\nYou could encrypt the credentials and decrypt them in your script, but once a sufficiently determined attacker gets access to your user account and script extracting the username and password from a plain text script should not be too hard. You could use a compiled script to hide the credentials from the prying eyes, but again, it depends on how valuable access to your database is.","Q_Score":0,"Tags":"python","A_Id":15907470,"CreationDate":"2013-04-09T14:47:00.000","Title":"Connecting to a database using python and running it as a cron job","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's a reasonable default for pool_size in a ZODB.DB call in a multi-threaded web application?\nLeaving the actual default value 7 gives me some connection WARNINGs even when I'm the only one navigating through db-interacting handlers. Is it possible to set a number that's too high? 
What factors play into deciding what exactly to set it to?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":485,"Q_Id":15914198,"Users Score":4,"Answer":"The pool size is only a 'guideline'; the warning is logged when you exceed that size; if you were to use double the number of connections an CRITICAL log message would be registed instead. These are there to indicate you may be using too many connections in your application.\nThe pool will try to reduce the number of retained connections to the pool size as you close connections.\nYou need to set it to the maximum number of threads in your application. For Tornado, which I believe uses asynchronous events instead of threading almost exclusively, that might be harder to determine; if there is a maximum number of concurrent connections configurable in Tornado, then the pool size needs to be set to that number.\nI am not sure how the ZODB will perform when your application scales to hundreds or thousands of concurrent connections, though. I've so far only used it with at most 100 or so concurrent connections spread across several processes and even machines (using ZEO or RelStorage to serve the ZODB across those processes).\nI'd say that if most of these connections only read, you should be fine; it's writing on the same object concurrently that is ZODB's weak point as far as scalability is concerned.","Q_Score":2,"Tags":"python,connection-pooling,zodb","A_Id":15919692,"CreationDate":"2013-04-09T23:12:00.000","Title":"Reasonable settings for ZODB pool_size","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to implement nosetests for Python code using a MongoDB store. Is there any python library which permits me initializing a mock in-memory MongoDB server?\nI am using continuous integration. So, I want my tests to be independent of any MongoDB running server. \nIs there a way to mock mongoDM Server in memory to test the code independently of connecting to a Mongo server?\nThanks in advance!","AnswerCount":4,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":14649,"Q_Id":15915031,"Users Score":4,"Answer":"I don\u2019t know about Python, but I had a similar concern with C#. I decided to just run a real instance of Mongo on my workstation pointed at an empty directory. It\u2019s not great because the code isn\u2019t isolated but it\u2019s fast and easy.\nOnly the data access layer actually calls Mongo during the test. The rest can rely on the mocks of the data access layer. 
I didn\u2019t feel like faking Mongo was worth the effort when really I want to verify the interaction with Mongo is correct anyway.","Q_Score":17,"Tags":"python,mongodb,python-2.7,pymongo","A_Id":15915744,"CreationDate":"2013-04-10T00:42:00.000","Title":"Use mock MongoDB server for unit test","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We need to store a text field ( say 2000 characters) and its unique hash ( say SHA1 ) in a MySQL table.\nTo test that text already exists in the MySQL table, we generate SHA1 of the text , and find whether it exists in the unique field hash .\nNow lets assume there are two texts:\n\n\"This is the text which will be stored in the database, and its hash will be generated\"\n\"This is the text,which will be stored in the database and its hash will be generated.\"\n\nNotice the minor differences.\nLets say 1 has already been added to the database, the check for 2 will not work as their SHA1 hashes will be drastically different.\nOne obvious solution is to use Leveinstein distance, or difflib to iterate over all already added text fields to fine near matches from the MySQL table.\nBut that is not performance oriented.\nIs there a good hashing algorithm which has a correlation with the text content ? i.e. Two hashes generated for very similar texts will be very similar in themselves.\nThat way it would be easier to detect possible duplicates before adding them in the MySQL table.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":496,"Q_Id":15919063,"Users Score":1,"Answer":"I highly doubt anything you're looking for exists, so I propose a simpler solution:\nCome up with a simple algorithm for normalizing your text, e.g.:\n\nNormalize whitespace\nRemove punctuation\n\nThen, calculate the hash of that and store it in a separate column (normalizedHash) or store an ID to a table of normalized hashes. Then you can compare the two different entries by their normalized content.","Q_Score":1,"Tags":"python,mysql,string-matching","A_Id":15919118,"CreationDate":"2013-04-10T07:00:00.000","Title":"Good hashing algorithm with proximity to original text input , less avalanche effect?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wrote a little script that copies files from bucket on one S3 account to the bucket in another S3 account.\nIn this script I use bucket.copy_key() function to copy key from one bucket in another bucket.\nI tested it, it works fine, but the question is: do I get charged for copying files between S3 to S3 in same region? \nWhat I'm worry about that may be I missed something in boto source code, and I hope it's not store the file on my machine, than send it to another S3. \nAlso (sorry, if its to much questions in one topic) if I upload and run this script from EC2 instance will I get charge for bandwidth?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":322,"Q_Id":15956099,"Users Score":3,"Answer":"If you are using the copy_key method in boto then you are doing server-side copying. 
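A minimal sketch of the server-side copy described here, using boto's copy_key (bucket names and the key prefix are hypothetical):

```python
import boto

conn = boto.connect_s3()  # credentials come from the environment or ~/.boto
src = conn.get_bucket('source-bucket')
dst = conn.get_bucket('destination-bucket')

for key in src.list(prefix='reports/'):
    # copy_key(new_key_name, src_bucket_name, src_key_name) issues an S3 COPY
    # request, so the object bytes never pass through the local machine.
    dst.copy_key(key.name, src.name, key.name)
```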
There is a very small per-request charge for COPY operations just as there are for all S3 operations but if you are copying between two buckets in the same region, there is no network transfer charges. This is true whether you run the copy operations on your local machine or on an EC2 instance.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,boto,data-transfer","A_Id":15957021,"CreationDate":"2013-04-11T18:24:00.000","Title":"Will I get charge for transfering files between S3 accounts using boto's bucket.copy_key() function?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Things to note in advance:\n\nI am using wampserver 2.2\nIve forwarded port 80\nI added a rule to my firewall to accept traffic through port 3306\nI have added \"Allow from all\" in directory of \"A file i forget\"\nMy friend can access my phpmyadmin server through his browser \nI am quite the novice, so bear with me.\n\nI am trying to get my friend to be able to alter my databases on my phpmyadmin server through \npython. I am able to do so on the host machine using \"127.0.0.1\" as the HOST. My Question is, does he have to use my external ip as the HOST or my external ip\/phpmyadmin\/ as the HOST? And if using the external ip iscorrect...What could the problem be?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":15958249,"Users Score":0,"Answer":"If your phpmyadmin runs on the same machine as mysql-server, 127.0.0.1 is enough (and safer if your mysql server binds to 127.0.0.1, rather than 0.0.0.0) if you use tcp(rather than unix socket).","Q_Score":0,"Tags":"python,sql,phpmyadmin,mysql-python,host","A_Id":16370493,"CreationDate":"2013-04-11T20:27:00.000","Title":"What do I use for HOST to connect to a remote server with mysqldb python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So there has been a lot of hating on singletons in python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using\/querying a Database? Why would I make a new instance for every simple query, when I could reuse a present connection already setup again? What would be a pythonic approach\/alternative to this?\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":6703,"Q_Id":15958678,"Users Score":1,"Answer":"If you're using an object oriented approach, then abamet's suggestion of attaching the database connection parameters as class attributes makes sense to me. The class can then establish a single database connection which all methods of the class refer to as self.db_connection, for example.\nIf you're not using an object oriented approach, a separate database connection module can provide a functional-style equivalent. Devote a module to establishing a database connection, and simply import that module everywhere you want to use it. Your code can then refer to the connection as db.connection, for example. 
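A minimal sketch of that module-level approach (MySQLdb and the connection parameters are placeholders; use whatever driver and credentials your project actually needs):

```python
# db.py -- import this module wherever a connection is needed.
import MySQLdb

connection = MySQLdb.connect(
    host="localhost",
    user="appuser",
    passwd="secret",
    db="appdb",
)
```

Callers then just do import db and use db.connection.cursor().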
Since modules are effectively singletons, and the module code is only run on the first import, you will be re-using the same database connection each time.","Q_Score":7,"Tags":"python,database,singleton","A_Id":15960691,"CreationDate":"2013-04-11T20:53:00.000","Title":"DB-Connections Class as a Singleton in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So there has been a lot of hating on singletons in python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using\/querying a Database? Why would I make a new instance for every simple query, when I could reuse a present connection already setup again? What would be a pythonic approach\/alternative to this?\nThank you!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6703,"Q_Id":15958678,"Users Score":7,"Answer":"Normally, you have some kind of object representing the thing that uses a database (e.g., an instance of MyWebServer), and you make the database connection a member of that object.\nIf you instead have all your logic inside some kind of function, make the connection local to that function. (This isn't too common in many other languages, but in Python, there are often good ways to wrap up multi-stage stateful work in a single generator function.)\nIf you have all the database stuff spread out all over the place, then just use a global variable instead of a singleton. Yes, globals are bad, but singletons are just as bad, and more complicated. There are a few cases where they're useful, but very rare. (That's not necessarily true for other languages, but it is for Python.) And the way to get rid of the global is to rethink you design. There's a good chance you're effectively using a module as a (singleton) object, and if you think it through, you can probably come up with a good class or function to wrap it up in.\n\nObviously just moving all of your globals into class attributes and @classmethods is just giving you globals under a different namespace. But moving them into instance attributes and methods is a different story. That gives you an object you can pass around\u2014and, if necessary, an object you can have 2 of (or maybe even 0 under some circumstances), attach a lock to, serialize, etc.\nIn many types of applications, you're still going to end up with a single instance of something\u2014every Qt GUI app has exactly one MyQApplication, nearly every web server has exactly one MyWebServer, etc. No matter what you call it, that's effectively a singleton or global. And if you want to, you can just move everything into attributes of that god object.\nBut just because you can do so doesn't mean you should. You've still got function parameters, local variables, globals in each module, other (non-megalithic) classes with their own instance attributes, etc., and you should use whatever is appropriate for each value.\nFor example, say your MyWebServer creates a new ClientConnection instance for each new client that connects to you. You could make the connections write MyWebServer.instance.db.execute whenever they want to execute a SQL query\u2026 but you could also just pass self.db to the ClientConnection constructor, and each connection then just does self.db.execute. So, which one is better? 
Well, if you do it the latter way, it makes your code a lot easier to extend and refactor. If you want to load-balance across 4 databases, you only need to change code in one place (where the MyWebServer initializes each ClientConnection) instead of 100 (every time the ClientConnection accesses the database). If you want to convert your monolithic web app into a WSGI container, you don't have to change any of the ClientConnection code except maybe the constructor. And so on.","Q_Score":7,"Tags":"python,database,singleton","A_Id":15958721,"CreationDate":"2013-04-11T20:53:00.000","Title":"DB-Connections Class as a Singleton in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an error no such table: mytable, even though it is defined in models\/tables.py. I use sqlite. Interesting enough, if I go to admin panel -> my app -> database administration then I see a link mytable, however when I click on it then I get no such table: mytable.\nI don't know how to debug such error? \nAny ideas?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1115,"Q_Id":16026776,"Users Score":3,"Answer":"web2py keeps the structure it thinks the table has in a separate file. If someone has manually dropped the table, web2py will still think it exists, but of course you get an error when you try to actually use the table\nLook for the *.mytable.table file in the databases directory","Q_Score":2,"Tags":"python,web2py","A_Id":16026857,"CreationDate":"2013-04-16T00:21:00.000","Title":"web2py. no such table error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I understand how to save a redis database using bgsave. However, once my database server restarts, how do I tell if a saved database is present and how do I load it into my application. I can tolerate a few minutes of lost data, so I don't need to worry about an AOF, but I cannot tolerate the loss of, say, an hour's worth of data. So doing a bgsave once an hour would work for me. I just don't see how to reload the data back into the database.\nIf it makes a difference, I am working in Python.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1375,"Q_Id":16068644,"Users Score":1,"Answer":"You can stop redis and replace dump.rdb in \/var\/lib\/redis (or whatever file is in the dbfilename variable in your redis.conf). 
Then start redis again.","Q_Score":2,"Tags":"python,redis,persistence,reload","A_Id":16069631,"CreationDate":"2013-04-17T19:33:00.000","Title":"How to load a redis database after","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that retrieves the newest 5 records from a mysql database and sends email notification to a user containing this information.\nI would like the user to receive only new records and not old ones.\nI can retrieve data from mysql without problems...\nI've tried to store it in text files and compare the files but, of course, the text files containing freshly retrieved data will always have 5 records more than the old one.\nSo I have a logic problem here that, being a newbie, I can't tackle easily.\nUsing lists is also an idea but I am stuck in the same kind of problem.\nThe infamous 5 records can stay the same for one week and then we can have a new record or maybe 3 new records a day.\nIt's quite unpredictable but more or less that should be the behaviour.\nThank you so much for your time and patience.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":16078856,"Users Score":2,"Answer":"Are you assigning a unique incrementing ID to each record? If you are, you can create a separate table that holds just the ID of the last record fetched, that way you can only retrieve records with IDs greater than this ID. Each time you fetch, you could update this table with the new latest ID.\nLet me know if I misunderstood your issue, but saving the last fetched ID in the database could be a solution.","Q_Score":0,"Tags":"python","A_Id":16079138,"CreationDate":"2013-04-18T09:11:00.000","Title":"How to check if data has already been previously used","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to scrap about 40 random webpages at the same time.These pages vary on each request.\nI have used rpcs in python to fetch the urls and scraped the data using BeautifulSoup. It takes about 25 seconds to scrap all the data and display on the screen. \nTo increase the speed i stored the data in appengine datastore so that each data is scraped only once and can be accessed from there quickly.\nBut the problem is-> as the size of the data increases in the datastore, it is taking too long to fetch the data from the datastore(more than the scraping).\nShould i use memcache Or shift to mysql? Is mysql faster than gae-datastore?\nOr is there any other better way to fetch the data as quickly as possible?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":402,"Q_Id":16098570,"Users Score":0,"Answer":"Based on what I know about your app it would make sense to use memcache. 
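A minimal sketch of that cache-aside pattern with App Engine's memcache API (scrape_and_store and the URL-as-key scheme are hypothetical):

```python
from google.appengine.api import memcache

def get_page_data(url):
    data = memcache.get(url)
    if data is None:
        data = scrape_and_store(url)        # slow path: scrape, then persist
        memcache.set(url, data, time=3600)  # keep the cached copy for an hour
    return data
```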
It will be faster, and will automatically take care of things like expiring stale cache entries.","Q_Score":0,"Tags":"python,mysql,google-app-engine,google-cloud-datastore,web-scraping","A_Id":16131039,"CreationDate":"2013-04-19T06:29:00.000","Title":"What is the fastest way to get scraped data from so many web pages?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've had my python program removed from windows a while ago, and recently downloaded python2.7.4 from the main site, but when I type \"python\" in the Windows PowerShell(x86) prompt from C:, I get the message \"'python' is not recognized as an internal or external command, operable program or batch file.\", and I'd like to find out how to fix this.\nI get the same message when I'm in the actual python27 folder (and the python.exe is indeed there). However, when I type in .\\python, it runs as expected, and my computer can run other .exe's just fine. I'm using Windows 7 Home Premium Service Pack 1 on a Sony VAIO laptop. I'm not very familiar with the inner workings of my computer, so I'm not sure where to look from here.\nMy current path looks like this, with the python folder at the very end:\n%SystemRoot%\\system32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\Common Files\\Microsoft Shared\\Windows Live;C:\\Program Files (x86)\\Common Files\\Microsoft Shared\\Windows Live;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\WIDCOMM\\Bluetooth Software\\;C:\\Program Files\\WIDCOMM\\Bluetooth Software\\syswow64;C:\\Program Files (x86)\\Common Files\\Roxio Shared\\10.0\\DLLShared\\;C:\\Program Files (x86)\\Common Files\\Roxio Shared\\DLLShared\\;C:\\Program Files (x86)\\Common Files\\Adobe\\AGL;C:\\Program Files (x86)\\Windows Live\\Shared;C:\\Program Files\\Java\\jdk1.6.0_23\\bin;c:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\;c:\\Program Files\\Microsoft SQL Server\\100\\Tools\\Binn\\;c:\\Program Files\\Microsoft SQL Server\\100\\DTS\\Binn\\;C:\\Program Files (x86)\\MySQL\\MySQL Workbench CE 5.2.42;C:\\Program Files\\MySQL\\MySQL Server 5.5\\bin;C:\\Program Files (x86)\\apache-ant-1.8.4\\bin;C:\\Program Files\\TortoiseSVN\\bin;C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\Common Files\\Microsoft Shared\\Windows Live;C:\\Program Files (x86)\\Common Files\\Microsoft Shared\\Windows Live;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\WIDCOMM\\Bluetooth Software\\;C:\\Program Files\\WIDCOMM\\Bluetooth Software\\syswow64;C:\\Program Files (x86)\\Common Files\\Roxio Shared\\10.0\\DLLShared\\;C:\\Program Files (x86)\\Common Files\\Roxio Shared\\DLLShared\\;C:\\Program Files (x86)\\Common Files\\Adobe\\AGL;C:\\Program Files (x86)\\Windows Live\\Shared;C:\\Program Files\\Java\\jdk1.6.0_23\\bin;c:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\;c:\\Program Files\\Microsoft SQL Server\\100\\Tools\\Binn\\;c:\\Program Files\\Microsoft SQL Server\\100\\DTS\\Binn\\;C:\\Program Files (x86)\\MySQL\\MySQL Workbench CE 5.2.42;C:\\Program Files\\MySQL\\MySQL Server 5.5\\bin;C:\\Program Files (x86)\\apache-ant-1.8.4\\bin;C:\\Program Files\\TortoiseSVN\\bin;C:\\Program Files\\Java\\jdk1.6.0_23\\bin;C:\\Python27","AnswerCount":1,"Available 
Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1533,"Q_Id":16107658,"Users Score":1,"Answer":"Making the comments an answer for future reference:\nHave a ; at the end of the PATH and logout and log back in.","Q_Score":1,"Tags":"python,windows,powershell,path,exe","A_Id":16108206,"CreationDate":"2013-04-19T15:02:00.000","Title":"Can't open python.exe in Windows Powershell","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My little website has a table of comments and a table of votes. Each user of the website gets to vote once on each comment.\nWhen displaying comments to the user, I will select from the comments table and outerjoin a vote if one exists for the current user.\nIs there a way to make a query where the vote will be attached to the comment through comment.my_vote ?\nThe way I'm doing it now, the query is returning a list for each result - [comment, vote] - and I'm passing that directly to my template. I'd prefer if the vote could be a child object of the comment.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":149,"Q_Id":16114939,"Users Score":0,"Answer":"In the end I decided that working with the tuple returned by the query wasn't a problem.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":17140662,"CreationDate":"2013-04-19T23:35:00.000","Title":"SqlAlchemy: Join onto another object","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any SQL injection equivalents, or other vulnerabilities I should be aware of when using NoSQL?\nI'm using Google App Engine DB in Python2.7, and noticed there is not much documentation from Google about security of Datastore.\nAny help would be appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":973,"Q_Id":16134927,"Users Score":7,"Answer":"Standard SQL injection techniques rely on the fact that SQL has various statements to either query or modify data. The datastore has no such feature. The GQL (the query language for the datastore) can only be used to query, not modify. Inserts, updates, and deletes are done using a separate method that does not use a text expression. Thus, the datastore is not vulnerable to such injection techniques. In the worst case, an attacker could only change the query to select data you did not intend, but never change it.","Q_Score":2,"Tags":"python,security,google-app-engine,nosql,google-cloud-datastore","A_Id":16140194,"CreationDate":"2013-04-21T18:51:00.000","Title":"NDB\/DB NoSQL Injection Google Datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am totally fresh and noob as you can be on Twisted. I chose a database proxy as my final project. The idea is, have a mysql as a database. A twisted proxy runs in between client and the database.The proxy makes the methods like UPDATE,SELECT,INSERT through its XMLRPC to the client. And, the methods itself in the proxy hits the database and grabs the data. And, I was thinking of some caching mechanism too on the proxy. 
So, any heads up on the project? How does chaching work in twisted","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":166,"Q_Id":16155776,"Users Score":0,"Answer":"As you use XML-RPC, you will have to write simple Twisted web application that handles XML-RPC calls. There are many possibilities for cache: expiring, storing on disk, invalidating, etc etc. You may start from simple dict for storing queries and find its limitations.","Q_Score":1,"Tags":"python,twisted","A_Id":16171818,"CreationDate":"2013-04-22T20:07:00.000","Title":"Database Proxy using Twisted","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a rails developer that is learning python and I am doing a project using the pyramid framework. I am used to having some sort of way of rolling back the database changes If I change the models in some sort of way. Is there some sort of database rollback that works similar to the initialize_project_db command?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":139,"Q_Id":16157144,"Users Score":2,"Answer":"initialize_db is not a migration script. It is for bootstrapping your model and that's that. If you want to tie in migrations with upgrade\/rollback support, look at alembic for SQL schema migrations.","Q_Score":0,"Tags":"python,database,pyramid","A_Id":16159421,"CreationDate":"2013-04-22T21:36:00.000","Title":"Is there some sort of way to roll back the initialize_project_db script in pyramid?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an old SQLite 2 database that I would like to read using Python 3 (on Windows). Unfortunately, it seems that Python's sqlite3 library does not support SQLite 2 databases. Is there any other convenient way to read this type of database in Python 3? Should I perhaps compile an older version of pysqlite? Will such a version be compatible with Python 3?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":445,"Q_Id":16193630,"Users Score":0,"Answer":"As the pysqlite author I am pretty sure nobody has ported pysqlite 1.x to Python 3 yet. The only solution that makes sense effort-wise is the one theomega suggested.\nIf all you need is access the data from Python for importing them elsewhere, but doing the sqlite2 dump\/sqlite3 restore dance is not possible, there is an option, but it is not convenient: Use the builtin ctypes module to access the necessary functions from the SQLite 2 DLL. You would then implement a minimal version of pysqlite yourself that only wraps what you really need.","Q_Score":0,"Tags":"sqlite,python-3.x","A_Id":23542492,"CreationDate":"2013-04-24T13:43:00.000","Title":"Read an SQLite 2 database using Python 3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's the best way to automatically query several dozen MySQL databases with a script on a nightly basis? 
The script usually returns no results, so I'd ideally have it email or notify me if any are ever returned.\nI've looked into PHP, Ruby and Python for this, but I'm a little stumped as to how best to handle this.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":307,"Q_Id":16203859,"Users Score":1,"Answer":"I believe the only one can answer this question is you. All 3 examples you gave can do what you need to do with cron to automate the job. But the best script language to be used is the one you are most comfortable to use.","Q_Score":0,"Tags":"php,python,mysql,sql,ruby","A_Id":16203901,"CreationDate":"2013-04-24T23:19:00.000","Title":"What's the best way to automate running MySQL scripts on several databases on a daily basis?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am quite new to heroku and I reached a bump in my dev...\nI am trying to write a server\/client kind of application...on the server side I will have a DB(I installed postgresql for python) and I was hoping I could reach the server, for now, via a python client(for test purposes) and send data\/queries and perform basic tasks on the DB.\nI am using python with Heroku, I manage to install the DB and it seems to be working(i.e i can query, insert, delete, etc...)\nnow all i want is to write a server(in python) that would be my app and would listen on a port and receive messages and then perform whatever tasks it is asked to do...I tought about using sockets for this and have managed to write a basic server\/client locally...however when I deploy the app on heroku i cannot connect to the server and my code is basically worthless\ncan somebody plz advise on the basic framework for this sort of requirements...surely I am not the first guy to want to write a client\/server app...if you could point to a tutorial\/doc i would be much obliged.\nThx","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":569,"Q_Id":16244924,"Users Score":3,"Answer":"Heroku is for developing Web (HTTP, HTTPS) applications. You can't deploy code that uses socket to Heroku.\nIf you want to run your app on Heroku, the easier way is to use a web framework (Flask, CherryPy, Django...). They usually also come with useful libraries and abstractions for you to talk to your database.","Q_Score":0,"Tags":"python,heroku","A_Id":16245012,"CreationDate":"2013-04-26T20:44:00.000","Title":"how to write a client\/server app in heroku","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a SQLAlchemy Session object and would like to know whether it is dirty or not. The exact question what I would like to (metaphorically) ask the Session is: \"If at this point I issue a commit() or a rollback(), the effect on the database is the same or not?\".\nThe rationale is this: I want to ask the user wether he wants or not to confirm the changes. But if there are no changes, I would like not to ask anything. Of course I may monitor myself all the operations that I perform on the Session and decide whether there were modifications or not, but because of the structure of my program this would require some quite involved changes. 
If SQLAlchemy already offered this opportunity, I'd be glad to take advantage of it.\nThanks everybody.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12854,"Q_Id":16256777,"Users Score":0,"Answer":"Sessions have a private _is_clean() member which seems to return true if there is nothing to flush to the database. However, the fact that it is private may mean it's not suitable for external use. I'd stop short of personally recommending this, since any mistake here could obviously result in data loss for your users.","Q_Score":18,"Tags":"python,sqlalchemy","A_Id":16257019,"CreationDate":"2013-04-27T20:54:00.000","Title":"How to check whether SQLAlchemy session is dirty or not","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a web app in GAE that needs to make use of some simple relationships between the datastore entities. Additionally, I want to do what I can from the outset to make import and exportability easier, and to reduce development time to migrate the application to another platform.\nI can see two possible ways of handling relationships between entities in the datastore:\n\nIncluding the key (or ID) of the related entity as a field in the entity\nOR\nCreating a unique identifier as an application-defined field of an entity to allow other entities to refer to it\n\nThe latter is less integrated with GAE, and requires some kind of mechanism to ensure the unique identifier is in fact unique (which in turn will rely on ancestor queries).\nHowever, the latter may make data portability easier. For example, if entities are created on a local machine they can be uploaded (provided the unique identifier is unique) without problem. By contrast, relying on the GAE defined ID will not work as the ID will not be consistent from the development to the deployed environment.\nThere may be data exportability considerations too that mean an application-defined unique identifier is preferable.\nWhat is the best way of doing this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":16266979,"Users Score":1,"Answer":"GAE's datastore just doesn't export well to SQL. There's often situations where data needs to be modeled very differently on GAE to support certain queries, ie many-to-many relationships. Denormalizing is also the right way to support some queries on GAE's datastore. Ancestor relationships are something that don't exist in the SQL world.\nIn order to import export data, you'll need to write scripts specific to your data models.\nIf you're planning for compatibility with SQL, use CloudSQL instead of the datastore.\nIn terms of moving data between dev\/production, you've already identified the ways to do it. There's no real \"easy\" way.","Q_Score":0,"Tags":"google-app-engine,python-2.7,google-cloud-datastore","A_Id":16268751,"CreationDate":"2013-04-28T19:40:00.000","Title":"GAE: planning for exportability and relational databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using wx.Grid to build spreadsheetlike input interface. I want to lock the size of the cells so the user can not change them. 
I have successfully disabled the drag-sizing with grid.EnableDragGridSize(False) of the grid but user can still resize the cells by using borders between column and row labels. I am probably missing something in wxGrid documentation.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":426,"Q_Id":16278613,"Users Score":0,"Answer":"I found the solution. To completely lock user ability to resize cells it is needed to use .EnableDragGridSize(False) , .DisableDragColSize() and .DisableDragRowSize() methods.","Q_Score":1,"Tags":"python,wxpython","A_Id":16279016,"CreationDate":"2013-04-29T12:26:00.000","Title":"wx.Grid cell size lock","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to easily call a python script from php using system(), although there are several options. They all work fine, except they all fail. Through trial and error I have narrowed it down to it failing on \nimport MySQLdb\nI am not too familiar with php, but I am using it in a pinch. I understand while there could be reasons why such a restriction would be in place, but this will be on a local server, used in house, and the information in the mysql db is backed up and not to critical. Meaning such a restriction can be reasonably ignored. \nBut how to allow php to call a python script that imports mysql? I am on a Linux machine (centOs) if that is relevant.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":322,"Q_Id":16281823,"Users Score":1,"Answer":"The Apache user (www-data in your case) has a somewhat restricted environment. Check where the Python MySQLdb package is installed and edit the Apache user's env (cf Apache manual and your distrib's one about this) so it has a usable Python environment with the right PYTHONPATH etc.","Q_Score":0,"Tags":"php,python,mysql,linux","A_Id":16282538,"CreationDate":"2013-04-29T14:55:00.000","Title":"call python script from php that connects to MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have some free form entries for names, where some are in the format \"Last Name, First Name\" and others are in the format \"First Name Last Name\" (eg \"Bob MacDonald\" and \"MacDonald. Bob\" are both present).\nFrom what I understand, Lucene indexing does not allow for wildcards in the beginning of the sentence, so what would be some ways in which I could find both. This is for neo4j and py2neo, so solutions in either lucene pattern matching, or in python regex matching are welcome.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":194,"Q_Id":16290237,"Users Score":1,"Answer":"Can you just use OR? 
\"Hilary Clinton\" OR \"Clinton, Hilary\"?","Q_Score":2,"Tags":"python,regex,neo4j,lucene","A_Id":16290406,"CreationDate":"2013-04-30T00:09:00.000","Title":"Lucene or Python: Select both \"Hilary Clinton\" and \"Clinton, Hilary\" name entries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"There are two possible cases where I am finding MySQL and RDBMS too slow. I need a recommendation for a better alternative in terms of NOSQL.\n1) I have an application that's saving tons of emails for later analysis. Email content is saved in a simple table with a couple of relations to another two tables. Columns are sender, recepient, content, headers, timestamp, etc.\nNow that the records are a close to a million, it's taking longer to search through. Basically there are some pattern searches we are running.\nWhich would be the best free\/open source NOSQL for replacement to store mails so that searching through them would be faster?\n2) Another use case is fundamentally ann asset management library consisting of files. System very simplar to mails. Here we have files of all type of extensions. When the files are created or changed, we are storing meta data of the files in a table. Again data sizes have grown big over time, that searching them is not easy.\nIdeas welcome. Someone suggested Mongo. Is there anything better and faster?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":16304959,"Users Score":1,"Answer":"If search is your primary use case, I'd look into a search solution like ElasticSearch or Solr. Even if some databases support some sort of full text indexing, they're not optimized for this problem.","Q_Score":0,"Tags":"python,nosql","A_Id":16306049,"CreationDate":"2013-04-30T16:42:00.000","Title":"Possible NoSQL cases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been running a Python octo.py script to do word counting\/author on a series of files. The script works well -- I tried it on a limited set of data and am getting the correct results.\nBut when I run it on the complete data set it takes forever. I am running on a windows XP laptop with dual core 2.33 GHz and 2 GB RAM.\nI opened up my CPU usage and it shows the processors running at 0%-3% of maximum. \nWhat can I do to force Octo.py to utilize more CPU?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":196,"Q_Id":16376374,"Users Score":0,"Answer":"As your application isn't very CPU intensive, the slow disk turns out to be the bottleneck. Old 5200 RPM laptop hard drives are very slow, which, in addition to fragmentation and low RAM (which impacts disk caching), make reading very slow. This in turns slows down processing and yields low CPU usage. 
You can try defragmenting, compressing the input files (as they become smaller in disk size, processing speed will increase) or other means of improving IO.","Q_Score":0,"Tags":"python-2.7,multiprocessing,cpu-usage","A_Id":16378262,"CreationDate":"2013-05-04T16:20:00.000","Title":"Octo.py only using between 0% and 3% of my CPUs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a function to do a bulk-save to a mongoDB using pymongo, is there a way of doing it? I've already tried using insert and it works for new records but it fails on duplicates. I need the same functionality that you get using save but with a collection of documents (it replaces an already added document with the same _id instead of failing).\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":311,"Q_Id":16379254,"Users Score":1,"Answer":"you can use bulk insert with option w=0 (ex safe=False), but then you should do a check to see if all documents were actually inserted if this is important for you","Q_Score":3,"Tags":"python,mongodb,pymongo","A_Id":16380066,"CreationDate":"2013-05-04T21:42:00.000","Title":"Is there a pymongo (or another Python library) bulk-save?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am executing update in mysqldb which is changing the values of part of a key and field. When I execute the query in python it triggers something in the database to cause it to add extra rows. When I execute the same exact query from mysql workbench it performs the update correctly without adding extra rows. What is the difference between calling from the application and calling from python?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":16420461,"Users Score":0,"Answer":"There was a trigger activating that I did not know about. 
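When behaviour differs between a Python client and Workbench like this, one quick check is to list the triggers on the schema from the same MySQLdb connection the script uses. A hedged sketch follows; the connection details are placeholders.

```python
import MySQLdb

# Placeholder credentials, adjust for your environment.
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()

# SHOW TRIGGERS lists triggers defined in the current schema, which is a
# quick way to spot one firing behind an UPDATE.
cur.execute("SHOW TRIGGERS")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```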
Thanks for the help","Q_Score":0,"Tags":"python,mysql","A_Id":16943780,"CreationDate":"2013-05-07T13:35:00.000","Title":"MySQLdb for python behaves differently for queries than the mysql workbench browser","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I have text that is saved in a Postgresql database is there any way to execute that text as Python code and potentially have it update the same database?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":16470079,"Users Score":0,"Answer":"let me see if I understand what you are trying to accomplish:\n\nstore ad-hoc user code in a varchar field on a database\nread and execute said code\nallow said code to affect the database in question, say drop table ...\n\nAssuming that I've got it, you could write something that\n\nreads the table holding the code (use pyodbc or something)\nruns an eval on what was pulled from the db - this will let you execute ANY code, including self updating code\n\nare you sure this is what you want to do?","Q_Score":0,"Tags":"python,postgresql","A_Id":16470721,"CreationDate":"2013-05-09T19:57:00.000","Title":"Execute text in Postgresql database as Python code","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can connect to my local mysql database from python, and I can create, select from, and insert individual rows.\nMy question is: can I directly instruct mysqldb to take an entire dataframe and insert it into an existing table, or do I need to iterate over the rows? \nIn either case, what would the python script look like for a very simple table with ID and two data columns, and a matching dataframe?","AnswerCount":9,"Available Count":1,"Score":-0.022218565,"is_accepted":false,"ViewCount":168445,"Q_Id":16476413,"Users Score":-1,"Answer":"df.to_sql(name = \"owner\", con= db_connection, schema = 'aws', if_exists='replace', index = >True, index_label='id')","Q_Score":60,"Tags":"python,mysql,pandas,mysql-python","A_Id":56185092,"CreationDate":"2013-05-10T06:29:00.000","Title":"How to insert pandas dataframe via mysqldb into database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do something like \nselect * from table where name like '%name%'\nis there anyway to do this in Hbase ? and if there is a way so how to do that\nps. I use HappyBase to communicate with Hbase","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":364,"Q_Id":16606906,"Users Score":1,"Answer":"HBase provides a scanner interface that allows you to enumerate over a range of keys in an HTable. HappyBase has support for scans and this is documented pretty well in their API.\nSo this would solve your question if you were asking for a \"like 'name%'\" type of query which searches for anything that begins with the prefix 'name'. 
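A minimal sketch of that prefix-style scan with HappyBase; the Thrift host and table name are made up.

```python
import happybase

# Hypothetical Thrift host and table name.
connection = happybase.Connection("hbase-thrift-host")
table = connection.table("users")

# Scan only row keys that start with the prefix -- the equivalent of
# LIKE 'name%' when the name is (part of) the row key.
for key, data in table.scan(row_prefix="name"):
    print("%s -> %r" % (key, data))

connection.close()
```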
I am assuming name is the row key in your table, otherwise you would need a secondary index which relates the name field to the row key value of the table or go with the sub-awesome approach of scanning the entire table and doing the matching in Python yourself, depending on your usecase...\nEdit: HappyBase also supports passing a 'filter' string assuming you are using a recent HBase version. You could use the SubStringComparator or RegexStringComparator to fit your needs.","Q_Score":0,"Tags":"python,hbase,thrift","A_Id":16608107,"CreationDate":"2013-05-17T10:33:00.000","Title":"Hbase wildcard support","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to read and write data's into .xlsx extentsion files using python. And I have to use cell formatting features like merging cells,bold,font size,color etc..So which python module is good to use ?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":367,"Q_Id":16651124,"Users Score":1,"Answer":"openpyxl is the only library I know of that can read and write xlsx files. It's down side is that when you edit an existing file it doesn't save the original formatting or charts. A problem I'm dealing with right now. If anyone knows a work around please let me know.","Q_Score":0,"Tags":"python","A_Id":24190976,"CreationDate":"2013-05-20T13:55:00.000","Title":"Which module has more option to read and write xlsx extension files using Python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In MongoDB if we provide a coordinate and a distance, using $near operator will find us the documents nearby within the provided distance, and sorted by distance to the given point. \nDoes Redis provide similar functions?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":798,"Q_Id":16761134,"Users Score":1,"Answer":"Noelkd was right. There is no inbuilt function in Redis.\nI found that the simplest solution is to use geohash to store the hashed lat\/lng as keys.\nGeohash is able to store locations nearby with similar structure, e.g.\nA hash of a certain location is ebc8ycq, then the nearby locations can be queried with the wildcard ebc8yc* in Redis.","Q_Score":2,"Tags":"python,mongodb,redis,geospatial","A_Id":16886089,"CreationDate":"2013-05-26T16:13:00.000","Title":"How to find geographically near documents in Redis, like $near in MongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running my Django site on appengine. In the datastore, there is an entity kind \/ table X which is only updated once every 24 hours. 
\nX has around 15K entries and each entry is of form (\"unique string of length <20\", integer).\nIn some context, a user request involves fetching an average of 200 entries from X, which is quite costly if done individually.\nWhat is an efficient way I can adopt in this situation?\nHere are some ways I thought about, but have some doubts in them due to inexperience\n\nUsing the Batch query supported by db.get() where a list of keys may be passed as argument and the get() will try to fetch them all in one walk. This will reduce the time quite significantly, but still there will be noticeable overhead and cost. Also, I am using Django models and have no idea about how to relate these two.\nManually copying the whole database into memory (like storing it in a map) after each update job which occurs every 24 hour. This will work really well and also save me lots of datastore reads but I have other doubts. Will it remain persistent across instances? What other factors do I need to be aware of which might interfere? This or something like this seems perfect for my situation.\n\nThe above are just what I could come up with in first thought. There must be ways I am unaware\/missing.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":16773961,"Users Score":1,"Answer":"Your total amout of data is very small and looks like a dict. Why not save it (this object) as a single entry in the database or the blobstore and you can cache this entry.","Q_Score":1,"Tags":"python,django,google-app-engine","A_Id":16775062,"CreationDate":"2013-05-27T13:08:00.000","Title":"A way to optimize reading from a datastore which updates once a day","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I need to represent instances of Python \"Long integer\" in MySQL. I wonder what the most appropriate SQL data type I should use.\nThe Python documentation (v2.7) says (for numbers.Integral):\n\nLong integers\n\nThese represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2\u2019s complement which gives the illusion of an infinite string of sign bits extending to the left.\n\n\nMy read of the MySQL documentation suggests that BIGINT is limited to 64 bits. The DECIMAL type seems to be limited to 65 digits. I can, of course, use BLOB.\nThe application needs to support very large amounts of data, but I don't know yet how big these long integers might get, nor how many of them I'm likely to see.\nI'd like to preserve the spirit of the Python long integer definition, which suggests BLOB. I'd also like to avoid re-inventing the wheel, and so I am appealing to the stackoverflow hive-mind.\nSuggestions?","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":1333,"Q_Id":16867823,"Users Score":3,"Answer":"Yes if you really need unlimited precision then you'll have to use a blob because even strigns are limited. \nBut really I can almost guarantee that you'll be fine with a NUMERIC\/DECIMAL data type. 65 digits means that you can represent numbers in the range (-10^65, 10^65). How large is this? To give you some idea: The number of atoms in the whole universe is estimated to be about 10^80. 
If you only need positive numbers you can further increase the range by a factor of 2 by subtracting 10^65 -1 beforehand.","Q_Score":2,"Tags":"python,mysql,mysql-python","A_Id":16867914,"CreationDate":"2013-06-01T00:26:00.000","Title":"What are the options for storing Python long integers in MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can someone advise on what database is better for storing textual information such as part of speech sequences, dependencies, sentences used in NLP project written in python. Now this information is stored in files and they need to be parsed every time in order to extract the mentioned blocks which are used as an input for next processing stage. \nOptions considered - MongoDB, Cassandra and MySQL. Are NoSQL databases better in this type of application.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2075,"Q_Id":16872221,"Users Score":6,"Answer":"This really depends on what exactly you are storing and which operations you will perform on this data.\nSQL vs. NoSQL is a very fundamental decision and no one can give you a good advice here. If your data fits relational model well, then, SQL (PostgreSQL or MySQL) is your choice. If your data is more like documents, use MongoDB.\nThat said, just recently I made a search engine. We had to store indexed pages (raw text), the same text but tokenized and some additional metadata. MongoDB performed really well.","Q_Score":0,"Tags":"python,mysql,mongodb,nlp,bigdata","A_Id":16873052,"CreationDate":"2013-06-01T11:31:00.000","Title":"Database for NLP project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have my Django app running and I just added South. I performed some migrations which worked fine locally, but I am seeing some database errors on my Heroku version. I'd like to view the current schema for my database both locally and on Heroku so I can compare and see exactly what is different. Is there an easy way to do this from the command line, or a better way to debug this?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3348,"Q_Id":16942317,"Users Score":3,"Answer":"From the command line you should be able to do heroku pg:psql to connect directly via PSQL to your database and from in there \\dt will show you your tables and \\d will show you your table schema.","Q_Score":2,"Tags":"python,django,postgresql,heroku,django-south","A_Id":16942831,"CreationDate":"2013-06-05T14:15:00.000","Title":"How to View My Postgres DB Schema from Command Line","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am executing an update query using MySQLdb and python 2.7. Is it possible to know which rows affected by retrieving all their ids?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":67,"Q_Id":16961438,"Users Score":2,"Answer":"You can get the number of affected rows by using cursor.rowcount. 
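For illustration, a small hedged sketch of reading rowcount after an UPDATE with MySQLdb; the table and credentials are placeholders.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()

cur.execute("UPDATE items SET state = %s WHERE state = %s", ("done", "pending"))
conn.commit()

# rowcount reports how many rows the last statement touched,
# but not which rows those were.
print("affected rows: %d" % cur.rowcount)

cur.close()
conn.close()
```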
The information which rows are affected is not available since the mysql api does not support this.","Q_Score":1,"Tags":"python,mysql-python","A_Id":16961869,"CreationDate":"2013-06-06T11:53:00.000","Title":"Python, mySQLdb: Is it possible to retrieve updated keys, after update?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"MySQL is installed at \/usr\/local\/mysql\nIn site.cfg the path for mysql_config is \/usr\/local\/mysql\/bin\/mysql_config\nbut when i try to build in the terminal im getting this error:\nhammads-imac-2:MySQL-python-1.2.4b4 syedhammad$ sudo python setup.py build\nrunning build\nrunning build_py\ncopying MySQLdb\/release.py -> build\/lib.macosx-10.8-intel-2.7\/MySQLdb\nrunning build_ext\nbuilding '_mysql' extension\nclang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'beta',4) -D_version_=1.2.4b4 -I\/usr\/local\/mysql\/include -I\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/include\/python2.7 -c _mysql.c -o build\/temp.macosx-10.8-intel-2.7\/_mysql.o -Wno-null-conversion -Os -g -fno-strict-aliasing -arch x86_64\nunable to execute clang: No such file or directory\nerror: command 'clang' failed with exit status 1\nHelp Please","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":141,"Q_Id":16985604,"Users Score":2,"Answer":"You probably need Xcode's Command Line Tools.\nDownload the lastest version of Xcode, then go to \"Preferences\", select \"Download\" tab, then install Command Line Tools.","Q_Score":1,"Tags":"python,mysql,macos","A_Id":16985650,"CreationDate":"2013-06-07T13:41:00.000","Title":"Configuring MySQL with python on OS X lion","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We are currently developing an application that makes heavy use of PostgreSQL. For the most part we access the database using SQLAlchemy, and this works very well. For testing the relevant objects can be either mocked, or used without database access. But there are some parts of the system that run non-standard queries. These subsystems have to create temporary tables insert a huge number of rows and then merge data back into the main table.\nCurrently there are some SQL statements in these subsystems, but this makes the relevant classes tightly coupled with the database, which in turn makes things harder to unit-test.\nBasically my question is, is there any design pattern for solving this problem? The only thing that I could come up with is to put these SQL statements into a separate class and just pass an instance to the other class. This way I can mock the query-class for unit-tests, but it still feels a bit clumsy. Is there a better way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":16999676,"Users Score":0,"Answer":"So after playing around with it some more I now have a solution that is halfway decent. 
I split the class in question up into three separate classes:\n\nA class that provides access to the required data;\nA context manager that supports the temporary table stuff;\nAnd the old class with all the logic (sans the database stuff);\n\nWhen I instantiate my logic class I supply it with an instance of the aforementioned classes. It works ok, abstraction is slightly leaky (especially the context manager), but I can at least unit test the logic properly now.","Q_Score":3,"Tags":"python,sql,design-patterns","A_Id":17017714,"CreationDate":"2013-06-08T12:48:00.000","Title":"Design Pattern for complicated queries","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been writing a Python web app (in Flask) for a while now, and I don't believe I fully grasp how database access should work across multiple request\/response cycles. Prior to Python my web programming experience was in PHP (several years worth) and I'm afraid that my PHP experience is misleading some of my Python work.\nIn PHP, each new request creates a brand new DB connection, because nothing is shared across requests. The more requests you have, the more connections you need to support. However, in a Python web app, where there is shared state across requests, DB connections can persist.\nSo I need to manage those connections, and ensure that I close them. Also, I need to have some kind of connection pool, because if I have just one connection shared across all requests, then requests could block waiting on DB access, if I don't have enough connections available.\nIs this a correct understanding? Or have I identified the differences well? In a Python web app, do I need to have a DB connection pool that shares its connections across many requests? And the number of connections in the pool will depend on my application's request load?\nI'm using Psycopg2.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":351,"Q_Id":17012349,"Users Score":4,"Answer":"Have you looked in to SQLAlchemy at all? It takes care of a lot of the dirty details - it maintains a pool of connections, and reuses\/closes them as necessary.","Q_Score":4,"Tags":"python,psycopg2","A_Id":17012369,"CreationDate":"2013-06-09T17:36:00.000","Title":"Database access strategy for a Python web app","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Need to get one row from a table, and delete the same row.\nIt does not matter which row it is. The function should be generic, so the column names are unknown, and there are no identifiers. (Rows as a whole can be assumed to be unique.)\nThe resulting function would be like a pop() function for a stack, except that the order of elements does not matter.\nPossible solutions:\n\nDelete into a temporary table.\n(Can this be done in pysqlite?)\nGet * with 1 as limit, and the Delete * with 1 as limit.\n(Is this safe if there is just one user?)\nGet one row, then delete with a WHERE clause that compares the whole row.\n(Can this be done in pysqlite?)\n\nSuggestions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":17127306,"Users Score":1,"Answer":"Well. every table in a sqlite has a rowid. 
Select one and delete it?","Q_Score":0,"Tags":"python,database,pysqlite","A_Id":17382716,"CreationDate":"2013-06-15T19:47:00.000","Title":"row_pop() function in pysqlite?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the user registration form made in django.\nI want to know the city from which the user is registering.\nIs there any way that i get the IP address of the user and then somehow get the city for that IP. using some API or something","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":17159576,"Users Score":0,"Answer":"Not in any reliable way, or at least not in Django. The problem is that user IPs are usually dynamic, hence the address is changing every couple of days. Also some ISPs soon will start to use a single IP for big blocks of users (forgot what this is called) since they are running out of IPv4 IP addresses... In other words, all users from that ISP within a whole state or even country will have a single IP address.\nSo using the IP is not reliable. You could probably figure out the country or region of the user with reasonable accuracy however my recommendation is not to use the IP for anything except logging and permission purposes (e.g. blocking a spam IP).\nIf you want user locations, you can however use HTML5 location API which will have a much better shot of getting more accurate location since it can utilize other methods such us using a GPS sensor in a phone.","Q_Score":0,"Tags":"python,django,ip","A_Id":17159679,"CreationDate":"2013-06-18T02:12:00.000","Title":"Is there any simple way to store the user location while registering in database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to put the items, scraped by my spider, in a mysql db via a mysql pipeline. Everything is working but i see some odd behaviour. I see that the filling of the database is not in the same order as the website itself. There is like a random order. Probably of the dictionary like list of the items scraped i guess.\nMy questions are:\n\nhow can i get the same order as the items of the website itself.\nhow can i reverse this order of question 1.\n\nSo items on website:\n\nA\nB\nC\nD\nE\n\nadding order in my sql:\n\nE\nD\nC\nB\nA","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":201,"Q_Id":17213515,"Users Score":0,"Answer":"Items in a database are have not a special order if you don't impose it. So you should add a timestamp to your table in the database, keep it up-to-date (mysql has a special flag to mark a field as auto-now) and use ORDER BY in your queries.","Q_Score":2,"Tags":"python,scrapy","A_Id":17213740,"CreationDate":"2013-06-20T12:20:00.000","Title":"Scrapy reversed item ordening for preparing in db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to put the items, scraped by my spider, in a mysql db via a mysql pipeline. Everything is working but i see some odd behaviour. 
I see that the filling of the database is not in the same order as the website itself. There is like a random order. Probably of the dictionary like list of the items scraped i guess.\nMy questions are:\n\nhow can i get the same order as the items of the website itself.\nhow can i reverse this order of question 1.\n\nSo items on website:\n\nA\nB\nC\nD\nE\n\nadding order in my sql:\n\nE\nD\nC\nB\nA","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":201,"Q_Id":17213515,"Users Score":1,"Answer":"It's hard to say without the actual code, but in theory..\nScrapy is completely async, you cannot know the order of items that will be parsed and processed through the pipeline.\nBut, you can control the behavior by \"marking\" each item with priority key. Add a field priority to your Item class, in the parse_item method of your spider set the priority based on the position on a web page, then in your pipeline you can either write this priority field to the database (in order to have an ability to sort later), or gather all items in a class-wide list, and in close_spider method sort the list and bulk insert it into the database.\nHope that helps.","Q_Score":2,"Tags":"python,scrapy","A_Id":17221923,"CreationDate":"2013-06-20T12:20:00.000","Title":"Scrapy reversed item ordening for preparing in db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Lets take SQLAlchemy as an example.\nWhy should I use the Flask SQLAlchemy extension instead of the normal SQLAlchemy module?\nWhat is the difference between those two?\nIsn't is perfectly possible to just use the normal module in your Flask app?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":17222824,"Users Score":4,"Answer":"The extensions exist to extend the functionality of Flask, and reduce the amount of code you need to write for common usage patterns, like integrating your application with SQLAlchemy in the case of flask-sqlalchemy, or login handling with flask-login. Basically just clean, reusable ways to do common things with a web application. \nBut I see your point with flask-sqlalchemy, its not really that much of a code saver to use it, but it does give you the scoped-session automatically, which you need in a web environment with SQLAlchemy. \nOther extensions like flask-login really do save you a lot of boilerplate code.","Q_Score":1,"Tags":"python,sqlalchemy,flask,flask-sqlalchemy","A_Id":17223377,"CreationDate":"2013-06-20T20:06:00.000","Title":"Why do Flask Extensions exist?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to save data to sqlite3 databases which will be fetched from the remote system by FTP. Each database would be given a name that is an encoding of the time and date with a resolution of 1 hour (i.e. a new database every hour).\nFrom the Python 3 sqlite3 library, would any problems be encountered if two threads try to create the database at the same time? 
Or are there protections against this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":426,"Q_Id":17274626,"Users Score":0,"Answer":"This will work just fine.\nWhen two threads are trying to create the same file, one will fail to do so, but it will continue to try to lock the file.","Q_Score":1,"Tags":"python,python-3.x,sqlite","A_Id":17275138,"CreationDate":"2013-06-24T11:41:00.000","Title":"Can sqlite3 databases be created in a thread-safe way?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a flask application which use three types of databases - MySQL, Mongo and Redis. Now, if it had been simple MySQL I could have use SQLAlchemy or something on that line for database modelling. Now, in the current scenario where I am using many different types of database in a single application, I think I will have to create custom models.\nCan you please suggest what are the best practices to do that? Or any tutorial indicating the same?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":17276970,"Users Score":0,"Answer":"It's not an efficient model, but this would work:\nYou can write three different APIs (RESTful pattern is a good idea). Each will be an independent Flask application, listening on a different port (likely over localhost, not the public IP interface).\nA forth Flask application is your main application that external clients can access. The view functions in the main application will issue API calls to the other three APIs to obtain data as they see fit.\nYou could optimize and merge one of the three database APIs into the main application, leaving only two (likely the two less used) to be implemented as APIs.","Q_Score":3,"Tags":"python,database,flask,flask-sqlalchemy","A_Id":17289054,"CreationDate":"2013-06-24T13:41:00.000","Title":"How to create models if I am using various types of database simultaneously?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for the best approach for inserting a row into a spreadsheet using openpyxl.\nEffectively, I have a spreadsheet (Excel 2007) which has a header row, followed by (at most) a few thousand rows of data. I'm looking to insert the row as the first row of actual data, so after the header. My understanding is that the append function is suitable for adding content to the end of the file.\nReading the documentation for both openpyxl and xlrd (and xlwt), I can't find any clear cut ways of doing this, beyond looping through the content manually and inserting into a new sheet (after inserting the required row). \nGiven my so far limited experience with Python, I'm trying to understand if this is indeed the best option to take (the most pythonic!), and if so could someone provide an explicit example. Specifically can I read and write rows with openpyxl or do I have to access cells? 
Additionally can I (over)write the same file(name)?","AnswerCount":12,"Available Count":1,"Score":-0.0166651236,"is_accepted":false,"ViewCount":90928,"Q_Id":17299364,"Users Score":-1,"Answer":"Unfortunately there isn't really a better way to do in that read in the file, and use a library like xlwt to write out a new excel file (with your new row inserted at the top). Excel doesn't work like a database that you can read and and append to. You unfortunately just have to read in the information and manipulate in memory and write out to what is essentially a new file.","Q_Score":19,"Tags":"python,excel,xlrd,xlwt,openpyxl","A_Id":17305443,"CreationDate":"2013-06-25T14:00:00.000","Title":"Insert row into Excel spreadsheet using openpyxl in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have a password protected XLS file which i've forgotten the password for...I'm aware it's a date within a certain range so i'm trying to write a brute forcer to try various dates of the year. However, I can't find how to use python\/java to enter the password for the file. It's protected such that I can't open the xls file unless I have it and it has some very important information on there (so important I kept the password in a safe place that I now can't find lol).\nI'm using fedora. Are there any possible suggestions? Thankyou.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":268,"Q_Id":17344335,"Users Score":0,"Answer":"If you search there are a number of applications that you can download that will unblock the workbook.","Q_Score":0,"Tags":"java,python,excel,passwords,xls","A_Id":17344366,"CreationDate":"2013-06-27T13:20:00.000","Title":"How to enter password in XLS files with python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my python\/django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file. \nIn other words: The user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned. \nNow, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could reach that? I would like to avoid writing stuff to the disc during the request.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":169,"Q_Id":17382053,"Users Score":1,"Answer":"I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API.\nWhat I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a \"file\" in that directory. When you're done populating the database, return that \"file\", then delete it. 
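A rough sketch of that create-then-return flow in a Django view, assuming a ramfs/tmpfs mount at a made-up path and placeholder table contents.

```python
import os
import sqlite3
import tempfile

from django.http import HttpResponse

def export_db(request):
    # Create the SQLite file in a tmpfs/ramfs directory (path is an assumption).
    fd, path = tempfile.mkstemp(suffix=".sqlite", dir="/mnt/ramdisk")
    os.close(fd)
    try:
        conn = sqlite3.connect(path)
        conn.execute("CREATE TABLE export (id INTEGER PRIMARY KEY, value TEXT)")
        conn.execute("INSERT INTO export (value) VALUES (?)", ("example",))
        conn.commit()
        conn.close()

        with open(path, "rb") as fh:
            data = fh.read()
    finally:
        os.remove(path)  # delete the temporary file once its bytes are read

    response = HttpResponse(data, content_type="application/x-sqlite3")
    response["Content-Disposition"] = 'attachment; filename="export.sqlite"'
    return response
```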
This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write.","Q_Score":0,"Tags":"python,django,sqlite","A_Id":17382483,"CreationDate":"2013-06-29T16:01:00.000","Title":"Python: Create and return an SQLite DB as a web request result","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"suppose there was a database table with one column, and it's a PK. To make things more specific this is a django project and the database is in mysql. \nIf I needed an additional column with all unique values, should I create a new UniqueField with unique integers, or just write a hash-like function to convert the existing PK's for each existing row (model instance) into a new unique variable. The current PK is a varchar\/ & string. \nWith creating a new column it consumes more memory but I think writing a new function and converting fields frequently has disadvantages also. Any ideas?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":17393291,"Users Score":1,"Answer":"Having a string-valued PK should not be a problem in any modern database system. A PK is automatically indexed, so when you perform a look-up with a condition like table1.pk = 'long-string-key', it won't be a string comparison but an index look-up. So it's ok to have string-valued PK, regardless of the length of the key values.\nIn any case, if you need an additional column with all unique values, then I think you should just add a new column.","Q_Score":0,"Tags":"python,mysql,django","A_Id":17393525,"CreationDate":"2013-06-30T18:03:00.000","Title":"Database design, adding an extra column versus converting existing column with a function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i am working on developing a Django application with Cassandra as the back end database. while Django supports ORM feature for SQL, i wonder if there is any thing similar for Cassandra.\nwhat would be the best approach to load the schema into the Cassandra server and perform CRUD operations.\nP.S. I am complete beginner to Cassandra.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":410,"Q_Id":17403346,"Users Score":3,"Answer":"There's an external backend for Cassandra, but it has some issues with the authentication middleware, which doesn't handle users correctly in the admin. If you use a non-relational database, you lose a lot of goodies that django has. 
You could try using Postgres' nosql extension for the parts of your data that you want to store in a nosql'y way, and the regular Postgres' tables for the rest.","Q_Score":1,"Tags":"python,django,orm,cassandra","A_Id":17403637,"CreationDate":"2013-07-01T11:25:00.000","Title":"Cassandra-Django python application approach","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a state column in my table which has the following possible values: discharged, in process and None.\nCan I fetch all the records in the following order: in process, discharged followed by None?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1384,"Q_Id":17408276,"Users Score":2,"Answer":"If you've declared that column as an enum type (as you should for cases such as these where the values are drawn from a small, fixed set of strings), then using ORDER BY on that column will order results according to the order in which the values of the enum were declared. So the datatype for that column should be ENUM('in process', 'discharged', 'None'); that will cause ORDER BY to sort in the order you desire. Specifically, each value in an enum is assigned a numerical index and that index is used when comparing enum values for sorting purposes. (The exact way in which you should declare an enum will vary according to which type of backend you're using.)","Q_Score":1,"Tags":"python,sql,sqlalchemy","A_Id":17408674,"CreationDate":"2013-07-01T15:33:00.000","Title":"Sqlalchemy order_by custom ordering?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to work with oursql in python 3.2, and it's really not going so well. \nFacts:\nI downloaded oursql binary and ran the installer.\nI have MySQL 5.1 installed.\nI separately downloaded the libmysql dll and placed it in the System32 directory.\nI downloaded cython for version 3.1 because there wasn't one for 2.7 or 3.2.\nI have python versions 2.7, 3.1, and 3.2 installed.\nI rebooted.\nI now still get the ImportError: DLL load failed: The specified module could not be found. error when running import oursql from the Python 3.1 shell.\nAny ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":196,"Q_Id":17420396,"Users Score":0,"Answer":"OK, I moved libmysql.dll to the same directory as python.exe, instead of in the DLL's folder, and it seems like it works.","Q_Score":0,"Tags":"python,mysql,python-3.x,oursql","A_Id":17420506,"CreationDate":"2013-07-02T08:00:00.000","Title":"Error on installing oursql for Python 3.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python 2.7 and Mysql. I am using multi-threading and giving connections to different threads by using PooledDB . I give db connections to different threads by\npool.dedicated_connection().Now if a thread takes a connection from pool and dies due to some reason with closing it(ie. 
without returning it to the pool).What happens to this connection.\nIf it lives forever how to return it to the pool??","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":178,"Q_Id":17423384,"Users Score":2,"Answer":"No, it does not. You have to tell the server on the other side that the connection is closed, because it can't tell the difference between \"going away\" and \"I haven't sent my next query yet\" without an explicit signal from you.\nThe connection can time out, of course, but it won't be closed or cleaned up without instructions from you.","Q_Score":0,"Tags":"python,mysql,multithreading,python-2.7","A_Id":17423440,"CreationDate":"2013-07-02T10:37:00.000","Title":"Does database connection return to pool if a thread holding it dies?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an existing MySQL database that I set up on PMA, it has FKs that references columns that are not primary keys. Now I am trying to move the database to Django and am having trouble because when I try to set up d Foreign Keys in django it automatically references the Primary Key of the table that I am attempting to reference so the data doesnt match because column A and column B do not contain the same info. Is there a way to tell django what column to reference?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":17491720,"Users Score":0,"Answer":"You can use the to_field attribute of a ForeignKey.\nDjango should detect this automatically if you use .\/manage.py inspectdb, though.","Q_Score":0,"Tags":"python,mysql,django,phpmyadmin","A_Id":17491830,"CreationDate":"2013-07-05T14:54:00.000","Title":"Moving database from PMA to Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the easiest way to export the results of a SQL Server query to a CSV file? I have read that the pymssql module is the preferred way, and I'm guessing I'll need csv as well.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1521,"Q_Id":17495581,"Users Score":0,"Answer":"Do you need to do this programmatically or is this a one-off export?\nIf the latter, the easiest way by far is to use the SSMS export wizard. In SSMS, select the database, right-click and select Tasks->Export Data.","Q_Score":0,"Tags":"python,sql-server,csv,pymssql","A_Id":17495797,"CreationDate":"2013-07-05T19:20:00.000","Title":"Export SQL Server Query Results to CSV using pymssql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"im using openpyxl to edit an excel file that contains some formulas in certain cells. Now when i populate the cells from a text file, im expecting the formula to work and give me my desired output. 
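Going back to the pymssql/CSV question above, a minimal hedged sketch combining pymssql with the csv module; the connection details and query are placeholders.

```python
import csv
import pymssql

conn = pymssql.connect(server="sqlserver-host", user="user",
                       password="secret", database="mydb")
cur = conn.cursor()
cur.execute("SELECT id, name, created FROM dbo.some_table")

with open("results.csv", "wb") as fh:  # "wb" for the csv module on Python 2
    writer = csv.writer(fh)
    writer.writerow([col[0] for col in cur.description])  # header row
    for row in cur.fetchall():
        writer.writerow(row)

conn.close()
```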
But what i observe is that the formulas get removed and the cells are left blank.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1400,"Q_Id":17522521,"Users Score":1,"Answer":"I had the same problem when saving the file with openpyxl: formulas removed.\nBut I pointed out that some intermediate formulas were still there.\nAfter some tests, it appears that, in my case, all formulas which are displaying blank result (nothing) are cleaned when the save occured, unlike the formulas with an output in the cell, which are preserved.\nex :\n=IF((SUM(P3:P5))=0;\"\";(SUM(Q3:Q5))\/(SUM(P3:P5))) => can be removed when saving because of the blank result\nex :\n=IF((SUM(P3:P5))=0;\"?\";(SUM(Q3:Q5))\/(SUM(P3:P5))) => preserved when saving\nfor my example I'm using openpyxl-2.0.3 on Windows. Open and save function calls are :\nself._book = load_workbook(\"myfile.xlsx\", data_only=False)\nself._book.save(\"myfile.xlsx\")","Q_Score":0,"Tags":"python-2.7,openpyxl","A_Id":24183661,"CreationDate":"2013-07-08T08:55:00.000","Title":"Openpyxl: Formulas getting removed when saving file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two tables with a common field I want to find all the the\n items(user_id's) which present in first table but not in second.\n\nTable1(user_id,...)\nTable2(userid,...)\nuser_id in and userid in frist and second table are the same.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":225,"Q_Id":17541225,"Users Score":1,"Answer":"session.query(Table1.user_id).outerjoin(Table2).filter(Table2.user_id == None)","Q_Score":1,"Tags":"python,sqlalchemy","A_Id":17542024,"CreationDate":"2013-07-09T06:15:00.000","Title":"find missing value between to tables in sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build Python 3.3.2 from scratch on my SLE 11 (OpenSUSE).\nDuring the compilation of Python I got the message that the modules _bz2, _sqlite and _ssl have not been compiled.\nI looked for solutions with various search engines. It is often said that you have to install the -dev packages with your package management system, but I have no root access.\nI downloaded the source packages of the missing libs, but I have no idea how to tell Python to use these libs. 
Can somebody help me?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":444,"Q_Id":17546628,"Users Score":0,"Answer":"I don't use that distro, but Linux Mint (it's based on Ubuntu).\nIn my case before the compilation of Python 3.3.2 I've installed the necessary -dev libraries:\n $ sudo apt-get install libssl-dev\n $ sudo apt-get install libbz2-dev\n ...\nThen I've compiled and installed Python and those imports work fine.\nHope you find it useful\nLe\u00f3n","Q_Score":1,"Tags":"python-3.x,sqlite,ssl,compilation,non-admin","A_Id":17979292,"CreationDate":"2013-07-09T11:05:00.000","Title":"How to build python 3.3.2 with _bz2, _sqlite and _ssl from source","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a fairly simple Python program as outlined below:\nIt has 2 threads plus the main thread. One of the threads collects some data and puts it on a Queue. \nThe second thread takes stuff off the queue and logs it. Right now it's just printing out the stuff from the queue, but I'm working on adding it to a local MySQL database.\nThis is a process that needs to run for a long time (at least a few months). \nHow should I deal with the database connection? Create it in main, then pass it to the logging thread, or create it directly in the logging thread? And how do I handle unexpected situations with the DB connection (interrupted, MySQL server crashes, etc) in a robust manner?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":17578630,"Users Score":0,"Answer":"How should I deal with the database connection? Create it in main,\n then pass it to the logging thread, or create it directly in the\n logging thread?\n\nI would perhaps configure your logging component with the class that creates the connection and let your logging component request it. This is called dependency injection, and makes life easier in terms of testing e.g. you can mock this out later. \nIf the logging component created the connections itself, then testing the logging component in a standalone fashion would be difficult. By injecting a component that handles these, you can make a mock that returns dummies upon request, or one that provides connection pooling (and so on).\nHow you handle database issues robustly depends upon what you want to happen. Firstly make your database interactions transactional (and consequently atomic). Now, do you want your logger component to bring your system to a halt whilst it retries a write. Do you want it to buffer writes up and try out-of-band (i.e. on another thread) ? Is it mission critical to write this or can you afford to lose data (e.g. abandon a bad write). I've not provided any specific answers here, since there are so many options depending upon your requirements. 
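As one concrete illustration of the injection idea, here is a hedged sketch of a logging thread that receives a connection factory instead of creating connections itself; table, credentials and the retry policy are all illustrative assumptions.

```python
import Queue
import threading

import MySQLdb

def make_connection():
    # Injected factory: tests can pass a fake that returns a mock connection.
    return MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="logs")

def log_worker(queue, connection_factory):
    conn = connection_factory()
    while True:
        item = queue.get()
        if item is None:          # sentinel to shut the thread down
            break
        try:
            cur = conn.cursor()
            cur.execute("INSERT INTO events (payload) VALUES (%s)", (item,))
            conn.commit()
        except MySQLdb.OperationalError:
            # Connection dropped: reconnect once and retry this item.
            conn = connection_factory()
            cur = conn.cursor()
            cur.execute("INSERT INTO events (payload) VALUES (%s)", (item,))
            conn.commit()
        finally:
            queue.task_done()

q = Queue.Queue()
worker = threading.Thread(target=log_worker, args=(q, make_connection))
worker.daemon = True
worker.start()
```

Because the factory is passed in, a unit test can hand the worker a stub that records executed statements instead of touching a real database.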
The above details a few possible options.","Q_Score":1,"Tags":"python,mysql,database,multithreading","A_Id":17578684,"CreationDate":"2013-07-10T18:49:00.000","Title":"Architechture of multi-threaded program using database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do you install pyodbc package on a Linux (RedHat server RHEL) onto a Zope\/Plone bundled Python path instead of in the global Python path? \nyum install pyodbc and python setup.py install, all put pyodbc in the sys python path.\nI read articles about putting pyodbc in python2.4\/site-packages\/\nI tried that, but it didn't work for my Plone external method, which still complains about no module named pyodbc.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":265,"Q_Id":17662330,"Users Score":1,"Answer":"Add the package to the eggs section in buildout and then re-run buildout.\nThere might be additional server requirements to install pyodbc.","Q_Score":1,"Tags":"python,plone,zope,pyodbc","A_Id":17794367,"CreationDate":"2013-07-15T19:32:00.000","Title":"pyodbc Installation Issue on Plone Python Path","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a client-server interface realized using the module requests as client and tornado as server. I use this to query a database, where some dataitems may not be avaiable. For example the author in a query might not be there or the book-title. \nIs there a recommended way to let my client know, what was missing? Like an HTTP 404: Author missing or something like that?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":237,"Q_Id":17678927,"Users Score":1,"Answer":"Since HTTP 404 responses can have a response body, I would put the detailed error message in the body itself. You can, for example, send the string Author Not Found in the response body. You could also send the response string in the format that your API already uses, e.g. XML, JSON, etc., so that every response from the server has the same basic shape.\nWhether using code 404 with a X Not Found message depends on the structure of your API. If it is a RESTful API, where each URL corresponds to a resource, then 404 is a good choice if the resource itself is the thing missing. If a requested data field is missing, but the requested resource exists, I don't think 404 would be a good choice.","Q_Score":0,"Tags":"python,http,tornado,http-status-codes","A_Id":17681053,"CreationDate":"2013-07-16T14:13:00.000","Title":"Can I have more semantic meaning in an http 404 error?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"my teammate and i wrote a Python script running on the same server where the database is. Now we want to know if the performance changes when we write the same code as a stored procedure in our postgres database. 
What is the difference or its the same??\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":507,"Q_Id":17682444,"Users Score":2,"Answer":"There can be differences - PostgreSQL stored procedures (functions) uses inprocess execution, so there are no any interprocess communication - so if you process more data, then stored procedures (in same language) can be faster than server side application. But speedup depends on size of processed data.","Q_Score":2,"Tags":"python,database,performance,postgresql,plpgsql","A_Id":17686435,"CreationDate":"2013-07-16T16:48:00.000","Title":"What is the difference between using a python script running on server and a stored procedure?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been using the datastore with ndb for a multiplayer app. This appears to be using a lot of reads\/writes and will undoubtedly go over quota and cost a substantial amount.\nI was thinking of changing all the game data to be stored only in memcache. I understand that data stored here can be lost at any time, but as the data will only be needed for, at most, 10 minutes and as it's just a game, that wouldn't be too bad.\nAm I right to move to solely use memcache, or is there a better method, and is memcache essentially 'free' short term data storage?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":341,"Q_Id":17702165,"Users Score":1,"Answer":"As a commenter on another answer noted, there are now two memcache offerings: shared and dedicated. Shared is the original service, and is still free. Dedicated is in preview, and presently costs $.12\/GB hour.\nDedicated memcache allows you to have a certain amount of space set aside. However, it's important to understand that you can still experience partial or complete flushes at any time with dedicated memcache, due to things like machine reboots. Because of this, it's not a suitable replacement for the datastore. \nHowever, it is true that you can greatly reduce your datastore usage with judicious use of memcache. Using it as a write-through cache, for example, can greatly reduce your datastore reads (albeit not the writes).\nHope this helps.","Q_Score":3,"Tags":"python,google-app-engine,memcached,google-cloud-datastore,app-engine-ndb","A_Id":17816617,"CreationDate":"2013-07-17T14:13:00.000","Title":"Datastore vs Memcache for high request rate game","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have some very complex XSD schemas to work with. By complex I mean that each of these XSD would correspont to about 20 classes \/ tables in a database, with each table having approximately 40 fields. And I have 18 different XSD like that to program.\nWhat I'm trying to achieve is: Get a XML file defined by the XSD and save all the data in a PostgreSQL database using SQLAlchemy. Basically I need a CRUD application that will persist a XML file in the database following the model of the XSD schema, and also be able to retrieve an object from the database and create a XML file.\nI want to avoid having to manually create the python classes, the sqlalchemy table definitions, the CRUD code. 
This would be a monumental job, subject to a lot of small mistakes, given the complexity of the XSD files.\nI can generate python classes from XSD in many ways like GenerateDS, PyXB, etc... I need to save those objects in the database. I'm open to any suggestions, even if the suggestion is conceptually different that what I'm describing.\nThank you very much","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2179,"Q_Id":17750340,"Users Score":1,"Answer":"Not sure if there is a way directly, but you could indirectly go from XSD to a SQL Server DB, and then import the DB from SQLAlchemy","Q_Score":2,"Tags":"python,xml,postgresql,xsd,sqlalchemy","A_Id":34734878,"CreationDate":"2013-07-19T15:50:00.000","Title":"Generate Python Class and SQLAlchemy code from XSD to store XML on Postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with Python, fetching huge amounts of data from MS SQL Server Database and processing those for making graphs.\nThe real issue is that I wanted to know whether it would be a good idea to repeatedly perform queries to filter the data (using pyodbc for SQL queries) using attributes like WHERE and SELECT DISTINCT etc. in queries \nOR \nTo fetch the data and use the list comprehensions, map and filter functionalities of python to filter the data in my code itself.\nIf I choose the former, there would be around 1k queries performed reducing significant load on my python code, otherwise if I choose the latter, I would be querying once and add on a bunch of functions to go through all the records I have fetched, more or less the same number of times(1k). \nThe thing is python is not purely functional, (if it was, I wouldnt be asking and would have finished and tested my work hundreds of times by now). \nWhich one would you people recommend? \nFor reference I am using Python 2.7. It would be highly appreciated if you could provide sources of information too. Also, Space is not an issue for fetching the whole data.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":470,"Q_Id":17757031,"Users Score":0,"Answer":"If you have bandwidth to burn, and prefer Python to SQL, go ahead and do one big query and filter in Python.\nOtherwise, you're probably better off with multiple queries.\nSorry, no references here. ^_^","Q_Score":1,"Tags":"python,sql-server-2008,map,bigdata","A_Id":17757423,"CreationDate":"2013-07-19T23:33:00.000","Title":"SQL query or Programmatic Filter for Big Data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a load of data in CSV format. I need to be able to index this data based on a single text field (the primary key), so I'm thinking of entering it into a database. I'm familiar with sqlite from previous projects, so I've decided to use that engine.\nAfter some experimentation, I realized that that storing a hundred million records in one table won't work well: the indexing step slows to a crawl pretty quickly. 
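To illustrate the "one big query, then filter in Python" option from the answer above: a rough pyodbc sketch in which the DSN, table, and column names are all made up:

```python
import pyodbc

conn = pyodbc.connect('DSN=reporting;UID=reader;PWD=secret')  # hypothetical DSN
cursor = conn.cursor()
cursor.execute('SELECT id, region, value FROM measurements')  # hypothetical table
rows = cursor.fetchall()

# filter once in Python instead of issuing ~1k WHERE / SELECT DISTINCT queries
north = [r for r in rows if r.region == 'north']
distinct_regions = {r.region for r in rows}
big_values = [r for r in rows if r.value > 100]
```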
I could come up with two solutions to this problem:\n\npartition the data into several tables\npartition the data into several databases\n\nI went with the second solution (it yields several large files instead of one huge file). My partition method is to look at the first two characters of the primary key: each partition has approximately 2 million records, and there are approximately 50 partitions.\nI'm doing this in Python with the sqlite3 module. I keep 50 open database connections and open cursors for the entire duration of the process. For each row, I look at the first two characters of the primary key, fetch the right cursor via dictionary lookup, and perform a single insert statement (via calling execute on the cursor).\nUnfortunately, the insert speed still decreases to an unbearable level after a while (approx. 10 million total processed records). What can I do to get around this? Is there a better way to do what I'm doing?","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":1881,"Q_Id":17826391,"Users Score":5,"Answer":"Wrap all insert commands into a single transaction.\nUse prepared statements.\nCreate the index only after inserting all the data (i.e., don't declare a primary key).","Q_Score":1,"Tags":"python,database,sqlite","A_Id":17826461,"CreationDate":"2013-07-24T06:07:00.000","Title":"What's the best way to insert over a hundred million rows into a SQLite database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Psycopg is the most popular PostgreSQL adapter for the Python programming language.\nThe name Psycopg does not make sense to me.\nI understand the last pg means Postgres, but what about Psyco?","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":2412,"Q_Id":17869761,"Users Score":11,"Answer":"I've always thought of it as psycho-Postgres.","Q_Score":21,"Tags":"python,postgresql","A_Id":17869993,"CreationDate":"2013-07-25T22:19:00.000","Title":"Where does the name `Psycopg` come from?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to copy the entire \/contentstore\/ folder on a bucket to a timestamped version. Basically \/contenstore\/ would be copied to \/contentstore\/20130729\/.\nMy entire script uses s3s3mirror first to clone my production S3 bucket to a backup. I then want to rename the backup to a timestamped copy so that I can keep multiple versions of the same. \nI have a working version of this using s3cmd but it seems to take an abnormally long time. The s3s3mirror part between the two buckets is done within minutes, possibly because it is a refresh on existing folder. But even in the case of a clean s3s3mirror (no existing contentstore on backup) it take around 20 minutes.\nOn the other hand copying the conentstore to a timestamped copy on the backup bucket takes over an hour and 10 minutes.\nAm I doing something incorrectly? 
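The three points in the SQLite answer above (single transaction, parameterized statements, build the index only after loading) look roughly like this with the sqlite3 module; the table and column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect('partition_aa.db')
conn.execute('CREATE TABLE IF NOT EXISTS records (pkey TEXT, payload TEXT)')

# one big transaction + parameterized statements via executemany
with conn:
    conn.executemany('INSERT INTO records VALUES (?, ?)', rows)  # rows: iterable of (pkey, payload) tuples

# declare the index only after the bulk load has finished
with conn:
    conn.execute('CREATE INDEX IF NOT EXISTS idx_records_pkey ON records (pkey)')
```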
Should the copy of data on the same bucket take longer than a full clone between two different buckets?\nAny ideas would be appreciated.\nP.S: The command I am running is s3cmd --recursive cp backupBucket\/contentStore\/ backupBucket\/20130729\/","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":995,"Q_Id":17931579,"Users Score":0,"Answer":"Since your source path contains your destination path, you may actually be copying things more than once -- first into the destination path, and then again when that destination path matches your source prefix. This would also explain why copying to a different bucket is faster than within the same bucket.\nIf you're using s3s3mirror, use the -v option and you'll see exactly what's getting copied. Does it show the same key being copied multiple times?","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,boto,s3cmd","A_Id":20389005,"CreationDate":"2013-07-29T18:32:00.000","Title":"Copying files in the same Amazon S3 bucket","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an html file on network which updates almost every minute with new rows in a table. At any point, the file contains close to 15000 rows I want to create a MySQL table with all data in the table, and then some more that I compute from the available data.\nThe said HTML table contains, say rows from the last 3 days. I want to store all of them in my mysql table, and update the table every hour or so (can this be done via a cron?)\nFor connecting to the DB, I'm using MySQLdb which works fine. However, I'm not sure what are the best practices to do so. I can scrape the data using bs4, connect to table using MySQLdb. But how should I update the table? What logic should I use to scrape the page that uses the least resources?\nI am not fetching any results, just scraping and writing.\nAny pointers, please?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":509,"Q_Id":17939824,"Users Score":0,"Answer":"My Suggestion is instead of updating values row by row try to use Bulk Insert in temporary table and then move the data into an actual table based on some timing key. If you have key column that will be good for reading the recent rows as you added.","Q_Score":1,"Tags":"python,mysql,beautifulsoup,mysql-python","A_Id":17940205,"CreationDate":"2013-07-30T06:33:00.000","Title":"Update a MySQL table from an HTML table with thousands of rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Title question says it all. I was trying to figure out how I could go about integrating the database created by sqlite3 and communicate with it through Python from my website. \nIf any further information is required about the development environment, please let me know.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1713,"Q_Id":17953552,"Users Score":1,"Answer":"It looks like your needs has changed and you are going into direction where static web site is not sufficient any more.\nFirstly, I would pick appropriate Python framework for your needs. 
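One possible reading of the "bulk insert into a staging table, then move it into the real table" answer above, using MySQLdb; the table layout is hypothetical and it assumes `live_rows` has a unique key on `row_key` so that `INSERT IGNORE` skips rows that already exist:

```python
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='scraper', passwd='secret', db='site')  # hypothetical credentials
cur = conn.cursor()

# bulk-load everything scraped from the HTML table into a staging table...
cur.executemany(
    'INSERT INTO staging_rows (row_key, col_a, col_b) VALUES (%s, %s, %s)',
    scraped_rows)  # scraped_rows: list of tuples produced by the bs4 scrape

# ...then move only rows that are not already present into the real table
cur.execute('INSERT IGNORE INTO live_rows SELECT * FROM staging_rows')
cur.execute('TRUNCATE TABLE staging_rows')
conn.commit()
```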
if static website was sufficient until recently Django can be perfect for you.\nNext I would suggest describing your DB schema for ORM used in chosen framework. I see no point in querying your DB using SQL until you would have a specific reason.\nAnd finally, I would start using static content of your website as templates, replacing places where dynamic data is required. Django internal template language can be easily used that way. If not, Jinja2 also could be good.\nMy advise is base on many assumptions, as your question is quite open and undefined.\nAnyway, I think it would be the best way to start transition period from static to dynamic.","Q_Score":5,"Tags":"python,sqlite,static-site","A_Id":18099967,"CreationDate":"2013-07-30T17:29:00.000","Title":"I have a static website built using HTML, CSS and Javascript. How do I integrate this with a SQLite3 database accessed with the Python API?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently running into an issue in integrating ElasticSearch and MongoDB. Essentially I need to convert a number of Mongo Documents into searchable documents matching my ElasticSearch query. That part is luckily trivial and taken care of. My problem though is that I need this to be fast. Faster than network time, I would really like to be able to index around 100 docs\/second, which simply isn't possible with network calls to Mongo.\nI was able to speed this up a lot by using ElasticSearch's bulk indexing, but that's only half of the problem. Is there any way to either bundle reads or cache a collection (a manageable part of a collection, as this collection is larger than I would like to keep in memory) to help speed this up? I was unable to really find any documentation about this, so if you can point me towards relevant documentation I consider that a perfectly acceptable answer. \nI would prefer a solution that uses Pymongo, but I would be more than happy to use something that directly talks to MongoDB over requests or something similar. Any thoughts on how to alleviate this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":17955275,"Users Score":0,"Answer":"pymongo is thread safe, so you can run multiple queries in parallel. (I assume that you can somehow partition your document space.)\nFeed the results to a local Queue if processing the result needs to happen in a single thread.","Q_Score":1,"Tags":"python,performance,mongodb,pymongo","A_Id":24357799,"CreationDate":"2013-07-30T19:04:00.000","Title":"Bundling reads or caching collections with Pymongo","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an app that employs the python sqlite3 module. My database makes use of the implicit ROWID column provided by sqlite3. I expected that the ROWIDs be reordered after I delete some rows and vacuum the database. Because in the sqlite3 official document:\n\nThe VACUUM command may change the ROWIDs of entries in any tables that\n do not have an explicit INTEGER PRIMARY KEY.\n\nMy pysqlite version is 2.6.0 and the sqlite version is 3.5.9. Can anybody tell me why it is not working? Anything I should take care when using vacuum?\nP.S. 
I have a standalone sqlite installed whose version is 3.3.6. I tested the vacuum statement in it, and the ROWIDs got updated. So could the culprit be the version? Or could it be a bug of pysqlite? \nThanks in advance for any ideas or suggestions!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":17987732,"Users Score":0,"Answer":"This behaviour is version dependent.\nIf you want a guaranteed reordering, you have to copy all records into a new table yourself.\n(This works with both implicit and explicit ROWIDs.)","Q_Score":1,"Tags":"python,sqlite,pysqlite","A_Id":17988741,"CreationDate":"2013-08-01T07:30:00.000","Title":"Why are not ROWIDs updated after VACUUM when using python sqlite3 module?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am stuck with this issue: I had some migration problems and I tried many times and on the way, I deleted migrations and tried again and even deleted one table in db. there is no data in db, so I don't have to fear. But now if I try syncdb it is not creating the table I deleted manually. \nHonestly, I get really stuck every time with this such kind of migration issues. \nWhat should I do to create the tables again?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":824,"Q_Id":17995963,"Users Score":0,"Answer":"are you using south? \nIf you are, there is a migration history database that exists.\nMake sure to delete the row mentionnaing the migration you want to run again.","Q_Score":0,"Tags":"python,django,django-south","A_Id":17996086,"CreationDate":"2013-08-01T13:51:00.000","Title":"syncdb is not creating tables again?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am stuck with this issue: I had some migration problems and I tried many times and on the way, I deleted migrations and tried again and even deleted one table in db. there is no data in db, so I don't have to fear. But now if I try syncdb it is not creating the table I deleted manually. \nHonestly, I get really stuck every time with this such kind of migration issues. \nWhat should I do to create the tables again?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":824,"Q_Id":17995963,"Users Score":0,"Answer":"Try renaming the migration file and running python manage.py syncdb.","Q_Score":0,"Tags":"python,django,django-south","A_Id":29407625,"CreationDate":"2013-08-01T13:51:00.000","Title":"syncdb is not creating tables again?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm designing a g+ application for a big international brand. the entities I need to create are pretty much in form of a graph, hence a lot of many-to-many relations (arcs) connecting nodes that can be traversed in both directions. I'm reading all the readable docs online, but I haven't found anything so far specific to ndb design best practices and guidelines. 
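The "copy all records into a new table yourself" advice from the VACUUM answer above could look like this with the sqlite3 module; the column names are placeholders:

```python
import sqlite3

conn = sqlite3.connect('app.db')
cur = conn.cursor()

# copying rows into a fresh table guarantees newly assigned, densely packed ROWIDs
cur.execute('CREATE TABLE items_new AS SELECT col_a, col_b FROM items')  # hypothetical columns
cur.execute('DROP TABLE items')
cur.execute('ALTER TABLE items_new RENAME TO items')
conn.commit()

conn.isolation_level = None  # make sure no implicit transaction is open before VACUUM
cur.execute('VACUUM')
conn.close()
```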
unfortunately I am under nda, and cannot reveal details of the app, but it can match almost one to one the context of scientific conferences with proceedings, authors, papers and topics.\nbelow the list of entities envisioned so far (with context shifted to match the topics mentioned):\n\norganization (e.g. acm)\nconference (e.g. acm multimedia)\nconference issue (e.g. acm multimedia 13)\nconference track (e.g. nosql, machine learning, computer vision, etc.)\nauthor (e.g. myself)\npaper (e.g. \"designing graph like db for ndb\")\n\nas you can see, I can visit and traverse the graph through any direction (or facet, from a frontend point of view): \n\nauthor with co-authors\nauthor to conference tracks\nconference tracks to papers\n...\n\nand so on, you fill the list.\nI want to make it straight and solid because it will launch with a lot of p.r. and will need to scale consistently overtime, both in content and number of users. I would like to code it from scratch hence designing my own models, restful api to read\/write this data, avoiding non-rel django and keeping the presentation layer to a minimum template mechanism. I need to check with the company where I work, but we might be able to release part of the code with a decent open source license (ideally, a restful service for ndb models).\nif anyone could point me towards the right direction, that would be awesome.\nthanks!\nthomas\n[edit: corrected typo related to many-to-many relations]","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":478,"Q_Id":18017150,"Users Score":1,"Answer":"There's two ways to implement one-to-many relationships in App Engine.\n\nInside entity A, store a list of keys to entities B1, B2, B3. In th old DB, you'd use a ListProperty of db.Key. In ndb you'd use a KeyProperty with repeated = True.\nInside entity B1, B2, B3, store a KeyProperty to entity A.\n\nIf you use 1:\n\nWhen you have Entity A, you can fetch B1, B2, B3 by id. This can be potentially more consistent than the results of a query. \nIt could be slightly less expensive since you save 1 read operation over a query (assuming you don't count the cost of fetching entity A). Writing B instances is slightly cheaper since it's one less index to update.\nYou're limited in the number of B instances you can store by the maximum entity size and number of indexed properties on A. This makes sense for things like conference tracks since there's generally a limited number of tracks that doesn't go into the thousands.\nIf you need to sort the order of B1, B2, B3 arbitrarily, it's easier to store them in order in a list than to sort them using some sorted indexed property.\n\nIf you use 2:\n\nYou only need entity A's Key in order to query for B1, B2, B3. You don't actually need to fetch entity A to get the list.\nYou can have pretty much unlimited # of B entities.","Q_Score":1,"Tags":"python,google-app-engine,app-engine-ndb,graph-databases","A_Id":18035092,"CreationDate":"2013-08-02T12:42:00.000","Title":"best practice for graph-like entities on appengine ndb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have read somewhere that you can store python objects (more specifically dictionaries) as binaries in MongoDB by using BSON. 
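The two one-to-many layouts described in the ndb answer above, sketched with hypothetical Track/Paper models (`track` is assumed to be an already-fetched Track entity):

```python
from google.appengine.ext import ndb

class Track(ndb.Model):
    name = ndb.StringProperty()
    # option 1: the parent stores an ordered list of child keys
    paper_keys = ndb.KeyProperty(kind='Paper', repeated=True)

class Paper(ndb.Model):
    title = ndb.StringProperty()
    # option 2: each child stores a key back to its parent
    track_key = ndb.KeyProperty(kind=Track)

# option 1: fetch by id (consistent reads, but limited by entity size)
papers = ndb.get_multi(track.paper_keys)

# option 2: query on the back-reference (unlimited fan-out)
papers = Paper.query(Paper.track_key == track.key).fetch()
```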
However right now I cannot find any any documentation related to this.\nWould anyone know how exactly this can be done?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":23835,"Q_Id":18089598,"Users Score":5,"Answer":"Assuming you are not specifically interested in mongoDB, you are probably not looking for BSON. BSON is just a different serialization format compared to JSON, designed for more speed and space efficiency. On the other hand, pickle does more of a direct encoding of python objects.\nHowever, do your speed tests before you adopt pickle to ensure it is better for your use case.","Q_Score":18,"Tags":"python,mongodb,pymongo,bson","A_Id":18089722,"CreationDate":"2013-08-06T20:14:00.000","Title":"Is there a way to store python objects directly in mongoDB without serializing them","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have several S3 buckets containing a total of 40 TB of data across 761 million objects. I undertook a project to copy these objects to EBS storage. To my knowledge, all buckets were created in us-east-1. I know for certain that all of the EC2 instances used for the export to EBS were within us-east-1.\nThe problem is that the AWS bill for last month included a pretty hefty charge for inter-regional data transfer. I'd like to know how this is possible?\nThe transfer used a pretty simple Python script with Boto to connect to S3 and download the contents of each object. I suspect that the fact that the bucket names were composed of uppercase letters might have been a contributing factor (I had to specify OrdinaryCallingFormat()), but I don't know this for sure.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":878,"Q_Id":18113426,"Users Score":0,"Answer":"The problem ended up being an internal billing error at AWS and was not related to either S3 or Boto.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,boto","A_Id":18366790,"CreationDate":"2013-08-07T20:42:00.000","Title":"Boto randomly connecting to different regions for S3 transfers","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to run the following db2 command through the python pyodbc module \nIBM DB2 Command : \"DB2 export to C:\\file.ixf of ixf select * from emp_hc\" \ni am successfully connected to the DSN using the pyodbc module in python and works fine for select statement \nbut when i try to execute the following command from the Python IDLE 3.3.2\ncursor.execute(\" export to ? of ixf select * from emp_hc\",r\"C:\\file.ixf\") \npyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2\/LINUXX8664] SQL0104N An unexpected token \"db2 export to ? of\" was found following \"BEGIN-OF-STATEMENT\". Expected tokens may include: \"\". SQLSTATE=42601\\r\\n (-104) (SQLExecDirectW)')\nor \ncursor.execute(\" export to C:\\file.ixf of ixf select * from emp_hc\")\nTraceback (most recent call last):\n File \"\", line 1, in \n cursor.execute(\"export to C:\\myfile.ixf of ixf select * from emp_hc\")\npyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2\/LINUXX8664] SQL0007N The character \"\\\" following \"export to C:\" is not valid. 
SQLSTATE=42601\\r\\n (-7) (SQLExecDirectW)')\nam i doing something wrong ? any help will be greatly appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1372,"Q_Id":18134390,"Users Score":1,"Answer":"db2 export is a command run in the shell, not through SQL via odbc.\nIt's possible to write database query results to a file with python and pyodbc, but db2 export will almost certainly be faster and effortlessly handle file formatting if you need it for import.","Q_Score":0,"Tags":"python,sql,db2,pyodbc","A_Id":18135069,"CreationDate":"2013-08-08T19:24:00.000","Title":"sql import export command error using pyodbc module python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when connecting to mysql database in Django ,I get the error.\n\nI'm sure mysql server is running.\n\/var\/run\/mysqld\/mysqld.sock doesn't exist.\nWhen I run $ find \/ -name *.sock -type s, I only get \/tmp\/mysql.sock and some other irrelevant output.\nI added socket = \/tmp\/mysql.sock to \/etc\/my.cnf. And then restared mysql, exited django shell, and connected to mysql database. I still got the same error.\n\nI searched a lot, but I still don't know how to do.\nAny help is greate. Thanks in advance.\nWell, I just tried some ways. And it works.\nI did as follows.\n\nAdd socket = \/tmp\/mysql.sock .Restart the mysql server.\nln -s \/tmp\/mysql.sock \/var\/lib\/mysqld\/mysqld.sock\n\nI met an another problem today. I can't login to mysql.\nI'm newbie to mysql. So I guess mysql server and client use the same socket to communicate.\nI add socket = \/var\/mysqld\/mysqld.sock to [mysqld] [client] block in my.cnf and it wokrs.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":101165,"Q_Id":18150858,"Users Score":0,"Answer":"I faced this problem when connecting MySQL with Django when using Docker.\nTry 'PORT':'0.0.0.0'.\nDo not use 'PORT': 'db'. This will not work if you tried to run your app outside Docker.","Q_Score":26,"Tags":"python,mysql,django,mysql.sock","A_Id":66405102,"CreationDate":"2013-08-09T15:55:00.000","Title":"OperationalError: (2002, \"Can't connect to local MySQL server through socket '\/var\/run\/mysqld\/mysqld.sock' (2)\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"when connecting to mysql database in Django ,I get the error.\n\nI'm sure mysql server is running.\n\/var\/run\/mysqld\/mysqld.sock doesn't exist.\nWhen I run $ find \/ -name *.sock -type s, I only get \/tmp\/mysql.sock and some other irrelevant output.\nI added socket = \/tmp\/mysql.sock to \/etc\/my.cnf. And then restared mysql, exited django shell, and connected to mysql database. I still got the same error.\n\nI searched a lot, but I still don't know how to do.\nAny help is greate. Thanks in advance.\nWell, I just tried some ways. And it works.\nI did as follows.\n\nAdd socket = \/tmp\/mysql.sock .Restart the mysql server.\nln -s \/tmp\/mysql.sock \/var\/lib\/mysqld\/mysqld.sock\n\nI met an another problem today. I can't login to mysql.\nI'm newbie to mysql. 
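For the pyodbc route mentioned in the DB2 answer above (write the query results to a file yourself instead of using the EXPORT utility), a rough sketch with a hypothetical DSN, writing CSV rather than IXF:

```python
import csv
import pyodbc

conn = pyodbc.connect('DSN=mydb2;UID=user;PWD=secret')  # hypothetical DSN
cur = conn.cursor()
cur.execute('SELECT * FROM emp_hc')

with open('/tmp/emp_hc.csv', 'wb') as fh:  # Python 2 style open, matching the era of the question
    writer = csv.writer(fh)
    writer.writerow([col[0] for col in cur.description])  # header row from the cursor metadata
    writer.writerows(cur.fetchall())
```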
So I guess mysql server and client use the same socket to communicate.\nI add socket = \/var\/mysqld\/mysqld.sock to [mysqld] [client] block in my.cnf and it wokrs.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":101165,"Q_Id":18150858,"Users Score":0,"Answer":"in flask, you may use that\napp=Flask(__name__)\napp.config[\"MYSQL_HOST\"]=\"127.0.0.1 \napp.config[\"MYSQL_USER\"]=\"root\"...","Q_Score":26,"Tags":"python,mysql,django,mysql.sock","A_Id":56762083,"CreationDate":"2013-08-09T15:55:00.000","Title":"OperationalError: (2002, \"Can't connect to local MySQL server through socket '\/var\/run\/mysqld\/mysqld.sock' (2)\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"when connecting to mysql database in Django ,I get the error.\n\nI'm sure mysql server is running.\n\/var\/run\/mysqld\/mysqld.sock doesn't exist.\nWhen I run $ find \/ -name *.sock -type s, I only get \/tmp\/mysql.sock and some other irrelevant output.\nI added socket = \/tmp\/mysql.sock to \/etc\/my.cnf. And then restared mysql, exited django shell, and connected to mysql database. I still got the same error.\n\nI searched a lot, but I still don't know how to do.\nAny help is greate. Thanks in advance.\nWell, I just tried some ways. And it works.\nI did as follows.\n\nAdd socket = \/tmp\/mysql.sock .Restart the mysql server.\nln -s \/tmp\/mysql.sock \/var\/lib\/mysqld\/mysqld.sock\n\nI met an another problem today. I can't login to mysql.\nI'm newbie to mysql. So I guess mysql server and client use the same socket to communicate.\nI add socket = \/var\/mysqld\/mysqld.sock to [mysqld] [client] block in my.cnf and it wokrs.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":101165,"Q_Id":18150858,"Users Score":0,"Answer":"You need to change your HOST from 'localhost' to '127.0.0.1' and check your django app :)","Q_Score":26,"Tags":"python,mysql,django,mysql.sock","A_Id":72389079,"CreationDate":"2013-08-09T15:55:00.000","Title":"OperationalError: (2002, \"Can't connect to local MySQL server through socket '\/var\/run\/mysqld\/mysqld.sock' (2)\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I used mysqldb to connect to a database in my localhost.\nIt works, but if I add data to a table in the database when the program is running, it shows that it has been added, but when I check the table from localhost, it hasn't been updated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":18245510,"Users Score":0,"Answer":"if your table uses innodb engine, you should call connection.commit() on every cursor.execute().","Q_Score":0,"Tags":"python,mysql,database-connection,mysql-python","A_Id":18245522,"CreationDate":"2013-08-15T02:41:00.000","Title":"musqldb-python doesnt really update the original database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to understand which of the following is a better option:\n\nData calculation using Python from the output of a MySQL 
query.\nPerform the calculations in the query itself.\n\nFor example, the query returns 20 rows with 10 columns.\nIn Python, I compute the difference or division of some of the columns.\nIs it a better thing to do this in the query or in Python ?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2805,"Q_Id":18270585,"Users Score":1,"Answer":"It is probably a matter of taste but...\n... to give you an exact opposite answer as the one by Alma Do Mundo, for (not so) simple calculation made on the SELECT ... clause, I generally push toward using the DB \"as a calculator\".\nCalculations (in the SELECT ... clause) are performed as the last step while executing the query. Only the relevant data are used at this point. All the \"big job\" has already been done (processing JOIN, where clauses, aggregates, sort).\nAt this point, the extra load of performing some arithmetic operations on the data is really small. And that will reduce the network traffic between your application and the DB server.\nIt is probably a matter of taste thought...","Q_Score":2,"Tags":"python,mysql,query-performance,sql-tuning,query-tuning","A_Id":18270751,"CreationDate":"2013-08-16T09:53:00.000","Title":"Data Calculations MySQL vs Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to understand which of the following is a better option:\n\nData calculation using Python from the output of a MySQL query.\nPerform the calculations in the query itself.\n\nFor example, the query returns 20 rows with 10 columns.\nIn Python, I compute the difference or division of some of the columns.\nIs it a better thing to do this in the query or in Python ?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2805,"Q_Id":18270585,"Users Score":1,"Answer":"If you are doing basic arithmetic operation on calculations in a row, then do it in SQL. This gives you the option of encapsulating the results in a view or stored procedure. In many databases, it also gives the possibility of parallel execution of the statements (although performance is not an issue with so few rows of data).\nIf you are doing operations between rows in MySQL (such as getting the max for the column), then the balance is more even. Most databases support simple functions to these calculations, but MySQL does not. The added complexity to the query gives some weight to doing these calculations on the client-side.\nIn my opinion, the most important consideration is maintainability of the code. By using a database, you are necessary incorporating business rules in the database itself (what entities are related to which other entities, for instance). A major problem with maintaining code is having business logic spread through various systems. I much prefer to have an approach where such logic is as condensed as possible, creating very clear APIs between different layers.\nFor such an approach, \"read\" access into the database would be through views. The logic that you are talking about would go into the views and be available to any user of the database -- ensuring consistency across different functions using the database. 
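The two options being compared (arithmetic in the SELECT clause versus in Python) might look like this with MySQLdb; the table and column names are invented for the sketch:

```python
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='report', passwd='secret', db='stats')  # hypothetical credentials
cur = conn.cursor()

# variant 1: push the arithmetic into the SELECT clause
cur.execute('SELECT plays_this_week - plays_last_week AS delta, '
            '       plays_this_week / total_plays     AS share '
            'FROM artist_stats')                                  # hypothetical table
in_sql = cur.fetchall()

# variant 2: fetch the raw columns and compute in Python
cur.execute('SELECT plays_this_week, plays_last_week, total_plays FROM artist_stats')
in_python = [(w - l, float(w) / t) for (w, l, t) in cur.fetchall()]
```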
\"write\" access would be through stored procedures, ensuring that business rules are checked consistently and that operations are logged appropriately.","Q_Score":2,"Tags":"python,mysql,query-performance,sql-tuning,query-tuning","A_Id":18271329,"CreationDate":"2013-08-16T09:53:00.000","Title":"Data Calculations MySQL vs Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use python for manipulating some data in MySQL DB.\nDB is on a remote PC. And I will use another PC with Python to connect to the DB.\nWhen I searched how to install MySQLdb module to Python, they all said MySQL need to be installed on the local PC.\nIs it right? Or I don't need to install MySQL on the local PC?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":323,"Q_Id":18288616,"Users Score":1,"Answer":"You just need it if you want to compile the Python MySQL bindings from source. If you already have the binary version of the python library then the answer is no, you don't need it.","Q_Score":0,"Tags":"python,mysql","A_Id":18288628,"CreationDate":"2013-08-17T12:07:00.000","Title":"Do I need MySQL installed on my local PC to use MySQLdb for Python to connect MySQL server remotely?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the python packages xlrd and xlwt to read and write from excel spreadsheets using python. I can't figure out how to write the code to solve my problem though.\nSo my data consists of a column of state abbreviations and a column of numbers, 1 through 7. There are about 200-300 entries per state, and i want to figure out how many ones, twos, threes, and so on exist for each state. I'm struggling with what method I'd use to figure this out. \nnormally i would post the code i already have but i don't even know where to begin.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":247,"Q_Id":18413606,"Users Score":0,"Answer":"Prepare a dictionary to store the results.\nGet the numbers of line with data you have using xlrd, then iterate over each of them. \nFor each state code, if it's not in the dict, you create it also as a dict.\nThen you check if the entry you read on the second column exists within the state key on your results dict.\n4.1 If it does not, you'll create it also as a dict, and add the number found on the second column as a key to this dict, with a value of one.\n4.2 If it does, just increment the value for that key (+1).\n\nOnce it has finished looping, your result dict will have the count for each individual entry on each individual state.","Q_Score":0,"Tags":"python,excel,xlrd,xlwt","A_Id":18413675,"CreationDate":"2013-08-24T00:09:00.000","Title":"Python Programming approach - data manipulation in excel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been working with Python MySQLdb. With InnoDB tables autocommit is turned off in default and that was what I needed. 
But since I'm now working with MyISAM tables, the docs for MySQL say \n\nMyISAM tables effectively always operate in autocommit = 1 mode\n\nSince I'm running up to a few hundreds of queries a second, does committing with every single query slow down the performance of my script? Because I used to commit once every 1000 queries before, now I can't do that with MyISAM. If it slows it down, what can I try?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":440,"Q_Id":18462528,"Users Score":0,"Answer":"MyISAM has no transactions, so you can't not to \"autocommit\" using MyISAM.\nYour runtime change may be also caused by the fact you moved from innoDB to MyISAM.\nThe best approach for DB runtime issues in general is benchmarking, benchmarking and benchmarking.","Q_Score":1,"Tags":"python,mysql,commit","A_Id":18463239,"CreationDate":"2013-08-27T10:06:00.000","Title":"does autocommit slow down performance in python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database.\nMy issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions:\nDatabaseError: SSL error: decryption failed or bad record mac\nInterfaceError: connection already closed\nWhat is the best solution to make the forked TCP server work?\n\nCan I ensure the forked process uses its own database connection?\nShould I be looking at other modules for writing to the postgres database?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":941,"Q_Id":18492467,"Users Score":0,"Answer":"The libpq driver, which is what the psycopg2 driver usually used by django is built on, does not support forking an active connection. I'm not sure if there might be another driver does not, but I would assume not - the protocol does not support multiplexing multiple sessions on the same connection.\nThe proper solution to your problem is to make sure each forked processes uses its own database connection. The easiest way is usually to wait to open the connection until after the fork.","Q_Score":2,"Tags":"python,django,postgresql","A_Id":18496589,"CreationDate":"2013-08-28T15:42:00.000","Title":"Forking Django DB connections","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database.\nMy issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. 
I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions:\nDatabaseError: SSL error: decryption failed or bad record mac\nInterfaceError: connection already closed\nWhat is the best solution to make the forked TCP server work?\n\nCan I ensure the forked process uses its own database connection?\nShould I be looking at other modules for writing to the postgres database?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":941,"Q_Id":18492467,"Users Score":1,"Answer":"So one solution I found is to create a new thread to spawn from. Django opens a new connection per thread so spawning from a new thread ensures you pass a new connection to the new process. \nIn retrospect I wish I'd used psycopg2 directly from the beginning rather than Django. Django is great for the web front end but not so great for a standalone app where all I'm using it for is the model layer. Using psycopg2 would have given be greater control over when to close and open connections. Not just because of the forking issue but also I found Django doesn't keep persistent postgres connections - something we should have better control of in 1.6 when released and should for my specific app give a huge performance gain. Also, in this type of application I found Django intentionally leaks - something that can be fixed with DEBUG set to False. Then again, I've written the app now :)","Q_Score":2,"Tags":"python,django,postgresql","A_Id":18531322,"CreationDate":"2013-08-28T15:42:00.000","Title":"Forking Django DB connections","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"The company I work for is starting development of a Django business application that will use MySQL as the database engine. I'm looking for a way to keep from having database credentials stored in a plain-text config file.\nI'm coming from a Windows\/IIS background where a vhost can impersonate an existing Windows\/AD user, and then use those credentials to authenticate with MS SQL Server.\nAs an example: If the Django application is running with apache2+mod_python on an Ubuntu server, would it be sane to add a \"www-data\" user to MySQL and then let MySQL verify the credentials using its PAM module?\nHopefully some of that makes sense. Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":691,"Q_Id":18495773,"Users Score":1,"Answer":"MySQL controls access to tables from its own list of users, so it's better to create MySQL users with permissions. You might want to create roles instead of users so you don't have as many to manage: an Admin, a read\/write role, a read-only role, etc.\nA Django application always runs as the web server user. You could change that to \"impersonate\" an Ubuntu user, but what if that user is deleted? 
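A sketch of the "open the connection after the fork" advice from the answers above; `serve_client` is hypothetical, and `django.db.close_connection()` is the call from Django of that era (newer releases use `db.connections.close_all()` instead):

```python
import os
from django import db

def handle_connection(client_sock):
    pid = os.fork()
    if pid == 0:
        # child: discard the inherited handle so the ORM lazily
        # opens a brand-new database connection on its first query
        db.close_connection()
        serve_client(client_sock)  # hypothetical worker that writes via Django models
        os._exit(0)
```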
Leave it as \"www-data\" and manage the database role that way.","Q_Score":3,"Tags":"python,mysql,django","A_Id":18496083,"CreationDate":"2013-08-28T18:39:00.000","Title":"Can a Django application authenticate with MySQL using its linux user?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an excel file whose extension is .xls but his type is Tab Space separated Text.\nWhen I try to open the file by MS Excel it tells me that the extension is fake. And So I have to confirm that I trust the file and so I can read it then.\nBut my real problem is that when I try to read my file by the xlrd library it gives me this message :\nxlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record;\nAnd so to resolve this problem, I go to Save as in MS Excel and I change the type manually to .xls.\nBut my boss insist that I have to do this by code. I have 3 choices : Shell script under Linux, .bat file under Windows or Python.\nSo, how can I change the type of the excel file from Tab space separated Text to xls file by Shell script (command line), .bat or Python?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":399,"Q_Id":18570143,"Users Score":1,"Answer":"mv file.{xls,csv}\nIt's a csv file, stop treating it as an excel file and things will work a lot better. :) There are nice csv manipulation tools available in most languages. Do you really need the excel library?","Q_Score":0,"Tags":"python,linux,excel,shell,xlrd","A_Id":18574653,"CreationDate":"2013-09-02T09:46:00.000","Title":"How to change automatically the type of the excel file from Tab space separated Text to xls file?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to do the following\n\nDelete many entities from a database, also those entities have a file associated with them saved into the file system, which are accessed also by the web server (images!).\n\nThe problem: File deletion might fail, I have all the files in a folder for the main entity (its actually a 1-N relation, being each one of the N the file owners). If I try to delete a file when the web server is accessing them, I will get an exception and the process will go in half, some images deleted, and some doesnt, leaving the system inconsistent.\nIs there a way to to do something similar to a transaction but in the file system (either delete all files or don't delete any)? Or perhaps another approach (the worst plan is to save the files in the database, but it is bad)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":708,"Q_Id":18581117,"Users Score":2,"Answer":"There is no way to transactionally delete multiple files on normal filesystems (you might be able to find esoteric filesystems where it is, but even if so I doubt that helps you. Apparently your current filesystem doesn't even let you delete a file that's being read, so presumably you're stuck with what you have!).\nPerhaps you could save in the database not the file contents, but a list of which filenames in the filesystem \"really exist\". Refer to that list for anything that requires consistency. 
If file deletion fails, you can mark the file as \"not really existing\" and requiring future attempts at deletion, then retry whenever seems sensible (maybe an occasional maintenance job, maybe a helper process retrying each failure with exponential backoff to a limit).\nFor this to work either (a) your webserver must refer to the database before serving the file, or else (b) it must be OK for there to be a indefinite period after the file fails to delete, during which it may nevertheless be served. And of course there is also the \"natural race condition\" that a file that begins to be served before the deletion attempt, will complete its download even after the transaction is complete.\n[Edit: Ah, it just occurred to me that \"i have all the files in a folder for the main entity\" might actually be really helpful. In your transaction, rename the directory. That atomically \"removes\" all the files, from their old names at least, and it will fail (on filesystems that forbid that sort of thing) if any of the files is in use. If the rename succeeds, and nobody else knows the new name, then they won't be accessing the files and you should be able to delete them all without trouble. I think. Of course this doesn't work if you encounter another reason for failing to delete the file, because then you might be able to rename the folder but unable to delete the file.]","Q_Score":0,"Tags":"python,django,transactions","A_Id":18581616,"CreationDate":"2013-09-02T21:41:00.000","Title":"Delete files atomically\/transactionally in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Mac running OS X 10.6.8, which comes pre-installed with SQLite3 v3.6. I installed v3.8 using homebrew. But when I type \"sqlite3\" in my terminal it continues to run the old pre-installed version. Any help? Trying to learn SQL as I'm building my first web app.\nNot sure if PATH variable has anything to do with it, but running echo $PATH results in the following: \/usr\/local\/bin:\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin:\/usr\/local\/bin:\/usr\/X11\/bin\nAnd the NEW version of SQLite3 is in the following directory: \/usr\/local\/Cellar\/sqlite\nI should add that I also downloaded the binary executable to my desktop, and that works if I click from my desktop, but doesn't work from the terminal.\nAny help would be greatly appreciated?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1449,"Q_Id":18626114,"Users Score":0,"Answer":"To figure out exactly which sqlite3 binaries your system can find type which -a sqlite3. This will list the apps in the order that they are found according to your PATH variable, this also shows what order the thes ystem would use when figuring out which to run if you have multiple versions.\nHomebrew should normally links binaries into your \/usr\/local\/bin, but as sqlite3 is provided by MAC OS, it is only installed into \/usr\/local\/Cellar\/sqlite3, and not linked into \/usr\/local\/bin. 
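The rename-the-folder idea from the edit above, sketched as a helper; it assumes the rename stays on one filesystem and that a later cleanup job handles any leftover `.trash` folders:

```python
import os
import shutil
import uuid

def delete_entity_folder(folder):
    # the rename is the closest thing to an atomic "remove all" the filesystem offers;
    # on platforms that refuse to rename while files are open, it fails before anything is lost
    trash = '%s.trash-%s' % (folder, uuid.uuid4().hex)
    os.rename(folder, trash)
    try:
        shutil.rmtree(trash)
    except OSError:
        # some file could not be removed; leave the .trash folder for a later cleanup job
        pass
```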
As the Cellar path is not in your PATH variable, the system doesn't know that the binaries exist to run.\nLong story short, you can just run the Homebrew binary directly with \/usr\/local\/Cellar\/sqlite\/3.8.0\/bin\/sqlite3.","Q_Score":1,"Tags":"python,linux,macos,sqlite","A_Id":18629528,"CreationDate":"2013-09-05T00:54:00.000","Title":"Running upgraded version of SQLite (3.8) on Mac when Terminal still defaults to old version 3.6","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to use BDB as a time-series data store, and planning to use the microseconds since epoch as the key values. I am using BTREE as the data store type.\nHowever, when I try to store integer keys, bsddb3 gives an error saying TypeError: Integer keys only allowed for Recno and Queue DB's. \nWhat is the best workaround? I can store them as strings, but that probably will make it unnecessarily slower.\nGiven BDB itself can handle any kind of data, why is there a restriction? can I sorta hack the bsddb3 implementation? has anyone used anyother methods?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":689,"Q_Id":18664940,"Users Score":-1,"Answer":"Well, there's no workaround. But you can use two approaches\n\nStore the integers as string using str or repr. If the ints are big, you can even use string formatting\nuse cPickle\/pickle module to store and retrieve data. This is a good way if you have data types other than basic types. For basics ints and floats this actually is slower and takes more space than just storing strings","Q_Score":0,"Tags":"python,berkeley-db,bsddb","A_Id":18793657,"CreationDate":"2013-09-06T19:11:00.000","Title":"Use integer keys in Berkeley DB with python (using bsddb3)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some code that I am working on that scrapes some data from a website, and then extracts certain key information from that website and stores it in an object. I create a couple hundred of these objects each day, each from unique url's. This is working quite well, however, I'm inexperienced in what options are available to me in Python for persistence and what would be best suited for my needs.\nCurrently I am using pickle. To do so, I am keeping all of these webpage objects and appending them in a list as new ones are created, then saving that list to a pickle (then reloading it whenever the list is to be updated). However, as i'm in the order of some GB of data, i'm finding pickle to be somewhat slow. It's not unworkable, but I'm wondering if there is a more well suited alternative. I don't really want to break apart the structure of my objects and store it in a sql type database, as its important for me to keep the methods and the data as a single object.\nShelve is one option I've been looking into, as my impression is then that I wouldn't have to unpickle and pickle all the old entries (just the most recent day that needs to be updated), but am unsure if this is how shelve works, and how fast it is. 
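Following the "store them as strings" suggestion in the bsddb3 answer above: packing the microsecond timestamps big-endian keeps the BTREE's byte-string ordering identical to numeric ordering. A sketch, with the file name as an assumption:

```python
import struct
import time
from bsddb3 import db

series = db.DB()
series.open('timeseries.db', None, db.DB_BTREE, db.DB_CREATE)

def key_for(microseconds):
    # big-endian packing makes byte-string comparison match numeric order
    return struct.pack('>Q', microseconds)

now_us = int(time.time() * 1e6)
series.put(key_for(now_us), b'payload bytes')
print(series.get(key_for(now_us)))
series.close()
```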
\nSo to avoid rambling on, my question is: what is the preferred persistence method for storing a large number of objects (all of the same type), to keep read\/write speed up as the collection grows?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":18674630,"Users Score":0,"Answer":"Martijn's suggestion could be one of the alternatives.\nYou may consider to store the pickle objects directly in a sqlite database which still can manage from the python standard library.\nUse a StringIO object to convert between the database column and python object.\nYou didn't mention the size of each object you are pickling now. I guess it should stay well within sqlite's limit.","Q_Score":0,"Tags":"python,persistence","A_Id":18674706,"CreationDate":"2013-09-07T15:02:00.000","Title":"Persistence of a large number of objects","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For a music project I want to find what which groups of artists users listens to. I have extracted three columns from the database: the ID of the artist, the ID of the user, and the percentage of all the users stream that is connected to that artist. \nE.g. Half of the plays from user 15, is of the artist 12. \n12 | 15 | 0.5\nWhat I hope to find is a methodology to group clusters of groups together, so e.g. find out that users who tends to listen to artist 12 also listens to 65, 74, and 34. \nI wonder what kind of methodologies that can be used for this grouping, and if there are any good sources for this approach (Python or Ruby would be great).","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":261,"Q_Id":18705223,"Users Score":0,"Answer":"Sounds like a classic matrix factorization task to me.\nWith a weighted matrix, instead of a binary one. So some fast algorithms may not be applicable, because they support binary matrixes only.\nDon't ask for source on Stackoverflow: asking for off-site resources (tools, libraries, ...) is off-topic.","Q_Score":0,"Tags":"python,ruby,data-mining,data-analysis","A_Id":18712558,"CreationDate":"2013-09-09T19:09:00.000","Title":"Data Mining: grouping based on two text values (IDs) and one numeric (ratio)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a beginner of openerp 7. i just want to know the details regarding how to generate report in openerp 7 in xls format. 
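The "pickle objects directly into a sqlite database" suggestion from the persistence answer above might be sketched like this; the table layout and the idea of keying on the page URL are assumptions:

```python
import pickle
import sqlite3

conn = sqlite3.connect('pages.db')
conn.execute('CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, obj BLOB)')

def save(url, page_obj):
    blob = sqlite3.Binary(pickle.dumps(page_obj, pickle.HIGHEST_PROTOCOL))
    with conn:
        conn.execute('INSERT OR REPLACE INTO pages VALUES (?, ?)', (url, blob))

def load(url):
    row = conn.execute('SELECT obj FROM pages WHERE url = ?', (url,)).fetchone()
    return pickle.loads(bytes(row[0])) if row else None
```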
\nThe formats supported in OpenERP report types are : pdf, odt, raw, sxw, etc..\nIs there any direct feature that is available in OpenERP 7 regarding printing the report in EXCEL format(XLS)","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2902,"Q_Id":18716623,"Users Score":0,"Answer":"In python library are available to export data in pdf and excel\nFor excel you can use:\n 1)xlwt\n 2)Elementtree\nFor pdf genration :\n 1)Pypdf\n 2)Reportlab \n are available","Q_Score":1,"Tags":"python,openerp","A_Id":18716823,"CreationDate":"2013-09-10T10:34:00.000","Title":"How to print report in EXCEL format (XLS)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The context for this question is:\n\nA Google App Engine backend for a two-person multiplayer turn-based card game\nThe game revolves around different combinations of cards giving rise to different scores in the game\n\nObviously, one would store the state of a game in the GAE datastore, but I'm not sure on the approach for the design of the game logic itself. It seems I might have two choices:\n\nStore entries in the datastore with a key that is a sorted list of the valid combinations of cards that can be player. These will then map to the score values. When a player tries to play a combination of cards, the server-side python will sort the combination appropriately and lookup the key. If it succeeds, it can do the necessary updates for the score, if it fails then the combination wasn't valid.\nStore the valid combinations as a python dictionary written into the server-side code and perform the same lookups as above to test the validity\/get the score but without a trip to the datastore.\n\nFrom a cost point of view (datastore lookups aren't free), option 2 seems like it would be better. But then there is the performance of the instance itself - will the startup time, processing time, memory usage start to tip me into greater expense?\nThere's also the code maintanence issue of constructing that Python dictionary, but I can bash together some scripts to help me write the code for that on the infrequently occasions that the logic changes. I think there will be on the order of 1000 card combinations (that can produce a score) of between 2 and 6 cards if that helps anyone who wants to quantify the problem.\nI'm starting out with this design, and the summary of the above is whether it is sensible to store the static logic of this kind of game in the datastore, or simply keep it as part of the CPU bound logic? What are the pros and cons of both approaches?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":209,"Q_Id":18807022,"Users Score":1,"Answer":"If the logic is fixed, keep it in your code. Maybe you can procedurally generate the dicts on startup. If there is a dynamic component to the logic (something you want to update frequently), a data store might be a better bet, but it sounds like that's not applicable here. 
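To make the "keep the logic in your code" advice above concrete, a minimal sketch of a module-level lookup table keyed by sorted card combinations; the card IDs and scores are invented for illustration:

```python
# Hypothetical scoring table: each key is the sorted tuple of card IDs.
SCORES = {
    (2, 7, 11): 15,
    (3, 3, 9, 12): 40,
    (1, 5): 5,
}

def score_for(cards):
    """Return the score for a played combination, or None if it is not valid."""
    return SCORES.get(tuple(sorted(cards)))

print(score_for([7, 11, 2]))   # 15 -- order of play does not matter
print(score_for([4, 4]))       # None -- not a scoring combination
```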
Unless the number of combinations runs over the millions, and you'd want to trade speed in favour of a lower memory footprint, stick with putting it in the application itself.","Q_Score":0,"Tags":"python,google-app-engine,google-cloud-datastore","A_Id":18807184,"CreationDate":"2013-09-14T22:29:00.000","Title":"Where to hold static information for game logic?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a postgres DB in which most of the tables have a column 'valid_time' indicating when the data in that row is intended to represent and an 'analysis_time' column, indicating when the estimate was made (this might be the same or a later time than the valid time in the case of a measurement or an earlier time in the case of a forecast). Typically there are multiple analysis times for each valid time, corresponding to different measurements (if you wait a bit, more data is available for a given time, so the analysis is better but the measurement is less prompt) and forecasts with different lead times.\nI am using SQLalchemy to access this DB in Python.\nWhat I would like to do is be able to pull out all rows with the most recent N unique datetimes of a specified column. For instance I might want the 3 most recent unique valid times, but this will typically be more than 3 rows, because there will be multiple analysis times for each of those 3 valid times.\nI am new to relational databases. In a sense there are two parts to this question; how can this be achieved in bare SQL and then how to translate that to the SQLalchemy ORM?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":144,"Q_Id":18818634,"Users Score":1,"Answer":"I'm not sure about the SQLalchemy part, but as far as the SQL queries go, I would do it in two steps:\n\nGet the times. For example, something like:\nSELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3;\nGet the rows with those times, using the previous step as a subquery:\nSELECT * FROM MyTable WHERE valid_time IN (SELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3);","Q_Score":0,"Tags":"python,sql,postgresql,sqlalchemy","A_Id":18818835,"CreationDate":"2013-09-15T23:45:00.000","Title":"Selecting the rows with the N most recent unique values of a datetime","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking into the software architecture for using a NoSQL database (MongoDB). I would ideally want to use a database independent ORM\/ODM for this, but I can't find any similar library to SQLAlchemy for NoSQL. Do you know any?\nI do find a lot of wrappers, but nothing that seems to be database independent. 
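As a rough ORM translation of the two-step SQL above, here is a hedged sketch assuming recent SQLAlchemy (1.4 or later) and an invented Estimate model standing in for the real table:

```python
import datetime

from sqlalchemy import Column, DateTime, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Estimate(Base):
    # Invented stand-in for the real table with valid_time / analysis_time columns.
    __tablename__ = "estimates"
    id = Column(Integer, primary_key=True)
    valid_time = Column(DateTime)
    analysis_time = Column(DateTime)

engine = create_engine("sqlite://")  # throwaway in-memory DB for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

start = datetime.datetime(2013, 9, 16)
for v in range(5):          # five distinct valid times...
    for a in range(2):      # ...with two analyses each
        session.add(Estimate(
            valid_time=start + datetime.timedelta(hours=v),
            analysis_time=start + datetime.timedelta(hours=v, minutes=30 * a),
        ))
session.commit()

# Step 1: the 3 most recent distinct valid_time values.
latest = [t for (t,) in session.query(Estimate.valid_time)
                               .distinct()
                               .order_by(Estimate.valid_time.desc())
                               .limit(3)]

# Step 2: every row carrying one of those valid times (6 rows here).
rows = session.query(Estimate).filter(Estimate.valid_time.in_(latest)).all()
print(len(rows))
```

A subquery would do it in one round trip; the two-step form mirrors the SQL answer and sidesteps version-specific subquery coercion details.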
If there's none, is it because all the NoSQL databases out there have different use cases that a common ORM\/ODM wouldn't make sense like it does in the SQL case ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":943,"Q_Id":18827379,"Users Score":0,"Answer":"Not sure about python, but in Java you can use frameworks like PlayORM for this purpose which supports Csasandra, HBase and MongoDb.","Q_Score":3,"Tags":"python,mongodb,nosql","A_Id":18980345,"CreationDate":"2013-09-16T11:54:00.000","Title":"NoSQL database independent ORM\/ODM for Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple xlsx File which contain two worksheet(data,graph). I have created graph using xlsxwriter in graph worksheet and write data in data worksheet. So I need to combine all graph worksheet into single xlsx File. So My question is:\nopenpyxl : In openpyxl module, we can load another workbook and modify the value.is there anyway to append new worksheet of another File. For Example.\nI have two xlsx data.xlsx(graph worksheet) and data_1.xlsx(graph worksheet)\nSo Final xlsx (graph worksheet and graph_1 worksheet)\nxlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2875,"Q_Id":18913370,"Users Score":0,"Answer":"In answer to the last part of the question:\n\nxlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module.\n\nThat is correct. XlsxWriter only writes new files. It cannot be used to modify existing files. Rewriting files is not a planned feature.","Q_Score":2,"Tags":"python,openpyxl,xlsxwriter","A_Id":18917174,"CreationDate":"2013-09-20T09:33:00.000","Title":"Combine multiple xlsx File in single Xlsx File","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 17gb xml file. I want to store it in MySQL. I tried it using xmlparser in php but it says maximum execution time of 30 seconds exceeded and inserts only a few rows. I even tried in python using element tree but it is taking lot of memory gives memory error in a laptop of 2 GB ram. Please suggest some efficient way of doing this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":18945802,"Users Score":0,"Answer":"I'd say, turn off execution time limit in PHP (e.g. use a CLI script) and be patient. If you say it starts to insert something into database from a 17 GB file, it's actually doing a good job already. No reason to hasten it for such one-time job. (Increase memory limit too, just in case. 
Default 128 Mb is not that much.)","Q_Score":4,"Tags":"php,mysql,python-2.7,xml-parsing","A_Id":18945969,"CreationDate":"2013-09-22T16:01:00.000","Title":"extremely large xml file to mysql","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"in my program , ten process to write mongodb by update(key, doc, upsert=true)\nthe \"key\" is mongodb index, but is not unique.\n\nquery = {'hotelid':hotelid,\"arrivedate\":arrivedate,\"leavedate\":leavedate}\nwhere = \"data.%s\" % sourceid\ndata_value_where = {where:value}\nself.collection.update(query,{'$set':data_value_where},True)\n\nthe \"query\" id the not unique index\nI found sometimes the update not update exists data, but create a new data.\nI write a log for update method return, the return is \" {u'ok': 1.0, u'err': None, u'upserted': ObjectId('5245378b4b184fbbbea3f790'), u'singleShard': u'rs1\/192.168.0.21:10000,192.168.1.191:10000,192.168.1.192:10000,192.168.1.41:10000,192.168.1.113:10000', u'connectionId': 1894107, u'n': 1, u'updatedExisting': False, u'lastOp': 5928205554643107852L}\"\nI modify the update method to update(query, {'$set':data_value_where},upsert=True, safe=True), but three is no change for this question.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":820,"Q_Id":18995966,"Users Score":0,"Answer":"You would not end up with duplicate documents due to the operator you are using. You are actually using an atomic operator to update. \nAtomic (not to be confused with SQL atomic operations of all or nothing here) operations are done in sequence so each process will never pick up a stale document or be allowed to write two ids to the same array since the document each $set operation picks up will have the result of the last $set.\nThe fact that you did get duplicate documents most likely means you have an error in your code.","Q_Score":0,"Tags":"python,mongodb,pymongo","A_Id":18998582,"CreationDate":"2013-09-25T04:00:00.000","Title":"mongodb update(use upsert=true) not update exists data, insert a new data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"in my program , ten process to write mongodb by update(key, doc, upsert=true)\nthe \"key\" is mongodb index, but is not unique.\n\nquery = {'hotelid':hotelid,\"arrivedate\":arrivedate,\"leavedate\":leavedate}\nwhere = \"data.%s\" % sourceid\ndata_value_where = {where:value}\nself.collection.update(query,{'$set':data_value_where},True)\n\nthe \"query\" id the not unique index\nI found sometimes the update not update exists data, but create a new data.\nI write a log for update method return, the return is \" {u'ok': 1.0, u'err': None, u'upserted': ObjectId('5245378b4b184fbbbea3f790'), u'singleShard': u'rs1\/192.168.0.21:10000,192.168.1.191:10000,192.168.1.192:10000,192.168.1.41:10000,192.168.1.113:10000', u'connectionId': 1894107, u'n': 1, u'updatedExisting': False, u'lastOp': 5928205554643107852L}\"\nI modify the update method to update(query, {'$set':data_value_where},upsert=True, safe=True), but three is no change for this question.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":820,"Q_Id":18995966,"Users Score":0,"Answer":"You can call it \"threadsafe\", 
as the update itself is not done in Python, it's in the mongodb, which is built to cater many requests at once.\nSo in summary: You can safely do that.","Q_Score":0,"Tags":"python,mongodb,pymongo","A_Id":18996136,"CreationDate":"2013-09-25T04:00:00.000","Title":"mongodb update(use upsert=true) not update exists data, insert a new data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am considering to serialize a big set of database records for cache in Redis, using python and Cassandra. I have either to serialize each record and persist a string in redis or to create a dictionary for each record and persist in redis as a list of dictionaries.\nWhich way is faster? pickle each record? or create a dictionary for each record?\nAnd second : Is there any method to fetch from database as list of dic's? (instead of a list of model obj's)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2722,"Q_Id":19025952,"Users Score":3,"Answer":"Instead of serializing your dictionaries into strings and storing them in a Redis LIST (which is what it sounds like you are proposing), you can store each dict as a Redis HASH. This should work well if your dicts are relatively simple key\/value pairs. After creating each HASH you could add the key for the HASH to a LIST, which would provide you with an index of keys for the hashes. The benefits of this approach could be avoiding or lessening the amount of serialization needed, and may make it easier to use the data set in other applications and from other languages.\nThere are of course many other approaches you can take and that will depend on lots of factors related to what kind of data you are dealing with and how you plan to use it.\nIf you do go with serialization you might want to at least consider a more language agnostic serialization format, like JSON, BSON, YAML, or one of the many others.","Q_Score":4,"Tags":"python,redis,cassandra,cql,cqlengine","A_Id":19033019,"CreationDate":"2013-09-26T10:39:00.000","Title":"Python - Redis : Best practice serializing objects for storage in Redis","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"datetime is stored in postgres DB with UTC. I could see that the date is 2013-09-28 00:15:52.62504+05:30 in postgres table. \nBut when I fetch the value via django model, I get the same datetime field as datetime.datetime(2013, 9, 27, 18, 45, 52, 625040, tzinfo=).\nUSE_TZ is True and TIME_ZONE is 'Asia\/Kolkata' in settings.py file. I think saving to DB works fine as DB contains datetime with correct UTC of +5:30.\nWhat am i doing wrong here?\nPlease help. \nThanks\nKumar","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1585,"Q_Id":19058491,"Users Score":3,"Answer":"The issue has been solved. The problem was that I was using another naive datetime field for calculation of difference in time, whereas the DB field was an aware field. 
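A small sketch of the HASH-plus-index-LIST layout described in the Redis answer above, using the redis-py client; the key names and sample record are invented, a local Redis server is assumed, and hset(..., mapping=...) needs redis-py 3.5 or newer:

```python
import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379

def store_record(record_id, record):
    """Store one flat dict as a HASH and remember its key in an index LIST."""
    key = "record:%s" % record_id
    r.hset(key, mapping=record)   # field/value pairs, no pickling needed
    r.rpush("record:index", key)  # running index of all stored hashes
    return key

store_record(1, {"name": "alice", "score": 10})
store_record(2, {"name": "bob", "score": 7})

# Read everything back by walking the index.
for key in r.lrange("record:index", 0, -1):
    print(key, r.hgetall(key))
```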
I then converted the naive to timezone aware date, which solved the issue.\nJust in case some one needs to know.","Q_Score":2,"Tags":"python,django,postgresql,timezone","A_Id":19076075,"CreationDate":"2013-09-27T19:19:00.000","Title":"Postgres datetime field fetched without timezone in django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm a complete beginner to Flask and I'm starting to play around with making web apps. \nI have a hard figuring out how to enforce unique user names. I'm thinking about how to do this in SQL, maybe with something like user_name text unique on conflict fail, but then how to I catch the error back in Python?\nAlternatively, is there a way to manage this that's built in to Flask?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1118,"Q_Id":19086885,"Users Score":0,"Answer":"You can use SQLAlchemy.It's a plug-in","Q_Score":1,"Tags":"python,sql,web-applications,flask","A_Id":19087185,"CreationDate":"2013-09-30T05:17:00.000","Title":"How do I enforce unique user names in Flask?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a program which calculates a set of plain interlinked objects (the objects consist of properties which basically are either String, int or link to another object).\nI would like to have the objects stored in a relational database for easy SQL querying (from another program).\nMoreover, the objects (classes) tend to change and evolve. I would like to have a generic solution not requiring any changes in the 'persistence layer' whenever the classes evolve.\nDo you see any way to do that?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":60,"Q_Id":19142497,"Users Score":1,"Answer":"What about storing the objects in JSON?\nYou could write a function that serialize your object before storing it into the database.\nIf you have a specific identifier for your objects, I would suggest to use it as index so that you can easily retrieve it.","Q_Score":1,"Tags":"python,database,orm","A_Id":19142716,"CreationDate":"2013-10-02T16:56:00.000","Title":"Store Python objects in a database for easy quering","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am dealing with a doubt about sqlalchemy and objects refreshing!\nI am in the situation in what I have 2 sessions, and the same object has been queried in both sessions! For some particular thing I cannot to close one of the sessions.\nI have modified the object and commited the changes in session A, but in session B, the attributes are the initial ones! without modifications!\nShall I implement a notification system to communicate changes or there is a built-in way to do this in sqlalchemy?","AnswerCount":6,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":54352,"Q_Id":19143345,"Users Score":9,"Answer":"I just had this issue and the existing solutions didn't work for me for some reason. What did work was to call session.commit(). 
After calling that, the object had the updated values from the database.","Q_Score":37,"Tags":"python,mysql,session,notifications,sqlalchemy","A_Id":54821257,"CreationDate":"2013-10-02T17:43:00.000","Title":"About refreshing objects in sqlalchemy session","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently using SQLAlchemy with two distinct session objects. In one object, I am inserting rows into a mysql database. In the other session I am querying that database for the max row id. However, the second session is not querying the latest from the database. If I query the database manually, I see the correct, higher max row id.\nHow can I force the second session to query the live database?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1599,"Q_Id":19159142,"Users Score":0,"Answer":"Had a similar problem, for some reason i had to commit both sessions. Even the one that is only reading.\nThis might be a problem with my code though, cannot use same session as it the code will run on different machines. Also documentation of SQLalchemy says that each session should be used by one thread only, although 1 reading and 1 writing should not be a problem.","Q_Score":2,"Tags":"python,mysql,database,session,sqlalchemy","A_Id":49755122,"CreationDate":"2013-10-03T12:22:00.000","Title":"How to force SQLAlchemy to update rows","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with a somewhat large set (~30000 records) of data that my Django app needs to retrieve on a regular basis. This data doesn't really change often (maybe once a month or so), and the changes that are made are done in a batch, so the DB solution I'm trying to arrive at is pretty much read-only. \nThe total size of this dataset is about 20mb, and my first thought is that I can load it into memory (possibly as a singleton on an object) and access it very fast that way, though I'm wondering if there are other, more efficient ways of decreasing the fetch time by avoiding disk I\/O. Would memcached be the best solution here? Or would loading it into an in-memory SQLite DB be better? Or loading it on app startup simply as an in-memory variable?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1409,"Q_Id":19310083,"Users Score":0,"Answer":"Does the disk IO really become the bottleneck of your application's performance and affect your user experience? If not, I don't think this kind of optimization is necessary.\nOperating system and RDBMS (e.g MySQL , PostgresQL) are really smart nowdays. The data in the disk will be cached in memory by RDBMS and OS automatically.","Q_Score":2,"Tags":"python,django,sqlite,orm,memcached","A_Id":19311615,"CreationDate":"2013-10-11T04:06:00.000","Title":"Load static Django database into memory","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a workbook that has some sheets in it. One of the sheets has charts in it. 
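The two SQLAlchemy answers above both come down to ending the reading session's current view of the data so it reloads; a minimal sketch, assuming recent SQLAlchemy and a throwaway SQLite file, of how commit() (or expire_all()/refresh()) makes a stale object pick up another session's committed change:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///refresh_demo.db")  # hypothetical throwaway file
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

writer, reader = Session(), Session()

writer.add(Item(id=1, name="old"))
writer.commit()

item = reader.query(Item).filter_by(id=1).one()
print(item.name)                 # "old"

writer.query(Item).filter_by(id=1).one().name = "new"
writer.commit()

print(item.name)                 # still "old": cached in the reader's identity map

reader.commit()                  # or reader.expire_all() / reader.refresh(item)
print(item.name)                 # "new": reloaded from the database on next access
```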
I need to use xlrd or openpyxl to edit another sheet, but, whenever I save the workbook, the charts are gone.\nAny workaround to this? Is there another python package that preserves charts and formatting?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":477,"Q_Id":19323049,"Users Score":2,"Answer":"This is currently not possible with either but I hope to have it in openpyxl 2.x. Patches \/ pull requests always welcome! ;-)","Q_Score":4,"Tags":"python,xlrd,xlwt,openpyxl,xlutils","A_Id":20910668,"CreationDate":"2013-10-11T16:33:00.000","Title":"How can I edit Excel Workbooks using XLRD or openpyxl while preserving charts?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple python\/Django Application in which I am inserting records in database through some scanning event. And I am able to show the data on a simple page. I keep reloading the page every second to show the latest inserted database records.But I want it to improve so that page should update the records when ever new entry comes in database, instead of reloading every second.\nIs there any way to do this?\nDatabase: I am using mysql\nPython: Python 2.7\nFramework: Django","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":824,"Q_Id":19332760,"Users Score":2,"Answer":"you need to elemplments the poll\/long poll or server push.","Q_Score":1,"Tags":"python,mysql,django","A_Id":19333028,"CreationDate":"2013-10-12T09:38:00.000","Title":"Updating client page only when new entry comes in database in Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I understand that ForeignKey constrains a column to be an id value contained in another table so that entries in two different tables can be easily linked, but I do not understand the behavior of relationships(). As far as I can tell, the primary effect of declaring a relationship between Parent and Child classes is that parentobject.child will now reference the entries linked to the parentobject in the children table. What other effects does declaring a relationship have? How does declaring a relationship change the behavior of the SQL database or how SQLAlchemy interacts with the database?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":251,"Q_Id":19366605,"Users Score":5,"Answer":"It doesn't do anything at the database level, it's purely for convenience. Defining a relationship lets SQLAlchemy know how to automatically query for the related object, rather than you having to manually use the foreign key. SQLAlchemy will also do other high level management such as allowing assignment of objects and cascading changes.","Q_Score":1,"Tags":"python,sql,sqlalchemy,relationship","A_Id":19369883,"CreationDate":"2013-10-14T18:21:00.000","Title":"SQLAlchemy Relationships","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm just curious if there's a way to make the no default value warning I get from Storm to go away. 
I have an insert trigger in MySQL that handles these fields and everything is functioning as expected so I just want to remove this unnecessary information. I tried setting the default value to None but that causes an error because the fields do not allow nulls. So how do I make the warning go away?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":770,"Q_Id":19373289,"Users Score":0,"Answer":"Is it not possible for you to remove the 'IsNull' constraint from your MySQL database? I'm not aware of any where it is not possible to do this. Otherwise you could set a default string which represents a null value.","Q_Score":1,"Tags":"python,mysql,apache-storm","A_Id":20010872,"CreationDate":"2013-10-15T04:26:00.000","Title":"How can I avoid \"Warning: Field 'xxx' doesn't have a default value\" in Storm?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a few large hourly upload tables with RECORD fieldtypes. I want to pull select records out of those tables and put them in daily per-customer tables. The trouble I'm running into is that using QUERY to do this seems to flatten the data out.\nIs there some way to preserve the nested RECORDs, or do I need to rethink my approach?\nIf it helps, I'm using the Python API.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":234,"Q_Id":19458338,"Users Score":0,"Answer":"Unfortunately, there isn't a way to do this right now, since, as you realized, all results are flattened.","Q_Score":1,"Tags":"python,google-bigquery","A_Id":19459294,"CreationDate":"2013-10-18T20:17:00.000","Title":"Bigquery: how to preserve nested data in derived tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a simple way to extract text from excel\/word\/ppt files. The objective is to index contents in whoosh for search with haystack.\nThere are some packages like xlrd and pandas that work for excel, but they go way beyond what I need, and I'm not really sure that they will actually just print the cell's unformatted text content straight from the box.\nAnybody knows of an easy way around this? My guess is ms office files must be xml-shaped. \nThanks!\nA.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":631,"Q_Id":19500625,"Users Score":2,"Answer":"I've done this \"by hand\" before--as it turns out, .(doc|ppt|xls)x files are just zip files which contain .xml files with all of your content. So you can use zipfile and your favorite xml parser to read the contents if you can find no better tool to do it.","Q_Score":1,"Tags":"python,django-haystack,whoosh","A_Id":19500864,"CreationDate":"2013-10-21T17:07:00.000","Title":"Extract text from ms office files with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Rackspace has added the feature to select certain cloud servers (as hosts) while creating a user in a cloud database instance. 
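To flesh out the zipfile-plus-XML-parser suggestion above for Office files, a minimal sketch that pulls the plain text out of a .docx; the report.docx file name is invented, and the namespace is the standard WordprocessingML one (the same pattern applies to .xlsx and .pptx with their own XML parts):

```python
import zipfile
import xml.etree.ElementTree as ET

W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_text(path):
    """Return the concatenated text of all <w:t> runs in a .docx file."""
    with zipfile.ZipFile(path) as zf:
        xml_bytes = zf.read("word/document.xml")  # the main document part
    root = ET.fromstring(xml_bytes)
    return "".join(node.text or "" for node in root.iter(W_NS + "t"))

print(docx_text("report.docx"))  # hypothetical file name
```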
This allows the specified user to be accessed, only from those cloud servers.\nSo I would like to know whether there is an API available in pyrax(python SDK for Rackspace APIs) to accomplish this or not.\nIf possible, then how to pass multiple cloud server IPs using the API.\nThanks,","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":19585830,"Users Score":0,"Answer":"I released version 1.6.1 of pyrax a few days ago that adds support for the 'host' parameter for users, as well as for Cloud Database backups.","Q_Score":1,"Tags":"python,mysql,database,cloud,rackspace-cloud","A_Id":19761340,"CreationDate":"2013-10-25T09:17:00.000","Title":"Host Parameter While Creating a User in Rackspace Cloud Database Instance","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are building a datawarehouse in PostgreSQL. We want to connect to different data sources. Most data will come from ms access. We not not python experts (yet :-)).\nWe found several database connectors. We want to use (as much as possible) standard SQL for our queries.\nWe looked at pyodbc pscopg2.\nGiven that we use MS Access and PostgreSQL and want to have the same query syntax and return data types; Which drivers should we use ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":126,"Q_Id":19605580,"Users Score":1,"Answer":"Your query syntax differences will depend on PostgreSQL extensions vs MS Access-specific quirks. The psycodb and pyodbc will both provide a query interface using whatever SQL dialect (with quirks) the underlying db connections provide.","Q_Score":0,"Tags":"python,postgresql,ms-access,psycopg2,pyodbc","A_Id":20310119,"CreationDate":"2013-10-26T10:25:00.000","Title":"python postgresql ms access driver advice","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The goal is to find values in an Excel spreadsheet which match values in a separate list, then highlight the row with a fill color (red) where matches are found. In other words:\n\nExcel file A: source list (approximately 200 items)\nExcel file B: has one column containing the list we are checking; must apply fill color (red) to entire row where matches are found\n\nWondering what the best approach might be. I'm currently using AppleScript to highlight and sort data in a large volume of spreadsheets; a looped find checks each cell in a range for a single string of text and colors all matching rows. While this task is similar, the source list contains hundreds of items so it feels silly (and very slow) to include all this data in the actual script. Any suggestions would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":961,"Q_Id":19612872,"Users Score":0,"Answer":"I don't know what format your original list is in, but this sounds like a job for conditional formatting, if you can get the list into Excel. 
You can do conditional formatting based on a formula, and you can use a VLOOKUP() formula to do it.","Q_Score":2,"Tags":"python,regex,excel,macos,applescript","A_Id":20011728,"CreationDate":"2013-10-26T23:07:00.000","Title":"Find text in Excel file matching text in separate file, then apply fill color to row","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like my Python script to read some data out of a postgresql dump file. The Python will be running on a system without postgresql, and needs to process the data in a dump file.\nIt looks fairly straightforward to parse the CREATE TABLE calls to find the column names, then the INSERT INTO rows to build the contents. But I'm sure there would be quite a few gotchas in doing this reliably. Does anyone know of a module which will do this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4660,"Q_Id":19638019,"Users Score":1,"Answer":"Thanks for all the comments, even if they are mostly \"don't do this!\" ;)\nGiven:\n\nThe dump is always produced in the same format from a 3rd-party system\nI need to be able to automate reading it on another 3rd-party system without postgres\n\nI've gone for writing my own basic parser, which is doing a good enough job for what I require.","Q_Score":3,"Tags":"python,postgresql","A_Id":19703149,"CreationDate":"2013-10-28T14:53:00.000","Title":"How to read postgresql dump file in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Django + Postgres. When I do a SQL query using psql, \ne.g. \\d+ myapp_stories\ncorrectly shows the columns in the table\nBut when I do SELECT * FROM myapp_stories, it returns nothing. But querying the same database & table from my python code returns data just fine. So there is data in the table. Any thoughts? I'm using venv, not sure if that affects anything. Thanks in advance!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":82,"Q_Id":19664732,"Users Score":1,"Answer":"I guess you forgot to enter semicolon:\n\nSELECT * FROM myapp_stories;","Q_Score":1,"Tags":"python,django,postgresql","A_Id":19665116,"CreationDate":"2013-10-29T17:05:00.000","Title":"SELECT using psql returns no rows even though data is there","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Django + Postgres. When I do a SQL query using psql, \ne.g. \\d+ myapp_stories\ncorrectly shows the columns in the table\nBut when I do SELECT * FROM myapp_stories, it returns nothing. But querying the same database & table from my python code returns data just fine. So there is data in the table. Any thoughts? I'm using venv, not sure if that affects anything. 
Thanks in advance!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":82,"Q_Id":19664732,"Users Score":1,"Answer":"Prefix the table in your query with the schema, as the search_path might be causing your query (or psql) to look in a schema other than what you are expecting.","Q_Score":1,"Tags":"python,django,postgresql","A_Id":19666882,"CreationDate":"2013-10-29T17:05:00.000","Title":"SELECT using psql returns no rows even though data is there","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have PyQt application which uses SQLite files to store data and would like to allows multiple users to read and write to the same database. It uses QSqlDatabase and QSqlTableModels with item views for reading and editing. \nAs is multiple users can launch the application and read\/write to different tables. The issue is this:\nSay user1's application reads table A then user2 writes to index 0,0 on table A. Since user1 application has already read and cached that cell and doesn't see user2's change right away. The Qt item view's will update when the dataChanged signal emits but in this case the data is being changed by another application instance. Is there some way to trigger file changes by another application instance. What's the best way to handle this. \nI'm assuming this is really best solved by using an SQL server host connection rather than SQLite for the database, but in the realm of SQLite what would be my closest workaround option?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":19759594,"Users Score":0,"Answer":"SQLite has no mechanism by which another user can be notified.\nYou have to implement some communication mechanism outside of SQLite.","Q_Score":0,"Tags":"python,sql,qt,sqlite","A_Id":19764106,"CreationDate":"2013-11-03T23:40:00.000","Title":"Signaling Cell Changes across multiple QSqlDatabase to the same SQliteFile","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get Django running on OS X Mavericks and I've encountered a bunch of errors along the way, the latest way being that when runpython manage.py runserver to see if everything works, I get this error, which I believe means that it misses libssl:\n\nImportError: dlopen(\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/psycopg2\/_psycopg.so, 2): Library not loaded: @loader_path\/..\/lib\/libssl.1.0.0.dylib Referenced from: \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/psycopg2\/_psycopg.so Reason: image not found\n\nI have already upgraded Python to 2.7.6 with the patch that handles some of the quirks of Mavericks.\nAny ideas?\nFull traceback:\n\nUnhandled exception in thread started by >\n Traceback (most recent call last):\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/commands\/runserver.py\", line 93, in inner_run\n self.validate(display_num_errors=True)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/base.py\", line 280, in validate\n 
num_errors = get_validation_errors(s, app)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/core\/management\/validation.py\", line 28, in get_validation_errors\n from django.db import models, connection\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/init.py\", line 40, in \n backend = load_backend(connection.settings_dict['ENGINE'])\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/init.py\", line 34, in getattr\n return getattr(connections[DEFAULT_DB_ALIAS], item)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/utils.py\", line 93, in getitem\n backend = load_backend(db['ENGINE'])\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/utils.py\", line 27, in load_backend\n return import_module('.base', backend_name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/utils\/importlib.py\", line 35, in import_module\n import(name)\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/backends\/postgresql_psycopg2\/base.py\", line 14, in \n from django.db.backends.postgresql_psycopg2.creation import DatabaseCreation\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/django\/db\/backends\/postgresql_psycopg2\/creation.py\", line 1, in \n import psycopg2.extensions\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/psycopg2\/init.py\", line 50, in \n from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID\n ImportError: dlopen(\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/psycopg2\/_psycopg.so, 2): Library not loaded: @loader_path\/..\/lib\/libssl.1.0.0.dylib\n Referenced from: \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/psycopg2\/_psycopg.so\n Reason: image not found","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":9380,"Q_Id":19767569,"Users Score":2,"Answer":"It seems that it's libssl.1.0.0.dylib that is missing. Mavericks comme with libssl 0.9.8. You need to install libssl via homebrew.\nIf loaderpath points to \/usr\/lib\/, you also need to symlink libssl from \/usr\/local\/Cell\/openssl\/lib\/ into \/usr\/lib.","Q_Score":2,"Tags":"python,django,macos,postgresql","A_Id":19772866,"CreationDate":"2013-11-04T12:15:00.000","Title":"Django can't find libssl on OS X Mavericks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have two different python programs. One of the program uses the python BeautifulSoup module, the other uses the MySQLdb module. When I run the python files individually, I have no problem and the program run fine and give me the desired output. However I need to combine the two programs so to achieve my ultimate goal. However the Beautiful soup module only runs if I open it in python 2.7.3 and the MySQLdb runs only on the python 2.7.4 (64bit) version. I installed both the modules exactly the way it was mentioned in the docs. 
Any help will be much appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":19799605,"Users Score":0,"Answer":"If you have 2 versions of python installed on your system, then you've somehow installed one library in each of them.\nYou either need to install both libraries in both versions of python (which 2 seperate versions of pip can do), or need to setup your PYTHONPATH environment variable to allow loading of modules from additional paths (like the site-packages folder of the python 2.7.4 (64bit) installation from the python 2.7.3 executable).","Q_Score":0,"Tags":"python,python-2.7,beautifulsoup,mysql-python","A_Id":19801757,"CreationDate":"2013-11-05T21:42:00.000","Title":"Modules not Working across different python versions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Situation: \nI have a requirement to use connection pooling while connecting to Oracle database in python. Multiple python applications would use the helper connection libraries I develop.\nMy Thought Process:\nHere I can think of two ways of connection pooling:\n1) Let connection pool be maintained and managed by database itself (as provided by Oracle's DRCP) and calling modules just ask connections from the connection broker described by Oracle DRCP.\n2) Have a server process that manages the connection pool and all caller modules ask for connections from this pool (like dbcp?)\nWhat suggestions do I need:\noption 1) looks very straight forward since pool does not need to be stored by application. \nBut I wanted to know what advantages do I get other than simplicity using option 1)?\nI am trying to avoid option 2) since it would require a dedicated server process always running (considering shelving is not possible for connection objects).\nIs there any other way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":622,"Q_Id":19848191,"Users Score":0,"Answer":"Let the database handle the pool. . . it's smarter than you'll be, and you'll leverage every bug fix\/performance improvement Oracle's installed base comes up with.","Q_Score":4,"Tags":"python,oracle,connection-pooling","A_Id":19848278,"CreationDate":"2013-11-07T22:40:00.000","Title":"Application vs Database Resident Connection Pool","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got one table in which modifications are made :-account_bank_statement, what other tables are needed for the point of sale and if i make a sale in which tables modifications are made.I want to make a sale but not through the pos provided.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":19892934,"Users Score":0,"Answer":"All the sales done through post is registered in post.order. If you are creating orders from an external source other than pos, you can create the order in this table and call the confirm bottom action. 
Rest changes in all other table will be done automatically..","Q_Score":0,"Tags":"python,openerp","A_Id":19897059,"CreationDate":"2013-11-10T17:46:00.000","Title":"In which tables changes are made in openERP when an items is sold at Point of sale","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I know there exists a plugin for nginx to load the config through perl. I was wondering, does anyone have any experience doing this without using a plugin? Possibly a fuse-backed Python script that queries a DB?\nI would really like to not use the perl plugin, as it doesn't seem that stable.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":722,"Q_Id":19957613,"Users Score":1,"Answer":"I haven't seen any working solution to solve your task, a quick google search doesn't give any useful information either (it doesn't look like HttpPerlModule could help with DB stored configuration). \nIt sounds like it's a good task to develop and contribute to Nginx project !","Q_Score":0,"Tags":"python,sql,configuration,nginx,fuse","A_Id":20018813,"CreationDate":"2013-11-13T15:23:00.000","Title":"Running Nginx with a database-backed config file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to share an in-memory database between processes. I'm using Python's sqlite3. The idea is to create a file in \/run\/shm and use it as a database. Questions are:\n\nIs that safe? In particular: do read\/write locks (fcntl) work the same in shm?\nIs that a good idea in the first place? I'd like to keep things simple and not have to create a separate database process.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":230,"Q_Id":19976664,"Users Score":0,"Answer":"I've tested fcntl (in Python) with shm files and it seems that locking works correctly. Indeed, from process point of view it is a file and OS handles everything correctly.\nI'm going to keep this architecture since it is simple enough and I don't see any (major) drawbacks.","Q_Score":1,"Tags":"python,sqlite,shared-memory","A_Id":20004051,"CreationDate":"2013-11-14T11:38:00.000","Title":"sqlite3 database in shared memory","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a web app in Python (Flask) that, essentially, shows the user information from a PostgreSQL database (via Flask-SQLAlchemy) in a random order, with each set of information being shown on one page. 
Hitting a Next button will direct the user to the next set of data by replacing all data on the page with new data, and so on.\nMy conundrum comes with making the presentation truly random - not showing the user the same information twice by remembering what they've seen and not showing them those already seen sets of data again.\nThe site has no user system, and the \"already seen\" sets of data should be forgotten when they close the tab\/window or navigate away.\nI should also add that I'm a total newbie to SQL in general.\nWhat is the best way to do this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":512,"Q_Id":20072309,"Users Score":1,"Answer":"The easiest way is to do the random number generation in javascript at the client end...\nTell the client what the highest number row is, then the client page keeps track of which ids it has requested (just a simple js array). Then when the \"request next random page\" button is clicked, it generates a new random number less than the highest valid row id, and providing that the number isn't in its list of previously viewed items, it will send a request for that item.\nThis way, you (on the server) only have to have 2 database accessing views:\n\nmain page (which gives the js, and the highest valid row id)\ndisplay an item (by id) \n\nYou don't have any complex session tracking, and the user's browser is only having to keep track of a simple list of numbers, which even if they personally view several thousand different items is still only going to be a meg or two of memory.\nFor performance reasons, you can even pre-fetch the next item as soon as the current item loads, so that it displays instantly and loads the next one in the background while they're looking at it. (jQuery .load() is your friend :-) )\nIf you expect a large number of items to be removed from the database (so that the highest number is not helpful), then you can instead generate a list of random ids, send that, and then request them one at a time. Pre-generate the random list, as it were.\nHope this helps! :-)","Q_Score":1,"Tags":"python,sql,flask,flask-sqlalchemy","A_Id":20081554,"CreationDate":"2013-11-19T13:03:00.000","Title":"Best way to show a user random data from an SQL database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am wondering if anyone knows a way to generate a connection to a SQLite database in python from a StringIO object.\nI have a compressed SQLite3 database file and I would like to decompress it using the gzip library and then connect to it without first making a temp file.\nI've looked into the slqite3 library source, but it looks like filename gets passed all the way through into the C code. Are there any other SQLite3 connection libraries that you could use a file ID for? 
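The Flask answer above leaves the server needing only the two small views it mentions (the highest id, and one item by id); a hedged sketch of those with Flask-SQLAlchemy, where the Entry model, database URI and route names are all invented for illustration:

```python
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import func

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"  # hypothetical
db = SQLAlchemy(app)

class Entry(db.Model):  # invented model for illustration
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.Text)

with app.app_context():
    db.create_all()

@app.route("/max-id")
def max_id():
    # The client uses this to pick random ids it has not yet requested.
    return jsonify(max_id=db.session.query(func.max(Entry.id)).scalar() or 0)

@app.route("/item/<int:item_id>")
def item(item_id):
    entry = Entry.query.get_or_404(item_id)
    return jsonify(id=entry.id, body=entry.body)
```

The "already seen" bookkeeping then lives entirely in the browser, as the answer describes, so nothing needs to be stored server-side per visitor.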
Or is there some why I can trick the builtin sqlite3 library into thinking that my StringIO (or some other object type) is an actual file?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1371,"Q_Id":20084135,"Users Score":4,"Answer":"The Python sqlite3 module cannot open a database from a file number, and even so, using StringIO will not give you a file number (since it does not open a file, it just emulates the Python file object).\nYou can use the :memory: special file name to avoid writing a file to disk, then later write it to disk once you are done with it. This will also make sure the file is optimized for size, and you can opt not to write e.g. indexes if size is really a big issue.","Q_Score":10,"Tags":"python,sqlite,stringio","A_Id":20084315,"CreationDate":"2013-11-19T23:10:00.000","Title":"SQLite3 connection from StringIO (Python)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost).\nHow do I make a connection to that database from a user's computer, and also be able to retrieve data from it securely? Can I build some PHP script on the server that is able to take parameters and retrieve data to that user?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":20193144,"Users Score":0,"Answer":"As i understood you are able to connect only with \"server itself (localhost)\" so to connect from any ip do this: \n mysql> CREATE USER 'myname'@'%.mydomain.com' IDENTIFIED BY 'mypass';\nI agree with @Daniel no PHP script needed...","Q_Score":0,"Tags":"php,python,mysql,python-2.7","A_Id":20193357,"CreationDate":"2013-11-25T12:29:00.000","Title":"Secure MySQL Connection in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost).\nHow do I make a connection to that database from a user's computer, and also be able to retrieve data from it securely? Can I build some PHP script on the server that is able to take parameters and retrieve data to that user?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":143,"Q_Id":20193144,"Users Score":1,"Answer":"You should not make a connection from the user's computer. By default, most database configurations are done to allow only requests from the same server (localhost) to access the database. \nWhat you will need is this: \n\nA server side script such as Python, PHP, Perl, Ruby, etc to access the database. 
The script will be on the server, and as such, it will access the database locally\nSend a web request from the user's computer using Python, Perl, or any programming language to the server side script as described above.\nSo, the application on the user's computer sends a request to the script on the server. The script connects to the database locally, accesses the data, and sends it back to the application. The application can then use the data as needed.\n\nThat is basically, what you are trying to achieve. \nHope the explanation is clear and it helps.","Q_Score":0,"Tags":"php,python,mysql,python-2.7","A_Id":20193562,"CreationDate":"2013-11-25T12:29:00.000","Title":"Secure MySQL Connection in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"win32com is a general library to access COM objects from Python.\nOne of the major hallmarks of this library is ability to manipulate excel documents.\nHowever, there is lots of customized modules, whose only purpose it to manipulate excel documents, like openpyxl, xlrd, xlwt, python-tablefu.\nAre these libraries any better for this specific task? If yes, in what respect?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3031,"Q_Id":20263021,"Users Score":9,"Answer":"Open and write directly and efficiently excel files, for instance.\n\nwin32com uses COM communication, which while being very useful for certain purposes, it needs to perform complicated API calls that can be very slow (so to say, you are using code that controls Windows, that controls Excel)\nopenpyxl or others, just open an excel file, parse it and let you work with it.\n\nTry to populate an excel file with 2000 rows, 100 cells each, with win32com and then with any other direct parser. While a parser needs seconds, win32com will need minutes. \nBesides, openpyxl (I haven't tried the others) does not need that excel is installed in the system. It does not even need that the OS is windows.\nTotally different concepts. win32com is a piece of art that opens a door to automate almost anything, while the other option is just a file parser. In other words, to iron your shirt, you use a $20 iron, not a 100 ton metal sheet attached to a Lamborghini Diablo.","Q_Score":3,"Tags":"python,excel,win32com,xlrd,openpyxl","A_Id":20263978,"CreationDate":"2013-11-28T10:04:00.000","Title":"What do third party libraries like openpyxl or xlrd\/xlwt have, what win32com doesn't have?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. 
I suspect bad configuration on my ubuntu 12.04 install.\n\n vfi@ubuntu:\/usr\/share\/pyshared$ python3\n Python 3.2.3 (default, Sep 25 2013, 18:22:43) \n [GCC 4.6.3] on linux2\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> import mysql.connector\n Traceback (most recent call last):\n File \"\", line 1, in \n ImportError: No module named mysql.connector\n Error in sys.excepthook:\n Traceback (most recent call last):\n File \"\/usr\/share\/pyshared\/apport_python_hook.py\", line 66, in apport_excepthook\n from apport.fileutils import likely_packaged, get_recent_crashes\n File \"apport\/__init__.py\", line 1, in \n from apport.report import Report\n File \"apport\/report.py\", line 20, in \n import apport.fileutils\n File \"apport\/fileutils.py\", line 22, in \n from apport.packaging_impl import impl as packaging\n File \"apport\/packaging_impl.py\", line 20, in \n import apt\n File \"apt\/__init__.py\", line 24, in \n from apt.package import Package\n File \"apt\/package.py\", line 1051\n return file_list.read().decode(\"utf-8\").split(u\"\\n\")\n ^\n SyntaxError: invalid syntax\n\n Original exception was:\n Traceback (most recent call last):\n File \"\", line 1, in \n ImportError: No module named mysql.connector\n\nhere is the related modules state on my pc:\n\nvfi@ubuntu:\/usr\/share\/pyshared$ sudo aptitude search python3-apt\ni python3-apt - Python 3 interface to libapt-pkg \np python3-apt:i386 - Python 3 interface to libapt-pkg \np python3-apt-dbg - Python 3 interface to libapt-pkg (debug extension) \np python3-apt-dbg:i386 - Python 3 interface to libapt-pkg (debug extension) \nv python3-apt-dbg:any - \nv python3-apt-dbg:any:i386 - \nv python3-apt:any - \nv python3-apt:any:i386 - \nvfi@ubuntu:\/usr\/share\/pyshared$ sudo aptitude search python-apt\ni python-apt - Python interface to libapt-pkg \np python-apt:i386 - Python interface to libapt-pkg \ni python-apt-common - Python interface to libapt-pkg (locales) \np python-apt-dbg - Python interface to libapt-pkg (debug extension) \np python-apt-dbg:i386 - Python interface to libapt-pkg (debug extension) \nv python-apt-dbg:any - \nv python-apt-dbg:any:i386 - \np python-apt-dev - Python interface to libapt-pkg (development files) \np python-apt-doc - Python interface to libapt-pkg (API documentation) \nv python-apt-p2p - \nv python-apt-p2p-khashmir - \nv python-apt:any - \nv python-apt:any:i386 - \ni python-aptdaemon - Python module for the server and client of aptdaemon \np python-aptdaemon-gtk - Transitional dummy package \ni python-aptdaemon.gtk3widgets - Python GTK+ 3 widgets to run an aptdaemon client \np python-aptdaemon.gtkwidgets - Python GTK+ 2 widgets to run an aptdaemon client \ni python-aptdaemon.pkcompat - PackageKit compatibilty for AptDaemon \np python-aptdaemon.test - Test environment for aptdaemon clients \nvfi@ubuntu:\/usr\/share\/pyshared$ sudo aptitude search python-mysql.connector\npi python-mysql.connector - pure Python implementation of MySQL Client\/Server protocol \n\nHope you can help!\nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":20264,"Q_Id":20275176,"Users Score":0,"Answer":"pip3 install mysql-connector-python worked for me","Q_Score":4,"Tags":"mysql,python-3.x,python-module","A_Id":65242155,"CreationDate":"2013-11-28T21:46:00.000","Title":"ImportError: No module named mysql.connector using Python3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and 
APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install.\n\n vfi@ubuntu:\/usr\/share\/pyshared$ python3\n Python 3.2.3 (default, Sep 25 2013, 18:22:43) \n [GCC 4.6.3] on linux2\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> import mysql.connector\n Traceback (most recent call last):\n File \"\", line 1, in \n ImportError: No module named mysql.connector\n Error in sys.excepthook:\n Traceback (most recent call last):\n File \"\/usr\/share\/pyshared\/apport_python_hook.py\", line 66, in apport_excepthook\n from apport.fileutils import likely_packaged, get_recent_crashes\n File \"apport\/__init__.py\", line 1, in \n from apport.report import Report\n File \"apport\/report.py\", line 20, in \n import apport.fileutils\n File \"apport\/fileutils.py\", line 22, in \n from apport.packaging_impl import impl as packaging\n File \"apport\/packaging_impl.py\", line 20, in \n import apt\n File \"apt\/__init__.py\", line 24, in \n from apt.package import Package\n File \"apt\/package.py\", line 1051\n return file_list.read().decode(\"utf-8\").split(u\"\\n\")\n ^\n SyntaxError: invalid syntax\n\n Original exception was:\n Traceback (most recent call last):\n File \"\", line 1, in \n ImportError: No module named mysql.connector\n\nhere is the related modules state on my pc:\n\nvfi@ubuntu:\/usr\/share\/pyshared$ sudo aptitude search python3-apt\ni python3-apt - Python 3 interface to libapt-pkg \np python3-apt:i386 - Python 3 interface to libapt-pkg \np python3-apt-dbg - Python 3 interface to libapt-pkg (debug extension) \np python3-apt-dbg:i386 - Python 3 interface to libapt-pkg (debug extension) \nv python3-apt-dbg:any - \nv python3-apt-dbg:any:i386 - \nv python3-apt:any - \nv python3-apt:any:i386 - \nvfi@ubuntu:\/usr\/share\/pyshared$ sudo aptitude search python-apt\ni python-apt - Python interface to libapt-pkg \np python-apt:i386 - Python interface to libapt-pkg \ni python-apt-common - Python interface to libapt-pkg (locales) \np python-apt-dbg - Python interface to libapt-pkg (debug extension) \np python-apt-dbg:i386 - Python interface to libapt-pkg (debug extension) \nv python-apt-dbg:any - \nv python-apt-dbg:any:i386 - \np python-apt-dev - Python interface to libapt-pkg (development files) \np python-apt-doc - Python interface to libapt-pkg (API documentation) \nv python-apt-p2p - \nv python-apt-p2p-khashmir - \nv python-apt:any - \nv python-apt:any:i386 - \ni python-aptdaemon - Python module for the server and client of aptdaemon \np python-aptdaemon-gtk - Transitional dummy package \ni python-aptdaemon.gtk3widgets - Python GTK+ 3 widgets to run an aptdaemon client \np python-aptdaemon.gtkwidgets - Python GTK+ 2 widgets to run an aptdaemon client \ni python-aptdaemon.pkcompat - PackageKit compatibilty for AptDaemon \np python-aptdaemon.test - Test environment for aptdaemon clients \nvfi@ubuntu:\/usr\/share\/pyshared$ sudo aptitude search python-mysql.connector\npi python-mysql.connector - pure Python implementation of MySQL Client\/Server protocol \n\nHope you can help!\nThanks","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":20264,"Q_Id":20275176,"Users Score":5,"Answer":"Finally figured out what was my problem. 
\npython-mysql.connector was not a py3 package and apt-get nor aptitude was proposing such version.\nI managed to install it with pip3 which was not so simple on ubuntu 12.04 because it's only bundled with ubuntu starting at 12.10 and the package does not have the same name under pip...\n\nvfi@ubuntu:$sudo apt-get install python3-setuptools\nvfi@ubuntu:$sudo easy_install3 pip\n\nvfi@ubuntu:$ pip --version\npip 1.4.1 from \/usr\/local\/lib\/python3.2\/dist-packages\/pip-1.4.1-py3.2.egg (python 3.2)\n\nvfi@ubuntu:$sudo pip install mysql-connector-python","Q_Score":4,"Tags":"mysql,python-3.x,python-module","A_Id":20275797,"CreationDate":"2013-11-28T21:46:00.000","Title":"ImportError: No module named mysql.connector using Python3?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting different information for a particular thing and i'm storing those information in a dictionary\ne.g. {property1:val , property2:val, property3:val} \nnow I have several dictionary of this type (as I get many things ..each dictionary is for a thing)\nnow I want to save information in DB so there would be as many columns as key:value pair in a dictionary \nso what is the best or simplest way to do that.\nPlease provide all steps to do that (I mean syntax for login in DB, push data into row or execute sql query etc... I hope there wont be more than 4 or 5 steps )\nPS: All dictionaries have the same keys, and each key always has the same value-type . and also number of columns are predefined.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":861,"Q_Id":20304863,"Users Score":0,"Answer":"You're doing it wrong!\nMake an object that represents a row in the database, use __getitem__ to pretend it's a dictionary. \nPut your database logic in that.\nDon't go all noSQL unless your tables are not related. Just by being tables they are ideal for SQL!","Q_Score":0,"Tags":"python,database,python-2.7,beautifulsoup,mysql-python","A_Id":20305193,"CreationDate":"2013-11-30T19:46:00.000","Title":"How to save Information in Database using BeautifulSoup","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm getting different information for a particular thing and i'm storing those information in a dictionary\ne.g. {property1:val , property2:val, property3:val} \nnow I have several dictionary of this type (as I get many things ..each dictionary is for a thing)\nnow I want to save information in DB so there would be as many columns as key:value pair in a dictionary \nso what is the best or simplest way to do that.\nPlease provide all steps to do that (I mean syntax for login in DB, push data into row or execute sql query etc... I hope there wont be more than 4 or 5 steps )\nPS: All dictionaries have the same keys, and each key always has the same value-type . 
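For the dictionary-to-table question above (same keys in every dict, one column per key), a minimal sketch with mysql-connector-python is shown below; named placeholders let executemany() pull values straight from the dicts. The table name, column types and credentials are assumptions.

```python
# Sketch: store a list of same-shaped dicts as rows in a MySQL table.
# Table, columns and connection settings are placeholders.
import mysql.connector

rows = [
    {"property1": "a", "property2": 1, "property3": 2.5},
    {"property1": "b", "property2": 2, "property3": 7.1},
]

conn = mysql.connector.connect(host="localhost", user="testuser",
                               password="secret", database="testdb")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS things (
                   property1 VARCHAR(50),
                   property2 INT,
                   property3 DOUBLE)""")
# %(name)s placeholders are filled from each dict in the list
cur.executemany(
    "INSERT INTO things (property1, property2, property3) "
    "VALUES (%(property1)s, %(property2)s, %(property3)s)", rows)
conn.commit()
conn.close()
```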
and also number of columns are predefined.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":861,"Q_Id":20304863,"Users Score":0,"Answer":"If your dictionaries all have the same keys, and each key always has the same value-type, it would be pretty straight-forward to map this to a relational database like MySQL.\nAlternatively, you could convert your dictionaries to objects and use an ORM like SQLAlchemy to do the back-end work.","Q_Score":0,"Tags":"python,database,python-2.7,beautifulsoup,mysql-python","A_Id":20305076,"CreationDate":"2013-11-30T19:46:00.000","Title":"How to save Information in Database using BeautifulSoup","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am hosting a web app at pythonanywhere.com and experiencing a strange problem. Every half-hour or so I am getting the OperationalError: (2006, 'MySQL server has gone away'). However, if I resave my wsgi.py file, the error disappears. And then appears again some half-an-hour later...\nDuring the loading of the main page, my app checks a BOOL field in a 1x1 table (basically whether sign-ups should be open or closed). The only other MySQL actions are inserts into another small table, but none of these appear to be associated with the problem.\nAny ideas for how I can fix this? I can provide more information as is necessary. Thanks in advance for your help.\nEDIT\nProblem turned out to be a matter of knowing when certain portions of code run. I assumed that every time a page loaded a new connection was opened. This was not the case; however, I have fixed it now.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2432,"Q_Id":20308097,"Users Score":4,"Answer":"It normally because your mysql network connect be disconnected, may by your network gateway\/router, so you have two options. One is always build a mysql connect before every query (not using connect pool etc). Second is try and catch this error, then get connect and query db again.","Q_Score":2,"Tags":"python,mysql,mysql-python,pythonanywhere","A_Id":20309286,"CreationDate":"2013-12-01T02:33:00.000","Title":"Periodic OperationalError: (2006, 'MySQL server has gone away')","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Have some programming background, but in the process of both learning Python and making a web app, and I'm a long-time lurker but first-time poster on Stack Overflow, so please bear with me. \nI know that SQLite (or another database, seems like PostgreSQL is popular) is the way to store data between sessions. But what's the most efficient way to store large amounts of data during a session? \nI'm building a script to identify the strongest groups of employees to work on various projects in a company. I have received one SQLite database per department containing employee data including skill sets, achievements, performance, and pay. \nMy script currently runs one SQL query on each database in response to an initial query by the user, pulling all the potentially-relevant employees and their data. It stores all of that data in a list of Python dicts so the end-user can mix-and-match relevant people. 
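The "MySQL server has gone away" answer above suggests catching the error and reconnecting before retrying. A small sketch of that second option with MySQLdb (the driver the question is tagged with) follows; the connection settings and query are placeholders.

```python
# Sketch: catch MySQL error 2006 ("server has gone away"), reopen the
# connection and retry the query once. Settings and SQL are placeholders.
import MySQLdb

def get_connection():
    return MySQLdb.connect(host="localhost", user="app",
                           passwd="secret", db="appdb")

conn = get_connection()

def run_query(sql, params=()):
    global conn
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
    except MySQLdb.OperationalError as exc:
        if exc.args[0] != 2006:   # only handle "server has gone away"
            raise
        conn = get_connection()   # reconnect and retry once
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()

rows = run_query("SELECT signups_open FROM settings LIMIT 1")
```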
\nI see two other options: I could still run the comprehensive initial queries but instead of storing it in Python dicts, dump it all into SQLite temporary tables; my guess is that this would save some space and computing because I wouldn't have to store all the joins with each record. Or I could just load employee name and column\/row references, which would save a lot of joins on the first pass, then pull the data on the fly from the original databases as the user requests additional data, storing little if any data in Python data structures. \nWhat's going to be the most efficient? Or, at least, what is the most common\/proper way of handling large amounts of data during a session? \nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2649,"Q_Id":20320642,"Users Score":1,"Answer":"Aren't you over-optimizing? You don't need the best solution, you need a solution which is good enough. \nImplement the simplest one, using dicts; it has a fair chance to be adequate. If you test it and then find it inadequate, try SQLite or Mongo (both have downsides) and see if it suits you better. But I suspect that buying more RAM instead would be the most cost-effective solution in your case.\n(Not-a-real-answer disclaimer applies.)","Q_Score":2,"Tags":"python,python-2.7,sqlite,sqlalchemy","A_Id":20320905,"CreationDate":"2013-12-02T04:05:00.000","Title":"What's faster: temporary SQL tables or Python dicts for session data?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two databases (infact two database dump ... db1.sql and db2.sql)\nboth database have only 1 table in each.\nin each table there are few columns (not equal number nor type) but 1 or 2 columns have same type and same value \ni just want to go through both databases and find a row from each table so that they both have one common value\nnow from these two rows(one from each table) i would extract some information and would write into a file.\nI want efficient methods to do that \nPS: If you got my question please edit the title \nEDIT: I want to compare these two tables(database) by a column which have contact number as primary key.\nbut the problem is one table has it is as an integer(big integer) and other table has it is as a string. now how could i inner-join them.\nbasically i dont want to create another database, i simply want to store two columns from each table into a file so I guess i dont need inner-join. do i? \ne.g. \nin table-1 = 9876543210\nin table-2 = \"9876543210\"","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1032,"Q_Id":20348584,"Users Score":0,"Answer":"Not sure if I understand what it is you want to do. 
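For the two-dump comparison question above (one table stores the contact number as a BIGINT, the other as a string), one way is to load both dumps into a single database and join with a CAST so the types match, then write the two columns of interest to a file. The sketch below assumes both tables live in one MySQL database; table and column names are placeholders.

```python
# Sketch: join an integer contact-number column against a string one and
# dump the matching columns to a file. Names and credentials are placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="merged")
cur = conn.cursor()
# CAST makes the BIGINT column comparable with the VARCHAR column
cur.execute("""
    SELECT t1.contact_no, t1.some_col, t2.other_col
    FROM t1
    JOIN t2 ON CAST(t1.contact_no AS CHAR) = t2.contact_no
""")
with open("matches.txt", "w") as out:
    for contact_no, a, b in cur.fetchall():
        out.write("%s\t%s\t%s\n" % (contact_no, a, b))
conn.close()
```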
You want to match a value from a column from one table to a value from a column from another table?\nIf you'd have the data in two tables in a database, you could make an inner join.\nDepending on how big the file is, you could use a manual comparison tool like WinMerge.","Q_Score":0,"Tags":"python,mysql,sql,database,mysql-python","A_Id":20348851,"CreationDate":"2013-12-03T10:28:00.000","Title":"Compare two databases and find common value in a row","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two databases (infact two database dump ... db1.sql and db2.sql)\nboth database have only 1 table in each.\nin each table there are few columns (not equal number nor type) but 1 or 2 columns have same type and same value \ni just want to go through both databases and find a row from each table so that they both have one common value\nnow from these two rows(one from each table) i would extract some information and would write into a file.\nI want efficient methods to do that \nPS: If you got my question please edit the title \nEDIT: I want to compare these two tables(database) by a column which have contact number as primary key.\nbut the problem is one table has it is as an integer(big integer) and other table has it is as a string. now how could i inner-join them.\nbasically i dont want to create another database, i simply want to store two columns from each table into a file so I guess i dont need inner-join. do i? \ne.g. \nin table-1 = 9876543210\nin table-2 = \"9876543210\"","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1032,"Q_Id":20348584,"Users Score":0,"Answer":"You can use Join with alias name.","Q_Score":0,"Tags":"python,mysql,sql,database,mysql-python","A_Id":20348719,"CreationDate":"2013-12-03T10:28:00.000","Title":"Compare two databases and find common value in a row","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I was making a simple chat app with python. I want to store user specific data in a database, but I'm unfamiliar with efficiency. I want to store usernames, public rsa keys, missed messages, missed group messages, urls to profile pics etc.\nThere's a couple of things in there that would have to be grabbed pretty often, like missed messages and profile pics and a couple of hashes. So here's the question: what database style would be fastest while staying memory efficient? 
I want it to be able to handle around 10k users (like that's ever gonna happen).\nheres some I thought of:\n\neverything in one file (might be bad on memory, and takes time to load in, important, as I would need to load it in after every change.)\nseperate files per user (Slower, but memory efficient)\nseperate files\nper data value\n\ndirectory for each user, seperate files for each value.\n\n\nthanks,and try to keep it objective so this isnt' instantly closed!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":20380661,"Users Score":0,"Answer":"The only answer possible at this point is 'try it and see'.\nI would start with MySQL (mostly because it's the 'lowest common denominator', freely available everywhere); it should do everything you need up to several thousand users, and if you get that far you should have a far better idea of what you need and where the bottlenecks are.","Q_Score":0,"Tags":"python,database,performance,chat","A_Id":20382525,"CreationDate":"2013-12-04T16:27:00.000","Title":"efficient database file trees","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to know how many rows were commited on the last commit on a SQLAlchemy Session? For instance, if I had just inserted 2 rows, I wish to know that there were 2 rows inserted, etc.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":193,"Q_Id":20389368,"Users Score":1,"Answer":"You can look at session.new, .dirty, and .deleted to see what objects will be committed, but that doesn't necessarily represent the number of rows, since those objects may set extra rows in a many-to-many association, polymorphic table, etc.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":20389560,"CreationDate":"2013-12-05T00:59:00.000","Title":"SQLAlchemy, how many rows were commited on last commit","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a giant (100Gb) csv file with several columns and a smaller (4Gb) csv also with several columns. The first column in both datasets have the same category. I want to create a third csv with the records of the big file which happen to have a matching first column in the small csv. In database terms it would be a simple join on the first column. \nI am trying to find the best approach to go about this in terms of efficiency. As the smaller dataset fits in memory, I was thinking of loading it in a sort of set structure and then read the big file line to line and querying the in memory set, and write to file on positive.\nJust to frame the question in SO terms, is there an optimal way to achieve this?\nEDIT: This is a one time operation.\nNote: the language is not relevant, open to suggestions on column, row oriented databases, python, etc...","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":20389982,"Users Score":0,"Answer":"If you are only doing this once, your approach should be sufficient. The only improvement I would make is to read the big file in chunks instead of line by line. That way you don't have to hit the file system as much. 
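The chunked-reading suggestion for the 100 GB / 4 GB CSV join above can be sketched with pandas: keep the small file's join keys in an in-memory set and stream the big file in chunks, writing only the matching rows. File names and the chunk size are placeholders.

```python
# Sketch: stream the big CSV in chunks and keep only rows whose first
# column appears in the small CSV. File names are placeholders.
import pandas as pd

# first column of the small file becomes the in-memory lookup set
keys = set(pd.read_csv("small.csv", usecols=[0]).iloc[:, 0])

first_chunk = True
for chunk in pd.read_csv("big.csv", chunksize=500000):
    matched = chunk[chunk.iloc[:, 0].isin(keys)]
    matched.to_csv("joined.csv", mode="a", header=first_chunk, index=False)
    first_chunk = False
```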
You'd want to make the chunks as big as possible while still fitting in memory.\nIf you will need to do this more than once, consider pushing the data into some database. You could insert all the data from the big file and then \"update\" that data using the second, smaller file to get a complete database with one large table with all the data. If you use a NoSQL database like Cassandra this should be fairly efficient since Cassandra is pretty good and handling writes efficiently.","Q_Score":0,"Tags":"c#,python,database,bigdata","A_Id":20390085,"CreationDate":"2013-12-05T02:00:00.000","Title":"Intersecting 2 big datasets","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Just wondering how to store files in the google app engine datastore.\nThere are lots of examples on the internet, but they are using blobstore\nI have tried importing db.BlobProperty, but when i put() the data\nit shows up as a i think. It appears like there is no data\nSimilar to None for a string\nAre there any examples of using the Datastore to store files\nOr can anyone point me in the right direction\nI am new to programming, so not to complex, but I have a good\nhang of Python, just not an expert yet.\nThanks for any help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":20421965,"Users Score":0,"Answer":"Datastore has a limit on the size of objects stored there, thats why all examples and documentation say to use the blobstore or cloud storage. Do that.","Q_Score":0,"Tags":"python,google-app-engine,blob,google-cloud-datastore","A_Id":20424484,"CreationDate":"2013-12-06T10:47:00.000","Title":"How do I store files in googleappengine datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using a python script to run hourly scrapes of a website that publishes the most popular hashtags for a social media platform. They're to be stored in a database (MYSQL), with each row being a hashtag and then a column for each hour that it appears in the top 20, where the number of uses within that past hour is listed. \nSo, the amount of rows as well as columns will constantly increase, as new hashtags appear and ones that have previously appeared resurface into the top 20.\nIs there a best way to go about this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":20452796,"Users Score":2,"Answer":"Your design is poorly suited for a relational database such as MySQL. The best way to go about it is to either redesign your storage layout to a form that a relational database works well with (eg. 
make each row a (hashtag, hour) pair), or use something other than a relational database to store it.","Q_Score":0,"Tags":"python,mysql","A_Id":20452854,"CreationDate":"2013-12-08T11:17:00.000","Title":"Best way to handle a database with lots of dynamically added columns?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Problem\nI have a list of ~5000 locations with latitude and longitude coordinates called A, and a separate subset of this list called B. I want to find all locations from A that are within n miles of any of the locations in B.\nStructure\nAll of this data is stored in a mysql database, and requested via a python script.\nApproach\nMy current approach is to iterate through all locations in B, and request locations within n miles of each location, adding them to the list if they don't exist yet.\nThis works, but in the worst case, it takes a significant amount of time, and is quite inefficient. I feel like there has to be a better way, but I am at a loss as for how to do it.\nIdeas\n\nLoad all locations into a list in python, and calculate distances there. This would reduce the number of mysql queries, and likely speed up the operation. It would still be slow though.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":71,"Q_Id":20455129,"Users Score":1,"Answer":"Load B into a python list and for each calculate maxlat, minlat, maxlong, minlong that everything outside of the box is definitely outside of your radius, if your radius is in nautical miles and lat\/long in degrees. You can then raise an SQL query for points meeting criteria of minlat < lat < maxlat and minlong < long < maxlong. The resulting points can then be checked for exact distance and added to the in range list if they are in range. \nI would suggest doing this in multiple processes.","Q_Score":0,"Tags":"python,mysql,latitude-longitude","A_Id":20455724,"CreationDate":"2013-12-08T15:29:00.000","Title":"Finding Locations with n Miles of Existing Locations","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can convert date read from excel to a proper date using xldate_as_tuple function. Is there any function which can do the reverse i.e. convert proper date to float which is stored as date in excel ?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1405,"Q_Id":20464887,"Users Score":0,"Answer":"Excel dates are represented as pywintypes.Time type objects. So in order to e.g. assign the current timestamp to a cell you do:\nworkbook.Worksheets(1).Cells(1,1).Value = pywintypes.Time(datetime.datetime.now())","Q_Score":2,"Tags":"python,excel,xlrd","A_Id":21302801,"CreationDate":"2013-12-09T06:57:00.000","Title":"How to convert current date to float which is stored in excel as date?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"how to do file uploading in turbogears 2.3.1? I am using CrudRestController and tgext.datahelpers and it is uploading the file in the sqlite3 database but in an unknown format. 
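The bounding-box approach in the locations answer above can be sketched as follows: compute a latitude/longitude box that is guaranteed to contain the n-mile radius, ask the database only for points inside the box, then keep the ones whose exact great-circle distance is within range. Table and column names are placeholders, and the cursor is assumed to be already open.

```python
# Sketch: bounding-box pre-filter in SQL, exact haversine check in Python.
# Table/column names are placeholders; 'cur' is an open DB cursor (assumed).
import math

def candidates_within(cur, lat, lon, miles):
    # one degree of latitude is ~69 miles; widen the longitude window by
    # 1/cos(latitude) so the box stays roughly square on the ground
    dlat = miles / 69.0
    dlon = miles / (69.0 * max(math.cos(math.radians(lat)), 0.01))
    cur.execute("""SELECT id, lat, lon FROM locations
                   WHERE lat BETWEEN %s AND %s
                     AND lon BETWEEN %s AND %s""",
                (lat - dlat, lat + dlat, lon - dlon, lon + dlon))
    return cur.fetchall()

def haversine_miles(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * math.asin(math.sqrt(a))

# keep only candidates whose exact distance really is within n miles:
# in_range = [row for row in candidates_within(cur, b_lat, b_lon, n)
#             if haversine_miles(b_lat, b_lon, row[1], row[2]) <= n]
```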
I want to make a copy of the uploaded file in the hard drive. My query is how to ensure that when user uploads a file, it is loaded both in the database and the hard drive.\n(Thank you for suggestions)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":226,"Q_Id":20492587,"Users Score":0,"Answer":"tgext.datahelpers uploads files on disk inside the public\/attachments directory (this can be change with tg.config['attachments_path']).\nSo your file is already stored on disk, only the file metadata, like the URL, filename, thumbnail_url and so on are stored on database in JSON format","Q_Score":1,"Tags":"python-2.7,turbogears2","A_Id":20525832,"CreationDate":"2013-12-10T10:57:00.000","Title":"file upload turbogears 2.3.1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tend to start projects that are far beyond what I am capable of doing, bad habit or a good way to force myself to learn, I don't know. Anyway, this project uses a postgresql database, python and sqlalchemy. I am slowly learning everything from sql to sqlalchemy and python. I have started to figure out models and the declarative approach, but I am wondering: what is the easiest way to populate the database with data that needs to be there from the beginning, such as an admin user for my project? How is this usually done?\nEdit:\nPerhaps this question was worder in a bad way. What I wanted to know was the possible ways to insert initial data in my database, I tried using sqlalchemy and checking if every item existed or not, if not, insert it. This seemed tedious and can't be the way to go if there is a lot of initial data. I am a beginner at this and what better way to learn is there than to ask the people who do this regularly how they do it? Perhaps not a good fit for a question on stackoverflow, sorry.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":995,"Q_Id":20587888,"Users Score":0,"Answer":"You could use a schema change management tool like liquibase. Normally this is used to keep your database schema in source control, and apply patches to update your schema.\nYou can also use liquibase to load data from CSV files. So you could add a startup.csv file in liquibase that would be run the first time you run liquibase against your database. You can also have it run any time, and will merge data in the CSV with the database.","Q_Score":1,"Tags":"python,sql,postgresql,sqlalchemy","A_Id":20589295,"CreationDate":"2013-12-14T20:26:00.000","Title":"Sqlalchemy, python, easiest way to populate database with data","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building the back-end for my web app; it would act as an API for the front-end and it will be written in Python (Flask, to be precise).\nAfter taking some decisions regarding design and implementation, I got to the database part. And I started thinking whether NoSQL data storage may be more appropriate for my project than traditional SQL databases. Following is a basic functionality description which should be handled by the database and then a list of pros and cons I could come up with regarding to which type of storage should I opt for. 
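The initial-data question above mentions a check-then-insert approach with SQLAlchemy; a minimal, idempotent sketch of that pattern (safe to run on every startup) is shown below. The User model, engine URL and field names are hypothetical.

```python
# Sketch: idempotent seeding of an admin user with SQLAlchemy.
# Model import, engine URL and field values are placeholders.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from myapp.models import Base, User   # hypothetical declarative model

engine = create_engine("postgresql://app:secret@localhost/appdb")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def get_or_create_admin(session):
    admin = session.query(User).filter_by(username="admin").first()
    if admin is None:                      # only insert if missing
        admin = User(username="admin", is_admin=True)
        session.add(admin)
        session.commit()
    return admin

get_or_create_admin(session)
```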
Finally some words about why I have considered RethinkDB over other NoSQL data storages.\nBasic functionality of the API\nThe API consists of only a few models: Artist, Song, Suggestion, User and UserArtists.\nI would like to be able to add a User with some associated data and link some Artists to it. I would like to add Songs to Artists on request, and also generate a Suggestion for a User, which will contain an Artist and a Song.\nMaybe one of the most important parts is that Artists will be periodically linked to Users (and also Artists can be removed from the system -- hence from Users too -- if they don't satisfy some criteria). Songs will also be dynamically added to Artists. All this means is that Users don't have a fixed set of Artists and nor do Artists have a fixed set of Songs -- they will be continuously updating.\nPros\nfor NoSQL:\n\nFlexible schema, since not every Artist will have a FacebookID or Song a SoundcloudID;\nWhile a JSON API, I believe I would benefit from the fact that records are stored as JSON;\nI believe the number of Songs, but especially Suggestions will raise quite a bit, hence NoSQL will do a better job here;\n\nfor SQL:\n\nIt's fixed schema may come in handy with relations between models;\nFlask has support for SQLAlchemy which is very helpful in defining models;\n\nCons\nfor NoSQL:\n\nRelations are harder to implement and updating models transaction-like involves a bit of code;\nFlask doesn't have any wrapper or module to ease things, hence I will need to implement some kind of wrapper to help me make the code more readable while doing database operations;\nI don't have any certainty on how should I store my records, especially UserArtists\n\nfor SQL:\n\nOperations are bulky, I have to define schemas, check whether columns have defaults, assign defaults, validate data, begin\/commit transactions -- I believe it's too much of a hassle for something simple like an API;\n\nWhy RethinkDB?\nI've considered RehinkDB for a possible implementation of NoSQL for my API because of the following:\n\nIt looks simpler and more lightweight than other solutions;\nIt has native Python support which is a big plus;\nIt implements table joins and other things which could come in handy in my API, which has some relations between models;\nIt is rather new, and I see a lot of implication and love from the community. There's also the will to continuously add new things that leverage database interaction.\n\nAll these being considered, I would be glad to hear any advice on whether NoSQL or SQL is more appropiate for my needs, as well as any other pro\/con on the two, and of course, some corrections on things I haven't stated properly.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2835,"Q_Id":20597590,"Users Score":14,"Answer":"I'm working at RethinkDB, but that's my unbiased answer as a web developer (at least as unbiased as I can).\n\nFlexible schema are nice from a developer point of view (and in your case). Like you said, with something like PostgreSQL you would have to format all the data you pull from third parties (SoundCloud, Facebook etc.). And while it's not something really hard to do, it's not something enjoyable.\nBeing able to join tables, is for me the natural way of doing things (like for user\/userArtist\/artist). 
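The user/userArtist/artist join mentioned in the answer can be sketched with the RethinkDB Python driver's eq_join. This uses the classic module-level API of the driver as of the question's era; table names, field names and the connection settings are assumptions.

```python
# Sketch: join a userArtists link table against the artists table.
# Table/field names and the database are placeholders.
import rethinkdb as r

conn = r.connect(host="localhost", port=28015, db="music")

# user_artists documents look like {"user_id": ..., "artist_id": ...}
artists_for_user = (r.table("user_artists")
                     .filter({"user_id": "some-user-id"})
                     .eq_join("artist_id", r.table("artists"))
                     .zip()                 # merge link row and artist row
                     .run(conn))

for doc in artists_for_user:
    print(doc["name"])
```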
While you could have a structure where a user would contain artists, it is going to be unpleasant to use when you will need to retrieve artists and for each of them a list of users.\n\nThe first point is something common in NoSQL databases, while JOIN operations are more a SQL databases thing.\nYou can see RethinkDB as something providing the best of each world.\nI believe that developing with RethinkDB is easy, fast and enjoyable, and that's what I am looking for as a web developer.\nThere is however one thing that you may need and that RethinkDB does not deliver, which is transactions. If you need atomic updates on multiple tables (or documents - like if you have to transfer money between users), you are definitively better with something like PostgreSQL. If you just need updates on multiple tables, RethinkDB can handle that.\nAnd like you said, while RethinkDB is new, the community is amazing, and we - at RethinkDB - care a lot about our users.\nIf you have more questions, I would be happy to answer them : )","Q_Score":11,"Tags":"python,sql,database,nosql,rethinkdb","A_Id":20600546,"CreationDate":"2013-12-15T17:37:00.000","Title":"How suitable is opting for RethinkDB instead of traditional SQL for a JSON API?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working with an Oracle database with millions of rows and 100+ columns. I am attempting to store this data in an HDF5 file using pytables with certain columns indexed. I will be reading subsets of these data in a pandas DataFrame and performing computations.\nI have attempted the following:\nDownload the the table, using a utility into a csv file, read the csv file chunk by chunk using pandas and append to HDF5 table using pandas.HDFStore. I created a dtype definition and provided the maximum string sizes.\nHowever, now when I am trying to download data directly from Oracle DB and post it to HDF5 file via pandas.HDFStore, I run into some problems.\npandas.io.sql.read_frame does not support chunked reading. I don't have enough RAM to be able to download the entire data to memory first.\nIf I try to use cursor.fecthmany() with a fixed number of records, the read operation takes ages at the DB table is not indexed and I have to read records falling under a date range. I am using DataFrame(cursor.fetchmany(), columns = ['a','b','c'], dtype=my_dtype) \nhowever, the created DataFrame always infers the dtype rather than enforce the dtype I have provided (unlike read_csv which adheres to the dtype I provide). Hence, when I append this DataFrame to an already existing HDFDatastore, there is a type mismatch for e.g. a float64 will maybe interpreted as int64 in one chunk.\nAppreciate if you guys could offer your thoughts and point me in the right direction.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5171,"Q_Id":20618523,"Users Score":0,"Answer":"Okay, so I don't have much experience with oracle databases, but here's some thoughts:\nYour access time for any particular records from oracle are slow, because of a lack of indexing, and the fact you want data in timestamp order. 
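For the dtype problem described in the Oracle-to-HDF5 question above (each fetchmany chunk inferring its own dtypes and breaking the HDF5 append), one workaround is to coerce every chunk to an explicit dtype mapping before appending. The sketch below assumes a cx_Oracle cursor; column names, dtypes, the query and credentials are placeholders.

```python
# Sketch: read an Oracle table in chunks, force a fixed dtype per column,
# and append each chunk to an HDF5 store. All names are placeholders.
import cx_Oracle
import pandas as pd

conn = cx_Oracle.connect("user/password@host/service")   # placeholder DSN
cursor = conn.cursor()
cursor.execute("SELECT a, b, c FROM big_table")

my_dtype = {"a": "int64", "b": "float64", "c": "object"}
store = pd.HDFStore("big_table.h5")

while True:
    rows = cursor.fetchmany(50000)
    if not rows:
        break
    chunk = pd.DataFrame(rows, columns=["a", "b", "c"])
    # coerce column by column so every chunk has the same schema
    for col, dtype in my_dtype.items():
        chunk[col] = chunk[col].astype(dtype)
    store.append("data", chunk, data_columns=["a"])

store.close()
```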
\nFirstly, you can't enable indexing for the database?\nIf you can't manipulate the database, you can presumably request a found set that only includes the ordered unique ids for each row?\nYou could potentially store this data as a single array of unique ids, and you should be able to fit into memory. If you allow 4k for every unique key (conservative estimate, includes overhead etc), and you don't keep the timestamps, so it's just an array of integers, it might use up about 1.1GB of RAM for 3 million records. That's not a whole heap, and presumably you only want a small window of active data, or perhaps you are processing row by row?\nMake a generator function to do all of this. That way, once you complete iteration it should free up the memory, without having to del anything, and it also makes your code easier to follow and avoids bloating the actual important logic of your calculation loop.\nIf you can't store it all in memory, or for some other reason this doesn't work, then the best thing you can do, is work out how much you can store in memory. You can potentially split the job into multiple requests, and use multithreading to send a request once the last one has finished, while you process the data into your new file. It shouldn't use up memory, until you ask for the data to be returned. Try and work out if the delay is the request being fulfilled, or the data being downloaded.\nFrom the sounds of it, you might be abstracting the database, and letting pandas make the requests. It might be worth looking at how it's limiting the results. You should be able to make the request for all the data, but only load the results one row at a time from the database server.","Q_Score":12,"Tags":"python,pandas,hdf5,pytables","A_Id":29225626,"CreationDate":"2013-12-16T18:50:00.000","Title":"Reading a large table with millions of rows from Oracle and writing to HDF5","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I dont even know if this is possible. But if it is, can someone give me the broadstrokes on how I can use a Python script to populate a Google spreadsheet?\nI want to scrape data from a web site and dump it into a google spreadsheet. I can imagine what the Python looks like (scrapy, etc). But does the language support writing to Google Drive? Can I kick off the script within the spreadsheet itself or would it have to run outside of it?\nIdeal scenario would be to open a google spreadsheet, click on a button, Python script executes and data is filled in said spreadsheet.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3149,"Q_Id":20693168,"Users Score":0,"Answer":"Yes, it is possible and this is how I am personally doing it so.\nsearch for \"doGet\" and \"doPost(e)","Q_Score":0,"Tags":"python,google-sheets","A_Id":50629830,"CreationDate":"2013-12-19T22:45:00.000","Title":"Is this possible - Python script to fill a Google spreadsheet?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i have deployed a simple Django application on AWS. The database i use is MySQL. Most parts of this application runs well. 
But there happens to be a problem when i submitted a form and store data from the form into a model. The error page presents Data truncated for column 'temp' at row 1. temp is a ChoiceField like this: temp = forms.ChoiceField(label=\"temperature\", choices=TEMP), in the model file the temp is a CharField like this temp = models.CharField(max_length=2, choices=TEMP). The error happens at .save(). How can i fix this problem? Any advice and help is appreciated. BTW, as what i have searched, the truncation problem happens because of data type to be stored in database. But i still cannot figure out how to modify my code.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2106,"Q_Id":20712174,"Users Score":1,"Answer":"Your column is only 2 chars wide, but you are trying to store the strings 'HIGH', 'MEDIUM', 'LOW' from your TEMP choices (the first value of each tuple is saved in the database). Increase max_length or choose different values for choices, e.g. TEMP = ( ('H', 'High'), ('M', 'Medium'), ('L', 'Low'), ). \nIt worked fine in SQLite because SQLite simply ignores the max_length attribute (and other things).","Q_Score":0,"Tags":"python,mysql,database,django,amazon-ec2","A_Id":20712349,"CreationDate":"2013-12-20T21:24:00.000","Title":"Data truncated for column 'temp' at row 1","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm started enhancing an application which has developed in Python. Zope server has been used to deploy the application.\nIn many modules DB connection has established and used for DB transaction, and which has not used any connection pooling mechanism. Considering the volume of users it is vulnerable to have DB connections established for every request and it is a bad design.\n\nNow In order to have connection pooling, what should I do? My application\n uses Python 2.4,Zope 2.11.4 and MySQL 5.5.\n\nIs Zope provides any way to achieve it, like configure the DB in external file and inside the Python code referring the connection which Zope takes care of utilizing from connection pool? Or Do I need to write in a Python code in such a way that independent of the server(Zope or other) provided MySQL module for python","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":309,"Q_Id":20798818,"Users Score":0,"Answer":"I guess you've advanced with your problem, but this is not a reason not to comment.\n1) Long-term answer: seriously consider building a path to migrating to ZODB instead of mysql. ZODB is integrated with Zope and is way more efficient than mysql for storing Zope data. You can't do it at once, but may be you can identify part of the data that can be migrated to ZODB first, and then do it by \"clusters of data\".\n2) short-term answer: I don't know what library you're using to connect to mysql (there aren't many of them), let's say it's python-mysqldb, and the function to Connect to the database is Connect. You Can write your own MySqlDB module, and put it before the system MySqlDB in the sys.path (manipulating the sys.path of your zope application if necessary), so your module is called instead of the system MySqlDB one. 
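The "write your own wrapper module around the driver's connect()" idea from the pooling answer above can be sketched very simply with a queue of idle connections. This is only an illustration of the proxy idea, not Zope-specific code; the pool size and connection settings are placeholders.

```python
# Sketch: a tiny connect() wrapper that reuses idle MySQLdb connections.
# Uses the Python 2 Queue module (named queue in Python 3); pool size is
# a placeholder.
import Queue
import MySQLdb

_POOL = Queue.Queue(maxsize=5)

def connect(**kwargs):
    try:
        return _POOL.get_nowait()      # reuse an idle connection if any
    except Queue.Empty:
        return MySQLdb.connect(**kwargs)

def release(conn):
    try:
        _POOL.put_nowait(conn)         # keep it for the next caller
    except Queue.Full:
        conn.close()                   # pool is full, really close it
```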
In your module, you write a Connect function that encapsulates your pooling logic and proxy everything else to the original (system) MySqlDB module.\nHope I've been clear for you or everyone else having the same problem.","Q_Score":1,"Tags":"python,mysql,connection-pooling,mysql-python,zope","A_Id":21954872,"CreationDate":"2013-12-27T10:15:00.000","Title":"How to configure DB connection pooling in Python Zope server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import sqlalchemy.databases.sqlite.DateTimeMixIn. I get ImportError: No module named sqlite. SQLAlchemy 0.8.4 is installed. If I do import sqlite I get the same error.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":79,"Q_Id":20834740,"Users Score":1,"Answer":"Sounds like the python binary you are using wasn't compiled with the sqlite module. If you are compiling from source, make sure you have the sqlite headers available.","Q_Score":0,"Tags":"python,sqlite,sqlalchemy","A_Id":20835718,"CreationDate":"2013-12-30T06:52:00.000","Title":"Importing SQLAlchemy DateTimeMixin raises ImportErrror","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to check whether a row in a table has been modified or not in Cassandra.\nI don't want to compare the date before and after updating row in table.\nAfter Update operation I need to verify the query executed properly or not using python scripts. I am using Cassandra Driver for python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":167,"Q_Id":20855659,"Users Score":0,"Answer":"If you want to verify that an update happened as planned, execute a SELECT against the updated row.","Q_Score":0,"Tags":"python,cassandra","A_Id":20928821,"CreationDate":"2013-12-31T10:15:00.000","Title":"Cassandra row update check in a table","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to publish an Android application that I have developed but have a minor concern.\nThe application will load with a database file (or sqlite3 file). If updates arise in the future and these updates are only targeting the application's functionality without the database structure, I wish to allow users to keep their saved entries in their sqlite3 files.\nSo what is the best practice to send updates? Compile the apk files with the new updated code only and without the database files? 
Or is there any other suggestion?\nPS: I am not working with Java and Eclipse, but with python for Android and the Kivy platform which is an amazing new way for developing Android applications.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":487,"Q_Id":20856465,"Users Score":0,"Answer":"if you're using local sqlite then you have to embed the database file within the app as failure to do so it means there's no database, in case for updates database have version numbers where as it can not upgrade the database provided the version number is the same as the previous app updates","Q_Score":4,"Tags":"android,python-2.7,sqlite,apk,kivy","A_Id":20856571,"CreationDate":"2013-12-31T11:14:00.000","Title":"Update apk file on Google Play","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to publish an Android application that I have developed but have a minor concern.\nThe application will load with a database file (or sqlite3 file). If updates arise in the future and these updates are only targeting the application's functionality without the database structure, I wish to allow users to keep their saved entries in their sqlite3 files.\nSo what is the best practice to send updates? Compile the apk files with the new updated code only and without the database files? Or is there any other suggestion?\nPS: I am not working with Java and Eclipse, but with python for Android and the Kivy platform which is an amazing new way for developing Android applications.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":487,"Q_Id":20856465,"Users Score":0,"Answer":"I had the same issue when I started my app but since kivy has no solution for this I tried to create a directory outside my app directory in android with a simple os.mkdir('..\/##') and I put all the files there. Hope this helps!","Q_Score":4,"Tags":"android,python-2.7,sqlite,apk,kivy","A_Id":46767741,"CreationDate":"2013-12-31T11:14:00.000","Title":"Update apk file on Google Play","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Django app that has several database backends - all connected to different instances of Postgresql database. One of them is not guaranteed to be always online. It even can be offline when application starts up.\nCan I somehow configure Django to use lazy connections? I would like to:\n\nTry querying\nreturn \"sorry, try again later\" if database is offline\nor return the results if database is online\n\nIs this possible?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":380,"Q_Id":20878709,"Users Score":2,"Answer":"The original confusion is that Django tries to connect to its databases on startup. This is actually not true. Django does not connect to database, until some app tries to access the database.\nSince my web application uses auth and site apps, it looks like it tries to connect on startup. 
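The try/except wrapper described in the lazy-connection answer below boils down to catching Django's OperationalError when the volatile backend is queried. A minimal sketch follows; the Report model and the "archive" database alias are hypothetical.

```python
# Sketch: query the sometimes-offline backend and degrade gracefully.
# Model and database alias are placeholders.
from django.db.utils import OperationalError

from myapp.models import Report   # hypothetical model

def fetch_reports(request_filters):
    try:
        qs = Report.objects.using("archive").filter(**request_filters)
        return {"ok": True, "rows": list(qs.values())}
    except OperationalError:
        # the non-default database is offline right now
        return {"ok": False, "message": "Sorry, try again later"}
```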
But its not tied to startup, its tied to the fact that those app access the database \"early\".\nIf one defines second database backend (non-default), then Django will not try connecting to it unless application tries to query it.\nSo the solution was very trivial - originally I had one database that hosted both auth\/site data and also \"real\" data that I've exposed to users. I wanted to make \"real\" database connection to be volatile. So I've defined separate psql backend for it and switched default backend to sqlite. \nNow when trying to access \"real\" database through Query, I can easily wrap it with try\/except and handle \"Sorry, try again later\" over to the user.","Q_Score":1,"Tags":"python,django,django-models","A_Id":21235393,"CreationDate":"2014-01-02T08:01:00.000","Title":"Lazy psql connection with Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"As the title suggests, I am using the s3cmd tool to upload\/download files on Amazon.\nHowever I have to use Windows Server and bring in some sort of progress reporting. \nThe problem is that on windows, s3cmd gives me the following error:\nERROR: Option --progress is not yet supported on MS Windows platform. Assuming -\n-no-progress.\nNow, I need this --progress option.\nAre there any workarounds for that? Or maybe some other tool?\nThanks.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":701,"Q_Id":21017853,"Users Score":2,"Answer":"OK, I have found a decent workaround to that:\nJust navigate to C:\\Python27\\Scripts\\s3cmd and comment out lines 1837-1845.\nThis way we can essentially skip a windows check and print progress on the cmd.\nHowever, since it works normally, I have no clue why the authors put it there in the first place.\nCheers.","Q_Score":1,"Tags":"python,windows,progress-bar,progress,s3cmd","A_Id":21165278,"CreationDate":"2014-01-09T10:38:00.000","Title":"s3cmd tool on Windows server with progress support","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to install psycopg2 on Mac OS X Mavericks but it doesn't see any pg_config file.\nPostgres was installed via Postgres.app .\nI found pg_config in \/Applications\/Postgres.app\/Contents\/MacOS\/bin\/ and put it to setup.cfg but still can't install psycopg2.\nWhat might be wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":916,"Q_Id":21033198,"Users Score":0,"Answer":"I had the same problem when I tried to install psycopg2 via Pycharm and using Postgres93.app. The installer (when running in Pycharm) insisted it could not find the pg_config file despite the fact that pg_config is on my path and I could run pg_config and psql successfully in Terminal. For me the solution was to install a clean version of python with homebrew. Navigate to the homebrew installation of Python and run pip in the terminal (rather than with Pycharm). 
It seems pip running in Pycharm did not see the postgres installation on my PATH, but running pip directly in a terminal resolved the problem.","Q_Score":2,"Tags":"python,macos,postgresql,psycopg2","A_Id":21414139,"CreationDate":"2014-01-09T23:15:00.000","Title":"Can't install psycopg2 on Maverick","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Recently i m working on web2py postgresql i made few changes in my table added new fields with fake_migration_all = true it does updated my .table file but the two new added fields were not able to be altered in postgres database table and i also tried fake_migration_all = false and also deleted mu .table file but still it didnt help to alter my table does able two add fields in datatable\nAny better solution available so that i should not drop my data table and fields should also be altered\/added in my table so my data shouldn't be loast","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":535,"Q_Id":21046136,"Users Score":0,"Answer":"fake_migrate_all doesn't do any actual migration (hence the \"fake\") -- it just makes sure the metadata in the .table files matches the current set of table definitions (and therefore the actual database, assuming the table definitions in fact match the database).\nIf you want to do an actual migration of the database, then you need to make sure you do not have migrate_enabled=False in the call to DAL(), nor migrate=False in the relevant db.define_table() calls. Unless you explicitly set those to false, migrations are enabled by default.\nAlways a good idea to back up your database before doing a migration.","Q_Score":0,"Tags":"python,web2py","A_Id":21050586,"CreationDate":"2014-01-10T13:55:00.000","Title":"Web2py postgreSQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Forgive my ignorance as I am new to oursql. I'm simply trying to pass a parameter to a statement:\ncursor.execute(\"select blah from blah_table where blah_field = ?\", blah_variable)\nthis treated whatever is inside the blah_variable as a char array so if I pass \"hello\" it will throw a ProgrammingError telling me that 1 parameter was expected but 5 was given.\nI've tried looking through the docs but their examples are not using variables. Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":233,"Q_Id":21053472,"Users Score":1,"Answer":"IT is expecting a sequence of parameters. 
Use:\n[blah_variable]","Q_Score":0,"Tags":"python,parameters,oursql","A_Id":21053569,"CreationDate":"2014-01-10T20:06:00.000","Title":"Python oursql treating a string variable as a char array","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am deploying my flask app to EC2, however i get the error in my error.log file once i visit the link of my app.\nMy extensions are present in the site-packages of my flask environment and not the \"usr\" folder of the server, however it tries to search usr folder to find the hook\n\nFile \"\/usr\/local\/lib\/python2.7\/dist-packages\/flask\/exthook.py\", line 87, in load_module\n\nIt is located in \n\n\/var\/www\/sample\/flask\/lib\/python2.7\/site-packages\n\nHow to get over this issue?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2246,"Q_Id":21107967,"Users Score":0,"Answer":"You should be building your python apps in a virtualenv rather than using the system's installation of python. Try creating a virtualenv for your app and installing all of the extensions in there.","Q_Score":0,"Tags":"python,deployment,amazon-ec2,flask,flask-sqlalchemy","A_Id":21124613,"CreationDate":"2014-01-14T07:23:00.000","Title":"ImportError: No module named flask.ext.sqlalchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What I am using: PostgreSQL and Python. I am using Python to access PostgreSQL\nWhat I need: Receive a automatic notification, on Python, if anyone records something on a specific table on database.\nI think that it is possible using a routine that go to that table, over some interval, and check changes. But it requires a loop and I would like something like an a assynchronous way.\nIs it possible?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17343,"Q_Id":21117431,"Users Score":17,"Answer":"donmage is quite right - LISTEN and NOTIFY are what you want. You'll still need a polling loop, but it's very lightweight, and won't cause detectable server load.\nIf you want psycopg2 to trigger callbacks at any time in your program, you can do this by spawning a thread and having that thread execute the polling loop. Check to see whether psycopg2 enforces thread-safe connection access; if it doesn't, you'll need to do your own locking so that your polling loop only runs when the connection is idle, and no other queries interrupt a polling cycle. Or you can just use a second connection for your event polling.\nEither way, when the background thread that's polling for notify events receives one, it can invoke a Python callback function supplied by your main program, which might modify data structures \/ variables shared by the rest of the program. Beware, if you do this, that it can quickly become a nightmare to maintain.\nIf you take that approach, I strongly suggest using the multithreading \/ multiprocessing modules. They will make your life massively easier, providing simple ways to exchange data between threads, and limiting modifications made by the listening thread to simple and well-controlled locations.\nIf using threads instead of processes, it is important to understand that in cPython (i.e. 
\"normal Python\") you can't have a true callback interrupt, because only one thread may be executing in cPython at once. Read about the \"global interpreter lock\" (GIL) to understand more about this. Because of this limitation (and the easier, safer nature of shared-nothing by default concurrency) I often prefer multiprocessing to multithreading.","Q_Score":21,"Tags":"python,postgresql,events,triggers,listener","A_Id":21128034,"CreationDate":"2014-01-14T15:35:00.000","Title":"How to receive automatic notifications about changes in tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a database that contains personally-identifying information (PII) that needs to be encrypted. \nFrom the Python side, I can use PyCrypto to encrypt data using AES-256 and a variable salt; this results in a Base64 encoded string.\nFrom the PostgreSQL side, I can use the PgCrypto functions to encrypt data in the same way, but this results in a bytea value.\nFor the life of me, I can't find a way to convert between these two, or to make a comparison between the two so that I can do a query on the encrypted data. Any suggestions\/ideas?\nNote: yes, I realize that I could do all the encryption\/decryption on the database side, but my goal is to ensure that any data transmitted between the application and the database still does not contain any of the PII, as it could, in theory, be vulnerable to interception, or visible via logging.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2355,"Q_Id":21122847,"Users Score":3,"Answer":"Imagine you have a Social Security Number field in your table. Users must be able to query for a particular SSN when needed. The SSN, obviously, needs to be encrypted. I can encrypt it from the Python side and save it to the database, but then in order for it to be searchable, I would have to use the same salt for every record so that I can incorporate the encrypted value as part of my WHERE clause, and that just leaves us vulnerable. I can encrypt\/decrypt on the database side, but in that case, I'm sending the SSN in plain-text whenever I'm querying, which is also bad.\n\nThe usual solution to this kind of issue is to store a partial value, hashed unsalted or with a fixed salt, alongside the randomly salted full value. You index the hashed partial value and search on that. You'll get false-positive matches, but still significantly benefit from DB-side indexed searching. You can fetch all the matches and, application-side, discard the false positives.\nQuerying encrypted data is all about compromises between security and performance. There's no magic answer that'll let you send a hashed value to the server and have it compare it to a bunch of randomly salted and hashed values for a match. In fact, that's exactly why we salt our hashes - to prevent that from working, because that's also pretty much what an attacker does when trying to brute-force.\nSo. Compromise. Either live with sending the SSNs as plaintext (over SSL) for comparison to salted & hashed stored values, knowing that it still greatly reduces exposure because the whole lot can't be dumped at once. Or index a partial value and search on that.\nDo be aware that another problem with sending values unhashed is that they can appear in the server error logs. 
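The LISTEN\/NOTIFY polling loop the answer above describes, roughly as psycopg2 documents it; the channel name, connection parameters, and the handle_notify callback are placeholders:

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=mydb user=me")            # placeholder DSN
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN table_changed;")                      # placeholder channel

    while True:
        # wait up to 5 seconds for the connection's socket to become readable
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                                          # timeout, poll again
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            handle_notify(notify.channel, notify.payload)     # callback supplied by the main program

Run this loop in a background thread (or process) and keep the connection dedicated to event polling, as the answer suggests.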
Even if you don't have log_statement = all, they may still appear if there's an error, like query cancellation or a deadlock break. Sending the values as query parameters reduces the number of places they can appear in the logs, but is far from foolproof. So if you send values in the clear you've got to treat your logs as security critical. Fun!","Q_Score":1,"Tags":"python,postgresql,encryption","A_Id":21128178,"CreationDate":"2014-01-14T20:01:00.000","Title":"Encryption using Python and PostgreSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have several million rows in a Sqlite database that need 5 columns updated. Each row\/column value is different, so I have to update each row individually.\nBecause of the way I'm looping through JSON from an external API, for each row, I have the option of either:\n1) do 5 UPDATE operations, one for value. \n2) build a temporary dict in python, then unpack it into a single UPDATE operation that updates all 5 columns at once. \nBasically I'm trading off Python time (slower language, but in memory) for SQLite time (faster language, but on disk).\nWhich is faster?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":71,"Q_Id":21150012,"Users Score":1,"Answer":"Building a dict doesn't really take that much memory. It's much more efficient since you'll only need to do one operation - and let SQLite handle it. Well, python is going to clean the dict anyway, so this is definitely the way to go. \nBut as @JoranBeasley mentioned in the comment.. You never know until you try. \nHope this helps!","Q_Score":1,"Tags":"python,python-2.7,sqlite","A_Id":21150081,"CreationDate":"2014-01-15T22:55:00.000","Title":"For a single Sqlite row, faster to do 5 UPDATEs or build a python dict, then 1 Update?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Oracle database and in a certain column I need to insert Strings, which in some cases are larger than 4000 symbols (Oracle 11g limits Varchar2 size to 4000). We are required to use Oracle 11g, and I know about the 12g extended mode. I would not like to use the CLOB datatype for performance considerations. The solution that I have in mind is to split the column and write a custom SQLAlchemy datatype that writes the data to the second column in case of string larger than 4000.\nSo, my questions are:\n\nAre we going to gain any significant performance boost from that (rather than using Clob)?\nHow should that SQLAlchemy be implemented? Currently we are using types.TypeDecorator for custom types, but in this case we need to read\/write in two fields.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":820,"Q_Id":21237645,"Users Score":1,"Answer":"CLOB or NCLOB would be the best options. Avoid splitting data into columns. What would happen when you have data larger than 2 columns - it will fail again. It also makes it maintenance nightmare. I've seen people split data into rows in some databases just because the database would not support larger character datatypes (old Sybase versions). 
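A sketch of the "hashed partial value" compromise from the encryption answer above; the column names, the fixed salt, and the encrypt_ssn\/decrypt_ssn helpers are assumptions, and cur is an open psycopg2 cursor. The indexed lookup column only narrows the candidate set; the application discards the false positives:

    import hashlib

    FIXED_SALT = b'application-wide-salt'          # assumption: fixed, not per-row

    def ssn_lookup_hash(ssn):
        # hash only the last four digits, with the fixed salt
        return hashlib.sha256(FIXED_SALT + ssn[-4:].encode('ascii')).hexdigest()

    # write: store the randomly salted ciphertext plus the searchable partial hash
    cur.execute("INSERT INTO people (ssn_encrypted, ssn_lookup) VALUES (%s, %s)",
                (encrypt_ssn(ssn), ssn_lookup_hash(ssn)))      # helpers are placeholders

    # read: index ssn_lookup, fetch candidates, filter false positives app-side
    cur.execute("SELECT id, ssn_encrypted FROM people WHERE ssn_lookup = %s",
                (ssn_lookup_hash(wanted),))
    matches = [pk for pk, blob in cur.fetchall() if decrypt_ssn(blob) == wanted]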
However, if your database has a datatype built for this purpose by all means use it.","Q_Score":0,"Tags":"python,oracle,oracle11g,sqlalchemy","A_Id":21238505,"CreationDate":"2014-01-20T15:20:00.000","Title":"SQLAlchemy type containing strings larger than 4000 on Oracle using Varchar2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a serious problem, I don't now how to solve it.\nI have a Win 7 64bit laptop, with MS Office 2007 installed (32 bits).\nI installed Anaconda 64bits, BUT I am trying to connect to a MS Access MDB file with the ACE drives and I got an error that there is no driver installed.\nDue to MS Office 2007, I was forced to install ACE drivers 32 bits.\nAny help?\nThe same code runs perfect under Win XP, with exactly the same installed: Anaconda, ACE drivers and MS Office 2007.\nIt can be a problem mixin 32bits and 64 bits?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":217,"Q_Id":21296441,"Users Score":1,"Answer":"I finally got it!\nYes, the problem was mixing 32 and 64 bits.\nI solved the problem installing the Microsoft ACE Drivers 64bits on a MS-DOS console, writting:\nAccessDatabaseEngine_x64.exe \/passive\nAnd everything works!","Q_Score":1,"Tags":"python,ms-access,ms-office,anaconda","A_Id":21333377,"CreationDate":"2014-01-22T23:39:00.000","Title":"Python on Win 7 64bits error MS Access","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a big problem here with python, openpyxl and Excel files. My objective is to write some calculated data to a preconfigured template in Excel. I load this template and write the data on it. There are two problems:\n\nI'm talking about writing Excel books with more than 2 millions of cells, divided into several sheets.\nI do this successfully, but the waiting time is unthinkable.\n\nI don't know other way to solve this problem. Maybe openpyxl is not the solution. I have tried to write in xlsb, but I think openpyxl does not support this format. I have also tried with optimized writer and reader, but the problem comes when I save, due to the big data. However, the output file size is 10 MB, at most. I'm very stuck with this. Do you know if there is another way to do this?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5110,"Q_Id":21328884,"Users Score":4,"Answer":"The file size isn't really the issue when it comes to memory use but the number of cells in memory. Your use case really will push openpyxl to the limits at the moment which is currently designed to support either optimised reading or optimised writing but not both at the same time. One thing you might try would be to read in openpyxl with use_iterators=True this will give you a generator that you can call from xlsxwriter which should be able to write a new file for you. xlsxwriter is currently significantly faster than openpyxl when creating files. 
The solution isn't perfect but it might work for you.","Q_Score":5,"Tags":"python,excel,openpyxl","A_Id":21352070,"CreationDate":"2014-01-24T09:25:00.000","Title":"openpyxl: writing large excel files with python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is it possible to use custom _id fields with Django and MongoEngine?\nThe problem is, if I try to save a string to the _id field it throws an Invalid ObjectId eror. What I want to do is using my own Id's. This never was a problem without using Django because I caught the DuplicateKeyError on creation if a given id was already existing (which was even necessary to tell the program, that this ID is already taken) \nNow it seems as if Django\/MongoEngine won't even let me create a custom _id field :-\/\nIs there any way to work arround this without creating a second field for the ID and let the _id field create itself?\nGreetings Codehai","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2413,"Q_Id":21370889,"Users Score":6,"Answer":"You can set the parameter primary_key=True on a Field. This will make the target Field your _id Field.","Q_Score":0,"Tags":"python,django,mongodb,mongoengine","A_Id":21498341,"CreationDate":"2014-01-26T23:51:00.000","Title":"custom _id fields Django MongoDB MongoEngine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have python 2.7 32 bit running on a Windows 8.1 64 bit machine. \nI have Access 2013 and a .accdb file that I'm trying to access from python and pyodbc.\nI can create a 64 bit DSN in the 64 bit ODBC manager. However, when I try to connect to it from python, I get the error: \n\nError: (u'IM002', u'[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified')\n\nPresumably, python is only looking for 32-bit DSNs and doesn't find the 64-bit one that I've created.\nWhen I try to create a 32-bit DSN within the 32-bit ODBC manager, there is no driver for a accdb file (just .mdb).\nI think I need a 32 bit ODBC driver for Access 2013 files (.accdb), but haven't been able to find one.\nIs it possible to do what I'm trying to do? -- 32bit python access a Access 2013 .accdb file?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":10567,"Q_Id":21393558,"Users Score":2,"Answer":"Trial and error showed that installing the \"Access Database Engine\" 2007 seemed to create 32-bit ODBC source for Access accdb files.","Q_Score":4,"Tags":"python,ms-access,odbc","A_Id":21393854,"CreationDate":"2014-01-27T23:03:00.000","Title":"32 bit pyodbc reading 64 bit access (accdb)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to integrate a Python application and PHP application for data access. I have a Python app and it stores data in its application, now i want to access the data from python database to php application database. 
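A rough sketch of the read-with-openpyxl, write-with-xlsxwriter combination suggested in the openpyxl answer above. use_iterators=True was the flag at the time (newer openpyxl releases call it read_only=True, and cell attribute spellings shifted between releases), and the file names are placeholders:

    from openpyxl import load_workbook
    import xlsxwriter

    source = load_workbook('template.xlsx', use_iterators=True)    # streaming reader
    out = xlsxwriter.Workbook('output.xlsx', {'constant_memory': True})

    for sheet in source.worksheets:
        target = out.add_worksheet(sheet.title)
        for row_idx, row in enumerate(sheet.iter_rows()):
            # rows must be written in order when constant_memory is on
            target.write_row(row_idx, 0, [cell.value for cell in row])

    out.close()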
For PHP-Python integration which methods are used?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":561,"Q_Id":21399625,"Users Score":0,"Answer":"The easiest way to accomplish this is to build a private API for your PHP app to access your Python app. For example, if using Django, make a page that takes several parameters and returns JSON-encoded information. Load that into your PHP page, use json_decode, and you're all set.","Q_Score":0,"Tags":"php,python,web-services,integration","A_Id":21410252,"CreationDate":"2014-01-28T07:43:00.000","Title":"Integration of PHP-Python applications","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Flask-Babel for translating string.\nIn some templates I'm reading the strings from the database(postgresql). \nHow can I translate the strings from the database using Flask-Babel?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1789,"Q_Id":21497489,"Users Score":2,"Answer":"It's not possible to use Babel in database translations, as database content is dynamic and babel translations are static (they didn't change). \nIf you read the strings from the database you must save the translations on the database. You can create a translation table, something like (locale, source, destination), and get the translated values with a query.","Q_Score":9,"Tags":"python,flask,python-babel,flask-babel","A_Id":22099629,"CreationDate":"2014-02-01T11:31:00.000","Title":"translating strings from database flask-babel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm having trouble in establishing an ideal setup where I can distinguish between production and test environment for my django app. \nI'm using a postgresql database that stores a relative file path to a s3 bucket after I upload an image. Am I supposed to make a production copy of all the files in the s3 bucket and connect my current development code to this static directory to do testing? I certainly don't want to connect to production ... What's best practice in this situation?\nAlso I may be doing things wrong here by having the file path in a postgresql database. Would it be more ideal to have some foreign key to a mongodb table which then holds the file path for the file path in aws s3?\nAnother best practice question is how should the file path should be organized? Should I just organize the file path like the following:\n~somebucket\/{userName}\/{date}\/{fileNameName}\nOR\n~somebucket\/{userName}\/{fileName}\nOR\n~somebucket\/{fileName}\nOR\n~somebucket\/{date}\/{userName}\/{fileNameName}\nOR\n~somebucket\/{fileName} = u1234d20140101funnypic.png ??\nThis is really confusing for me on how to build an ideal way to store static files for development and production. Any better recommendations would be greatly appreciated. \nThanks for your time :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":21518268,"Users Score":1,"Answer":"Its good to have different settings for production and dev. \nSo you can just create a settings folder and have settings may be prod.py and dev.py. 
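A minimal Django-side sketch of the private JSON endpoint suggested in the PHP\/Python integration answer above; the Record model, the query fields, and the URL are invented. The PHP application would fetch the URL and run json_decode on the response body:

    import json
    from django.http import HttpResponse

    def records_for_php(request):
        # Record is a hypothetical model; authenticate/restrict this view in real use
        rows = list(Record.objects.filter(owner_id=request.GET.get('owner'))
                                  .values('id', 'name', 'created'))
        return HttpResponse(json.dumps(rows, default=str),
                            content_type='application/json')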
this will let you use diff apps for eg: you actually don't need debug tool bar on prod. \nAnd regarding the file, I feel you dont have to worry about the structure as such, you can always refer to Etag and get the file (md5 hash of the object)","Q_Score":0,"Tags":"python,django,mongodb,postgresql,amazon-s3","A_Id":21518701,"CreationDate":"2014-02-03T00:53:00.000","Title":"How should I set up my dev enviornment for a django app so that I can pull on static s3 files?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Model class which is part of my self-crafted ORM. It has all kind of methods like save(), create() and so on. Now, the thing is that all these methods require a connection object to act properly. And I have no clue on what's the best approach to feed a Model object with a connection object.\nWhat I though of so far:\n\nprovide a connection object in a Model's __init__(); this will work, by setting an instance variable and use it throughout the methods, but it will kind of break the API; users shouldn't always feed a connection object when they create a Model object;\ncreate the connection object separately, store it somewhere (where?) and on Model's __init__() get the connection from where it has been stored and put it in an instance variable (this is what I thought to be the best approach, but have no idea of the best spot to store that connection object);\ncreate a connection pool which will be fed with the connection object, then on Model's __init__() fetch the connection from the connection pool (how do I know which connection to fetch from the pool?).\n\nIf there are any other approached, please do tell. Also, I would like to know which is the proper way to this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":21650889,"Users Score":1,"Answer":"Here's how I would do:\n\nUse a connection pool with a queue interface. You don't have to choose a connection object, you just pick the next on the line. This can be done whenever you need transaction, and put back afterwards.\nUnless you have some very specific needs, I would use a Singleton class for the database connection. No need to pass parameters on the constructor every time.\nFor testing, you just put a mocked database connection on the Singleton class.\n\nEdit:\nAbout the connection pool questions (I could be wrong here, but it would be my first try):\n\nKeep all connections open. Pop when you need, put when you don't need it anymore, just like a regular queue. This queue could be exposed from the Singleton.\nYou start with a fixed, default number of connections (like 20). You could override the pop method, so when the queue is empty you block (wait for another to free if the program is multi-threaded) or create a new connection on the fly.\nDestroying connections is more subtle. You need to keep track of how many connections the program is using, and how likely it is you have too many connections. Take care, because destroying a connection that will be needed later slows the program down. 
In the end, it's a n heuristic problem that changes the performance characteristics.","Q_Score":0,"Tags":"python,database-connection","A_Id":21651170,"CreationDate":"2014-02-08T19:39:00.000","Title":"Getting connection object in generic model class","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble.\nMy current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields\/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key. \nOne of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore?\nAnother thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key?\nI'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7423,"Q_Id":21655862,"Users Score":13,"Answer":"I think you've overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [, ] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity).\nYou can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [, , , ]. You can now only get the entity by including its parent key. 
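One way to sketch the queue-backed singleton pool the connection-pool answer above describes; the connection factory, pool size, and blocking behaviour are assumptions about how it might be wired up, not a finished implementation:

    import Queue            # 'queue' on Python 3

    class ConnectionPool(object):
        _instance = None

        def __init__(self, connect, size=20):
            self._pool = Queue.Queue()
            for _ in range(size):
                self._pool.put(connect())      # open the fixed default number up front

        @classmethod
        def instance(cls, connect=None, size=20):
            # singleton access point; the first caller supplies the factory
            if cls._instance is None:
                cls._instance = cls(connect, size)
            return cls._instance

        def acquire(self):
            return self._pool.get()            # blocks when the pool is empty

        def release(self, conn):
            self._pool.put(conn)               # hand the connection back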
So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round.\n(Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.)\nSaving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group.","Q_Score":17,"Tags":"python,google-app-engine,app-engine-ndb","A_Id":21658988,"CreationDate":"2014-02-09T05:53:00.000","Title":"Simple explanation of Google App Engine NDB Datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am just beginning learning Django and working through the tutorial, so sorry if this is very obvious. \nI have already a set of Python scripts whose ultimate result is an sqlite3 db that gets constantly updated; is Django the right tool for turning this sqlite db something like a pretty HTML table for a website? \nI can see that Django is using an sqlite db for managing groups\/users and data from its apps (like the polls app in the tutorial), but I'm not yet sure where my external sqlite db, driven by my other scripts, fits into the grand scheme of things?\nWould I have to modify my external python scripts to write out to a table in the Django db (db.sqlite3 in the Django project dir in tutorial at least), then make a Django model based on my database structure and fields?\nBasically,I think my question boils down to:\n1) Do I need to create Django model based on my db, then access the one and only Django \"project db\", and have my external script write into it.\n2) or can Django utilise somehow a seperate db driven by another script somehow?\n3) Finally, is Django the right tool for such a task before I invest weeks of reading...","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1298,"Q_Id":21767229,"Users Score":1,"Answer":"If you care about taking control over every single aspect of how you want to render your data in HTML and serve it to others, Then for sure Django is a great tool to solve your problem.\nDjango's ORM models make it easier for you to read and write to your database, and they're database-agnostic. Which means that you can reuse the same code with a different database (like MySQL) in the future.\nSo, to wrap it up. If you're planning to do more development in the future, then use Django. 
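The Shoe\/Person example from the NDB answer above, written out; the kinds, the key name, and the property are placeholders:

    from google.appengine.ext import ndb

    class Shoe(ndb.Model):
        size = ndb.IntegerProperty()

    person_key = ndb.Key('Person', 'alice')      # the Person entity need not exist
    shoe = Shoe(parent=person_key, size=42)
    shoe.put()                                   # key path: Person:alice -> Shoe:<generated id>

    # ancestor queries inside the entity group are strongly consistent
    shoes = Shoe.query(ancestor=person_key).fetch()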
If you only care about creating these HTML pages once and for all, then don't.\nPS: With Django, you can easily integrate these scripts into your Django project as management commands, run them with cronjobs and integrate everything you develop together with a unified data access layer.","Q_Score":0,"Tags":"python,django,sqlite","A_Id":21768188,"CreationDate":"2014-02-13T22:48:00.000","Title":"Django and external sqlite db driven by python script","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to serve up some user uploaded files with Flask, and have an odd problem, or at least one that I couldn't turn up any solutions for by searching. I need the files to retain their original filenames after being uploaded, so they will have the same name when the user downloads them. Originally I did not want to deal with databases at all, and solved the problem of filename conflicts by storing each file in a randomly named folder, and just pointing to that location for the download. However, stuff came up later that required me to use a database to store some info about the files, but I still kept my old method of handling filename conflicts. I have a model for my files now and storing the name would be as simple as just adding another field, so that shouldn't be a big problem. I decided, pretty foolishly after I had written the implmentation, on using Amazon S3 to store the files. Apparently S3 does not deal with folders in the way a traditional filesystem does, and I do not want to deal with the surely convoluted task of figuring out how to create folders programatically on S3, and in retrospect, this was a stupid way of dealing with this problem in the first place, when stuff like SQLalchemy exists that makes databases easy as pie. Anyway, I need a way to store multiple files with the same name on s3, without using folders. I thought of just renaming the files with a random UUID after they are uploaded, and then when they are downloaded (the user visits a page and presses a download button so I need not have the filename in the URL), telling the browser to save the file as its original name retrieved from the database. Is there a way to implement this in Python w\/Flask? When it is deployed I am planning on having the web server handle the serving of files, will it be possible to do something like this with the server? Or is there a smarter solution?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":143,"Q_Id":21807032,"Users Score":0,"Answer":"I'm stupid. Right in the Flask API docs it says you can include the parameter attachment_filename in send_from_directory if it differs from the filename in the filesystem.","Q_Score":0,"Tags":"python,amazon-s3,flask","A_Id":21817783,"CreationDate":"2014-02-16T03:48:00.000","Title":"Is there a way to tell a browser to download a file as a different name than as it exists on disk?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Maybe I got this wrong: Is there a way to automatically create the target table for a tabledata.insertAll command? If yes please point me in the right direction.\nIf not - what is the best approach to create the tables needed? 
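For the Flask download answer above, this is roughly how attachment_filename is used; the upload directory and the FileRecord model are invented, and Flask 2.0 later renamed the parameter to download_name:

    from flask import send_from_directory

    @app.route('/download/<file_id>')
    def download(file_id):
        record = FileRecord.query.get_or_404(file_id)        # hypothetical model
        return send_from_directory(UPLOAD_DIR,
                                   record.stored_name,        # e.g. the random UUID on disk
                                   as_attachment=True,
                                   attachment_filename=record.original_name)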
Check for existing tables on startup and create the ones that does not exist by loading from GCS? Or can they be created directly from code without a load job?\nI have a number of event classes (Python Cloud endpoints) defined and the perfect solution would be using those definitions to create matching BQ tables.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":973,"Q_Id":21830868,"Users Score":4,"Answer":"There is no way to create a table automatically during streaming, since BigQuery doesn't know the schema. JSON data that you post doesn't have type information -- if there is a field \"123\" we don't know if that will always be a string or whether it should actually be an integer. Additionally, if you post data that is missing an optional field, the schema that got created would be narrower than the one you wanted.\nThe best way to create the table is with a tables.insert() call (no need to run a load job to load data from GCS). You can provide exactly the schema you want, and once the table has been created you can stream data to it. \nIn some cases, customers pre-create a month worth of tables, so they only have to worry about it every 30 days. In other cases, you might want to check on startup to see if the table exists, and if not, create it.","Q_Score":3,"Tags":"python,google-bigquery","A_Id":21868123,"CreationDate":"2014-02-17T13:51:00.000","Title":"Auto-create BQ tables for streaming inserts","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried to use pymsql with sqlalchemy using this code : \n \n from sqlalchemy import create_engine\n engine = create_engine(\"mysql+pymsql:\/\/root:@localhost\/pydb\")\n conn = engine.connect()\n \nand this exception is raised here is the full stack trace : \n \n Traceback (most recent call last):\n File \"D:\\Parser\\dal__init__.py\", line 3, in \n engine = create_engine(\"mysql+pymsql:\/\/root:@localhost\/pydb\")\n File \"C:\\Python33\\lib\\site-packages\\sqlalchemy-0.9.2-py3.3.egg\\sqlalchemy\\engine__init__.py\", line 344, in create_engine\n File \"C:\\Python33\\lib\\site-packages\\sqlalchemy-0.9.2-py3.3.egg\\sqlalchemy\\engine\\strategies.py\", line 48, in create\n File \"C:\\Python33\\lib\\site-packages\\sqlalchemy-0.9.2-py3.3.egg\\sqlalchemy\\engine\\url.py\", line 163, in make_url\n File \"C:\\Python33\\lib\\site-packages\\sqlalchemy-0.9.2-py3.3.egg\\sqlalchemy\\engine\\url.py\", line 183, in _parse_rfc1738_args\n File \"C:\\Python33\\lib\\re.py\", line 214, in compile\n return _compile(pattern, flags)\n File \"C:\\Python33\\lib\\re.py\", line 281, in _compile\n p = sre_compile.compile(pattern, flags)\n File \"C:\\Python33\\lib\\sre_compile.py\", line 498, in compile\n code = _code(p, flags)\n File \"C:\\Python33\\lib\\sre_compile.py\", line 483, in _code\n _compile(code, p.data, flags)\n File \"C:\\Python33\\lib\\sre_compile.py\", line 75, in _compile\n elif _simple(av) and op is not REPEAT:\n File \"C:\\Python33\\lib\\sre_compile.py\", line 362, in _simple\n raise error(\"nothing to repeat\")\n sre_constants.error: nothing to repeat","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":21853660,"Users Score":0,"Answer":"Drop the : from your connection string after your username. 
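A hedged sketch of the check-then-create flow from the BigQuery answer above, assuming an authorized google-api-python-client service object built for the BigQuery v2 API; the helper name and the shape of schema_fields follow the REST API's table schema format:

    from apiclient.errors import HttpError     # googleapiclient.errors in newer releases

    def ensure_table(bigquery, project_id, dataset_id, table_id, schema_fields):
        try:
            bigquery.tables().get(projectId=project_id, datasetId=dataset_id,
                                  tableId=table_id).execute()
            return                              # table already exists
        except HttpError as err:
            if err.resp.status != 404:
                raise
        body = {
            'tableReference': {'projectId': project_id,
                               'datasetId': dataset_id,
                               'tableId': table_id},
            'schema': {'fields': schema_fields},   # e.g. [{'name': 'ts', 'type': 'TIMESTAMP'}]
        }
        bigquery.tables().insert(projectId=project_id, datasetId=dataset_id,
                                 body=body).execute()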
It should instead be mysql+pymsql:\/\/root@localhost\/pydb","Q_Score":0,"Tags":"python,sqlalchemy,pymysql","A_Id":21866204,"CreationDate":"2014-02-18T12:20:00.000","Title":"Error when trying to use pymysql with sqlalchemy sre_constants.error: nothing to repeat","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created an excel sheet using XLWT plugin using Python. Now, I need to re-open the excel sheet and append new sheets \/ columns to the existing excel sheet. Is it possible by Python to do this?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":8628,"Q_Id":21856559,"Users Score":2,"Answer":"You read in the file using xlrd, and then 'copy' it to an xlwt Workbook using xlutils.copy.copy().\nNote that you'll need to install both xlrd and xlutils libraries.\nNote also that not everything gets copied over. Things like images and print settings are not copied, for example, and have to be reset.","Q_Score":1,"Tags":"python,xlwt","A_Id":22414279,"CreationDate":"2014-02-18T14:20:00.000","Title":"How to append to an existing excel sheet with XLWT in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm sometimes using a TextField to store data with a structure that may change often (or very complex data) into model instances, instead of modelling everything with the relational paradigm.\nI could mostly achieve the same kind of things using more models, foreignkeys and such, but it sometimes feels more straightforward to store JSON directly.\nI still didn't delve into postgres JSON type (can be good for read-queries notably, if I understand well). And for the moment I perform some json.dumps and json.loads each time I want to access this kind of data.\nI would like to know what are (theoretically) the performance and caching drawbacks of doing so (using JSON type and not), compared to using models for everything.\nHaving more knowledge about that could help me to later perform some clever comparison and profiling to enhance the overall performance.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":1302,"Q_Id":21908068,"Users Score":3,"Answer":"Storing data as json (whether in text-typed fields, or PostgreSQL's native jsontype) is a form of denormalization.\nLike most denormalization, it can be an appropriate choice when working with very difficult to model data, or where there are serious performance challenges with storing data fully normalized into entities.\nPostgreSQL reduces the impact of some of the problems caused by data denormalization by supporting some operations on json values in the database - you can iterate over json arrays or key\/value pairs, join on the results of json field extraction, etc. Most of the useful stuff was added in 9.3; in 9.2, json support is just a validating data type. In 9.4, much more powerful json features will be added, including some support for indexing in json values.\nThere's no simple one-size-fits all answer to your question, and you haven't really characterized your data or your workload. 
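The xlrd + xlutils.copy + xlwt round trip described in the append-to-Excel answer above, spelled out (the file name and cell positions are arbitrary); note this covers only the old .xls format and, as the answer says, drops things like images and print settings:

    import xlrd
    from xlutils.copy import copy

    rb = xlrd.open_workbook('report.xls', formatting_info=True)   # keep existing formatting
    wb = copy(rb)                     # xlwt.Workbook copy of the xlrd workbook
    sheet = wb.get_sheet(0)           # existing sheet, by index
    sheet.write(0, 5, 'new column')   # row, column, value
    wb.add_sheet('appended sheet')    # or add an entirely new sheet
    wb.save('report.xls')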
Like most database challenges \"it depends\" on what you're doing with the data.\nIn general, I would tend to say it's best to relationally model the data if it is structured and uniform. If it's unstructured and non-uniform, storage with something like json may be more appropriate.","Q_Score":1,"Tags":"python,json,django,postgresql","A_Id":21909779,"CreationDate":"2014-02-20T12:38:00.000","Title":"Django & postgres - drawbacks of storing data as json in model fields","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a django app which provides a rest api using Django-rest-framework. The API is used by clients as expected, but I also have another process(on the same node) that uses Django ORM to read the app's database, which is sqlite3.\nIs it better architecture for the process to use the rest api to interact(only reads) with the app's database? Or is there a better, perhaps more efficient way than making a ton of HTTP requests from the same node?\nThe problem with the ORM approach(besides the hacky nature) is that occasionally reads fail and must be retried. Also, I want to write to the app's db which would probably causes more sqlite concurrency issues.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":138,"Q_Id":21912993,"Users Score":0,"Answer":"It depends on what your application is doing. If your REST application reads a piece of data from SQLITE using the Django ORM and then the other app does a write you can run into some interesting race situations. To prevent that it might make sense to have both these applications as django-app in a single Django project.","Q_Score":0,"Tags":"python,django,sqlite,rest,orm","A_Id":21914906,"CreationDate":"2014-02-20T15:57:00.000","Title":"SOA versus Django ORM with multiple processes","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to design an app that uses Google AppEngine to store\/process\/query data that is then served up to mobile devices via Cloud Endpoints API in as real time as possible.\nIt is straight forward enough solution, however I am struggling to get the right balance between, performance, cost and latency on AppEngine.\nScenario (analogy) is a user checks-in (many times per day from different locations, cities, countries), and we would like to allow the user to query all the data via their device and provide as up to date information as possible.\n\nSuch as:\n\nThe number of check-ins over the last:\n24 hours\n1 week\n1 month\nAll time\nWhere is the most checked in place\/city\/country over the same time periods\nWhere is the least checked in place over the same time periods\nOther similar querying reports\n\n\nWe can use Memcache to store the most recent checkins, pushing to the Datastore every 5 minutes, but this may not scale very well and is not robust!\nUse a Cron job to run the Task Queue\/Map Reduce to get the aggregates, averages for each location every 30 mins and update the Datastore.\nThe challenge is to use as little read\/writes over the datastore because the last \"24 hours\" data is changing every 5 mins, and hence so is the last weeks data, last months data and so on. 
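For the denormalized-JSON discussion above, this is the sort of thin wrapper the question describes around json.dumps\/json.loads (the model and field names are invented); on PostgreSQL 9.3+ a native json column would let the database query inside the value instead:

    import json
    from django.db import models

    class Profile(models.Model):
        extra_raw = models.TextField(default='{}')   # denormalized, schema-less blob

        @property
        def extra(self):
            return json.loads(self.extra_raw)

        @extra.setter
        def extra(self, value):
            self.extra_raw = json.dumps(value)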
The data has to be dynamic to some degree, so it is not fixed points in time, they are always changing - here in lies the issue!\nIt is not a problem to set this up, but to set it up in an efficient manner, balancing performance\/latency for the user and cost\/quotas for us is not so easy! \nThe simple solution would be to use SQL, and run date range queries but this will not scale very well.\nWe could eventually use BigTable & BigQuery for the \"All time\" time period querying, but in order to give the users as real-time as possible data via the API for the other time periods is proving quite the challenge!\nAny suggestions of AppEngine architecture\/approaches would be seriously welcomed.\nMany thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":173,"Q_Id":21941030,"Users Score":0,"Answer":"First, writes to the datastore take milliseconds. By the time your user hits the refresh button (or whatever you offer), the data will be as \"real-time\" as it gets.\nTypically, developers become concerned with real-time when there is a synchronization\/congestion issue, i.e. each user can update something (e.g. bid on an item), and all users have to get the same data (the highest bid) in real time. In your case, what's the harm if a user gets the number of check-ins which is 1 second old?\nSecond, data in Memcache can be lost at any moment. In your proposed solution (update the datastore every 5 minutes), you risk losing all data for the 5 min period.\nI would rather use Memcache in the opposite direction: read data from datastore, put it in Memcache with 60 seconds (or more) expiration, serve all users from Memcache, then refresh it. This will minimize your reads. I would do it, of course, unless your users absolutely must know how many checkins happened in the last 60 seconds.\nThe real question for you is how to model your data to optimize writes. If you don't want to lose data, you will have to record every checkin in datastore. You can save by making sure you don't have unnecessary indexed fields, separate out frequently updated fields from the rest, etc.","Q_Score":1,"Tags":"python,google-app-engine,mapreduce,task-queue","A_Id":21962823,"CreationDate":"2014-02-21T17:23:00.000","Title":"AppEngine real time querying - cost, performance, latency balancing act and quotas","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Going through Django tutorial 1 using Python 2.7 and can't seem to resolve this error:\nOperationalError: no such table: polls_poll\nThis happens the moment I enter Poll.objects.all() into the shell.\nThings I've already tried based on research through the net:\n1) Ensured that 'polls' is listed under INSTALLED_APPS in settings.py\nNote: I've seen lots of suggestions inserting 'mysite.polls' instead of 'polls' into INSTALLED_APPS but this gives the following error: ImportError: cannot import name 'polls' from 'mysite'\n2) Run python manage.py syncdb . 
This creates my db.sqlite3 file successfully and seemingly without issue in my mysite folder.\n3) Finally, when I run python manage.py shell, the shell runs smoothly; however, I do get a weird RuntimeWarning when it starts and wonder if the polls_poll error is connected:\n\\django\\db\\backends\\sqlite3\\base.py:63: RuntimeWarning: SQLite received a naive datetime (2014-02-03 17:32:24.392000) while time zone support is active.\nAny help would be appreciated.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":13012,"Q_Id":21976383,"Users Score":11,"Answer":"I hit the same problem today and fixed it. I think you missed a command from tutorial 1; just run the following:\npython manage.py makemigrations polls\npython manage.py sql polls\npython manage.py syncdb\nThat creates the polls table and you can see it in the database. You should also read up on the \"manage.py makemigrations\" command.","Q_Score":5,"Tags":"python,django,shell,sqlite","A_Id":23184956,"CreationDate":"2014-02-23T23:49:00.000","Title":"Django Error: OperationalError: no such table: polls_poll","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a csv file with about 280 columns, which are possibly changing from time to time. Is there a way to import a csv file to sqlite3 and have it 'guess' the column types? \nI am using a python script to import this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1025,"Q_Id":22004809,"Users Score":0,"Answer":"Make the headers of the columns in the CSV the same as the column names in the sqlite3 table. Then read each value and check its type with type() before inserting it into the DB.","Q_Score":2,"Tags":"python,csv,sqlite","A_Id":22005726,"CreationDate":"2014-02-25T04:44:00.000","Title":"csv import sqlite3 without specifying column types","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On OS X 10.9 and 10.9.1, cx_Oracle works OK. But after I updated my system to OS X 10.9.2 yesterday, it stopped working. When connecting to an Oracle database, DatabaseError is raised. 
And the error message is:\n\nORA-21561: OID generation failed\n\nCan anyone help me?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1020,"Q_Id":22060338,"Users Score":0,"Answer":"This can be fixed with a simple edit to your hosts file.\n\nFind the name of your local-machine by running hostname in your local-terminal\n$hostname\nEdit your local hosts file \n$vi \/etc\/hosts\nassuming $hostname gives local_machine_name append it to your localhost ,\n127.0.0.1 localhost local_machine_name\npress esc and type wq! to save\n\nCheers!","Q_Score":2,"Tags":"python,macos,oracle","A_Id":41649509,"CreationDate":"2014-02-27T06:04:00.000","Title":"cx_Oracle can't connect to Oracle database after updating OS X to 10.9.2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Amazon Linux AMI release 2013.09. I've install virtualenv and after activation then I run pip install mysql-connector-python, but when I run my app I get an error: ImportError: No module named mysql.connector. Has anyone else had trouble doing this? I can install it outside of virtualenv and my script runs without issues. Thanks in advance for any help!","AnswerCount":14,"Available Count":1,"Score":0.0428309231,"is_accepted":false,"ViewCount":73109,"Q_Id":22100757,"Users Score":3,"Answer":"Also something that can go wrong: Don't name your own module mysql\nimport mysql.connector will fail because the import gives the module in the project precedence over site packages and yours likely doesnt have a connector.py file.","Q_Score":35,"Tags":"python,mysql","A_Id":44177264,"CreationDate":"2014-02-28T16:37:00.000","Title":"Can not get mysql-connector-python to install in virtualenv","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know that Redis have 16 databases by default, but what if i need to add another database, how can i do that using redis-py?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":460,"Q_Id":22110562,"Users Score":0,"Answer":"You cannot. The number of databases is not a dynamic parameter in Redis.\nYou can change it by updating the Redis configuration file (databases parameter) and restarting the server.\nFrom a client (Python or other), you can retrieve this value using the \"GET CONFIG DATABASES\" command. But the \"SET CONFIG DATABASES xxx\" command will be rejected.","Q_Score":1,"Tags":"python,database,redis,redis-py","A_Id":22111910,"CreationDate":"2014-03-01T05:29:00.000","Title":"Insert a new database in redis using redis.StrictRedis()","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":941,"Q_Id":22128419,"Users Score":1,"Answer":"SQL Alchemy is generally not faster (esp. 
as it uses those driver to connect).\nHowever, SQL Alchemy will help you structure your data in a sensible way and help keep the data consistent. Will also make it easier for you to migrate to a different db if needed.","Q_Score":0,"Tags":"python,sqlalchemy,flask,flask-sqlalchemy","A_Id":22128680,"CreationDate":"2014-03-02T13:49:00.000","Title":"Should I use an ORM like SQLAlchemy for a lightweight Flask web service?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":941,"Q_Id":22128419,"Users Score":1,"Answer":"Your question is too open to anyone guarantee SQLAlchemy is not a good fit, but SQLAlchemy probably will never be your problem to handle scalability. You'll have to handle almost the same problems with or without SQLAlchemy.\nOf course SQLAlchemy has some performance impact, it is a layer above the database driver, but it also will help you a lot.\nThat said, if you want to use SQLAlchemy to help with your security (SQL escaping), you can use the SQLAlchemy just to execute your raw SQL queries, but I recommend it to fix specific bottlenecks, never to avoid the ORM.","Q_Score":0,"Tags":"python,sqlalchemy,flask,flask-sqlalchemy","A_Id":22134840,"CreationDate":"2014-03-02T13:49:00.000","Title":"Should I use an ORM like SQLAlchemy for a lightweight Flask web service?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like for a user, without having to have an Amazon account, to be able to upload mutli-gigabyte files to an S3 bucket of mine. \nHow can I go about this? I want to enable a user to do this by giving them a key or perhaps through an upload form rather than making a bucket world-writeable obviously. \nI'd prefer to use Python on my serverside, but the idea is that a user would need nothing more than their web browser or perhaps opening up their terminal and using built-in executables. \nAny thoughts?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":22160820,"Users Score":0,"Answer":"This answer is relevant to .Net as language.\nWe had such requirement, where we had created an executable. The executable internally called a web method, which validated the app authenticated to upload files to AWS S3 or NOT.\nYou can do this using a web browser too, but I would not suggest this, if you are targeting big files.","Q_Score":0,"Tags":"python,file-upload,amazon-web-services,amazon-s3","A_Id":22162436,"CreationDate":"2014-03-04T00:59:00.000","Title":"user upload to my S3 bucket","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. 
This is done by pressing a button and picking whoever's next from the database and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails.\nMy plan was to add the time the e-mail was sent to my postgres database and any time the button is pressed afterwards, it checks to see if the last time the button was pressed was more than a day ago. \nIs this the most efficient way to do this?\n(I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":22178513,"Users Score":0,"Answer":"I asked about a soft button earlier. If your program is password\/access protected, you could just store it all in a pickle\/config file somewhere; I am unsure what the value of the SQL file is.\nUse last_push = time.time() and check the difference against the current push: if the difference in seconds is less than x, do not proceed; if it is bigger than x, reset last_push and proceed.\nOr am I missing something?","Q_Score":0,"Tags":"python","A_Id":22181923,"CreationDate":"2014-03-04T17:13:00.000","Title":"Check time since last request","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button and picking whoever's next from the database and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails.\nMy plan was to add the time the e-mail was sent to my postgres database and any time the button is pressed afterwards, it checks to see if the last time the button was pressed was more than a day ago. \nIs this the most efficient way to do this?\n(I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":22178513,"Users Score":0,"Answer":"If this is the easiest solution for you to implement, go right ahead. Worst case scenario, it's too slow to be practical and you'll need to find a better way. In any other scenario, it's good enough and you can forget about it.\nHonestly, it'll almost certainly be efficient enough to serve your purposes. The number of users at any one time will very rarely exceed one. An SQL query to determine whether the timestamp is over a day before the current time will be quick; quick enough that even the most determined gas-hole(!) wouldn't be able to cause any damage by spam-clicking the button. I would be very surprised if you ran into any problems.","Q_Score":0,"Tags":"python","A_Id":22179026,"CreationDate":"2014-03-04T17:13:00.000","Title":"Check time since last request","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I sometimes run Python scripts that access the same database concurrently. This often causes database lock errors.
I would like the script to then retry ASAP, as the database is never locked for long.\nIs there a better way to do this than with a try\/except inside a while loop, and does that method have any problems?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":619,"Q_Id":22191236,"Users Score":0,"Answer":"If you are looking for concurrency, SQLite is not the answer. The engine doesn't perform well when concurrency is needed, especially when writing from different threads, even if the tables are not the same.\nIf your scripts are accessing different tables, and they have no relationships at the DB level (i.e. declared FKs), you can separate them into different databases and then your concurrency issue will be solved.\nIf they are linked, but you can handle the link at the app level (in the script), you can separate them as well.\nThe best practice in those cases is implementing a lock mechanism with events, but honestly I have no idea how to implement such a thing in Python.","Q_Score":0,"Tags":"python,sqlite","A_Id":22222873,"CreationDate":"2014-03-05T07:23:00.000","Title":"Sqlite3 and Python: Handling a locked database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Inside a web application ( Pyramid ) I create certain objects on POST which need some work done on them ( mainly fetching something from the web ). These objects are persisted to a PostgreSQL database with the help of SQLAlchemy. Since these tasks can take a while, they are not done inside the request handler but rather offloaded to a daemon process on a different host. When the object is created I take its ID ( which is a client-side generated UUID ) and send it via ZeroMQ to the daemon process. The daemon receives the ID, fetches the object from the database, does its work and writes the result to the database.\n\nProblem: The daemon can receive the ID before its creating transaction is committed. Since we are using pyramid_tm, all database transactions are committed when the request handler returns without an error, and I would rather like to leave it this way. On my dev system everything runs on the same box, so ZeroMQ is lightning fast. On the production system this is most likely not an issue since the web application and the daemon run on different hosts, but I don't want to count on this. \nThis problem only recently manifested itself since we previously used MongoDB with a write_concern of 2. Having only two database servers, the write on the entity always blocked the web request until the entity was persisted ( which obviously is not the greatest idea ).\n\nHas anyone run into a similar problem?\nHow did you solve it?\n\nI see multiple possible solutions, but most of them don't satisfy me:\n\nFlushing the transaction manually before triggering the ZMQ message. However, I currently use the SQLAlchemy after_created event to trigger it, and this is really nice since it decouples this process completely and thus eliminates the risk of \"forgetting\" to tell the daemon to work. I also think that I would still need a READ UNCOMMITTED isolation level on the daemon side, is this correct?\nAdding a timestamp to the ZMQ message, causing the worker thread that received the message to wait before processing the object. This obviously limits the throughput.\nDitch ZMQ completely and simply poll the database.
Noooo!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2062,"Q_Id":22245407,"Users Score":0,"Answer":"This comes close to your second solution:\nCreate a buffer, drop the ids from your zeromq messages in there, and let your worker poll this id-pool regularly. If it fails to retrieve an object for the id from the database, let the id sit in the pool until the next poll, else remove the id from the pool.\nYou have to deal somehow with the asynchronous behaviour of your system. When the ids constantly arrive before the object is persisted in the database, it doesn't matter whether pooling the ids (and re-polling the same id) reduces throughput, because the bottleneck is earlier.\nAn upside is that you could run multiple frontends in front of this.","Q_Score":1,"Tags":"python,postgresql,sqlalchemy,zeromq","A_Id":22247025,"CreationDate":"2014-03-07T08:48:00.000","Title":"ZeroMQ is too fast for database transaction","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using MongoDB 2.4.6 and Python 2.7. I have frequently executed queries. Is it possible to save the results of these frequent queries in a cache?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1377,"Q_Id":22250987,"Users Score":1,"Answer":"Yes, but you will need to make one; how about memcached or Redis?\nHowever, as a precautionary note, MongoDB already has its recently used data cached in RAM by the OS, so unless you are doing some really resource-intensive aggregation query, or you are using the results outside of your working set window, you might not actually find that it increases performance all that much.","Q_Score":0,"Tags":"mongodb,python-2.7,caching","A_Id":22251094,"CreationDate":"2014-03-07T13:09:00.000","Title":"How to cache Mongodb Queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Introduction\nI am working on a GPS Listener. This is a service built on Twisted Python; this app receives at least 100 connections from GPS devices and is working without issues. Each GPS sends data every 5 seconds, containing positions.
(Next week there must be at least 200 GPS devices connected.)\nDatabase\nI am using a single PostgreSQL connection; this connection is shared between all connected GPS devices to save and store information, and PostgreSQL is using pgbouncer as a pooler.\nServer\nI am using a small PC as the server, and I need to find a way to have a high-availability application without losing data.\nProblem\nBecause of the high traffic in my app, I am having issues with in-memory data: after 30 minutes, data starts to appear as not saved, even though queries are being executed on Postgres (I have checked that in the last activity).\nFake Solution\nI have made a script that restarts my app, Postgres and pgbouncer; however, this is a wrong solution, because each time I restart my app the GPS devices get disconnected and must reconnect again.\nPossible Solution\nI am thinking of a high-availability solution based on a data layer, where each time the database has to be restarted or something happens, a txt file stores the data from the GPS devices.\nTo get this, I am thinking of not using a unique connection: instead, a simple connection is opened each time a piece of data must be saved, and the database is tested first, like a pooler; if the database connection is broken, the txt file stores the data until the database is OK again, and another process reads the txt file and sends the info to the database.\nQuestion\nSince I am thinking of an app data pooler and a single connection each time this data must be saved, in order to try not to lose data, I want to know:\n\nIs it OK to make a single connection each time data is saved for this kind of app, knowing that connections will be made more than 100 times every 5 seconds?\n\nAs I said, my question is quite simple: which is the right way of working with DB connections in a high-traffic app, single connections per query or a shared unique connection for the whole app?\nThe reason for asking this single question is to find the right way of working with DB connections considering memory resources.\nI am not looking to solve PostgreSQL issues or performance; I just want to know the right way of working with this kind of application. That is the reason I have given as much detail as possible about my application.\nNote\nOne more thing: I have seen one vote to close this question as not being a clear question, even though the question is titled with the word \"question\" and was marked in italic; I have now marked it in gray to notify people that don't read the word \"question\".\nThanks a lot","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":270,"Q_Id":22256760,"Users Score":1,"Answer":"Databases do not just lose data willy-nilly. Not losing data is pretty much number one in their job description. If it seems to be losing data, you must be misusing transactions in your application. Figure out what you are doing wrong and fix it.\nMaking and breaking a connection between your app and pgbouncer for each transaction is not good for performance, but it is not terrible either; and if that is what helps you fix your transaction boundaries, then do that.","Q_Score":0,"Tags":"python,database,postgresql,gps,twisted","A_Id":22409012,"CreationDate":"2014-03-07T17:28:00.000","Title":"Right way to manage a high traffic connection application","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use the driving_distance function in pgRouting to work with my river network.
There are 12 vertices in my river network, and I want to get the distance between all of these 12 vertices, starting from vertex_id No.1.\nThe result is fine, but I want to get other results using other vertices as the starting point. I know it would not cost much time to change the SQL code every time, but later I will have more than 500 vertices in this river network, so I need to do this more efficiently.\nHow can I use Python to get what I want? How can I write a Python script to do this? Or is there an existing Python script that does what I want?\nI am a novice with programming languages, so please give me any detailed advice, thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":684,"Q_Id":22279499,"Users Score":1,"Answer":"psycopg2 is an excellent Python module that allows your scripts to connect to your Postgres database and run SQL, whether as inputs or as fetch queries. You can have Python walk through the possible combinations between vertices and have it build the individual SQL queries as strings. It can then run through them and print your output into a text file.","Q_Score":0,"Tags":"python,postgresql,pgrouting","A_Id":22392600,"CreationDate":"2014-03-09T07:23:00.000","Title":"How to use python to loop through all possible results in postgresql?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Unfortunately I have RHEL 3 and Python 2.3 and no chance of upgrading.\nDoes anyone have any examples of how to interact with the DB: opening sqlplus, logging in, and then running a simple SELECT query to bring the data into a CSV? Then I can figure out the rest. \nAny ideas please?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":319,"Q_Id":22300744,"Users Score":0,"Answer":"I used a bash script to produce the CSV file and then manipulated the data with Python.\nThat was the only solution I could think of with Python 2.3.","Q_Score":0,"Tags":"sql,oracle,shell,oracle10g,python-2.x","A_Id":23064270,"CreationDate":"2014-03-10T12:56:00.000","Title":"Simple query to Oracle SQL using Python 2.3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to determine the best practices for storing and displaying user input in MongoDB. Obviously, in SQL databases, all user input needs to be encoded to prevent injection attacks. However, my understanding is that with MongoDB we need to be more worried about XSS attacks, so does user input need to be encoded on the server before being stored in mongo? Or is it enough to simply encode the string immediately before it is displayed on the client side using a template library like Handlebars?\nHere's the flow I'm talking about:\n\nOn the client side, user updates their name to \"