Q:
Best way to store and use a large text-file in python
I'm creating a networked server for a boggle-clone I wrote in python, which accepts users, solves the boards, and scores the player input. The dictionary file I'm using is 1.8MB (the ENABLE2K dictionary), and I need it to be available to several game solver classes. Right now, I have it so that each class iterates through the file line-by-line and generates a hash table (associative array), but the more solver classes I instantiate, the more memory it takes up.
What I would like to do is import the dictionary file once and pass it to each solver instance as they need it. But what is the best way to do this? Should I import the dictionary in the global space, then access it in the solver class as globals()['dictionary']? Or should I import the dictionary then pass it as an argument to the class constructor? Is one of these better than the other? Is there a third option?
A:
If you create a dictionary.py module, containing code which reads the file and builds a dictionary, this code will only be executed the first time it is imported. Further imports will return a reference to the existing module instance. As such, your classes can:
import dictionary
dictionary.words[whatever]
where dictionary.py has:
words = {}
# Read the file and add to 'words' (the filename here is a placeholder):
for line in open('enable2k.txt'):
    words[line.strip()] = True
A:
Even though it is essentially a singleton at this point, the usual arguments against globals apply. For a pythonic singleton-substitute, look up the "borg" object.
That's really the only difference. Once the dictionary object is created, you are only binding new references as you pass it along unless you explicitly perform a deep copy. It makes sense that it is centrally constructed once and only once so long as each solver instance does not require a private copy for modification.
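For illustration, a minimal sketch of the Borg pattern the answer refers to (the class and attribute names here are placeholders, not from the original answer):
class DictionaryBorg(object):
    _shared_state = {}              # one dict shared by every instance

    def __init__(self):
        self.__dict__ = self._shared_state
        if not hasattr(self, 'words'):
            self.words = {}         # populated once; all instances see the same data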
A:
Adam, remember that in Python when you say:
a = read_dict_from_file()
b = a
... you are not actually copying a, and thus using more memory, you are merely making b another reference to the same object.
So basically any of the solutions you propose will be far better in terms of memory usage. Basically, read in the dictionary once and then hang on to a reference to that. Whether you do it with a global variable, or pass it to each instance, or something else, you'll be referencing the same object and not duplicating it.
Which one is most Pythonic? That's a whole 'nother can of worms, but here's what I would do personally:
def main(args):
    run_initialization_stuff()
    dictionary = read_dictionary_from_file()
    # 'class' is a reserved word, so a different keyword argument name is used:
    solvers = [Solver(solver_class=x, dictionary=dictionary)
               for x in range(number_of_solvers)]
HTH.
A:
Depending on what your dict contains, you may be interested in the 'shelve' or 'anydbm' modules. They give you dict-like interfaces (just strings as keys and items for 'anydbm', and strings as keys and any python object as item for 'shelve') but the data is actually in a DBM file (gdbm, ndbm, dbhash, bsddb, depending on what's available on the platform.) You probably still want to share the actual database between classes as you are asking for, but it would avoid the parsing-the-textfile step as well as the keeping-it-all-in-memory bit.
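For example, a rough sketch of the shelve approach (file names are placeholders):
import shelve

# One-off conversion of the word list into a DBM-backed shelf:
db = shelve.open('words.db')
for line in open('enable2k.txt'):
    db[line.strip()] = True
db.close()

# Each solver can then open the shelf read-only instead of parsing the
# text file and holding a full dict in memory:
words = shelve.open('words.db', flag='r')
print 'boggle' in words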
Q:
How many bytes per element are there in a Python list (tuple)?
For example, how much memory is required to store a list of one million (32-bit) integers?
alist = range(1000000) # or list(range(1000000)) in Python 3.0
A:
"It depends." Python allocates space for lists in such a way as to achieve amortized constant time for appending elements to the list.
In practice, what this means with the current implementation is... the list always has space allocated for a power-of-two number of elements. So range(1000000) will actually allocate a list big enough to hold 2^20 elements (~1.05 million).
This is only the space required to store the list structure itself (which is an array of pointers to the Python objects for each element). A 32-bit system will require 4 bytes per element, a 64-bit system will use 8 bytes per element.
Furthermore, you need space to store the actual elements. This varies widely. For small integers (-5 to 256 currently), no additional space is needed, but for larger numbers Python allocates a new object for each integer, which takes 10-100 bytes and tends to fragment memory.
Bottom line: it's complicated and Python lists are not a good way to store large homogeneous data structures. For that, use the array module or, if you need to do vectorized math, use NumPy.
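A rough way to see the difference (requires Python 2.6+ for sys.getsizeof; exact numbers vary by platform):
import sys
import array

lst = range(1000000)                      # list: array of pointers plus int objects
arr = array.array('i', xrange(1000000))  # array: packed 4-byte C ints

print sys.getsizeof(lst)   # size of the pointer array only; element objects are extra
print sys.getsizeof(arr)   # roughly 4 MB total, elements included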
PS- Tuples, unlike lists, are not designed to have elements progressively appended to them. I don't know how the allocator works, but don't even think about using it for large data structures :-)
A:
Useful links:
How to get memory size/usage of python object
Memory sizes of python objects?
if you put data into dictionary, how do we calculate the data size?
However they don't give a definitive answer. The way to go:
Measure memory consumed by Python interpreter with/without the list (use OS tools).
Use a third-party extension module which defines some sort of sizeof(PyObject).
Update:
Recipe 546530: Size of Python objects (revised)
import asizeof
N = 1000000
print asizeof.asizeof(range(N)) / N
# -> 20 (python 2.5, WinXP, 32-bit Linux)
# -> 33 (64-bit Linux)
A:
Addressing "tuple" part of the question
Declaration of CPython's PyTuple in a typical build configuration boils down to this:
struct PyTuple {
size_t refcount; // tuple's reference count
typeobject *type; // tuple type object
size_t n_items; // number of items in tuple
PyObject *items[1]; // contains space for n_items elements
};
The size of a PyTuple instance is fixed during its construction and cannot be changed afterwards. The number of bytes occupied by a PyTuple can be calculated as
sizeof(size_t) x 2 + sizeof(void*) x (n_items + 1).
This gives the shallow size of the tuple. To get the full size you also need to add the total number of bytes consumed by the object graph rooted in the PyTuple::items[] array.
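As a worked example of that formula, assuming a 64-bit build where sizeof(size_t) == sizeof(void*) == 8, a 3-item tuple would occupy:
shallow = 8 * 2 + 8 * (3 + 1)   # refcount + n_items, plus type pointer + 3 item slots
print shallow                   # -> 48 bytes, not counting the items themselves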
It's worth noting that the tuple construction routines make sure that only a single instance of the empty tuple is ever created (a singleton).
References:
Python.h,
object.h,
tupleobject.h,
tupleobject.c
A:
A new function, getsizeof(), takes a
Python object and returns the amount
of memory used by the object, measured
in bytes. Built-in objects return
correct results; third-party
extensions may not, but can define a
__sizeof__() method to return the object’s size.
kveretennicov@nosignal:~/py/r26rc2$ ./python
Python 2.6rc2 (r26rc2:66712, Sep 2 2008, 13:11:55)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
>>> import sys
>>> sys.getsizeof(range(1000000))
4000032
>>> sys.getsizeof(tuple(range(1000000)))
4000024
Obviously returned numbers don't include memory consumed by contained objects (sys.getsizeof(1) == 12).
A:
This is implementation specific, I'm pretty sure. Certainly it depends on the internal representation of integers - you can't assume they'll be stored as 32-bit since Python gives you arbitrarily large integers so perhaps small ints are stored more compactly.
On my Python (2.5.1 on Fedora 9 on core 2 duo) the VmSize before allocation is 6896kB, after is 22684kB. After one more million element assignment, VmSize goes to 38340kB. This very grossly indicates around 16000kB for 1000000 integers, which is around 16 bytes per integer. That suggests a lot of overhead for the list. I'd take these numbers with a large pinch of salt.
Q:
Date change notification in a Tkinter app (win32)
Does anyone know if it is possible (and if yes, how) to bind an event (Python + Tkinter on MS Windows) to a system date change?
I know I can have .after events checking once in a while; I'm asking if I can somehow have an event fired whenever the system date/time changes, either automatically (e.g. for daylight saving time) or manually.
MS Windows sends such events to applications and Tkinter does receive them; I know, because if I have an .after timer waiting and I set the date/time after the timer's expiration, the timer event fires instantly.
A:
I know, because if I have an .after timer waiting and I set the date/time after the timer's expiration, the timer event fires instantly.
That could just mean that Tkinter (or Tk) is polling the system clock as part of the event loop to figure out when to run timers.
If you're using Windows, Mark Hammond's book notes that you can use the win32evtlogutil module to respond to changes in the Windows event log. Basically it works like this:
import win32evtlogutil
def onEvent(record):
    # Do something with the event log record
    pass
win32evtlogutil.FeedEventLogRecords(onEvent)
But you'll need to get docs on the structure of the event records (I don't feel like typing out the whole chapter, sorry :-) ). Also I don't know if date changes turn up in the event log anyway.
Really, though, is it so bad to just poll the system clock? It seems easiest and I don't think it would slow you down much.
(finally, a comment: I don't know about your country, but here in NZ, daylight savings doesn't involve a date change; only the time changes (from 2am-3am, or vice-versa))
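For completeness, a minimal sketch of the polling approach the answer suggests (the one-second interval and two-second threshold are arbitrary choices):
import time
import Tkinter

def watch_clock():
    now = time.time()
    # Each tick should arrive ~1s after the last; a big deviation means
    # the system clock was set (manually or by DST).
    if abs(now - watch_clock.last - 1.0) > 2.0:
        print "system clock changed:", time.ctime(now)
    watch_clock.last = now
    root.after(1000, watch_clock)

root = Tkinter.Tk()
watch_clock.last = time.time()
watch_clock()
root.mainloop()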
Q:
What is "lambda binding" in Python?
I understand what lambda functions are in Python, but I can't find the meaning of "lambda binding" by searching the Python docs.
A link to read about it would be great.
A trivial explained example would be even better.
Thank you.
A:
First, a general definition:
When a program or function statement
is executed, the current values of
formal parameters are saved (on the
stack) and within the scope of the
statement, they are bound to the
values of the actual arguments made in
the call. When the statement is
exited, the original values of those
formal arguments are restored. This
protocol is fully recursive. If within
the body of a statement, something is
done that causes the formal parameters
to be bound again, to new values, the
lambda-binding scheme guarantees that
this will all happen in an orderly
manner.
Now, there is an excellent python example in a discussion here:
"...there is only one binding for x: doing x = 7 just changes the value in the pre-existing binding. That's why
def foo(x):
a = lambda: x
x = 7
b = lambda: x
return a,b
returns two functions that both return 7; if there was a new binding after the x = 7, the functions would return different values [assuming you don't call foo(7), of course. Also assuming nested_scopes]...."
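Running it confirms the single shared binding:
a, b = foo(4)
print a(), b()   # -> 7 7: both closures see the one (rebound) x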
A:
I've never heard that term, but one explanation could be the "default parameter" hack used to assign a value directly to a lambda's parameter. Using Swati's example:
def foo(x):
a = lambda x=x: x
x = 7
b = lambda: x
return a,b
aa, bb = foo(4)
print aa() # prints 4
print bb() # prints 7
A:
Where have you seen the phrase used?
"Binding" in Python generally refers to the process by which a variable name ends up pointing to a specific object, whether by assignment or parameter passing or some other means, e.g.:
a = dict(foo="bar", zip="zap", zig="zag") # binds a to a newly-created dict object
b = a # binds b to that same dictionary
def crunch(param):
print param
crunch(a) # binds the parameter "param" in the function crunch to that same dict again
So I would guess that "lambda binding" refers to the process of binding a lambda function to a variable name, or maybe binding its named parameters to specific objects? There's a pretty good explanation of binding in the Language Reference, at http://docs.python.org/ref/naming.html
Q:
Library for converting a traceback to its exception?
Just a curiosity: is there an already-coded way to convert a printed traceback back to the exception that generated it? :) Or to a sys.exc_info-like structure?
A:
Converting a traceback to the exception object wouldn't be too hard, given common exception classes (parse the last line for the exception class and the arguments given to it at instantiation.) The traceback object (the third argument returned by sys.exc_info()) is an entirely different matter, though. The traceback object actually contains the chain of frame objects that constituted the stack at the time of the exception. Including local variables, global variables, et cetera. It is impossible to recreate that just from the displayed traceback.
The best you could do would be to parse each 'File "X", line N, in Y:' line and create fake frame objects that are almost entirely empty. There would be very little value in it, as basically the only thing you would be able to do with it would be to print it. What are you trying to accomplish?
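A rough sketch of the parse-the-last-line idea; it only recovers builtin exception classes and, as noted above, all frame information is lost:
import __builtin__

def exception_from_traceback(tb_text):
    # Parse the final "ExcName: message" line of a printed traceback and
    # rebuild a best-effort exception object.
    last = tb_text.strip().splitlines()[-1]
    name, _, message = last.partition(': ')
    exc_class = getattr(__builtin__, name, None)
    if isinstance(exc_class, type) and issubclass(exc_class, BaseException):
        return exc_class(message)
    return None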
Q:
Interface to versioned dictionary
I have a versioned document store which I want to access through a dict-like interface.
Common usage is to access the latest revision (get, set, del), but one should be able to access specific revisions too (keys are always str/unicode or int).
from UserDict import DictMixin
class VDict(DictMixin):
def __getitem__(self, key):
if isinstance(key, tuple):
docid, rev = key
else:
docid = key
rev = None # set to tip rev
print docid, rev
# return ...
In [1]: d = VDict()
In [2]: d[2]
2 None
In [3]: d[2, 1]
2 1
This solution is a little bit tricky and I'm not sure if it is a clean, understandable interface. Should I provide a function
def getrev(self, docid, rev):
...
instead?
A:
Yes, provide a different API for getting different versions. Either a single method call for doing a retrieval of a particular item of a particular revision, or a method call for getting a 'view' of a particular revision, which you could then access like a normal dict, depending on whether such a 'view' would see much use. Or both, considering the dict-view solution would need some way to get a particular revision's item anyway:
class RevisionView(object):
def __init__(self, db, revid):
self.db = db
self.revid = revid
    def __getitem__(self, item):
        return self.db.getrev(item, self.revid)
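Usage might then look like this (assuming VDict provides the getrev() method discussed in the question):
store = VDict()
view = RevisionView(store, revid=1)
print view['some-doc']   # 'some-doc' as of revision 1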
Q:
Django admin site not displaying ManyToManyField relationship
I'm working on what I think is a pretty standard django site, but am having trouble getting my admin section to display the proper fields.
Here's my models.py:
class Tech(models.Model):
name = models.CharField(max_length = 30)
class Project(models.Model):
title = models.CharField(max_length = 50)
techs = models.ManyToManyField(Tech)
In other words, a Project can have different Tech objects and different tech objects can belong to different Projects (Project X was created with Python and Django, Project Y was C# and SQL Server)
However, the admin site doesn't display any UI for the Tech objects. Here's my admin.py:
class TechInline(admin.TabularInline):
model = Tech
extra = 5
class ProjectAdmin(admin.ModelAdmin):
fields = ['title']
inlines = []
list_display = ('title',)
admin.site.register(Project, ProjectAdmin)
I've tried adding the TechInline class to the inlines list, but that causes a
<class 'home.projects.models.Tech'> has no ForeignKey to <class 'home.projects.models.Project'>
Error. Also tried adding techs to the fields list, but that gives a
no such table: projects_project_techs
Error. I verified, and there is no projects_project_techs table, but there is a projects_tech one. Did something perhaps get screwed up in my syncdb?
I am using Sqlite as my database if that helps.
A:
I've tried adding the TechInline class to the inlines list, but that causes a
'TechInLine' not defined
Is that a straight copy-paste? It looks like you just made a typo -- try TechInline instead of TechInLine.
If your syncdb didn't create the proper table, you can do it manually. Execute this command:
python manage.py sqlreset <myapp>
And look for the definition for the projects_project_techs table. Copy and paste it into the client for your database.
A:
Assuming your app is called "projects", the default name for your techs table will be projects_tech and the projects table will be projects_project.
The many-to-many table should be something like projects_project_techs
A:
@John Millikin - Thanks for the sqlreset tip, that put me on the right path. The sqlreset generated code that showed me that the projects_project_techs was never actually created. I ended up just deleting my deb.db database and regenerating it. techs then showed up as it should.
And just as a sidenote, I had to do an admin.site.register(Tech) to be able to create new instances of the class from the Project page too.
I'll probably post another question to see if there is a better way to implement model changes (since I'm pretty sure that is what caused my problem) without wiping the database.
Q:
Most pythonic way of counting matching elements in something iterable
I have an iterable of entries on which I would like to gather some simple statistics, say the count of all numbers divisible by two and the count of all numbers divisible by three.
My first alternative, while only iterating through the list once and avoiding the list expansion (and keeping the split loop refactoring in mind), looks rather bloated:
(alt 1)
r = xrange(1, 10)
twos = 0
threes = 0
for v in r:
if v % 2 == 0:
twos+=1
if v % 3 == 0:
threes+=1
print twos
print threes
This looks rather nice, but has the drawback of expanding the expression to a list:
(alt 2)
r = xrange(1, 10)
print len([1 for v in r if v % 2 == 0])
print len([1 for v in r if v % 3 == 0])
What I would really like is something like a function like this:
(alt 3)
def count(iterable):
n = 0
for i in iterable:
n += 1
return n
r = xrange(1, 10)
print count(1 for v in r if v % 2 == 0)
print count(1 for v in r if v % 3 == 0)
But this looks a lot like something that could be done without a function. The final variant is this:
(alt 4)
r = xrange(1, 10)
print sum(1 for v in r if v % 2 == 0)
print sum(1 for v in r if v % 3 == 0)
and while the smallest (and in my book probably the most elegant) it doesn't feel like it expresses the intent very well.
So, my question to you is:
Which alternative do you like best to gather these types of stats? Feel free to supply your own alternative if you have something better.
To clear up some confusion below:
In reality my filter predicates are more complex than just this simple test.
The objects I iterate over are larger and more complex than just numbers
My filter functions are more different and hard to parameterize into one predicate
A:
Having to iterate over the list multiple times isn't elegant IMHO.
I'd probably create a function that allows doing:
twos, threes = countmatching(xrange(1,10),
lambda a: a % 2 == 0,
lambda a: a % 3 == 0)
A starting point would be something like this:
def countmatching(iterable, *predicates):
v = [0] * len(predicates)
for e in iterable:
for i,p in enumerate(predicates):
if p(e):
v[i] += 1
return tuple(v)
Btw, "itertools recipes" has a recipe for doing much like your alt4.
from itertools import imap

def quantify(seq, pred=None):
    "Count how many times the predicate is true in the sequence"
    return sum(imap(pred, seq))
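Applied to the question's example (booleans sum as 0/1):
print quantify(xrange(1, 10), lambda v: v % 2 == 0)   # -> 4
print quantify(xrange(1, 10), lambda v: v % 3 == 0)   # -> 3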
A:
Alt 4! But maybe you should refactor the code to a function that takes an argument which should contain the divisible number (two and three). And then you could have a better function name.
def methodName(divNumber, r):
return sum(1 for v in r if v % divNumber == 0)
print methodName(2, xrange(1, 10))
print methodName(3, xrange(1, 10))
A:
You could use the filter function.
It filters a list (or strictly an iterable) producing a new list containing only the items for which the specified function evaluates to true.
r = xrange(1, 10)
def is_div_two(n):
return n % 2 == 0
def is_div_three(n):
return n % 3 == 0
print len(filter(is_div_two,r))
print len(filter(is_div_three,r))
This is good as it allows you keep your statistics logic contained in a function and the intent of the filter should be pretty clear.
A:
I would choose a small variant of your (alt 4):
def count(predicate, list):
print sum(1 for x in list if predicate(x))
r = xrange(1, 10)
count(lambda x: x % 2 == 0, r)
count(lambda x: x % 3 == 0, r)
# ...
If you want to change what count does, change its implementation in one place.
Note: since your predicates are complex, you'll probably want to define them in functions instead of lambdas. And so you'll probably want to put all this in a class rather than the global namespace.
A:
Well you could do one list comprehension/expression to get a set of tuples with that stat test in them and then reduce that down to get the sums.
r=xrange(10)
s=( (v % 2 == 0, v % 3 == 0) for v in r )
def add_tuples(t1,t2):
return tuple(x+y for x,y in zip(t1, t2))
sums=reduce(add_tuples, s, (0,0)) # (0,0) is starting amount
print sums[0] # count of numbers divisible by 2
print sums[1] # count of numbers divisible by 3
Using generator expression etc should mean you'll only run through the iterator once (unless reduce does anything odd?). Basically you'd be doing map/reduce...
A:
True booleans are coerced to unit integers, and false booleans to zero integers. So if you're happy to use scipy or numpy, make an array of integers for each element of your sequence, each array containing one element for each of your tests, and sum over the arrays. E.g.
>>> sum(scipy.array([c % 2 == 0, c % 3 == 0]) for c in xrange(10))
array([5, 4])
A:
I would definitely be looking at a numpy array instead of an iterable list if you just have numbers. You will almost certainly be able to do what you want with some terse arithmetic on the array.
A:
Not as terse as you are looking for, but more efficient, it actually works with any iterable, not just iterables you can loop over multiple times, and you can expand the things to check for without complicating it further:
r = xrange(1, 10)
counts = {
2: 0,
3: 0,
}
for v in r:
for q in counts:
if not v % q:
counts[q] += 1
# Or, more obscure:
#counts[q] += not v % q
for q in counts:
print "%s's: %s" % (q, counts[q])
A:
from itertools import groupby
from collections import defaultdict
def multiples(v):
return 2 if v%2==0 else 3 if v%3==0 else None
d = defaultdict(list)
for k, values in groupby(range(10), multiples):
if k is not None:
d[k].extend(values)
A:
The idea here is to use reduction to avoid repeated iterations. Also, this does not create any extra data structures, if memory is an issue for you. You start with a dictionary with your counters ({'div2': 0, 'div3': 0}) and increment them along the iteration.
def increment_stats(stats, n):
if n % 2 == 0: stats['div2'] += 1
if n % 3 == 0: stats['div3'] += 1
return stats
r = xrange(1, 10)
stats = reduce(increment_stats, r, {'div2': 0, 'div3': 0})
print stats
If you want to count anything more complicated than divisors, it would be appropriate to use a more object-oriented approach (with the same advantages), encapsulating the logic for stats extraction.
class Stats:
def __init__(self, div2=0, div3=0):
self.div2 = div2
self.div3 = div3
def increment(self, n):
if n % 2 == 0: self.div2 += 1
if n % 3 == 0: self.div3 += 1
return self
def __repr__(self):
return 'Stats(%d, %d)' % (self.div2, self.div3)
r = xrange(1, 10)
stats = reduce(lambda stats, n: stats.increment(n), r, Stats())
print stats
Please point out any mistakes.
@Henrik: I think the first approach is less maintainable since you have to control initialization of the dictionary in one place and update in another, as well as having to use strings to refer to each stat (instead of having attributes). And I do not think OO is overkill in this case, for you said the predicates and objects will be complex in your application. In fact if the predicates were really simple, I wouldn't even bother to use a dictionary, a single fixed size list would be just fine. Cheers :)
A:
Inspired by the OO-stab above, I had to try my hands on one as well (although this is way overkill for the problem I'm trying to solve :)
class Stat(object):
def update(self, n):
raise NotImplementedError
def get(self):
raise NotImplementedError
class TwoStat(Stat):
def __init__(self):
self._twos = 0
def update(self, n):
if n % 2 == 0: self._twos += 1
def get(self):
return self._twos
class ThreeStat(Stat):
def __init__(self):
self._threes = 0
def update(self, n):
if n % 3 == 0: self._threes += 1
def get(self):
return self._threes
class StatCalculator(object):
def __init__(self, stats):
self._stats = stats
def calculate(self, r):
for v in r:
for stat in self._stats:
stat.update(v)
return tuple(stat.get() for stat in self._stats)
s = StatCalculator([TwoStat(), ThreeStat()])
r = xrange(1, 10)
print s.calculate(r)
A:
Alt 3, for the reason that it doesn't use memory proportional to the number of "hits". Given a pathological case like xrange(one_trillion), many of the other offered solutions would fail badly.
Q:
WPF Alternative for python
Is there any alternative for WPF (windows presentation foundation) in python?
http://msdn.microsoft.com/en-us/library/aa970268.aspx#Programming_with_WPF
A:
Here is a list of Python GUI Toolkits.
Also, you can use IronPython to work with WPF directly.
A:
You might want to look at pygtk and glade. Here is a tutorial.
There is a long list of alternatives on the Python Wiki.
A:
Try PyQt which binds python to QT graphics library. There are some other links at the end of that article:
Anygui
PyGTK
FXPy
wxPython
win32ui
A:
If you are on Windows and you want to use WPF (as opposed to an alternative), you can use it with IronPython - a .NET version of python.
Here's a quick example: http://stevegilham.blogspot.com/2007/07/hello-wpf-in-ironpython.html
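A minimal IronPython/WPF hello-world along the lines of that post (run with ipy.exe):
import clr
clr.AddReference('PresentationFramework')

from System.Windows import Application, Window

window = Window()
window.Title = 'Hello WPF from IronPython'
Application().Run(window)   # blocks until the window is closed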
Q:
Subclassing a class with private members
One of the really nice things about python is the simplicity with which you can name variables that have the same name as the accessor:
class Foo(object):
    def __init__(self):
        self.__value = 1
    def value(self):
        return self.__value
Is there a simple way of providing access to the private members of a class that I wish to subclass? Often I wish to simply work with the raw data objects inside of a class without having to use accessors and mutators all the time.
I know this seems to go against the general idea of private and public, but usually the class I am trying to subclass is one of my own which I am quite happy to expose the members from to a subclass but not to an instance of that class. Is there a clean way of providing this distinction?
A:
Not conveniently, without further breaking encapsulation. The double-underscore attribute is name-mangled by prepending '_ClassName' for the class it is being accessed in. So, if you have a 'ContainerThing' class that has a '__value' attribute, the attribute is actually being stored as '_ContainerThing__value'. Changing the class name (or refactoring where the attribute is assigned to) would mean breaking all subclasses that try to access that attribute.
This is exactly why the double-underscore name-mangling (which is not really "private", just "inconvenient") is a bad idea to use. Just use a single leading underscore. Everyone will know not to touch your 'private' attribute and you will still be able to access it in subclasses and other situations where it's darned handy. The name-mangling of double-underscore attributes is useful only to avoid name-clashes for attributes that are truly specific to a particular class, which is extremely rare. It provides no extra 'security' since even the name-mangled attributes are trivially accessible.
For the record, '__value' and 'value' (and '_value') are not the same name. The underscores are part of the name.
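A quick demonstration of the mangling (class name chosen to match the example above):
class ContainerThing(object):
    def __init__(self):
        self.__value = 1

c = ContainerThing()
print c._ContainerThing__value   # -> 1: mangled, but trivially accessible
# print c.__value                # would raise AttributeError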
A:
"I know this seems to go against the general idea of private and public" Not really "against", just different from C++ and Java.
Private -- as implemented in C++ and Java is not a very useful concept. It helps, sometimes, to isolate implementation details. But it is way overused.
Python names beginning with two __ are special and you should not, as a normal thing, be defining attributes with names like this. Names with __ are special and part of the implementation. And exposed for your use.
Names beginning with one _ are "private". Sometimes they are concealed, a little. Most of the time, the "consenting adults" rule applies -- don't use them foolishly, they're subject to change without notice.
We put "private" in quotes because it's just an agreement between you and your users. You've marked things with _. Your users (and yourself) should honor that.
Often, we have method function names with a leading _ to indicate that we consider them to be "private" and subject to change without notice.
The endless getters and setters that Java requires aren't as often used in Python. Python introspection is more flexible, you have access to an object's internal dictionary of attribute values, and you have first class functions like getattr() and setattr().
Further, you have the property() function which is often used to bind getters and setters to a single name that behaves like a simple attribute, but is actually well-defined method function calls.
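For instance, a sketch of the property() idiom (using the Python 2.6+ decorator form):
class Example(object):
    def __init__(self):
        self._value = 1              # single underscore: private by convention

    @property
    def value(self):                 # reads look like plain attribute access
        return self._value

    @value.setter
    def value(self, new_value):      # writes go through this method
        self._value = new_value

e = Example()
e.value = 42
print e.value   # -> 42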
A:
Not sure of where to cite it from, but the following statement in regard to access protection is Pythonic canon: "We're all consenting adults here".
Just as Thomas Wouters has stated, a single leading underscore is the idiomatic way of marking an attribute as being a part of the object's internal state. Two underscores just provide name mangling to prevent easy access to the attribute.
After that, you should just expect that the client of your library won't go and shoot themselves in the foot by meddling with the "private" attributes.
Q:
Apache sockets not closing?
I have a web application written using CherryPy, which is run locally on 127.0.0.1:4321. We use mod-rewrite and mod-proxy to have Apache act as a reverse proxy; Apache also handles our SSL encryption and may eventually be used to transfer all of our static content.
This all works just fine for small workloads. However, I recently used urllib2 to write a stress-testing script that would simulate a workload of 100 clients. After some time, each client gets a 503 error from Apache, indicating that Apache cannot connect to 127.0.0.1:4321. CherryPy is functioning properly, but my Apache error log reveals lines like the following:
[Thu Oct 02 12:55:44 2008] [error] (OS 10048)Only one usage of each socket address (protocol/network address/port) is normally permitted. : proxy: HTTP: attempt to connect to 127.0.0.1:4321 (*) failed
Googling for this error reveals that Apache has probably run out of socket file descriptors. Since I only have 100 clients running, this implies that the connections are not being closed, either between my urllib2 connection and Apache (I am definitely calling .close() on the return value of urlopen), or between Apache and CherryPy.
I've confirmed that my urllib2 request is sending an HTTP Connection: close header, although Apache is configured with KeepAlive On if that matters.
In case it matters, I'm using Python 2.5, Apache 2.2, CherryPy 3.0.3, and the server is running on Windows Server 2003.
So what's my next step to stop this problem?
A:
SetEnv proxy-nokeepalive 1 would probably tell you right away if the problem is keepalive between Apache and CP. See the mod_proxy docs for more info.
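For example, something along these lines in your Apache config (adapt to your setup; the backend address mirrors the question):
<Location />
    SetEnv proxy-nokeepalive 1
</Location>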
A:
You might run the netstat command and see if you have a bunch of sockets in the TIME_WAIT state. Depending on your MaxUserPort setting you might be severely limited in the number of ports available to use. In addition, the TcpTimedWaitDelay is usually set to 240 seconds, so any sockets that are used cannot be reused for four minutes.
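On Windows that check might look something like this (exact flags may vary by version):
netstat -an | find /c "TIME_WAIT"
which counts the sockets stuck in TIME_WAIT. MaxUserPort and TcpTimedWaitDelay are registry values under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.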
There's more good information here --> http://smallvoid.com/article/winnt-tcpip-max-limit.html
| Apache sockets not closing? | I have a web application written using CherryPy, which is run locally on 127.0.0.1:4321. We use mod-rewrite and mod-proxy to have Apache act as a reverse proxy; Apache also handles our SSL encryption and may eventually be used to transfer all of our static content.
This all works just fine for small workloads. However, I recently used urllib2 to write a stress-testing script that would simulate a workload of 100 clients. After some time, each client gets a 503 error from Apache, indicating that Apache cannot connect to 127.0.0.1:4321. CherryPy is functioning properly, but my Apache error log reveals lines like the following:
[Thu Oct 02 12:55:44 2008] [error] (OS 10048)Only one usage of each socket address (protocol/network address/port) is normally permitted. : proxy: HTTP: attempt to connect to 127.0.0.1:4321 (*) failed
Googling for this error reveals that Apache has probably run out of socket file descriptors. Since I only have 100 clients running, this implies that the connections are not being closed, either between my urllib2 connection and Apache (I am definitely calling .close() on the return value of urlopen), or between Apache and CherryPy.
I've confirmed that my urllib2 request is sending an HTTP Connection: close header, although Apache is configured with KeepAlive On if that matters.
In case it matters, I'm using Python 2.5, Apache 2.2, CherryPy 3.0.3, and the server is running on Windows Server 2003.
So what's my next step to stop this problem?
| [
"SetEnv proxy-nokeepalive 1 would probably tell you right away if the problem is keepalive between Apache and CP. See the mod_proxy docs for more info.\n",
"You might run the netstat command and see if you have a bunch of sockets in the TIME_WAIT state. Depending on your MaxUserPort setting you might be severly limited in the number of ports available to use. In addition the TcpTimedWaitDelay is usually set to 240 seconds so any sockets that are used cannot be reused for four minutes.\nThere's more good information here --> http://smallvoid.com/article/winnt-tcpip-max-limit.html\n"
] | [
6,
5
] | [] | [] | [
"apache",
"cherrypy",
"mod_proxy",
"python",
"urllib2"
] | stackoverflow_0000163603_apache_cherrypy_mod_proxy_python_urllib2.txt |
Q:
How do I get the key value of a db.ReferenceProperty without a database hit?
Is there a way to get the key (or id) value of a db.ReferenceProperty, without dereferencing the actual entity it points to? I have been digging around - it looks like the key is stored as the property name preceded with an _, but I have been unable to get any code working. Examples would be much appreciated. Thanks.
EDIT: Here is what I have unsuccessfully tried:
class Comment(db.Model):
series = db.ReferenceProperty(reference_class=Series);
def series_id(self):
return self._series
And in my template:
<a href="games/view-series.html?series={{comment.series_id}}#comm{{comment.key.id}}">more</a>
The result:
<a href="games/view-series.html?series=#comm59">more</a>
A:
Actually, the way that you are advocating accessing the key for a ReferenceProperty might well not exist in the future. Attributes that begin with '_' in Python are generally accepted to be "protected": code that is closely bound to the implementation may use them, but that code must be updated whenever the implementation changes.
However, there is a way through the public interface that you can access the key for your reference-property so that it will be safe in the future. I'll revise the above example:
class Comment(db.Model):
series = db.ReferenceProperty(reference_class=Series);
def series_id(self):
return Comment.series.get_value_for_datastore(self)
When you access properties via the class they are associated with, you get the property object itself, which has a public method that can get the underlying values.
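So a caller could then do, illustratively (comment_key here is assumed to be an existing entity key):
comment = Comment.get(comment_key)
series_key = comment.series_id()   # a db.Key -- no datastore fetch
series_id = series_key.id()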
A:
You're correct - the key is stored as the property name prefixed with '_'. You should just be able to access it directly on the model object. Can you demonstrate what you're trying? I've used this technique in the past with no problems.
Edit: Have you tried calling series_id() directly, or referencing _series in your template directly? I'm not sure whether Django automatically calls methods with no arguments if you specify them in this context. You could also try putting the @property decorator on the method.
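The decorator suggestion would look something like this (untested sketch, reusing the question's model):
class Comment(db.Model):
    series = db.ReferenceProperty(reference_class=Series)

    @property
    def series_id(self):
        return self._series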
| How do I get the key value of a db.ReferenceProperty without a database hit? | Is there a way to get the key (or id) value of a db.ReferenceProperty, without dereferencing the actual entity it points to? I have been digging around - it looks like the key is stored as the property name preceded with an _, but I have been unable to get any code working. Examples would be much appreciated. Thanks.
EDIT: Here is what I have unsuccessfully tried:
class Comment(db.Model):
series = db.ReferenceProperty(reference_class=Series);
def series_id(self):
return self._series
And in my template:
<a href="games/view-series.html?series={{comment.series_id}}#comm{{comment.key.id}}">more</a>
The result:
<a href="games/view-series.html?series=#comm59">more</a>
| [
"Actually, the way that you are advocating accessing the key for a ReferenceProperty might well not exist in the future. Attributes that begin with '_' in python are generally accepted to be \"protected\" in that things that are closely bound and intimate with its implementation can use them, but things that are updated with the implementation must change when it changes.\nHowever, there is a way through the public interface that you can access the key for your reference-property so that it will be safe in the future. I'll revise the above example:\nclass Comment(db.Model):\n series = db.ReferenceProperty(reference_class=Series);\n\n def series_id(self):\n return Comment.series.get_value_for_datastore(self)\n\nWhen you access properties via the class it is associated, you get the property object itself, which has a public method that can get the underlying values.\n",
"You're correct - the key is stored as the property name prefixed with '_'. You should just be able to access it directly on the model object. Can you demonstrate what you're trying? I've used this technique in the past with no problems.\nEdit: Have you tried calling series_id() directly, or referencing _series in your template directly? I'm not sure whether Django automatically calls methods with no arguments if you specify them in this context. You could also try putting the @property decorator on the method.\n"
] | [
13,
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000141973_google_app_engine_python.txt |
Q:
What is the difference between Ruby and Python versions of "self"?
I've done some Python but have just now started to use Ruby.
I could use a good explanation of the difference between "self" in these two languages.
Obvious on first glance:
Self is not a keyword in Python, but there is a "self-like" value no matter what you call it.
Python methods receive self as an explicit argument, whereas Ruby does not.
Ruby sometimes has methods explicitly defined as part of self using dot notation.
Initial Googling reveals
http://rubylearning.com/satishtalim/ruby_self.html
http://www.ibiblio.org/g2swap/byteofpython/read/self.html
A:
Python is designed to support more than just object-oriented programming. Preserving the same interface between methods and functions lets the two styles interoperate more cleanly.
Ruby was built from the ground up to be object-oriented. Even the literals are objects (evaluate 1.class and you get Fixnum). The language was built such that self is a reserved keyword that returns the current instance wherever you are.
If you're inside an instance method of one of your class, self is a reference to said instance.
If you're in the definition of the class itself (not in a method), self is the class itself:
class C
puts "I am a #{self}"
def instance_method
puts 'instance_method'
end
def self.class_method
puts 'class_method'
end
end
At class definition time, 'I am a C' will be printed.
The straight 'def' defines an instance method, whereas the 'def self.xxx' defines a class method.
c=C.new
c.instance_method
#=> instance_method
C.class_method
#=> class_method
A:
Despite webmat's claim, Guido wrote that explicit self is "not an implementation hack -- it is a semantic device".
The reason for explicit self in method
definition signatures is semantic
consistency. If you write
class C:
    def foo(self, x, y): ...

This really is the same as writing

class C: pass
def foo(self, x, y): ...
C.foo = foo
This was an intentional design decision, not a result of introducing OO behaviour at a later date.
Everything in Python -is- an object, including literals.
See also Why must 'self' be used explicitly in method definitions and calls?
A:
Well, I don't know much about Ruby. But the obvious point about Python's "self" is that it's not a "keyword"; it's just the name of an argument that's sent to your method.
You can use any name you like for this argument. "Self" is just a convention.
For example :
class X :
def __init__(a,val) :
a.x = val
def p(b) :
print b.x
x = X(6)
x.p()
Prints the number 6 on the terminal. In the constructor the self object is actually called a. But in the p() method, it's called b.
Update: In October 2008, Guido pointed out that having an explicit self was also necessary to allow Python decorators to be general enough to work on pure functions, methods or classmethods: http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay.html
A:
self is used only as a convention; you can use spam, bacon or sausage instead of self and get the same result. It's just the first argument passed to bound methods. But stick to using self, since anything else will confuse other people and some editors.
| What is the difference between Ruby and Python versions of "self"? | I've done some Python but have just now started to use Ruby
I could use a good explanation of the difference between "self" in these two languages.
Obvious on first glance:
Self is not a keyword in Python, but there is a "self-like" value no matter what you call it.
Python methods receive self as an explicit argument, whereas Ruby does not.
Ruby sometimes has methods explicitly defined as part of self using dot notation.
Initial Googling reveals
http://rubylearning.com/satishtalim/ruby_self.html
http://www.ibiblio.org/g2swap/byteofpython/read/self.html
| [
"Python is designed to support more than just object-oriented programming. Preserving the same interface between methods and functions lets the two styles interoperate more cleanly.\nRuby was built from the ground up to be object-oriented. Even the literals are objects (evaluate 1.class and you get Fixnum). The language was built such that self is a reserved keyword that returns the current instance wherever you are.\nIf you're inside an instance method of one of your class, self is a reference to said instance. \nIf you're in the definition of the class itself (not in a method), self is the class itself:\nclass C\n puts \"I am a #{self}\"\n def instance_method\n puts 'instance_method'\n end\n def self.class_method\n puts 'class_method'\n end\nend\n\nAt class definition time, 'I am a C' will be printed.\nThe straight 'def' defines an instance method, whereas the 'def self.xxx' defines a class method.\nc=C.new\n\nc.instance_method\n#=> instance_method\nC.class_method\n#=> class_method\n\n",
"Despite webmat's claim, Guido wrote that explicit self is \"not an implementation hack -- it is a semantic device\".\n\nThe reason for explicit self in method\n definition signatures is semantic\n consistency. If you write\nclass C: def foo(self, x, y): ...\nThis really is the same as writing\nclass C: pass\ndef foo(self, x, y): ... C.foo = foo\n\nThis was an intentional design decision, not a result of introducing OO behaviour at a latter date.\nEverything in Python -is- an object, including literals.\nSee also Why must 'self' be used explicitly in method definitions and calls?\n",
"Well, I don't know much about Ruby. But the obvious point about Python's \"self\" is that it's not a \"keyword\" ...it's just the name of an argument that's sent to your method.\nYou can use any name you like for this argument. \"Self\" is just a convention.\nFor example :\nclass X :\n def __init__(a,val) :\n a.x = val\n def p(b) :\n print b.x\n\nx = X(6)\nx.p()\n\nPrints the number 6 on the terminal. In the constructor the self object is actually called a. But in the p() method, it's called b.\nUpdate : In October 2008, Guido pointed out that having an explicit self was also necessary to allow Python decorators to be general enough to work on pure functions, methods or classmethods : http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay.html\n",
"self is used only as a convention, you can use spam, bacon or sausage instead of self and get the same result. It's just the first argument passed to bound methods. But stick to using self as it will confuse others and some editors.\n"
] | [
8,
6,
5,
5
] | [] | [] | [
"language_features",
"python",
"ruby"
] | stackoverflow_0000159990_language_features_python_ruby.txt |
Q:
What is the best way to sample/profile a PyObjC application?
Sampling with Activity Monitor/Instruments/Shark will show stack traces full of C functions for the Python interpreter. It would be helpful to see the corresponding Python symbol names. Is there some DTrace magic that can do that? Python's cProfile module can be useful for profiling individual subtrees of Python calls, but not so much for getting a picture of what's going on with the whole application in response to user events.
A:
The answer is "dtrace", but it won't work on sufficiently old macs.
http://tech.marshallfamily.com.au/archives/python-dtrace-on-os-x-leopard-part-1/
http://tech.marshallfamily.com.au/archives/python-dtrace-on-os-x-leopard-part-2/
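If your Python build ships with DTrace probes (Apple's system Python on Leopard did), a one-liner along these lines can aggregate Python-level calls system-wide (illustrative; in that provider, arg1 is the function name):
sudo dtrace -n 'python*:::function-entry { @[copyinstr(arg1)] = count(); }'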
| What is the best way to sample/profile a PyObjC application? | Sampling with Activity Monitor/Instruments/Shark will show stack traces full of C functions for the Python interpreter. It would be helpful to see the corresponding Python symbol names. Is there some DTrace magic that can do that? Python's cProfile module can be useful for profiling individual subtrees of Python calls, but not so much for getting a picture of what's going on with the whole application in response to user events.
| [
"The answer is \"dtrace\", but it won't work on sufficiently old macs.\nhttp://tech.marshallfamily.com.au/archives/python-dtrace-on-os-x-leopard-part-1/\nhttp://tech.marshallfamily.com.au/archives/python-dtrace-on-os-x-leopard-part-2/\n"
] | [
3
] | [] | [] | [
"cocoa",
"macos",
"pyobjc",
"python"
] | stackoverflow_0000157662_cocoa_macos_pyobjc_python.txt |
Q:
Python object attributes - methodology for access
Suppose I have a class with some attributes. How is it best (in the Pythonic-OOP sense) to access these attributes? Just like obj.attr? Or perhaps write get accessors?
What are the accepted naming styles for such things ?
Edit:
Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one.
A:
With regards to the single and double-leading underscores: both indicate the same concept of 'privateness'. That is to say, people will know the attribute (be it a method or a 'normal' data attribute or anything else) is not part of the public API of the object. People will know that to touch it directly is to invite disaster.
On top of that, the double-leading underscore attributes (but not the single-leading underscore attributes) are name-mangled to make accessing them by accident from subclasses or anywhere else outside the current class less likely. You can still access them, but not as trivially. For example:
>>> class ClassA:
... def __init__(self):
... self._single = "Single"
... self.__double = "Double"
... def getSingle(self):
... return self._single
... def getDouble(self):
... return self.__double
...
>>> class ClassB(ClassA):
... def getSingle_B(self):
... return self._single
... def getDouble_B(self):
... return self.__double
...
>>> a = ClassA()
>>> b = ClassB()
You can now trivially access a._single and b._single and get the _single attribute created by ClassA:
>>> a._single, b._single
('Single', 'Single')
>>> a.getSingle(), b.getSingle(), b.getSingle_B()
('Single', 'Single', 'Single')
But trying to access the __double attribute on the a or b instance directly won't work:
>>> a.__double
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: ClassA instance has no attribute '__double'
>>> b.__double
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: ClassB instance has no attribute '__double'
And though methods defined in ClassA can get at it directly (when called on either instance):
>>> a.getDouble(), b.getDouble()
('Double', 'Double')
Methods defined on ClassB can not:
>>> b.getDouble_B()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in getDouble_B
AttributeError: ClassB instance has no attribute '_ClassB__double'
And right in that error you get a hint about what's happening. The __double attribute name, when accessed inside a class, is being name-mangled to include the name of the class that it is being accessed in. When ClassA tries to access self.__double, it actually turns -- at compile time -- into an access of self._ClassA__double, and likewise for ClassB. (If a method in ClassB were to assign to __double, not included in the code for brevity, it would therefore not touch ClassA's __double but create a new attribute.) There is no other protection of this attribute, so you can still access it directly if you know the right name:
>>> a._ClassA__double, b._ClassA__double
('Double', 'Double')
So why is this a problem?
Well, it's a problem any time you want to inherit and change the behaviour of any code dealing with this attribute. You either have to reimplement everything that touches this double-underscore attribute directly, or you have to guess at the class name and mangle the name manually. The problem gets worse when this double-underscore attribute is actually a method: overriding the method or calling the method in a subclass means doing the name-mangling manually, or reimplementing all the code that calls the method to not use the double-underscore name. Not to mention accessing the attribute dynamically, with getattr(): you will have to manually mangle there, too.
On the other hand, because the attribute is only trivially rewritten, it offers only superficial 'protection'. Any piece of code can still get at the attribute by manually mangling, although that will make their code dependent on the name of your class, and efforts on your side to refactor your code or rename your class (while still keeping the same user-visible name, a common practice in Python) would needlessly break their code. They can also 'trick' Python into doing the name-mangling for them by naming their class the same as yours: notice how there is no module name included in the mangled attribute name. And lastly, the double-underscore attribute is still visible in all attribute lists and all forms of introspection that don't take care to skip attributes starting with a (single) underscore.
So, if you use double-underscore names, use them exceedingly sparingly, as they can turn out quite inconvenient, and never use them for methods or anything else a subclass may ever want to reimplement, override or access directly. And realize that double-leading underscore name-mangling offers no real protection. In the end, using a single leading underscore wins you just as much and gives you less (potential, future) pain. Use a single leading underscore.
A:
The generally accepted way of doing things is just using simple attributes, like so
>>> class MyClass:
... myAttribute = 0
...
>>> c = MyClass()
>>> c.myAttribute
0
>>> c.myAttribute = 1
>>> c.myAttribute
1
If you do find yourself needing to be able to write getters and setters, then what you want to look for is "python class properties" and Ryan Tomayko's article on
Getters/Setters/Fuxors is a great place to start (albeit a little long)
A:
Edit: Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
Single underscore doesn't mean anything special to Python; it is just best practice, telling readers "hey, you probably don't want to access this unless you know what you are doing". Double underscore, however, makes Python mangle the name internally, making it accessible only from the class where it is defined.
Double leading AND trailing underscore denotes a special function, such as __add__ which is called when using the + operator.
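For example, a minimal sketch:
class Money:
    def __init__(self, amount):
        self.amount = amount
    def __add__(self, other):   # Python calls this for the + operator
        return Money(self.amount + other.amount)

total = Money(2) + Money(3)     # total.amount == 5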
Read more in PEP 8, especially the "Naming Conventions" section.
A:
I think most just access them directly, no need for get/set methods.
>>> class myclass:
... x = 'hello'
...
>>>
>>> class_inst = myclass()
>>> class_inst.x
'hello'
>>> class_inst.x = 'world'
>>> class_inst.x
'world'
BTW, you can use the dir() function to see what attributes/methods are attached to your instance:
>>> dir(class_inst)
['__doc__', '__module__', 'x']
Two leading underscores, "__", are used to make an attribute or function private.
For other conventions refer to PEP 08:
http://www.python.org/dev/peps/pep-0008/
A:
Python does not need to define accessors right from the beginning, since converting attributes into properties is quick and painless. See the following for a vivid demonstration:
Recovery from Addiction
A:
There is no real point of doing getter/setters in python, you can't protect stuff anyway and if you need to execute some extra code when getting/setting the property look at the property() builtin (python -c 'help(property)')
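For example, a rough sketch using the decorator spelling of property (names invented here; the setter decorator needs Python 2.6+, older versions use property(fget, fset)):
class Account(object):
    def __init__(self):
        self._balance = 0

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        if value < 0:
            raise ValueError("balance cannot be negative")   # the 'extra code'
        self._balance = value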
| Python object attributes - methodology for access | Suppose I have a class with some attributes. How is it best (in the Pythonic-OOP sense) to access these attributes? Just like obj.attr? Or perhaps write get accessors?
What are the accepted naming styles for such things ?
Edit:
Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one.
| [
"With regards to the single and double-leading underscores: both indicate the same concept of 'privateness'. That is to say, people will know the attribute (be it a method or a 'normal' data attribute or anything else) is not part of the public API of the object. People will know that to touch it directly is to invite disaster.\nOn top of that, the double-leading underscore attributes (but not the single-leading underscore attributes) are name-mangled to make accessing them by accident from subclasses or anywhere else outside the current class less likely. You can still access them, but not as trivially. For example:\n>>> class ClassA:\n... def __init__(self):\n... self._single = \"Single\"\n... self.__double = \"Double\"\n... def getSingle(self):\n... return self._single\n... def getDouble(self):\n... return self.__double\n... \n>>> class ClassB(ClassA):\n... def getSingle_B(self):\n... return self._single\n... def getDouble_B(self):\n... return self.__double\n... \n>>> a = ClassA()\n>>> b = ClassB()\n\nYou can now trivially access a._single and b._single and get the _single attribute created by ClassA:\n>>> a._single, b._single\n('Single', 'Single')\n>>> a.getSingle(), b.getSingle(), b.getSingle_B()\n('Single', 'Single', 'Single')\n\nBut trying to access the __double attribute on the a or b instance directly won't work:\n>>> a.__double\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: ClassA instance has no attribute '__double'\n>>> b.__double\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: ClassB instance has no attribute '__double'\n\nAnd though methods defined in ClassA can get at it directly (when called on either instance):\n>>> a.getDouble(), b.getDouble()\n('Double', 'Double')\n\nMethods defined on ClassB can not:\n>>> b.getDouble_B()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<stdin>\", line 5, in getDouble_B\nAttributeError: ClassB instance has no attribute '_ClassB__double'\n\nAnd right in that error you get a hint about what's happening. The __double attribute name, when accessed inside a class, is being name-mangled to include the name of the class that it is being accessed in. When ClassA tries to access self.__double, it actually turns -- at compiletime -- into an access of self._ClassA__double, and likewise for ClassB. (If a method in ClassB were to assign to __double, not included in the code for brevity, it would therefor not touch ClassA's __double but create a new attribute.) There is no other protection of this attribute, so you can still access it directly if you know the right name:\n>>> a._ClassA__double, b._ClassA__double\n('Double', 'Double')\n\nSo why is this a problem?\nWell, it's a problem any time you want to inherit and change the behaviour of any code dealing with this attribute. You either have to reimplement everything that touches this double-underscore attribute directly, or you have to guess at the class name and mangle the name manually. The problem gets worse when this double-underscore attribute is actually a method: overriding the method or calling the method in a subclass means doing the name-mangling manually, or reimplementing all the code that calls the method to not use the double-underscore name. 
Not to mention accessing the attribute dynamically, with getattr(): you will have to manually mangle there, too.\nOn the other hand, because the attribute is only trivially rewritten, it offers only superficial 'protection'. Any piece of code can still get at the attribute by manually mangling, although that will make their code dependant on the name of your class, and efforts on your side to refactor your code or rename your class (while still keeping the same user-visible name, a common practice in Python) would needlessly break their code. They can also 'trick' Python into doing the name-mangling for them by naming their class the same as yours: notice how there is no module name included in the mangled attribute name. And lastly, the double-underscore attribute is still visible in all attribute lists and all forms of introspection that don't take care to skip attributes starting with a (single) underscore.\nSo, if you use double-underscore names, use them exceedingly sparingly, as they can turn out quite inconvenient, and never use them for methods or anything else a subclass may ever want to reimplement, override or access directly. And realize that double-leading underscore name-mangling offers no real protection. In the end, using a single leading underscore wins you just as much and gives you less (potential, future) pain. Use a single leading underscore.\n",
"The generally accepted way of doing things is just using simple attributes, like so\n>>> class MyClass:\n... myAttribute = 0\n... \n>>> c = MyClass()\n>>> c.myAttribute \n0\n>>> c.myAttribute = 1\n>>> c.myAttribute\n1\n\nIf you do find yourself needing to be able to write getters and setters, then what you want to look for is \"python class properties\" and Ryan Tomayko's article on\nGetters/Setters/Fuxors is a great place to start (albeit a little long)\n",
"\nEdit: Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.\n\nSingle underscore doesn't mean anything special to python, it is just best practice, to tell \"hey you probably don't want to access this unless you know what you are doing\". Double underscore however makes python mangle the name internally making it accessible only from the class where it is defined.\nDouble leading AND trailing underscore denotes a special function, such as __add__ which is called when using the + operator.\nRead more in PEP 8, especially the \"Naming Conventions\" section.\n",
"I think most just access them directly, no need for get/set methods.\n>>> class myclass:\n... x = 'hello'\n...\n>>>\n>>> class_inst = myclass()\n>>> class_inst.x\n'hello'\n>>> class_inst.x = 'world'\n>>> class_inst.x\n'world'\n\nBTW, you can use the dir() function to see what attributes/methods are attached to your instance:\n>>> dir(class_inst)\n['__doc__', '__module__', 'x']\n\nTwo leading underbars, \"__\" are used to make a attribute or function private.\nFor other conventions refer to PEP 08:\nhttp://www.python.org/dev/peps/pep-0008/\n",
"Python does not need to define accessors right from the beginning, since converting attributes into properties is quick and painless. See the following for a vivid demonstration:\nRecovery from Addiction\n",
"There is no real point of doing getter/setters in python, you can't protect stuff anyway and if you need to execute some extra code when getting/setting the property look at the property() builtin (python -c 'help(property)')\n"
] | [
58,
23,
8,
3,
1,
0
] | [
"Some people use getters and setters. Depending on which coding style you use you can name them getSpam and seteggs. But you can also make you attributes readonly or assign only. That's a bit awkward to do. One way is overriding the \n> __getattr__\n\nand \n> __setattr__\n\nmethods.\nEdit:\nWhile my answer is still true, it's not right, as I came to realize. There are better ways to make accessors in python and are not very awkward.\n"
] | [
-2
] | [
"attributes",
"object",
"oop",
"python"
] | stackoverflow_0000165883_attributes_object_oop_python.txt |
Q:
Removing a subset of a dict from within a list
This is really only easy to explain with an example, so to remove the intersection of a list from within a dict I usually do something like this:
a = {1:'', 2:'', 3:'', 4:''}
exclusion = [3, 4, 5]
# have to build up a new list or the iteration breaks
toRemove = []
for var in a.iterkeys():
if var in exclusion:
toRemove.append(var)
for var in toRemove:
del a[var]
This might seem like an unusual example, but it's surprising the number of times I've had to do something like this. Doing this with sets would be much nicer, but I clearly want to retain the 'values' for the dict.
This method is annoying because it requires two loops and an extra array. Is there a cleaner and more efficient way of doing this?
A:
Consider dict.pop:
for key in exclusion:
a.pop(key, None)
The None keeps pop from raising an exception when key isn't a key.
A:
a = dict((key,value) for (key,value) in a.iteritems() if key not in exclusion)
A:
Why not just use the keys method, instead of iterkeys? That way you can do it in one loop because it returns a list, not an iterator.
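In other words, something like this (Python 2, where keys() returns a fresh list, so deleting during the loop is safe):
for var in a.keys():
    if var in exclusion:
        del a[var]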
A:
You could change your exclusion list to a set, then just use intersection to get the overlap.
exclusion = set([3, 4, 5])
for key in exclusion.intersection(a):
del a[key]
| Removing a subset of a dict from within a list | This is really only easy to explain with an example, so to remove the intersection of a list from within a dict I usually do something like this:
a = {1:'', 2:'', 3:'', 4:''}
exclusion = [3, 4, 5]
# have to build up a new list or the iteration breaks
toRemove = []
for var in a.iterkeys():
if var in exclusion:
toRemove.append(var)
for var in toRemove:
del a[var]
This might seem like an unusual example, but it's surprising the number of times I've had to do something like this. Doing this with sets would be much nicer, but I clearly want to retain the 'values' for the dict.
This method is annoying because it requires two loops and an extra array. Is there a cleaner and more efficient way of doing this?
| [
"Consider dict.pop:\nfor key in exclusion:\n a.pop(key, None)\n\nThe None keeps pop from raising an exception when key isn't a key.\n",
"a = dict((key,value) for (key,value) in a.iteritems() if key not in exclusion)\n\n",
"Why not just use the keys method, instead of iterkeys? That way you can do it in one loop because it returns a list, not an iterator.\n",
"You could change your exclusion list to a set, then just use intersection to get the overlap.\nexclusion = set([3, 4, 5])\n\nfor key in exclusion.intersection(a):\n del a[key]\n\n"
] | [
14,
3,
2,
2
] | [] | [] | [
"containers",
"list",
"python"
] | stackoverflow_0000167120_containers_list_python.txt |
Q:
How do I enter a pound sterling character (£) into the Python interactive shell on Mac OS X?
Update: Thanks for the suggestions guys. After further research, I’ve reformulated the question here: Python/editline on OS X: £ sign seems to be bound to ed-prev-word
On Mac OS X I can’t enter a pound sterling sign (£) into the Python interactive shell.
Mac OS X 10.5.5
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17)
European keyboard (£ is shift-3)
When I type “£” (i.e. press shift-3) at an empty Python shell, nothing appears.
If I’ve already typed some characters, e.g.
>>> 1234567890 1234567890 1234567890
... then pressing shift-3 will make the cursor position itself after the most recent space, or the start of the line if there are no spaces left between the cursor and the start of the line.
In a normal bash shell, pressing shift-3 types a “£” as expected.
Any idea how I can type a literal “£” in the Python interactive shell?
A:
Not the best solution, but you could type:
pound = u'\u00A3'
Then you have it in a variable you can use in the rest of your session.
A:
In unicode it is 00A003. With the Unicode escape it would be u'\u00a003'.
Edit:
@ Patrick McElhaney said you might need to use 00A3.
A:
u'\N{pound sign}'
If you are using ipython, put
execute pound = u'\N{pound sign}'
in your ipythonrc file (in "Section: Python code to execute") this way you will always have "pound" defined as the pound symbol in the interactive shell.
A:
I'd imagine that the terminal emulator is eating the keystroke as a control code. Maybe see if it has a config file you can mess around with?
A:
Must be your setup; I can use the £ (also European keyboard) under IDLE or the Python command line just fine (Python 2.5).
edit: I'm using Windows, so maybe it's a problem with how Python works under Mac OS?
| How do I enter a pound sterling character (£) into the Python interactive shell on Mac OS X? | Update: Thanks for the suggestions guys. After further research, I’ve reformulated the question here: Python/editline on OS X: £ sign seems to be bound to ed-prev-word
On Mac OS X I can’t enter a pound sterling sign (£) into the Python interactive shell.
Mac OS X 10.5.5
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17)
European keyboard (£ is shift-3)
When I type “£” (i.e. press shift-3) at an empty Python shell, nothing appears.
If I’ve already typed some characters, e.g.
>>> 1234567890 1234567890 1234567890
... then pressing shift-3 will make the cursor position itself after the most recent space, or the start of the line if there are no spaces left between the cursor and the start of the line.
In a normal bash shell, pressing shift-3 types a “£” as expected.
Any idea how I can type a literal “£” in the Python interactive shell?
| [
"Not the best solution, but you could type:\n pound = u'\\u00A3'\n\nThen you have it in a variable you can use in the rest of your session.\n",
"In unicode it is 00A003. With the Unicode escape it would be u'\\u00a003'. \nEdit:\n@ Patrick McElhaney said you might need to use 00A3.\n",
"u'\\N{pound sign}'\nIf you are using ipython, put\nexecute pound = u'\\N{pound sign}'\nin your ipythonrc file (in \"Section: Python code to execute\") this way you will always have \"pound\" defined as the pound symbol in the interactive shell.\n",
"I'd imagine that the terminal emulator is eating the keystroke as a control code. Maybe see if it has a config file you can mess around with?\n",
"Must be your setup, I can use the £ (Also european keyboard) under IDLE or the python command line just fine. (python 2.5).\nedit: I'm using windows, so mayby its a problem with the how python works under the mac OS?\n"
] | [
5,
2,
2,
1,
0
] | [] | [] | [
"bash",
"macos",
"python",
"shell",
"terminal"
] | stackoverflow_0000167439_bash_macos_python_shell_terminal.txt |
Q:
Naming conventions in a Python library
I'm implementing a search algorithm (let's call it MyAlg) in a Python package. Since the algorithm is super-duper complicated, the package has to contain an auxiliary class for algorithm options. Currently I'm developing the entire package by myself (and I'm not a programmer); however, I expect 1-2 programmers to join the project later. This would be my first project that will involve external programmers. Thus, in order to make their lives easier, how should I name this class: Options, OptionsMyAlg, MyAlgOptions or anything else?
What would you suggest me to read in this topic except for http://www.joelonsoftware.com/articles/Wrong.html ?
Thank you
Yuri
[cross posted from here: http://discuss.joelonsoftware.com/default.asp?design.4.684669.0 will update the answers in both places]
A:
I suggest you read PEP8 (styleguide for Python code).
A:
Just naming it Options should be fine. The Python standard library generally takes the philosophy that namespaces make it easy and manageable for different packages to have identically named things. For example, open is both a builtin and a function in the os module, several different modules define an Error exception class, and so on.
This is why it's generally considered bad form to say from some_module import * since it makes it unclear to which open your code refers, etc.
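For instance (assuming the named files exist):
import os
fd = os.open('data.bin', os.O_RDONLY)   # the os module's open -> file descriptor
f = open('data.txt')                    # the builtin open -> file object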
A:
If it all fits in one file, name the class Options. Then your users can write:
import myalg
searchOpts = myalg.Options()
searchOpts.whatever()
mySearcher = myalg.SearchAlg(searchOpts)
mySearcher.search("where's waldo?")
Note the Python Style Guide referenced in another answer suggests that packages should be named with all lowercase letters.
| Naming conventions in a Python library | I'm implementing a search algorithm (let's call it MyAlg) in a Python package. Since the algorithm is super-duper complicated, the package has to contain an auxiliary class for algorithm options. Currently I'm developing the entire package by myself (and I'm not a programmer); however, I expect 1-2 programmers to join the project later. This would be my first project that will involve external programmers. Thus, in order to make their lives easier, how should I name this class: Options, OptionsMyAlg, MyAlgOptions or anything else?
What would you suggest me to read in this topic except for http://www.joelonsoftware.com/articles/Wrong.html ?
Thank you
Yuri
[cross posted from here: http://discuss.joelonsoftware.com/default.asp?design.4.684669.0 will update the answers in both places]
| [
"I suggest you read PEP8 (styleguide for Python code).\n",
"Just naming it Options should be fine. The Python standard library generally takes the philosophy that namespaces make it easy and manageable for different packages to have identically named things. For example, open is both a builtin and a function in the os module, several different modules define an Error exception class, and so on.\nThis is why it's generally considered bad form to say from some_module import * since it makes it unclear to which open your code refers, etc.\n",
"If it all fits in one file, name the class Options. Then your users can write:\nimport myalg\n\nsearchOpts = myalg.Options()\nsearchOpts.whatever()\n\nmySearcher = myalg.SearchAlg(searchOpts)\nmySearcher.search(\"where's waldo?\")\n\nNote the Python Style Guide referenced in another answer suggests that packages should be named with all lowercase letters.\n"
] | [
9,
2,
2
] | [] | [] | [
"naming_conventions",
"python"
] | stackoverflow_0000168022_naming_conventions_python.txt |
Q:
Django: How do I create a generic url routing to views?
I have a pretty standard django app, and am wondering how to set the url routing so that I don't have to explicitly map each url to a view.
For example, let's say that I have the following views: Project, Links, Profile, Contact. I'd rather not have my urlpatterns look like this:
(r'^Project/$', 'mysite.app.views.project'),
(r'^Links/$', 'mysite.app.views.links'),
(r'^Profile/$', 'mysite.app.views.profile'),
(r'^Contact/$', 'mysite.app.views.contact'),
And so on. In Pylons, it would be as simple as:
map.connect(':controller/:action/:id')
And it would automatically grab the right controller and function. Is there something similar in Django?
A:
mods = ('Project','Links','Profile','Contact')
urlpatterns = patterns('',
*(('^%s/$'%n, 'mysite.app.views.%s'%n.lower()) for n in mods)
)
A:
Unless you have a really huge number of views, writing them down explicitly is not too bad, from a style perspective.
You can shorten your example, though, by using the prefix argument of the patterns function:
urlpatterns = patterns('mysite.app.views',
(r'^Project/$', 'project'),
(r'^Links/$', 'links'),
(r'^Profile/$', 'profile'),
(r'^Contact/$', 'contact'),
)
A:
You might be able to use a special view function along these lines:
def router(request, function, module):
    from django.http import Http404   # Django's standard 404 exception
    m = __import__(module, globals(), locals(), [function.lower()])
    try:
        return m.__dict__[function.lower()](request)
    except KeyError:
        raise Http404()
and then a urlconf like this:
(r'^(?P<function>.+)/$', router, {"module": 'mysite.app.views'}),
This code is untested but the general idea should work, even though you should remember:
Explicit is better than implicit.
| Django: How do I create a generic url routing to views? | I have a pretty standard django app, and am wondering how to set the url routing so that I don't have to explicitly map each url to a view.
For example, let's say that I have the following views: Project, Links, Profile, Contact. I'd rather not have my urlpatterns look like this:
(r'^Project/$', 'mysite.app.views.project'),
(r'^Links/$', 'mysite.app.views.links'),
(r'^Profile/$', 'mysite.app.views.profile'),
(r'^Contact/$', 'mysite.app.views.contact'),
And so on. In Pylons, it would be as simple as:
map.connect(':controller/:action/:id')
And it would automatically grab the right controller and function. Is there something similar in Django?
| [
"mods = ('Project','Links','Profile','Contact')\n\nurlpatterns = patterns('',\n *(('^%s/$'%n, 'mysite.app.views.%s'%n.lower()) for n in mods)\n)\n\n",
"Unless you have a really huge number of views, writing them down explicitly is not too bad, from a style perspective.\nYou can shorten your example, though, by using the prefix argument of the patterns function:\nurlpatterns = patterns('mysite.app.views',\n (r'^Project/$', 'project'),\n (r'^Links/$', 'links'),\n (r'^Profile/$', 'profile'),\n (r'^Contact/$', 'contact'),\n)\n\n",
"You might be able to use a special view function along these lines:\ndef router(request, function, module):\n m =__import__(module, globals(), locals(), [function.lower()])\n try:\n return m.__dict__[function.lower()](request)\n except KeyError:\n raise Http404()\n\nand then a urlconf like this:\n(r'^(?P<function>.+)/$', router, {\"module\": 'mysite.app.views'}),\n\nThis code is untested but the general idea should work, even though you should remember:\nExplicit is better than implicit.\n"
] | [
5,
5,
5
] | [] | [] | [
"django",
"pylons",
"python"
] | stackoverflow_0000168113_django_pylons_python.txt |
Q:
How can I compress a folder and email the compressed file in Python?
I would like to compress a folder and all its sub-folders/files, and email the zip file as an attachment. What would be the best way to achieve this with Python?
A:
You can use the zipfile module to compress the file using the zip standard, the email module to create the email with the attachment, and the smtplib module to send it - all using only the standard library.
Python - Batteries Included
If you don't feel like programming and would rather ask a question on stackoverflow.org instead, or (as suggested in the comments) left off the homework tag, well, here it is:
import smtplib
import zipfile
import tempfile
from email import encoders
from email.message import Message
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
def send_file_zipped(the_file, recipients, sender='[email protected]'):
zf = tempfile.TemporaryFile(prefix='mail', suffix='.zip')
zip = zipfile.ZipFile(zf, 'w')
zip.write(the_file)
zip.close()
zf.seek(0)
# Create the message
themsg = MIMEMultipart()
themsg['Subject'] = 'File %s' % the_file
themsg['To'] = ', '.join(recipients)
themsg['From'] = sender
themsg.preamble = 'I am not using a MIME-aware mail reader.\n'
msg = MIMEBase('application', 'zip')
msg.set_payload(zf.read())
encoders.encode_base64(msg)
msg.add_header('Content-Disposition', 'attachment',
filename=the_file + '.zip')
themsg.attach(msg)
themsg = themsg.as_string()
# send the message
smtp = smtplib.SMTP()
smtp.connect()
smtp.sendmail(sender, recipients, themsg)
smtp.close()
"""
# alternative to the above 4 lines if you're using gmail
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.login("username", "password")
server.sendmail(sender,recipients,themsg)
server.quit()
"""
With this function, you can just do:
send_file_zipped('result.txt', ['[email protected]'])
You're welcome.
A:
Look at zipfile for compressing a folder and its subfolders.
Look at smtplib for an email client.
A:
You can use zipfile that ships with python, and here you can find an example of sending an email with attachments with the standard smtplib
| How can I compress a folder and email the compressed file in Python? | I would like to compress a folder and all its sub-folders/files, and email the zip file as an attachment. What would be the best way to achieve this with Python?
| [
"You can use the zipfile module to compress the file using the zip standard, the email module to create the email with the attachment, and the smtplib module to send it - all using only the standard library.\nPython - Batteries Included\nIf you don't feel like programming and would rather ask a question on stackoverflow.org instead, or (as suggested in the comments) left off the homework tag, well, here it is:\nimport smtplib\nimport zipfile\nimport tempfile\nfrom email import encoders\nfrom email.message import Message\nfrom email.mime.base import MIMEBase\nfrom email.mime.multipart import MIMEMultipart \n\ndef send_file_zipped(the_file, recipients, sender='[email protected]'):\n zf = tempfile.TemporaryFile(prefix='mail', suffix='.zip')\n zip = zipfile.ZipFile(zf, 'w')\n zip.write(the_file)\n zip.close()\n zf.seek(0)\n\n # Create the message\n themsg = MIMEMultipart()\n themsg['Subject'] = 'File %s' % the_file\n themsg['To'] = ', '.join(recipients)\n themsg['From'] = sender\n themsg.preamble = 'I am not using a MIME-aware mail reader.\\n'\n msg = MIMEBase('application', 'zip')\n msg.set_payload(zf.read())\n encoders.encode_base64(msg)\n msg.add_header('Content-Disposition', 'attachment', \n filename=the_file + '.zip')\n themsg.attach(msg)\n themsg = themsg.as_string()\n\n # send the message\n smtp = smtplib.SMTP()\n smtp.connect()\n smtp.sendmail(sender, recipients, themsg)\n smtp.close()\n\n \"\"\"\n # alternative to the above 4 lines if you're using gmail\n server = smtplib.SMTP_SSL('smtp.gmail.com', 465)\n server.login(\"username\", \"password\")\n server.sendmail(sender,recipients,themsg)\n server.quit()\n \"\"\"\n\nWith this function, you can just do:\nsend_file_zipped('result.txt', ['[email protected]'])\n\nYou're welcome.\n",
"Look at zipfile for compressing a folder and it's subfolders.\nLook at smtplib for an email client.\n",
"You can use zipfile that ships with python, and here you can find an example of sending an email with attachments with the standard smtplib\n"
] | [
19,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000169362_python.txt |
Q:
Initializing cherrypy.session early
I love CherryPy's API for sessions, except for one detail. Instead of saying cherrypy.session["spam"] I'd like to be able to just say session["spam"].
Unfortunately, I can't simply have a global from cherrypy import session in one of my modules, because the cherrypy.session object isn't created until the first time a page request is made. Is there some way to get CherryPy to initialize its session object immediately instead of on the first page request?
I have two ugly alternatives if the answer is no:
First, I can do something like this
def import_session():
global session
while not hasattr(cherrypy, "session"):
sleep(0.1)
session = cherrypy.session
Thread(target=import_session).start()
This feels like a big kludge, but I really hate writing cherrypy.session["spam"] every time, so to me it's worth it.
My second solution is to do something like
class SessionKludge:
def __getitem__(self, name):
return cherrypy.session[name]
def __setitem__(self, name, val):
cherrypy.session[name] = val
session = SessionKludge()
but this feels like an even bigger kludge and I'd need to do more work to implement the other dictionary functions such as .get
So I'd definitely prefer a simple way to initialize the object myself. Does anyone know how to do this?
A:
For CherryPy 3.1, you would need to find the right subclass of Session, run its 'setup' classmethod, and then set cherrypy.session to a ThreadLocalProxy. That all happens in cherrypy.lib.sessions.init, in the following chunks:
# Find the storage class and call setup (first time only).
storage_class = storage_type.title() + 'Session'
storage_class = globals()[storage_class]
if not hasattr(cherrypy, "session"):
if hasattr(storage_class, "setup"):
storage_class.setup(**kwargs)
# Create cherrypy.session which will proxy to cherrypy.serving.session
if not hasattr(cherrypy, "session"):
cherrypy.session = cherrypy._ThreadLocalProxy('session')
Reducing (replace FileSession with the subclass you want):
FileSession.setup(**kwargs)
cherrypy.session = cherrypy._ThreadLocalProxy('session')
The "kwargs" consist of "timeout", "clean_freq", and any subclass-specific entries from tools.sessions.* config.
| Initializing cherrypy.session early | I love CherryPy's API for sessions, except for one detail. Instead of saying cherrypy.session["spam"] I'd like to be able to just say session["spam"].
Unfortunately, I can't simply have a global from cherrypy import session in one of my modules, because the cherrypy.session object isn't created until the first time a page request is made. Is there some way to get CherryPy to initialize its session object immediately instead of on the first page request?
I have two ugly alternatives if the answer is no:
First, I can do something like this
def import_session():
global session
while not hasattr(cherrypy, "session"):
sleep(0.1)
session = cherrypy.session
Thread(target=import_session).start()
This feels like a big kludge, but I really hate writing cherrypy.session["spam"] every time, so to me it's worth it.
My second solution is to do something like
class SessionKludge:
def __getitem__(self, name):
return cherrypy.session[name]
def __setitem__(self, name, val):
cherrypy.session[name] = val
session = SessionKludge()
but this feels like an even bigger kludge and I'd need to do more work to implement the other dictionary functions such as .get
So I'd definitely prefer a simple way to initialize the object myself. Does anyone know how to do this?
| [
"For CherryPy 3.1, you would need to find the right subclass of Session, run its 'setup' classmethod, and then set cherrypy.session to a ThreadLocalProxy. That all happens in cherrypy.lib.sessions.init, in the following chunks:\n# Find the storage class and call setup (first time only).\nstorage_class = storage_type.title() + 'Session'\nstorage_class = globals()[storage_class]\nif not hasattr(cherrypy, \"session\"):\n if hasattr(storage_class, \"setup\"):\n storage_class.setup(**kwargs)\n\n# Create cherrypy.session which will proxy to cherrypy.serving.session\nif not hasattr(cherrypy, \"session\"):\n cherrypy.session = cherrypy._ThreadLocalProxy('session')\n\nReducing (replace FileSession with the subclass you want):\nFileSession.setup(**kwargs)\ncherrypy.session = cherrypy._ThreadLocalProxy('session')\n\nThe \"kwargs\" consist of \"timeout\", \"clean_freq\", and any subclass-specific entries from tools.sessions.* config.\n"
] | [
5
] | [] | [] | [
"cherrypy",
"python"
] | stackoverflow_0000168167_cherrypy_python.txt |
Q:
Google App Engine: how can I programmatically access the properties of my Model class?
I have a model class:
class Person(db.Model):
first_name = db.StringProperty(required=True)
last_name = db.StringProperty(required=True)
I have an instance of this class in p, and string s contains the value 'first_name'. I would like to do something like:
print p[s]
and
p[s] = new_value
Both of which result in a TypeError.
Does anybody know how I can achieve what I would like?
A:
If the model class is sufficiently intelligent, it should recognize the standard Python ways of doing this.
Try:
getattr(p, s)
setattr(p, s, new_value)
There is also hasattr available.
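For example, with the Person model above (a minimal sketch):
p = Person(first_name="Ada", last_name="Lovelace")
s = "first_name"

print getattr(p, s)              # Ada
setattr(p, s, "Grace")           # same as p.first_name = "Grace"
print hasattr(p, "middle_name")  # False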
A:
With much thanks to Jim, the exact solution I was looking for is:
p.properties()[s].get_value_for_datastore(p)
To all the other respondents, thank you for your help. I also would have expected the Model class to implement the python standard way of doing this, but for whatever reason, it doesn't.
A:
getattr(p, s)
setattr(p, s, new_value)
A:
Try:
p.model_properties()[s].get_value_for_datastore(p)
See the documentation.
| Google App Engine: how can I programmatically access the properties of my Model class? | I have a model class:
class Person(db.Model):
first_name = db.StringProperty(required=True)
last_name = db.StringProperty(required=True)
I have an instance of this class in p, and string s contains the value 'first_name'. I would like to do something like:
print p[s]
and
p[s] = new_value
Both of which result in a TypeError.
Does anybody know how I can achieve what I would like?
| [
"If the model class is sufficiently intelligent, it should recognize the standard Python ways of doing this.\nTry:\ngetattr(p, s)\nsetattr(p, s, new_value)\n\nThere is also hasattr available.\n",
"With much thanks to Jim, the exact solution I was looking for is:\np.properties()[s].get_value_for_datastore(p)\n\nTo all the other respondents, thank you for your help. I also would have expected the Model class to implement the python standard way of doing this, but for whatever reason, it doesn't.\n",
"getattr(p, s)\nsetattr(p, s, new_value)\n\n",
"Try:\np.model_properties()[s].get_value_for_datastore(p)\n\nSee the documentation.\n"
] | [
7,
3,
1,
1
] | [
"p.first_name = \"New first name\"\np.put()\nor p = Person(first_name = \"Firsty\",\n last_name = \"Lasty\" )\np.put()\n"
] | [
-1
] | [
"google_app_engine",
"python",
"string"
] | stackoverflow_0000091821_google_app_engine_python_string.txt |
Q:
Issue With Python Sockets: How To Get Reliably POSTed data whatever the browser?
I wrote the small Python+Ajax programs listed at the end, using the socket module, to study the COMET concept of asynchronous communications.
The idea is to allow browsers to send messages to each other in real time via my Python program.
The trick is to leave the "GET messages/..." connection open, waiting for a message to answer back.
My problem is mainly the reliability of what I receive via socket.recv...
When I POST from Firefox, it is working well.
When I POST from Chrome or IE, the "data" I get in Python is empty.
Does anybody know about this problem between browsers?
Are some browsers injecting EOF or other characters that break the recv call?
Is there any solution known to this problem?
The server.py in Python:
import socket
connected={}
def inRequest(text):
content=''
if text[0:3]=='GET':
method='GET'
else:
method='POST'
k=len(text)-1
while k>0 and text[k]!='\n' and text[k]!='\r':
k=k-1
content=text[k+1:]
text=text[text.index(' ')+1:]
url=text[:text.index(' ')]
return {"method":method,"url":url,"content":content}
mySocket = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
mySocket.bind ( ( '', 80 ) )
mySocket.listen ( 10 )
while True:
channel, details = mySocket.accept()
data=channel.recv(4096)
req=inRequest(data)
url=req["url"]
if url=="/client.html" or url=="/clientIE.html":
f=open('C:\\async\\'+url)
channel.send ('HTTP/1.1 200 OK\n\n'+f.read())
f.close()
channel.close()
elif '/messages' in url:
if req["method"]=='POST':
target=url[10:]
if target in connected:
connected[target].send("HTTP/1.1 200 OK\n\n"+req["content"])
print req["content"]+" sent to "+target
connected[target].close()
channel.close()
elif req["method"]=='GET':
user=url[10:]
connected[user]=channel
print user+' is connected'
The client.html in HTML+Javascript:
<html>
<head>
<script>
var user=''
function post(el) {
if (window.XMLHttpRequest) {
var text=el.value;
var req=new XMLHttpRequest();
el.value='';
var target=document.getElementById('to').value
}
else if (window.ActiveXObject) {
var text=el.content;
var req=new ActiveXObject("Microsoft.XMLHTTP");
el.content='';
}
else
return;
req.open('POST','messages/'+target,true)
req.send(text);
}
function get(u) {
if (user=='')
user=u.value
var req=new XMLHttpRequest()
req.open('GET','messages/'+user,true)
req.onload=function() {
var message=document.createElement('p');
message.innerHTML=req.responseText;
document.getElementById('messages').appendChild(message);
get(user);
}
req.send(null)
}
</script>
</head>
<body>
<span>From</span>
<input id="user"/>
<input type="button" value="sign in" onclick="get(document.getElementById('user'))"/>
<span>To</span>
<input id="to"/>
<span>:</span>
<input id="message"/>
<input type="button" value="post" onclick="post(document.getElementById('message'))"/>
<div id="messages">
</div>
</body>
</html>
A:
The problem you have is that
your TCP socket handling isn't reading as much as it should
your HTTP handling is not complete
I recommend the following reading:
RFC 2616
The sockets Networking API by Stevens
See the example below for a working http server that can process posts
index = '''
<html>
<head>
</head>
<body>
<form action="/" method="POST">
<textarea name="foo"></textarea>
<button type="submit">post</button>
</form>
<h3>data posted</h3>
<div>
%s
</div>
</body>
</html>
'''
bufsize = 4048
import socket
import re
from urlparse import urlparse
class Headers(object):
def __init__(self, headers):
self.__dict__.update(headers)
def __getitem__(self, name):
return getattr(self, name)
def get(self, name, default=None):
return getattr(self, name, default)
class Request(object):
header_re = re.compile(r'([a-zA-Z-]+):? ([^\r]+)', re.M)
def __init__(self, sock):
header_off = -1
data = ''
while header_off == -1:
data += sock.recv(bufsize)
header_off = data.find('\r\n\r\n')
header_string = data[:header_off]
self.content = data[header_off+4:]
lines = self.header_re.findall(header_string)
self.method, path = lines.pop(0)
path, protocol = path.split(' ')
self.headers = Headers(
(name.lower().replace('-', '_'), value)
for name, value in lines
)
if self.method in ['POST', 'PUT']:
content_length = int(self.headers.get('content_length', 0))
while len(self.content) < content_length:
self.content += sock.recv(bufsize)
self.query = urlparse(path)[4]
acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
acceptor.setsockopt(
socket.SOL_SOCKET,
socket.SO_REUSEADDR,
1,
)
acceptor.bind(('', 2501 ))
acceptor.listen(10)
if __name__ == '__main__':
while True:
sock, info = acceptor.accept()
request = Request(sock)
sock.send('HTTP/1.1 200 OK\n\n' + (index % request.content) )
sock.close()
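To exercise the server from another shell, a quick standard-library client can POST to it (a sketch; adjust the port if you changed it):
import urllib
import urllib2

data = urllib.urlencode({'foo': 'hello from a test client'})
response = urllib2.urlopen('http://localhost:2501/', data)
print response.read()  # the index page, echoing the posted form data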
A:
I would recommend using a JS/Ajax library on the client-side just to eliminate the possibility of cross-browser issues with your code. For the same reason I would recommend using a python http server library like SimpleHTTPServer or something from Twisted if the former does not allow low-level control.
Another idea - use something like Wireshark to check what's been sent by the browsers.
A:
Thank you very much Florian, your code is working!!!!
I reused the template and completed the main loop with my COMET mechanism, and it is working much better.
Chrome and Firefox are working perfectly well.
IE still has a problem with the "long GET" system:
when it receives the answer to the GET, it keeps re-executing the loop that prints the messages.
I am investigating this right now.
Here is my updated code for a very basic jQuery+Python cross-browser system.
The Python program, based on Florian's code:
bufsize = 4048
import socket
import re
from urlparse import urlparse
connected={}
class Headers(object):
def __init__(self, headers):
self.__dict__.update(headers)
def __getitem__(self, name):
return getattr(self, name)
def get(self, name, default=None):
return getattr(self, name, default)
class Request(object):
header_re = re.compile(r'([a-zA-Z-]+):? ([^\r]+)', re.M)
def __init__(self, sock):
header_off = -1
data = ''
while header_off == -1:
data += sock.recv(bufsize)
header_off = data.find('\r\n\r\n')
header_string = data[:header_off]
self.content = data[header_off+4:]
furl=header_string[header_string.index(' ')+1:]
self.url=furl[:furl.index(' ')]
lines = self.header_re.findall(header_string)
self.method, path = lines.pop(0)
path, protocol = path.split(' ')
self.headers = Headers(
(name.lower().replace('-', '_'), value)
for name, value in lines
)
if self.method in ['POST', 'PUT']:
content_length = int(self.headers.get('content_length', 0))
while len(self.content) < content_length:
self.content += sock.recv(bufsize)
self.query = urlparse(path)[4]
acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
acceptor.setsockopt(
socket.SOL_SOCKET,
socket.SO_REUSEADDR,
1,
)
acceptor.bind(('', 8007 ))
acceptor.listen(10)
if __name__ == '__main__':
while True:
sock, info = acceptor.accept()
request = Request(sock)
m=request.method
u=request.url[1:]
if m=='GET' and (u=='client.html' or u=='jquery.js'):
f=open('c:\\async\\'+u,'r')
sock.send('HTTP/1.1 200 OK\n\n'+f.read())
f.close()
sock.close()
elif 'messages' in u:
if m=='POST':
target=u[9:]
if target in connected:
connected[target].send("HTTP/1.1 200 OK\n\n"+request.content)
connected[target].close()
sock.close()
elif m=='GET':
user=u[9:]
connected[user]=sock
print user+' is connected'
And the HTML with jQuery, compacted:
<html>
<head>
<style>
input {width:80px;}
span {font-size:12px;}
button {font-size:10px;}
</style>
<script type="text/javascript" src='jquery.js'></script>
<script>
var user='';
function post(el) {$.post('messages/'+$('#to').val(),$('#message').val());}
function get(u) {
if (user=='') user=u.value
$.get('messages/'+user,function(data) { $("<p>"+data+"</p>").appendTo($('#messages'));get(user);});
}
</script>
</head>
<body>
<span>From</span><input id="user"/><button onclick="get(document.getElementById('user'))">log</button>
<span>To</span><input id="to"/>
<span>:</span><input id="message"/><button onclick="post()">post</button>
<div id="messages"></div>
</body>
</html>
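One likely cause of the IE symptom is that IE caches GET responses aggressively, so a re-issued long poll can be answered from cache instead of waiting on the server. A common workaround is to send no-cache headers from the Python side (a sketch against the code above, not verified on IE):
headers = ('HTTP/1.1 200 OK\r\n'
           'Cache-Control: no-cache, no-store\r\n'
           'Pragma: no-cache\r\n'
           '\r\n')
connected[target].send(headers + request.content)  # instead of the bare 200 OK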
| Issue With Python Sockets: How To Get Reliably POSTed data whatever the browser? | I wrote small Python+Ajax programs (listed at the end) with socket module to study the COMET concept of asynchronous communications.
The idea is to allow browsers to send messages real time each others via my python program.
The trick is to let the "GET messages/..." connection opened waiting for a message to answer back.
My problem is mainly on the reliability of what I have via socket.recv...
When I POST from Firefox, it is working well.
When I POST from Chrome or IE, the "data" I get in Python is empty.
Does anybody know about this problem between browsers?
Are some browsers injecting some EOF or else characters killing the receiving of "recv"?
Is there any solution known to this problem?
The server.py in Python:
import socket
connected={}
def inRequest(text):
content=''
if text[0:3]=='GET':
method='GET'
else:
method='POST'
k=len(text)-1
while k>0 and text[k]!='\n' and text[k]!='\r':
k=k-1
content=text[k+1:]
text=text[text.index(' ')+1:]
url=text[:text.index(' ')]
return {"method":method,"url":url,"content":content}
mySocket = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
mySocket.bind ( ( '', 80 ) )
mySocket.listen ( 10 )
while True:
channel, details = mySocket.accept()
data=channel.recv(4096)
req=inRequest(data)
url=req["url"]
if url=="/client.html" or url=="/clientIE.html":
f=open('C:\\async\\'+url)
channel.send ('HTTP/1.1 200 OK\n\n'+f.read())
f.close()
channel.close()
elif '/messages' in url:
if req["method"]=='POST':
target=url[10:]
if target in connected:
connected[target].send("HTTP/1.1 200 OK\n\n"+req["content"])
print req["content"]+" sent to "+target
connected[target].close()
channel.close()
elif req["method"]=='GET':
user=url[10:]
connected[user]=channel
print user+' is connected'
The client.html in HTML+Javascript:
<html>
<head>
<script>
var user=''
function post(el) {
if (window.XMLHttpRequest) {
var text=el.value;
var req=new XMLHttpRequest();
el.value='';
var target=document.getElementById('to').value
}
else if (window.ActiveXObject) {
var text=el.content;
var req=new ActiveXObject("Microsoft.XMLHTTP");
el.content='';
}
else
return;
req.open('POST','messages/'+target,true)
req.send(text);
}
function get(u) {
if (user=='')
user=u.value
var req=new XMLHttpRequest()
req.open('GET','messages/'+user,true)
req.onload=function() {
var message=document.createElement('p');
message.innerHTML=req.responseText;
document.getElementById('messages').appendChild(message);
get(user);
}
req.send(null)
}
</script>
</head>
<body>
<span>From</span>
<input id="user"/>
<input type="button" value="sign in" onclick="get(document.getElementById('user'))"/>
<span>To</span>
<input id="to"/>
<span>:</span>
<input id="message"/>
<input type="button" value="post" onclick="post(document.getElementById('message'))"/>
<div id="messages">
</div>
</body>
</html>
| [
"The problem you have is that\n\nyour tcp socket handling isn't reading as much as it should\nyour http handling is not complete\n\nI recommend the following lectures:\n\nrfc2616\nThe sockets Networking API by Stevens\n\nSee the example below for a working http server that can process posts\nindex = '''\n<html>\n <head>\n </head>\n <body>\n <form action=\"/\" method=\"POST\">\n <textarea name=\"foo\"></textarea>\n <button type=\"submit\">post</button>\n </form>\n <h3>data posted</h3>\n <div>\n %s\n </div>\n </body>\n</html>\n'''\n\nbufsize = 4048\nimport socket\nimport re\nfrom urlparse import urlparse\n\nclass Headers(object):\n def __init__(self, headers):\n self.__dict__.update(headers)\n\n def __getitem__(self, name):\n return getattr(self, name)\n\n def get(self, name, default=None):\n return getattr(self, name, default)\n\nclass Request(object):\n header_re = re.compile(r'([a-zA-Z-]+):? ([^\\r]+)', re.M)\n\n def __init__(self, sock):\n header_off = -1\n data = ''\n while header_off == -1:\n data += sock.recv(bufsize)\n header_off = data.find('\\r\\n\\r\\n')\n header_string = data[:header_off]\n self.content = data[header_off+4:]\n\n lines = self.header_re.findall(header_string)\n self.method, path = lines.pop(0)\n path, protocol = path.split(' ')\n self.headers = Headers(\n (name.lower().replace('-', '_'), value)\n for name, value in lines\n )\n\n if self.method in ['POST', 'PUT']:\n content_length = int(self.headers.get('content_length', 0))\n while len(self.content) < content_length:\n self.content += sock.recv(bufsize)\n\n self.query = urlparse(path)[4]\n\nacceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nacceptor.setsockopt(\n socket.SOL_SOCKET,\n socket.SO_REUSEADDR,\n 1,\n)\nacceptor.bind(('', 2501 ))\nacceptor.listen(10)\n\nif __name__ == '__main__':\n while True:\n sock, info = acceptor.accept()\n request = Request(sock)\n sock.send('HTTP/1.1 200 OK\\n\\n' + (index % request.content) )\n sock.close()\n\n",
"I would recommend using a JS/Ajax library on the client-side just to eliminate the possibility of cross-browser issues with your code. For the same reason I would recommend using a python http server library like SimpleHTTPServer or something from Twisted if the former does not allow low-level control.\nAnother idea - use something like Wireshark to check what's been sent by the browsers.\n",
"Thank you very much Florian, your code is working!!!!\nI reuse the template and complete the main with my COMET mecanism and it is working much better\nChrome and Firefox are working perfectly well\nIE has still a problem with the \"long GET\" system\nWhen it received the answer to the GET it does not stop to re executing the loop to print the messages.\nInvestigating right now the question\nHere is my updated code for very basic JQuery+Python cross browser system.\nThe Python program, based on Florian's code:\nbufsize = 4048\nimport socket\nimport re\nfrom urlparse import urlparse\nconnected={}\nclass Headers(object):\n def __init__(self, headers):\n self.__dict__.update(headers)\n\n def __getitem__(self, name):\n return getattr(self, name)\n\n def get(self, name, default=None):\n return getattr(self, name, default)\n\nclass Request(object):\n header_re = re.compile(r'([a-zA-Z-]+):? ([^\\r]+)', re.M)\n\n def __init__(self, sock):\n header_off = -1\n data = ''\n while header_off == -1:\n data += sock.recv(bufsize)\n header_off = data.find('\\r\\n\\r\\n')\n header_string = data[:header_off]\n self.content = data[header_off+4:]\n furl=header_string[header_string.index(' ')+1:]\n self.url=furl[:furl.index(' ')]\n lines = self.header_re.findall(header_string)\n self.method, path = lines.pop(0)\n path, protocol = path.split(' ')\n self.headers = Headers(\n (name.lower().replace('-', '_'), value)\n for name, value in lines\n )\n if self.method in ['POST', 'PUT']:\n content_length = int(self.headers.get('content_length', 0))\n while len(self.content) < content_length:\n self.content += sock.recv(bufsize)\n self.query = urlparse(path)[4]\n\nacceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nacceptor.setsockopt(\n socket.SOL_SOCKET,\n socket.SO_REUSEADDR,\n 1,\n)\nacceptor.bind(('', 8007 ))\nacceptor.listen(10)\n\nif __name__ == '__main__':\n while True:\n sock, info = acceptor.accept()\n request = Request(sock)\n m=request.method\n u=request.url[1:]\n if m=='GET' and (u=='client.html' or u=='jquery.js'):\n f=open('c:\\\\async\\\\'+u,'r')\n sock.send('HTTP/1.1 200 OK\\n\\n'+f.read())\n f.close()\n sock.close()\n elif 'messages' in u:\n if m=='POST':\n target=u[9:]\n if target in connected:\n connected[target].send(\"HTTP/1.1 200 OK\\n\\n\"+request.content)\n connected[target].close()\n sock.close()\n elif m=='GET':\n user=u[9:]\n connected[user]=sock\n print user+' is connected'\n\nAnd the HTML with Jquery compacted:\n <html>\n<head>\n <style>\n input {width:80px;}\n span {font-size:12px;}\n button {font-size:10px;}\n </style>\n <script type=\"text/javascript\" src='jquery.js'></script>\n <script>\n var user='';\n function post(el) {$.post('messages/'+$('#to').val(),$('#message').val());}\n function get(u) {\n if (user=='') user=u.value\n $.get('messages/'+user,function(data) { $(\"<p>\"+data+\"</p>\").appendTo($('#messages'));get(user);});\n }\n </script>\n</head>\n<body>\n<span>From</span><input id=\"user\"/><button onclick=\"get(document.getElementById('user'))\">log</button>\n<span>To</span><input id=\"to\"/>\n<span>:</span><input id=\"message\"/><button onclick=\"post()\">post</button>\n<div id=\"messages\"></div>\n</body>\n</html>\n\n"
] | [
2,
0,
0
] | [] | [] | [
"comet",
"javascript",
"python",
"sockets"
] | stackoverflow_0000167426_comet_javascript_python_sockets.txt |
Q:
USB Driver Development on a Mac using Python
I would like to write a driver to talk to my Suunto t3 watch in Python on a Mac. My day job is doing basic web work in C# so my familiarity with Python and developing on a Mac is limited.
Can you suggest how one would start doing driver development in general and then more specifically on a Mac. I.e. how to easily see what data is being transmitted to the device? I have Python 2.5 (MacPorts) up and running.
A:
The Mac already has the underlying infrastructure to support USB, so you'll need a Python library that can take advantage of it. For any Python project that needs serial support, whether it's USB, RS-232 or GPIB, I'd recommend the PyVisa library at SourceForge. See http://pyvisa.sourceforge.net/.
If your device doesn't have a VISA driver, you'll have to deal with the USB system directly. You can use another library on SourceForge for that: http://pyusb.berlios.de/
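For a first look at the bus from Python, PyUSB's legacy API can enumerate attached devices (a minimal sketch; the vendor id below is a hypothetical placeholder, not Suunto's real one):
import usb  # PyUSB 0.x legacy API

for bus in usb.busses():
    for dev in bus.devices:
        if dev.idVendor == 0x1234:  # hypothetical vendor id
            print "found device, product id:", hex(dev.idProduct)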
A:
If the watch supports a standard USB device class specification such as HID or serial communication, there might already be a Macintosh driver for it built into the OS. Otherwise, you're going to have to get information about the vendor commands used to communicate with it from one of three sources: the manufacturer; reverse engineering the protocol used by the Windows driver; or from others who have already reverse engineered the protocol in order to support the device on Linux or BSD.
USB is a packet-based bus and it's very important to understand the various transaction types. Reading the USB specification is a good place to start.
You can see what data is being transmitted to the device using a USB bus analyzer, which is an expensive proposition for a hobbyist but is well within the reach of most businesses doing USB development. For example, the Catalyst Conquest is $1199. Another established manufacturer is LeCroy (formerly CATC). There are also software USB analyzers that hook into the OS's USB stack, but they don't show all of the traffic on the bus, and may not be as reliable.
I'm not a Mac expert, so take this paragraph with a grain of salt: Apple has a driver development kit called the I/O Kit, which apparently requires you to write your driver in C++, unless they also have some sort of user-mode driver framework. If you're writing it in Python, it will probably be more like a Python library that interfaces to someone else's (Apple's?) generic USB driver.
| USB Driver Development on a Mac using Python | I would like to write a driver to talk to my Suunto t3 watch in Python on a Mac. My day job is doing basic web work in C# so my familiarity with Python and developing on a Mac is limited.
Can you suggest how one would start doing driver development in general and then more specifically on a Mac. I.e. how to easily see what data is being transmitted to the device? I have Python 2.5 (MacPorts) up and running.
| [
"The Mac already has the underlying infrastructure to support USB, so you'll need a Python library that can take advantage of it. For any Python project that needs serial support, whether it's USB, RS-232 or GPIB, I'd recommend the PyVisa library at SourceForge. See http://pyvisa.sourceforge.net/.\nIf your device doesn't have a VISA driver, you'll have to deal with the USB system directly. You can use another library on SourceForge for that: http://pyusb.berlios.de/\n",
"If the watch supports a standard USB device class specification such as HID or serial communication, there might already be a Macintosh driver for it built into the OS. Otherwise, you're going to have to get information about the vendor commands used to communicate with it from one of three sources: the manufacturer; reverse engineering the protocol used by the Windows driver; or from others who have already reverse engineered the protocol in order to support the device on Linux or BSD.\nUSB is a packet-based bus and it's very important to understand the various transaction types. Reading the USB specification is a good place to start.\nYou can see what data is being transmitted to the device using a USB bus analyzer, which is an expensive proposition for a hobbyist but is well within the reach of most businesses doing USB development. For example, the Catalyst Conquest is $1199. Another established manufacturer is LeCroy (formerly CATC). There are also software USB analyzers that hook into the OS's USB stack, but they don't show all of the traffic on the bus, and may not be as reliable.\nI'm not a Mac expert, so take this paragraph with a grain of salt: Apple has a driver development kit called the I/O Kit, which apparently requires you to write your driver in C++, unless they also have some sort of user-mode driver framework. If you're writing it in Python, it will probably be more like a Python library that interfaces to someone else's (Apple's?) generic USB driver.\n"
] | [
4,
3
] | [] | [] | [
"drivers",
"macos",
"python",
"usb"
] | stackoverflow_0000170278_drivers_macos_python_usb.txt |
Q:
How would I implement a bit map?
I wish to implement a 2d bit map class in Python. The class would have the following requirements:
Allow the creating of arbitrarily sized 2d bitmaps. i.e. to create an 8 x 8 bitmap (8 bytes), something like:
bitmap = Bitmap(8,8)
provide an API to access the bits in this 2d map as boolean or even integer values, i.e.:
if bitmap[1, 2] or bitmap.get(0, 1)
Able to retrieve the data as packed Binary data. Essentially it would be each row of the bit map concatenated and returned as Binary data. It may be padded to the nearest byte or something similar.
bitmap.data()
Be able to create new maps from the binary data retrieved:
new_bitmap = Bitmap(8, 8, bitmap.data())
I know Python is able to perform binary operations, but I'd like some suggestions as how best to use them to implement this class.
A:
Bit-Packing numpy ( SciPY ) arrays does what you are looking for.
The example packs a 2x2x3 Boolean array into four 8-bit bytes. unpackbits expands uint8 arrays back into arrays of 0s and 1s that you can use in computations.
>>> a = np.array([[[1,0,1],
... [0,1,0]],
... [[1,1,0],
... [0,0,1]]])
>>> b = np.packbits(a,axis=-1)
>>> b
array([[[160],[64]],[[192],[32]]], dtype=uint8)
If you need 1-bit pixel images, PIL is the place to look.
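A quick round trip shows the idea (a minimal sketch):
import numpy as np

bits = np.array([[1, 0, 1, 1, 0, 0, 0, 1]], dtype=np.uint8)
packed = np.packbits(bits, axis=-1)
print packed                          # [[177]], i.e. 0b10110001
print np.unpackbits(packed, axis=-1)  # recovers the original 0/1 row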
A:
No need to create this yourself.
Use the very good Python Imaging Library (PIL)
| How would I implement a bit map? | I wish to implement a 2d bit map class in Python. The class would have the following requirements:
Allow the creating of arbitrarily sized 2d bitmaps. i.e. to create an 8 x 8 bitmap (8 bytes), something like:
bitmap = Bitmap(8,8)
provide an API to access the bits in this 2d map as boolean or even integer values, i.e.:
if bitmap[1, 2] or bitmap.get(0, 1)
Able to retrieve the data as packed Binary data. Essentially it would be each row of the bit map concatenated and returned as Binary data. It may be padded to the nearest byte or something similar.
bitmap.data()
Be able to create new maps from the binary data retrieved:
new_bitmap = Bitmap(8, 8, bitmap.data())
I know Python is able to perform binary operations, but I'd like some suggestions as how best to use them to implement this class.
| [
"Bit-Packing numpy ( SciPY ) arrays does what you are looking for.\nThe example shows 4x3 bit (Boolean) array packed into 4 8-bit bytes. unpackbits unpacks uint8 arrays into a Boolean output array that you can use in computations.\n>>> a = np.array([[[1,0,1],\n... [0,1,0]],\n... [[1,1,0],\n... [0,0,1]]])\n>>> b = np.packbits(a,axis=-1)\n>>> b\narray([[[160],[64]],[[192],[32]]], dtype=uint8)\n\nIf you need 1-bit pixel images, PIL is the place to look.\n",
"No need to create this yourself.\nUse the very good Python Imaging Library (PIL)\n"
] | [
7,
4
] | [] | [] | [
"class",
"python"
] | stackoverflow_0000171512_class_python.txt |
Q:
Emulation of lex like functionality in Perl or Python
Here's the deal. Is there a way to have strings tokenized in a line based on multiple regexes?
One example:
I have to get all href tags, their corresponding text and some other text based on a different regex.
So I have 3 expressions and would like to tokenize the line and extract tokens of text matching every expression.
I have actually done this using flex (not to be confused with Adobe's), which is an implementation of the good old lex. lex provides
an elegant way to do this by executing "actions" based on expressions. One can also control the way
lex reads a file (block- or line-based reads).
The problem is that flex produces C/C++ code which actually does the tokenizing job. I have a
makefile which wraps all these things.
I was wondering if Perl/Python can do the same thing in some way. It's just that I would like to do everything
in a single programming language.
Tokenizing is just one of the things that I want to do as part of my application.
Apart from Perl or Python, can any language (functional ones included) do this?
I did read about PLY and ANTLR here (Parsing, where can I learn about it).
But is there a way to do it naturally in Python itself? Pardon my ignorance, but are these tools used in any popular products / services?
Thank you.
A:
Look at documentation for following modules on CPAN
HTML::TreeBuilder
HTML::TableExtract
and
Parse::RecDescent
I've used these modules to process quite large and complex web-pages.
A:
If you're specifically after parsing links out of web-pages, then Perl's WWW::Mechanize module will figure things out for you in a very elegant fashion. Here's a sample program that grabs the first page of Stack Overflow and parses out all the links, printing their text and corresponding URLs:
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
my $mech = WWW::Mechanize->new;
$mech->get("http://stackoverflow.com/");
$mech->success or die "Oh no! Couldn't fetch stackoverflow.com";
foreach my $link ($mech->links) {
print "* [",$link->text, "] points to ", $link->url, "\n";
}
In the main loop, each $link is a WWW::Mechanize::Link object, so you're not just constrained to getting the text and URL.
All the best,
Paul
A:
Sounds like you really just want to parse HTML, I recommend looking at any of the wonderful packages for doing so:
BeautifulSoup
lxml.html
html5lib
Or! You can use a parser like one of the following:
PyParsing
DParser - A GLR parser with good python bindings.
ANTLR - A recursive descent parser generator that can generate python code.
This example is from the BeautifulSoup Documentation:
from BeautifulSoup import BeautifulSoup, SoupStrainer
import re
links = SoupStrainer('a')
[tag for tag in BeautifulSoup(doc, parseOnlyThese=links)]
# [<a href="http://www.bob.com/">success</a>,
# <a href="http://www.bob.com/plasma">experiments</a>,
# <a href="http://www.boogabooga.net/">BoogaBooga</a>]
linksToBob = SoupStrainer('a', href=re.compile('bob.com/'))
[tag for tag in BeautifulSoup(doc, parseOnlyThese=linksToBob)]
# [<a href="http://www.bob.com/">success</a>,
# <a href="http://www.bob.com/plasma">experiments</a>]
A:
Have you looked at PyParsing?
From their homepage:
Here is a program to parse "Hello, World!" (or any greeting of the form "<salutation>, <addressee>!"):
from pyparsing import Word, alphas
greet = Word( alphas ) + "," + Word( alphas ) + "!" # <-- grammar defined here
hello = "Hello, World!"
print hello, "->", greet.parseString( hello )
The program outputs the following:
Hello, World! -> ['Hello', ',', 'World', '!']
A:
If your problem has anything at all to do with web scraping, I recommend looking at Web::Scraper, which provides easy element selection via XPath or CSS selectors. I have a (German) talk on Web::Scraper, but if you run it through Babelfish or just look at the code samples, it can help you get a quick overview of the syntax.
Hand-parsing HTML is onerous and won't give you much over using one of the premade HTML parsers. If your HTML is of very limited variation, you can get by with clever regular expressions, but if you're already breaking out hard-core parser tools, it sounds as if your HTML is far more irregular than what is sane to parse with regular expressions.
A:
Also check out pQuery; it's a really nice Perlish way of doing this kind of stuff....
use pQuery;
pQuery( 'http://www.perl.com' )->find( 'a' )->each(
sub {
my $pQ = pQuery( $_ );
say $pQ->text, ' -> ', $pQ->toHtml;
}
);
# prints all HTML anchors on www.perl.com
# => link text -> anchor HTML
However if your requirement is beyond HTML/Web then here is the earlier "Hello World!" example in Parse::RecDescent...
use strict;
use warnings;
use Parse::RecDescent;
my $grammar = q{
alpha : /\w+/
sep : /,|\s/
end : '!'
greet : alpha sep alpha end { shift @item; return \@item }
};
my $parse = Parse::RecDescent->new( $grammar );
my $hello = "Hello, World!";
print "$hello -> @{ $parse->greet( $hello ) }";
# => Hello, World! -> Hello , World !
Probably too much of a large hammer to crack this nut ;-)
A:
From perlop:
A useful idiom for lex -like scanners
is /\G.../gc . You can combine
several regexps like this to process a
string part-by-part, doing different
actions depending on which regexp
matched. Each regexp tries to match
where the previous one leaves off.
LOOP:
{
print(" digits"), redo LOOP if /\G\d+\b[,.;]?\s*/gc;
print(" lowercase"), redo LOOP if /\G[a-z]+\b[,.;]?\s*/gc;
print(" UPPERCASE"), redo LOOP if /\G[A-Z]+\b[,.;]?\s*/gc;
print(" Capitalized"), redo LOOP if /\G[A-Z][a-z]+\b[,.;]?\s*/gc;
print(" MiXeD"), redo LOOP if /\G[A-Za-z]+\b[,.;]?\s*/gc;
print(" alphanumeric"), redo LOOP if /\G[A-Za-z0-9]+\b[,.;]?\s*/gc;
print(" line-noise"), redo LOOP if /\G[^A-Za-z0-9]+/gc;
print ". That's all!\n";
}
A:
Modifying Bruno's example to include error checking:
my $input = "...";
while (1) {
if ($input =~ /\G(\w+)/gc) { print "word: '$1'\n"; next }
if ($input =~ /\G(\s+)/gc) { print "whitespace: '$1'\n"; next }
if ($input !~ /\G\z/gc) { print "tokenizing error at character " . pos($input) . "\n" }
print "done!\n"; last;
}
(Note that using scalar //g is unfortunately the one place where you really can't avoid using the $1, etc. variables.)
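For the "naturally in Python" part of the question, the same lex-style loop can be sketched with the standard re module, using an explicit position in place of Perl's \G anchor (the token names here are illustrative):
import re

# ordered (name, pattern) pairs, tried in turn at the current position
rules = [
    ('word', re.compile(r'\w+')),
    ('space', re.compile(r'\s+')),
    ('punct', re.compile(r'[^\w\s]+')),
]

def tokenize(text):
    pos = 0
    while pos < len(text):
        for name, pattern in rules:
            m = pattern.match(text, pos)  # anchored at pos, like /\G.../gc
            if m:
                yield name, m.group()
                pos = m.end()
                break
        else:
            raise ValueError("tokenizing error at character %d" % pos)

for token in tokenize("Hello, World!"):
    print token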
| Emulation of lex like functionality in Perl or Python | Here's the deal. Is there a way to have strings tokenized in a line based on multiple regexes?
One example:
I have to get all href tags, their corresponding text and some other text based on a different regex.
So I have 3 expressions and would like to tokenize the line and extract tokens of text matching every expression.
I have actually done this using flex (not to be confused with Adobe), which is an implementation of the good old lex. lex provides
an elegant way to do this by executing "actions" based on expressions. One can control the way
lex reading a file too (block / line based read).
The problem is that flex actually produces C/ C++ code which actually does the tokenizing job. I have a
make file which wraps all these things.
I was wondering if perl /python can in some way do the same thing. Its just that I would like to do everything
I like in a single programming language itself.
Tokenizing is just one of the things that I want to do as part of my application.
Apart from perl or python can any language (functional also) do this?
I did read about PLY and ANTLR here (Parsing, where can I learn about it).
But is there a way to do it naturally in python itself? pardon my ignorance, but are these tools used in any popular products / services?
Thank you.
| [
"Look at documentation for following modules on CPAN\nHTML::TreeBuilder\nHTML::TableExtract\nand\nParse::RecDescent\nI've used these modules to process quite large and complex web-pages.\n",
"If you're specifically after parsing links out of web-pages, then Perl's WWW::Mechanize module will figure things out for you in a very elegant fashion. Here's a sample program that grabs the first page of Stack Overflow and parses out all the links, printing their text and corresponding URLs:\n#!/usr/bin/perl\nuse strict;\nuse warnings;\nuse WWW::Mechanize;\n\nmy $mech = WWW::Mechanize->new;\n\n$mech->get(\"http://stackoverflow.com/\");\n\n$mech->success or die \"Oh no! Couldn't fetch stackoverflow.com\";\n\nforeach my $link ($mech->links) {\n print \"* [\",$link->text, \"] points to \", $link->url, \"\\n\";\n}\n\nIn the main loop, each $link is a WWW::Mechanize::Link object, so you're not just constrained to getting the text and URL.\nAll the best,\nPaul\n",
"Sounds like you really just want to parse HTML, I recommend looking at any of the wonderful packages for doing so:\n\nBeautifulSoup\nlxml.html\nhtml5lib\n\nOr! You can use a parser like one of the following:\n\nPyParsing\nDParser - A GLR parser with good python bindings.\nANTLR - A recursive decent parser generator that can generate python code.\n\nThis example is from the BeautifulSoup Documentation:\nfrom BeautifulSoup import BeautifulSoup, SoupStrainer\nimport re\n\nlinks = SoupStrainer('a')\n[tag for tag in BeautifulSoup(doc, parseOnlyThese=links)]\n# [<a href=\"http://www.bob.com/\">success</a>, \n# <a href=\"http://www.bob.com/plasma\">experiments</a>, \n# <a href=\"http://www.boogabooga.net/\">BoogaBooga</a>]\n\nlinksToBob = SoupStrainer('a', href=re.compile('bob.com/'))\n[tag for tag in BeautifulSoup(doc, parseOnlyThese=linksToBob)]\n# [<a href=\"http://www.bob.com/\">success</a>, \n# <a href=\"http://www.bob.com/plasma\">experiments</a>]\n\n",
"Have you looked at PyParsing?\nFrom their homepage:\nHere is a program to parse \"Hello, World!\" (or any greeting of the form \", !\"):\nfrom pyparsing import Word, alphas\ngreet = Word( alphas ) + \",\" + Word( alphas ) + \"!\" # <-- grammar defined here\nhello = \"Hello, World!\"\nprint hello, \"->\", greet.parseString( hello )\n\nThe program outputs the following:\nHello, World! -> ['Hello', ',', 'World', '!']\n\n",
"If your problem has anything at all to do with web scraping, I recommend looking at Web::Scraper , which provides easy element selection via XPath respectively CSS selectors. I have a (German) talk on Web::Scraper , but if you run it through babelfish or just look at the code samples, that can help you to get a quick overview of the syntax.\nHand-parsing HTML is onerous and won't give you much over using one of the premade HTML parsers. If your HTML is of very limited variation, you can get by by using clever regular expressions, but if you're already breaking out hard-core parser tools, it sounds as if your HTML is far more regular than what is sane to parse with regular expressions.\n",
"Also check out pQuery it as a really nice Perlish way of doing this kind of stuff....\nuse pQuery;\n\npQuery( 'http://www.perl.com' )->find( 'a' )->each( \n sub {\n my $pQ = pQuery( $_ ); \n say $pQ->text, ' -> ', $pQ->toHtml;\n }\n);\n\n# prints all HTML anchors on www.perl.com\n# => link text -> anchor HTML\n\nHowever if your requirement is beyond HTML/Web then here is the earlier \"Hello World!\" example in Parse::RecDescent...\nuse strict;\nuse warnings;\nuse Parse::RecDescent;\n\nmy $grammar = q{\n alpha : /\\w+/\n sep : /,|\\s/\n end : '!'\n greet : alpha sep alpha end { shift @item; return \\@item }\n};\n\nmy $parse = Parse::RecDescent->new( $grammar );\nmy $hello = \"Hello, World!\";\nprint \"$hello -> @{ $parse->greet( $hello ) }\";\n\n# => Hello, World! -> Hello , World !\n\nProbably too much of a large hammer to crack this nut ;-)\n",
"From perlop:\n\nA useful idiom for lex -like scanners\n is /\\G.../gc . You can combine\n several regexps like this to process a\n string part-by-part, doing different\n actions depending on which regexp\n matched. Each regexp tries to match\n where the previous one leaves off.\n LOOP:\n {\n print(\" digits\"), redo LOOP if /\\G\\d+\\b[,.;]?\\s*/gc;\n print(\" lowercase\"), redo LOOP if /\\G[a-z]+\\b[,.;]?\\s*/gc;\n print(\" UPPERCASE\"), redo LOOP if /\\G[A-Z]+\\b[,.;]?\\s*/gc;\n print(\" Capitalized\"), redo LOOP if /\\G[A-Z][a-z]+\\b[,.;]?\\s*/gc;\n print(\" MiXeD\"), redo LOOP if /\\G[A-Za-z]+\\b[,.;]?\\s*/gc;\n print(\" alphanumeric\"), redo LOOP if /\\G[A-Za-z0-9]+\\b[,.;]?\\s*/gc;\n print(\" line-noise\"), redo LOOP if /\\G[^A-Za-z0-9]+/gc;\n print \". That's all!\\n\";\n }\n\n\n",
"Modifying Bruno's example to include error checking:\nmy $input = \"...\";\nwhile (1) {\n if ($input =~ /\\G(\\w+)/gc) { print \"word: '$1'\\n\"; next }\n if ($input =~ /\\G(\\s+)/gc) { print \"whitespace: '$1'\\n\"; next }\n\n if ($input !~ /\\G\\z/gc) { print \"tokenizing error at character \" . pos($input) . \"\\n\" }\n print \"done!\\n\"; last;\n}\n\n(Note that using scalar //g is unfortunately the one place where you really can't avoid using the $1, etc. variables.)\n"
] | [
8,
7,
5,
3,
2,
2,
1,
0
] | [] | [] | [
"lex",
"parsing",
"perl",
"python"
] | stackoverflow_0000160889_lex_parsing_perl_python.txt |
Q:
Is it possible to pass arguments into event bindings?
I haven't found an answer elsewhere and this doesn't appear to have been asked yet on SO.
When creating an event binding in wxPython, is it possible to pass additional arguments to the event? For example, this is the normal way:
b = wx.Button(self, 10, "Default Button", (20, 20))
self.Bind(wx.EVT_BUTTON, self.OnClick, b)
def OnClick(self, event):
self.log.write("Click! (%d)\n" % event.GetId())
But is it possible to have another argument passed to the method? Such that the method can tell if more than one widget is calling it but still return the same value?
It would greatly reduce copy & pasting the same code but with different callers.
A:
You can always use a lambda or another function to wrap up your method and pass another argument, not WX specific.
b = wx.Button(self, 10, "Default Button", (20, 20))
self.Bind(wx.EVT_BUTTON, lambda event: self.OnClick(event, 'somevalue'), b)
def OnClick(self, event, somearg):
self.log.write("Click! (%d)\n" % event.GetId())
If you're out to reduce the amount of code to type, you might also try a little automatism like:
class foo(whateverwxobject):
def better_bind(self, type, instance, handler, *args, **kwargs):
self.Bind(type, lambda event: handler(event, *args, **kwargs), instance)
def __init__(self):
self.better_bind(wx.EVT_BUTTON, b, self.OnClick, 'somevalue')
A:
The nicest way would be to make a generator of event handlers, e.g.:
def getOnClick(self, additionalArgument):
def OnClick(event):
self.log.write("Click! (%d), arg: %s\n"
% (event.GetId(), additionalArgument))
return OnClick
Now you bind it with:
b = wx.Button(self, 10, "Default Button", (20, 20))
b.Bind(wx.EVT_BUTTON, self.getOnClick('my additional data'))
| Is it possible to pass arguments into event bindings? | I haven't found an answer elsewhere and this doesn't appear to have been asked yet on SO.
When creating an event binding in wxPython, is it possible to pass additional arguments to the event? For example, this is the normal way:
b = wx.Button(self, 10, "Default Button", (20, 20))
self.Bind(wx.EVT_BUTTON, self.OnClick, b)
def OnClick(self, event):
self.log.write("Click! (%d)\n" % event.GetId())
But is it possible to have another argument passed to the method? Such that the method can tell if more than one widget is calling it but still return the same value?
It would greatly reduce copy & pasting the same code but with different callers.
| [
"You can always use a lambda or another function to wrap up your method and pass another argument, not WX specific.\nb = wx.Button(self, 10, \"Default Button\", (20, 20))\n self.Bind(wx.EVT_BUTTON, lambda event: self.OnClick(event, 'somevalue'), b)\ndef OnClick(self, event, somearg):\n self.log.write(\"Click! (%d)\\n\" % event.GetId())\n\nIf you're out to reduce the amount of code to type, you might also try a little automatism like:\nclass foo(whateverwxobject):\n def better_bind(self, type, instance, handler, *args, **kwargs):\n self.Bind(type, lambda event: handler(event, *args, **kwargs), instance)\n\n def __init__(self):\n self.better_bind(wx.EVT_BUTTON, b, self.OnClick, 'somevalue')\n\n",
"The nicest way would be to make a generator of event handlers, e.g.:\ndef getOnClick(self, additionalArgument):\n def OnClick(event):\n self.log.write(\"Click! (%d), arg: %s\\n\" \n % (event.GetId(), additionalArgument))\n return OnClick\n\nNow you bind it with:\nb = wx.Button(self, 10, \"Default Button\", (20, 20))\nb.Bind(wx.EVT_BUTTON, self.getOnClick('my additional data'))\n\n"
] | [
49,
14
] | [] | [] | [
"events",
"python",
"wxpython"
] | stackoverflow_0000173687_events_python_wxpython.txt |
Q:
How do I blink/control Macbook keyboard LEDs programmatically?
Do you know how I can switch on/off (blink) Macbook keyboard led (caps lock,numlock) under Mac OS X (preferably Tiger)?
I've googled for this, but have got no results, so I am asking for help.
I would like to add this feature as notifications (eg. new message received on Adium, new mail received).
I would prefer applescript, python, but if it's impossible, any code would be just fine.
I will appreciate any kind of guidance.
A:
http://googlemac.blogspot.com/2008/04/manipulating-keyboard-leds-through.html
| How do I blink/control Macbook keyboard LEDs programmatically? | Do you know how I can switch on/off (blink) Macbook keyboard led (caps lock,numlock) under Mac OS X (preferably Tiger)?
I've googled for this, but have got no results, so I am asking for help.
I would like to add this feature as notifications (eg. new message received on Adium, new mail received).
I would prefer applescript, python, but if it's impossible, any code would be just fine.
I will appreciate any kind of guidance.
| [
"http://googlemac.blogspot.com/2008/04/manipulating-keyboard-leds-through.html\n"
] | [
4
] | [] | [] | [
"blink",
"keyboard",
"macos",
"python"
] | stackoverflow_0000173905_blink_keyboard_macos_python.txt |
Q:
How do you develop against OpenID locally
I'm developing a website (in Django) that uses OpenID to authenticate users. As I'm currently only running on my local machine I can't authenticate using one of the OpenID providers on the web. So I figure I need to run a local OpenID server that simply lets me type in a username and then passes that back to my main app.
Does such an OpenID dev server exist? Is this the best way to go about it?
A:
The libraries at OpenID Enabled ship with examples that are sufficient to run a local test provider. Look in the examples/djopenid/ directory of the python-openid source distribution. Running that will give you an instance of this test provider.
A:
I have no problems testing with myopenid.com. I thought there would be a problem testing on my local machine but it just worked. (I'm using ASP.NET with DotNetOpenId library).
The 'realm' and return url must contain the port number like 'http://localhost:93359'.
I assume it works OK because the provider does a client side redirect.
A:
I'm also looking into this. I too am working on a Django project that might utilize Open Id. For references, check out:
PHPMyId
OpenId's page
Hopefully someone here has tackled this issue.
A:
I'm using phpMyID to authenticate at StackOverflow right now. Generates a standard HTTP auth realm and works perfectly. It should be exactly what you need.
A:
You could probably use the django OpenID library to write a provider to test against. Have one that always authenticates and one that always fails.
A:
Why not run an OpenID provider from your local machine?
If you are a .Net developer there is an OpenID provider library for .Net at Google Code. This uses the standard .Net profile provider mechanism and wraps it with an OpenID layer. We are using it to add OpenID to our custom authentication engine.
If you are working in another language/platform there are a number of OpenID implementation avalaiable from the OpenID community site here.
A:
You shouldn't be having trouble developing against your own machine. What error are you getting?
An OpenID provider will ask you to give your site (in this case http://localhost:8000 or similar) access to your identity. If you click ok then it will redirect you that url. I've never had problems with livejournal and I expect that myopenid.com will work too.
If you're having problems developing locally I suggest that the problem you're having is unrelated to the url being localhost, but something else. Without an error message or problem description it's impossible to say more.
Edit: It turns out that Yahoo do things differently to other OpenID providers that I've come across and disallow redirections to ip address, sites without a correct tld in their domain name and those that run on ports other than 80 or 443. See here for a post from a Yahoo developer on this subject. This post offers a work around, but I would suggest that for development myopenid.com would be far simpler than working around Yahoo, or running your own provider.
| How do you develop against OpenID locally | I'm developing a website (in Django) that uses OpenID to authenticate users. As I'm currently only running on my local machine I can't authenticate using one of the OpenID providers on the web. So I figure I need to run a local OpenID server that simply lets me type in a username and then passes that back to my main app.
Does such an OpenID dev server exist? Is this the best way to go about it?
| [
"The libraries at OpenID Enabled ship with examples that are sufficient to run a local test provider. Look in the examples/djopenid/ directory of the python-openid source distribution. Running that will give you an instance of this test provider.\n",
"I have no problems testing with myopenid.com. I thought there would be a problem testing on my local machine but it just worked. (I'm using ASP.NET with DotNetOpenId library).\nThe 'realm' and return url must contain the port number like 'http://localhost:93359'.\nI assume it works OK because the provider does a client side redirect.\n",
"I'm also looking into this. I too am working on a Django project that might utilize Open Id. For references, check out:\n\nPHPMyId\nOpenId's page\n\nHopefully someone here has tackled this issue.\n",
"I'm using phpMyID to authenticate at StackOverflow right now. Generates a standard HTTP auth realm and works perfectly. It should be exactly what you need.\n",
"You could probably use the django OpenID library to write a provider to test against. Have one that always authenticates and one that always fails.\n",
"Why not run an OpenID provider from your local machine?\nIf you are a .Net developer there is an OpenID provider library for .Net at Google Code. This uses the standard .Net profile provider mechanism and wraps it with an OpenID layer. We are using it to add OpenID to our custom authentication engine.\nIf you are working in another language/platform there are a number of OpenID implementation avalaiable from the OpenID community site here.\n",
"You shouldn't be having trouble developing against your own machine. What error are you getting?\nAn OpenID provider will ask you to give your site (in this case http://localhost:8000 or similar) access to your identity. If you click ok then it will redirect you that url. I've never had problems with livejournal and I expect that myopenid.com will work too.\nIf you're having problems developing locally I suggest that the problem you're having is unrelated to the url being localhost, but something else. Without an error message or problem description it's impossible to say more.\nEdit: It turns out that Yahoo do things differently to other OpenID providers that I've come across and disallow redirections to ip address, sites without a correct tld in their domain name and those that run on ports other than 80 or 443. See here for a post from a Yahoo developer on this subject. This post offers a work around, but I would suggest that for development myopenid.com would be far simpler than working around Yahoo, or running your own provider.\n"
] | [
15,
9,
3,
3,
3,
1,
1
] | [] | [] | [
"django",
"openid",
"python"
] | stackoverflow_0000172040_django_openid_python.txt |
Q:
SQLAlchemy and kinterbasdb in separate apps under mod_wsgi
I'm trying to develop an app using turbogears and sqlalchemy.
There is already an existing app using kinterbasdb directly under mod_wsgi on the same server.
When both apps are used, neither seems to recognize that kinterbasdb is already initialized
Is there something non-obvious I am missing about using sqlalchemy and kinterbasdb in separate apps? In order to make sure only one instance of kinterbasdb gets initialized and both apps use that instance, does anyone have suggestions?
A:
I thought I posted my solution already...
Modifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file
and patching sqlalchemy.databases.firebird.py to check if self.dbapi.initialized is True
before calling self.dbapi.init(... was the only way I could manage to get this scenario up and running.
The SQLAlchemy 0.4.7 patch:
diff -Naur SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py
--- SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py 2008-07-26 12:43:52.000000000 -0400
+++ SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py 2008-10-01 10:51:22.000000000 -0400
@@ -291,7 +291,8 @@
global _initialized_kb
if not _initialized_kb and self.dbapi is not None:
_initialized_kb = True
- self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
+ if not self.dbapi.initialized:
+ self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
return ([], opts)
def create_execution_context(self, *args, **kwargs):
| SQLAlchemy and kinterbasdb in separate apps under mod_wsgi | I'm trying to develop an app using turbogears and sqlalchemy.
There is already an existing app using kinterbasdb directly under mod_wsgi on the same server.
When both apps are used, neither seems to recognize that kinterbasdb is already initialized
Is there something non-obvious I am missing about using sqlalchemy and kinterbasdb in separate apps? In order to make sure only one instance of kinterbasdb gets initialized and both apps use that instance, does anyone have suggestions?
| [
"I thought I posted my solution already...\nModifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file\nand patching sqlalchemy.databases.firebird.py to check if self.dbapi.initialized is True\nbefore calling self.dbapi.init(... was the only way I could manage to get this scenario up and running.\nThe SQLAlchemy 0.4.7 patch:\n\ndiff -Naur SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py\n--- SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py 2008-07-26 12:43:52.000000000 -0400\n+++ SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py 2008-10-01 10:51:22.000000000 -0400\n@@ -291,7 +291,8 @@\n global _initialized_kb\n if not _initialized_kb and self.dbapi is not None:\n _initialized_kb = True\n- self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)\n+ if not self.dbapi.initialized:\n+ self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)\n return ([], opts)\n\n def create_execution_context(self, *args, **kwargs):\n\n\n"
] | [
2
] | [] | [] | [
"kinterbasdb",
"python",
"sqlalchemy"
] | stackoverflow_0000155029_kinterbasdb_python_sqlalchemy.txt |
Q:
SQL Absolute value across columns
I have a table that looks something like this:
word big expensive smart fast
dog 9 -10 -20 4
professor 2 4 40 -7
ferrari 7 50 0 48
alaska 10 0 1 0
gnat -3 0 0 0
The + and - values are associated with the word, so professor is smart and dog is not smart. Alaska is big, as a proportion of the total value associated with its entries, and the opposite is true of gnat.
Is there a good way to get the absolute value of the number farthest from zero, and some token indicating whether the absolute value differs from the value? Relatedly, how might I calculate whether the results for a given value are proportionately large with respect to the other values? I would write something to format the output to the effect of: "dog: not smart, probably not expensive; professor: smart; ferrari: fast, expensive; alaska: big; gnat: probably small." (The formatting is not the question, just an illustration; I am stuck on the underlying queries.)
Also, the rest of the program is python, so if there is any python solution with normal dbapi modules or a more abstract module, any help appreciated.
A:
Words listed by absolute value of big:
select word, big from myTable order by abs(big)
totals for each category:
select sum(abs(big)) as sumbig,
sum(abs(expensive)) as sumexpensive,
sum(abs(smart)) as sumsmart,
sum(abs(fast)) as sumfast
from MyTable;
A:
abs value farthest from zero:
select max(abs(mycol)) from mytbl
and the following is zero when that value is negative:
select mycol+abs(mycol)
  from mytbl
  where abs(mycol)=(select max(abs(mycol)) from mytbl);
A:
The problem seems to be that you mainly want to work within one row, and these type of questions are hard to answer in SQL.
I'd try to turn the structure you mentioned into a more "atomic" fact table like
word property value
either by redesigning the underlying table (if possible and if that makes sense regarding the rest of the application), or by defining a view that does this for you, like
select word, 'big' as property, big as value from soquestion
UNION ALL
select word, 'expensive', expensive from soquestion
UNION ALL
...
This allows you to ask for the max value for each word:
select word, max(value),
(select property from soquestion t2
where t1.word = t2.word and t2.value = max(t1.value))
from soquestion t1
group by word
Still a little awkward, but most logic will be in SQL, not in your programming language of choice.
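If it helps, here is a minimal sqlite3 sketch of that UNION ALL reshaping (rough and illustrative only; the table and column names follow the question, and picking the single extreme row per word is left to the caller):
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('create table soquestion (word, big, expensive, smart, fast)')
cur.execute("insert into soquestion values ('dog', 9, -10, -20, 4)")

# reshape into (word, property, value) rows, biggest magnitude first
cur.execute("""
    select word, property, value from (
        select word, 'big' as property, big as value from soquestion
        union all select word, 'expensive', expensive from soquestion
        union all select word, 'smart', smart from soquestion
        union all select word, 'fast', fast from soquestion
    ) order by word, abs(value) desc
""")
for row in cur.fetchall():
    print row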
A:
Can you use the built-in database aggregate functions like MAX(column)?
A:
Asking the question helped clarify the issue; here is a function that gets more at what I am trying to do. Is there a way to represent some of the stuff in ¶2 above, or a more efficient way to do in SQL or python what I am trying to accomplish in show_distinct?
#!/usr/bin/env python
import sqlite3
conn = sqlite3.connect('so_question.sqlite')
cur = conn.cursor()
cur.execute('create table soquestion (word, big, expensive, smart, fast)')
cur.execute("insert into soquestion values ('dog', 9, -10, -20, 4)")
cur.execute("insert into soquestion values ('professor', 2, 4, 40, -7)")
cur.execute("insert into soquestion values ('ferrari', 7, 50, 0, 48)")
cur.execute("insert into soquestion values ('alaska', 10, 0, 1, 0)")
cur.execute("insert into soquestion values ('gnat', -3, 0, 0, 0)")
cur.execute("select * from soquestion")
all = cur.fetchall()
definition_list = ['word', 'big', 'expensive', 'smart', 'fast']
def show_distinct(db_tuple, def_list=definition_list):
minimum = min(db_tuple[1:])
maximum = max(db_tuple[1:])
if abs(minimum) > maximum:
print db_tuple[0], 'is not', def_list[list(db_tuple).index(minimum)]
elif maximum > abs(minimum):
print db_tuple[0], 'is', def_list[list(db_tuple).index(maximum)]
else:
print 'no distinct value'
for item in all:
show_distinct(item)
Running this gives:
dog is not smart
professor is smart
ferrari is expensive
alaska is big
gnat is not big
>>>
| SQL Absolute value across columns | I have a table that looks something like this:
word big expensive smart fast
dog 9 -10 -20 4
professor 2 4 40 -7
ferrari 7 50 0 48
alaska 10 0 1 0
gnat -3 0 0 0
The + and - values are associated with the word, so professor is smart and dog is not smart. Alaska is big, as a proportion of the total value associated with its entries, and the opposite is true of gnat.
Is there a good way to get the absolute value of the number farthest from zero, and some token indicating whether the absolute value differs from the signed value? Relatedly, how might I calculate whether the results for a given value are proportionately large with respect to the other values? I would write something to format the output to the effect of: "dog: not smart, probably not expensive; professor smart; ferrari: fast, expensive; alaska: big; gnat: probably small." (The formatting is not a question, just an illustration; I am stuck on the underlying queries.)
Also, the rest of the program is python, so if there is any python solution with normal dbapi modules or a more abstract module, any help is appreciated.
| [
"Words listed by absolute value of big:\nselect word, big from myTable order by abs(big)\n\ntotals for each category:\nselect sum(abs(big)) as sumbig, \n sum(abs(expensive)) as sumexpensive, \n sum(abs(smart)) as sumsmart,\n sum(abs(fast)) as sumfast\n from MyTable;\n\n",
"abs value fartherest from zero:\nselect max(abs(mycol)) from mytbl\n\nwill be zero if the value is negative:\nselect n+abs(mycol)\n from zzz\n where abs(mycol)=(select max(abs(mycol)) from mytbl);\n\n",
"The problem seems to be that you mainly want to work within one row, and these type of questions are hard to answer in SQL.\nI'd try to turn the structure you mentioned into a more \"atomic\" fact table like\nword property value\n\neither by redesigning the underlying table (if possible and if that makes sense regarding the rest of the application), or by defining a view that does this for you, like\nselect word, 'big' as property, big as value from soquestion\nUNION ALLL\nselect word, 'expensive', expensive from soquestion\nUNION ALL\n...\n\nThis allows you to ask for the max value for each word:\nselect word, max(value), \n (select property from soquestion t2 \n where t1.word = t2.word and t2.value = max(t1.value))\nfrom soquestion t1\ngroup by word\n\nStill a little awkward, but most logic will be in SQL, not in your programming language of choice.\n",
"Can you use the built-in database aggregate functions like MAX(column)?\n",
"Asking the question helped clarify the issue; here is a function that gets more at what I am trying to do. Is there a way to represent some of the stuff in ¶2 above, or a more efficient way to do in SQL or python what I am trying to accomplish in show_distinct?\n#!/usr/bin/env python\n\nimport sqlite3\n\nconn = sqlite3.connect('so_question.sqlite')\ncur = conn.cursor()\n\ncur.execute('create table soquestion (word, big, expensive, smart, fast)')\ncur.execute(\"insert into soquestion values ('dog', 9, -10, -20, 4)\")\ncur.execute(\"insert into soquestion values ('professor', 2, 4, 40, -7)\")\ncur.execute(\"insert into soquestion values ('ferrari', 7, 50, 0, 48)\")\ncur.execute(\"insert into soquestion values ('alaska', 10, 0, 1, 0)\")\ncur.execute(\"insert into soquestion values ('gnat', -3, 0, 0, 0)\")\n\ncur.execute(\"select * from soquestion\")\nall = cur.fetchall()\n\ndefinition_list = ['word', 'big', 'expensive', 'smart', 'fast']\n\ndef show_distinct(db_tuple, def_list=definition_list):\n minimum = min(db_tuple[1:])\n maximum = max(db_tuple[1:])\n if abs(minimum) > maximum:\n print db_tuple[0], 'is not', def_list[list(db_tuple).index(minimum)]\n elif maximum > abs(minimum):\n print db_tuple[0], 'is', def_list[list(db_tuple).index(maximum)]\n else:\n print 'no distinct value'\n\nfor item in all:\n show_distinct(item)\n\nRunning this gives:\n\n dog is not smart\n professor is smart\n ferrari is expensive\n alaska is big\n gnat is not big\n >>> \n\n"
] | [
3,
3,
1,
0,
0
] | [] | [] | [
"mysql",
"oracle",
"postgresql",
"python",
"sql"
] | stackoverflow_0000177284_mysql_oracle_postgresql_python_sql.txt |
Q:
Accessing python egg's own metadata
I've produced a python egg using setuptools and would like to access its metadata at runtime. I currently have this working:
import pkg_resources
dist = pkg_resources.get_distribution("my_project")
print(dist.version)
but this would probably work incorrectly if I had multiple versions of the same egg installed. And if I have both an installed egg and a development version, then running this code from the development version would pick up the version of the installed egg.
So, how do I get metadata for my egg not some random matching egg installed on my system?
A:
I am somewhat new to Python as well, but from what I understand:
Although you can install multiple versions of the "same" egg (having the same name), only one of them will be available to any particular piece of code at runtime (based on your discovery method). So if your egg is the one calling this code, it must have already been selected as the version of my_project for this code, and your access will be to your own version.
A:
Exactly. So you should only be able to get the information for the currently available egg (singular) of a library. If you have multiple eggs of the same library in your site-packages folder, check the easy-install.pth in the same folder to see which egg is really used :-)
On a side note: This is exactly the point of systems like zc.buildout, which lets you define the exact version of a library that will be made available to you, for example while developing an application or serving a web application. So you can for example use version 1.0 for one project and 1.2 for another.
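To see at runtime which egg actually won, the same pkg_resources API from the question exposes where the active distribution lives (a quick illustrative check):
import pkg_resources

dist = pkg_resources.get_distribution("my_project")
print(dist.version)   # version of the distribution that is active on sys.path
print(dist.location)  # the directory or egg file it was loaded from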
| Accessing python egg's own metadata | I've produced a python egg using setuptools and would like to access its metadata at runtime. I currently have this working:
import pkg_resources
dist = pkg_resources.get_distribution("my_project")
print(dist.version)
but this would probably work incorrectly if I had multiple versions of the same egg installed. And if I have both an installed egg and a development version, then running this code from the development version would pick up the version of the installed egg.
So, how do I get metadata for my egg not some random matching egg installed on my system?
| [
"I am somewhat new to Python as well, but from what I understand: \nAlthough you can install multiple versions of the \"same\" egg (having the same name), only one of them will be available to any particular piece of code at runtime (based on your discovery method). So if your egg is the one calling this code, it must have already been selected as the version of my_project for this code, and your access will be to your own version.\n",
"Exactly. So you should only be able to get the information for the currently available egg (singular) of a library. If you have multiple eggs of the same library in your site-packages folder, check the easy-install.pth in the same folder to see which egg is really used :-)\nOn a site note: This is exactly the point of systems like zc.buildout which lets you define the exact version of a library that will be made available to you for example while developing an application or serving a web application. So you can for example use version 1.0 for one project and 1.2 for another. \n"
] | [
4,
0
] | [] | [] | [
"pkg_resources",
"python",
"setuptools"
] | stackoverflow_0000177910_pkg_resources_python_setuptools.txt |
Q:
Django/Python - Grouping objects by common set from a many-to-many relationships
This is a part algorithm-logic question (how to do it), part implementation question (how to do it best!). I'm working with Django, so I thought I'd share with that.
In Python, it's worth mentioning that the problem is somewhat related to how-do-i-use-pythons-itertoolsgroupby.
Suppose you're given two Django Model-derived classes:
from django.db import models
class Car(models.Model):
mods = models.ManyToManyField(Representative)
and
from django.db import models
class Mods(models.Model):
...
How does one get a list of Cars, grouped by Cars with a common set of Mods?
I.e. I want to get a class like so:
Cars_by_common_mods = [
{ mods: { 'a' }, cars: { 'W1', 'W2' } },
{ mods: { 'a', 'b' }, cars: { 'X1', 'X2', 'X3' }, },
{ mods: { 'b' }, cars: { 'Y1', 'Y2' } },
{ mods: { 'a', 'b', 'c' }, cars: { 'Z1' } },
]
I've been thinking of something like:
def cars_by_common_mods():
cars = Cars.objects.all()
mod_list = []
for car in cars:
        mod_list.append( { 'car': car, 'mods': list(car.mods.all()) } )
ret = []
for key, mods_group in groupby(list(mods), lambda x: set(x.mods)):
ret.append(mods_group)
return ret
However, that doesn't work because (perhaps among other reasons) the groupby doesn't seem to group by the mods sets. I guess the mod_list has to be sorted to work with groupby. All to say, I'm confident there's something simple and elegant out there that will be both enlightening and illuminating.
Cheers & thanks!
A:
Have you tried sorting the list first? The algorithm you proposed should work, albeit with lots of database hits.
import itertools
cars = [
{'car': 'X2', 'mods': [1,2]},
{'car': 'Y2', 'mods': [2]},
{'car': 'W2', 'mods': [1]},
{'car': 'X1', 'mods': [1,2]},
{'car': 'W1', 'mods': [1]},
{'car': 'Y1', 'mods': [2]},
{'car': 'Z1', 'mods': [1,2,3]},
{'car': 'X3', 'mods': [1,2]},
]
cars.sort(key=lambda car: car['mods'])
cars_by_common_mods = {}
for k, g in itertools.groupby(cars, lambda car: car['mods']):
cars_by_common_mods[frozenset(k)] = [car['car'] for car in g]
print cars_by_common_mods
Now, about those queries:
import collections
import itertools
from operator import itemgetter
from django.db import connection
cursor = connection.cursor()
cursor.execute('SELECT car_id, mod_id FROM someapp_car_mod ORDER BY 1, 2')
cars = collections.defaultdict(list)
for row in cursor.fetchall():
cars[row[0]].append(row[1])
# Here's one I prepared earlier, which emulates the sample data we've been working
# with so far, but using the car id instead of the previous string.
cars = {
1: [1,2],
2: [2],
3: [1],
4: [1,2],
5: [1],
6: [2],
7: [1,2,3],
8: [1,2],
}
sorted_cars = sorted(cars.iteritems(), key=itemgetter(1))
cars_by_common_mods = []
for k, g in itertools.groupby(sorted_cars, key=itemgetter(1)):
cars_by_common_mods.append({'mods': k, 'cars': map(itemgetter(0), g)})
print cars_by_common_mods
# Which, for the sample data gives me (reformatted by hand for clarity)
[{'cars': [3, 5], 'mods': [1]},
{'cars': [1, 4, 8], 'mods': [1, 2]},
{'cars': [7], 'mods': [1, 2, 3]},
{'cars': [2, 6], 'mods': [2]}]
Now that you've got your lists of car ids and mod ids, if you need the complete objects to work with, you could do a single query for each to get a complete list for each model and create a lookup dict for those, keyed by their ids - then, I believe, Bob is your proverbial father's brother.
A:
Check regroup. It's only for templates, but I guess this kind of classification belongs to the presentation layer anyway.
A:
You have a few problems here.
You didn't sort your list before calling groupby, and this is required. From itertools documentation:
Generally, the iterable needs to already be sorted on the same key function.
Then, you don't duplicate the list returned by groupby. Again, documentation states:
The returned group is itself an iterator that shares the underlying iterable with
groupby(). Because the source is shared, when the groupby object is advanced, the
previous group is no longer visible. So, if that data is needed later, it should
be stored as a list:
groups = []
uniquekeys = []
for k, g in groupby(data, keyfunc):
groups.append(list(g)) # Store group iterator as a list
uniquekeys.append(k)
And the final mistake is using sets as keys. They don't work here. A quick fix is to cast them to sorted tuples (there could be a better solution, but I cannot think of one right now).
So, in your example, the last part should look like this:
sortMethod = lambda x: tuple(sorted(set(x.mods)))
sortedMods = sorted(list(mods), key=sortMethod)
for key, mods_group in groupby(sortedMods, sortMethod):
ret.append(list(mods_group))
A:
If performance is a concern (i.e. lots of cars on a page, or a high-traffic site), denormalization makes sense, and simplifies your problem as a side effect.
Be aware that denormalizing many-to-many relations might be a bit tricky though. I haven't run into any such code examples yet.
A:
Thank you all for the helpful replies. I've been plugging away at this problem. A 'best' solution still eludes me, but I've some thoughts.
I should mention the statistics of the data-set I'm working with. In 75% of the cases there will be one Mod. In 24% of the cases, two. In 1% of the cases there will be zero, or three or more. For every Mod, there is at least one unique Car, though a Mod may be applied to numerous Cars.
Having said that, I've considered (but not implemented) something like-so:
class ModSet(models.Model):
mods = models.ManyToManyField(Mod)
and change cars to
class Car(models.Model):
modset = models.ForeignKey(ModSet)
It's trivial to group by Car.modset: I can use regroup, as suggested by Javier, for example. It seems a simpler and reasonably elegant solution; thoughts would be much appreciated.
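For what it's worth, a rough sketch of that grouping in Python, assuming the ModSet model above (untested; names follow the models):
from itertools import groupby

cars = Car.objects.select_related().order_by('modset')
grouped = [(modset_id, list(group))
           for modset_id, group in groupby(cars, key=lambda c: c.modset_id)]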
| Django/Python - Grouping objects by common set from a many-to-many relationships | This is a part algorithm-logic question (how to do it), part implementation question (how to do it best!). I'm working with Django, so I thought I'd share with that.
In Python, it's worth mentioning that the problem is somewhat related to how-do-i-use-pythons-itertoolsgroupby.
Suppose you're given two Django Model-derived classes:
from django.db import models
class Car(models.Model):
mods = models.ManyToManyField(Representative)
and
from django.db import models
class Mods(models.Model):
...
How does one get a list of Cars, grouped by Cars with a common set of Mods?
I.e. I want to get a class like so:
Cars_by_common_mods = [
{ mods: { 'a' }, cars: { 'W1', 'W2' } },
{ mods: { 'a', 'b' }, cars: { 'X1', 'X2', 'X3' }, },
{ mods: { 'b' }, cars: { 'Y1', 'Y2' } },
{ mods: { 'a', 'b', 'c' }, cars: { 'Z1' } },
]
I've been thinking of something like:
def cars_by_common_mods():
cars = Cars.objects.all()
mod_list = []
for car in cars:
        mod_list.append( { 'car': car, 'mods': list(car.mods.all()) } )
ret = []
for key, mods_group in groupby(list(mods), lambda x: set(x.mods)):
ret.append(mods_group)
return ret
However, that doesn't work because (perhaps among other reasons) the groupby doesn't seem to group by the mods sets. I guess the mod_list has to be sorted to work with groupby. All to say, I'm confident there's something simple and elegant out there that will be both enlightening and illuminating.
Cheers & thanks!
| [
"Have you tried sorting the list first? The algorithm you proposed should work, albeit with lots of database hits.\nimport itertools\n\ncars = [\n {'car': 'X2', 'mods': [1,2]},\n {'car': 'Y2', 'mods': [2]},\n {'car': 'W2', 'mods': [1]},\n {'car': 'X1', 'mods': [1,2]},\n {'car': 'W1', 'mods': [1]},\n {'car': 'Y1', 'mods': [2]},\n {'car': 'Z1', 'mods': [1,2,3]},\n {'car': 'X3', 'mods': [1,2]},\n]\n\ncars.sort(key=lambda car: car['mods'])\n\ncars_by_common_mods = {}\nfor k, g in itertools.groupby(cars, lambda car: car['mods']):\n cars_by_common_mods[frozenset(k)] = [car['car'] for car in g]\n\nprint cars_by_common_mods\n\nNow, about those queries:\nimport collections\nimport itertools\nfrom operator import itemgetter\n\nfrom django.db import connection\n\ncursor = connection.cursor()\ncursor.execute('SELECT car_id, mod_id FROM someapp_car_mod ORDER BY 1, 2')\ncars = collections.defaultdict(list)\nfor row in cursor.fetchall():\n cars[row[0]].append(row[1])\n\n# Here's one I prepared earlier, which emulates the sample data we've been working\n# with so far, but using the car id instead of the previous string.\ncars = {\n 1: [1,2],\n 2: [2],\n 3: [1],\n 4: [1,2],\n 5: [1],\n 6: [2],\n 7: [1,2,3],\n 8: [1,2],\n}\n\nsorted_cars = sorted(cars.iteritems(), key=itemgetter(1))\ncars_by_common_mods = []\nfor k, g in itertools.groupby(sorted_cars, key=itemgetter(1)):\n cars_by_common_mods.append({'mods': k, 'cars': map(itemgetter(0), g)})\n\nprint cars_by_common_mods\n\n# Which, for the sample data gives me (reformatted by hand for clarity)\n[{'cars': [3, 5], 'mods': [1]},\n {'cars': [1, 4, 8], 'mods': [1, 2]},\n {'cars': [7], 'mods': [1, 2, 3]},\n {'cars': [2, 6], 'mods': [2]}]\n\nNow that you've got your lists of car ids and mod ids, if you need the complete objects to work with, you could do a single query for each to get a complete list for each model and create a lookup dict for those, keyed by their ids - then, I believe, Bob is your proverbial father's brother.\n",
"check regroup. it's only for templates, but i guess this kind of classification belongs to the presentation layer anyway.\n",
"You have a few problems here.\nYou didn't sort your list before calling groupby, and this is required. From itertools documentation:\n\nGenerally, the iterable needs to already be sorted on the same key function.\n\nThen, you don't duplicate the list returned by groupby. Again, documentation states:\n\nThe returned group is itself an iterator that shares the underlying iterable with\n groupby(). Because the source is shared, when the groupby object is advanced, the\n previous group is no longer visible. So, if that data is needed later, it should \n be stored as a list:\ngroups = []\nuniquekeys = []\nfor k, g in groupby(data, keyfunc):\n groups.append(list(g)) # Store group iterator as a list\n uniquekeys.append(k)\n\n\nAnd final mistake is using sets as keys. They don't work here. A quick fix is to cast them to sorted tuples (there could be a better solution, but I cannot think of it now).\nSo, in your example, the last part should look like this:\nsortMethod = lambda x: tuple(sorted(set(x.mods)))\nsortedMods = sorted(list(mods), key=sortMethod)\nfor key, mods_group in groupby(sortedMods, sortMethod):\n ret.append(list(mods_group))\n\n",
"If performance is a concern (i.e. lots of cars on a page, or a high-traffic site), denormalization makes sense, and simplifies your problem as a side effect.\nBe aware that denormalizing many-to-many relations might be a bit tricky though. I haven't run into any such code examples yet.\n",
"Thank you all for the helpful replies. I've been plugging away at this problem. A 'best' solution still eludes me, but I've some thoughts.\nI should mention that the statistics of the data-set I'm working with. In 75% of the cases there will be one Mod. In 24% of the cases, two. In 1% of the cases there will be zero, or three or more. For every Mod, there is at least one unique Car, though a Mod may be applied to numerous Cars.\nHaving said that, I've considered (but not implemented) something like-so:\nclass ModSet(models.Model):\n mods = models.ManyToManyField(Mod)\n\nand change cars to \nclass Car(models.Model):\n modset = models.ForeignKey(ModSet)\n\nIt's trivial to group by Car.modset: I can use regroup, as suggested by Javier, for example. It seems a simpler and reasonably elegant solution; thoughts would be much appreciated.\n"
] | [
4,
2,
1,
1,
0
] | [] | [] | [
"algorithm",
"django",
"puzzle",
"python"
] | stackoverflow_0000160298_algorithm_django_puzzle_python.txt |
Q:
What symmetric cypher to use for encrypting messages?
I haven't a clue about encryption at all. But I need it. How?
Say you have a system of nodes communicating with each other on a network via asynchronous messages. The nodes do not maintain session information about other nodes (this is a design restriction).
Say you want to make sure only your nodes can read the messages being sent. I believe encryption is the solution to that.
Since the nodes are not maintaining a session and communication must work in a stateless, connectionless fashion, I am guessing that asymmetric encryption is ruled out.
So here is what I would like to do:
messages are sent as UDP datagrams
each message contains a timestamp to make messages differ (counter replay attacks)
each message is encrypted with a shared secret symmetric key and sent over the network
other end can decrypt with shared secret symmetric key
Keys can obviously be compromised by compromising any single node. At the same time, in this scenario, access to any single compromised node reveals all interesting information anyway, so the key is not the weakest link.
What cypher should I use for this encryption? What key length?
I would prefer to use something supported by ezPyCrypto.
Assuming, as most point out, I go with AES. What modes should I be using?
I couldn't figure out how to do it with ezPyCrypto, PyCrypto seems to be hung on a moderator swap and Google's Keyczar does not explain how to set this up - I fear if I don't just get it, then I run a risk of introducing insecurity. So barebones would be better. This guy claims to have a nice module for AES in python, but he also asserts that this is his first python project - Although he is probably loads smarter than I, maybe he got tripped up?
EDIT: I moved the search for the python implementation to another question to stop clobber...
A:
Your first thought should be channel security - either SSL/TLS, or IPSec.
Admittedly, these both have a certain amount of setup overhead, IPSec more than SSL/TLS, especially when it comes to PKI etc. - but it more than pays for itself in simplicity of development, reliability, security, and more. Just make sure you're using strong cipher suites, as appropriate to the protocol.
If neither SSL/TLS nor IPSec fits your scenario/environment, your next choice should be AES (aka Rijndael).
Use keys at least 256 bits long, if you want you can go longer.
Keys should be randomly generated, by a cryptographically secure random number generator (and not a simple rnd() call).
Set the cipher mode to CBC.
Use PKCS7 padding.
Generate a unique, crypto-random Initialization Vector (IV).
Don't forget to properly protect and manage your keys, and maybe consider periodic key rotations.
Depending on your data, you may want to also implement a keyed hash, to provide for message integrity - use SHA-256 for hashing.
There are also rare situations where you may want to go with a stream cipher, but that's usually more complicated and I would recommend you avoid it your first time out.
Now, I'm not familiar with ezpycrypto (or really python in general), and can't really state that it supports all this; but everything here is pretty standard and recommended best practice, if your crypto library doesn't support it, I would suggest finding one that does ;-).
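To make that concrete, here is a minimal, illustrative sketch using PyCrypto (the Crypto package) - AES-256 in CBC mode, PKCS7 padding, a fresh random IV per message, and an HMAC-SHA256 tag. Treat it as a starting point under those assumptions, not a vetted implementation:
import os, hmac, hashlib
from Crypto.Cipher import AES

def pkcs7_pad(data, block=16):
    n = block - len(data) % block
    return data + chr(n) * n

def encrypt_message(enc_key, mac_key, plaintext):
    iv = os.urandom(16)                        # fresh random IV per message
    cipher = AES.new(enc_key, AES.MODE_CBC, iv)
    body = iv + cipher.encrypt(pkcs7_pad(plaintext))
    tag = hmac.new(mac_key, body, hashlib.sha256).digest()
    return body + tag                          # IV || ciphertext || MAC

enc_key, mac_key = os.urandom(32), os.urandom(32)  # separate 256-bit keys
packet = encrypt_message(enc_key, mac_key, 'hello node')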
A:
I haven't a clue about encryption at all. But I need it. How?
DANGER! If you don't know much about cryptography, don't try to implement it yourself. Cryptography is hard to get right. There are many, many different ways to break the security of a cryptographic system beyond actually cracking the key (which is usually very hard).
If you just slap a cipher on your streaming data, without careful key management and other understanding of the subtleties of cryptographic systems, you will likely open yourself up to all kinds of vulnerabilities. For example, the scheme you describe will be vulnerable to man-in-the-middle attacks without some specific plan for key distribution among the nodes, and may be vulnerable to chosen-plaintext and/or known-plaintext attacks depending on how your distributed system communicates with the outside world, and the exact choice of cipher and mode of operation.
So... you will have to read up on crypto in general before you can use it securely.
A:
Assuming the use of symmetric crypto, AES should be your default choice, unless you have a very good reason to select otherwise.
There was a long, involved competition to select AES, and the winner was carefully chosen. Even Bruce Schneier, crypto god, has said that the AES winner is a better choice than the algorithm (TwoFish) that he submitted to the competition.
A:
AES 256 is generally the preferred choice, but depending on your location (or your customers' location) you may have legal constraints, and will be forced to use something weaker.
Also note that you should use a random IV for each communication and pass it along with the message (this will also save the need for a timestamp).
If possible, try not to depend on the algorithm, and pass the algorithm along with the message. The node will then look at the header, and decide on the algorithm that will be used for decryption. That way you can easily switch algorithms when a certain deployment calls for it.
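As a toy illustration of that header idea (the layout is invented for this sketch, not any standard):
import struct

ALG_AES_CBC = 1   # made-up algorithm id, just for the example

def pack_message(alg_id, iv, ciphertext):
    # 1-byte algorithm id + 16-byte IV, then the ciphertext
    return struct.pack('B16s', alg_id, iv) + ciphertext

def unpack_message(blob):
    alg_id, iv = struct.unpack('B16s', blob[:17])
    return alg_id, iv, blob[17:]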
A:
I'd probably go for AES.
A:
Asymmetric encryption would work in this scenario as well. Simply have each node publish its public key. Any node that wants to communicate with that node need only encrypt the message with that node's public key. One advantage of using asymmetric keys is that it becomes easier to change and distribute keys -- since the public keys can be distributed openly, each node need only update its public-private key pair and republish. You don't need some protocol for the entire network (or each node pair) to agree on a new symmetric key.
A:
Why not create a VPN among the nodes that must communicate securely?
Then, you don't have to bother coding up your own security solution, and you're not restricted to a static, shared key (which, if compromised, will allow all captured traffic to be decrypted after the fact).
| What symmetric cypher to use for encrypting messages? | I haven't a clue about encryption at all. But I need it. How?
Say you have a system of nodes communicating with each other on a network via asynchronous messages. The nodes do not maintain session information about other nodes (this is a design restriction).
Say you want to make sure only your nodes can read the messages being sent. I believe encryption is the solution to that.
Since the nodes are not maintaining a session and communication must work in a stateless, connectionless fashion, I am guessing that asymmetric encryption is ruled out.
So here is what I would like to do:
messages are sent as UDP datagrams
each message contains a timestamp to make messages differ (counter replay attacks)
each message is encrypted with a shared secret symmetric key and sent over the network
other end can decrypt with shared secret symmetric key
Keys can obviously be compromised by compromising any single node. At the same time, in this scenario, access to any single compromised node reveals all interesting information anyway, so the key is not the weakest link.
What cypher should I use for this encryption? What key length?
I would prefer to use something supported by ezPyCrypto.
Assuming, as most point out, I go with AES. What modes should I be using?
I couldn't figure out how to do it with ezPyCrypto, PyCrypto seems to be hung on a moderator swap and Google's Keyczar does not explain how to set this up - I fear if I don't just get it, then I run a risk of introducing insecurity. So barebones would be better. This guy claims to have a nice module for AES in python, but he also asserts that this is his first python project - Although he is probably loads smarter than I, maybe he got tripped up?
EDIT: I moved the search for the python implementation to another question to stop clobber...
| [
"Your first thought should be channel security - either SSL/TLS, or IPSec.\nAdmittedly, these both have a certain amount of setup overhead, IPSec more than SSL/TLS, especially when it comes to PKI etc. - but it more than pays for itself in simplicity of development, reliability, security, and more. Just make sure you're using strong cipher suites, as appropriate to the protocol.\nIf neither SSL/TLS or IPSec fits your scenario/environment, your next choice should be AES (aka Rijndael).\nUse keys at least 256 bits long, if you want you can go longer.\nKeys should be randomly generated, by a cryptographically secure random number generator (and not a simple rnd() call).\nSet the cipher mode to CBC.\nUse PKCS7 padding.\nGenerate a unique, crypto-random Initialization Vector (IV).\nDon't forget to properly protect and manage your keys, and maybe consider periodic key rotations.\nDepending on your data, you may want to also implement a keyed hash, to provide for message integrity - use SHA-256 for hashing.\nThere are also rare situations where you may want to go with a stream cipher, but thats usually more complicated and I would recommend you avoid it your first time out.\nNow, I'm not familiar ezpycrypto (or really python in general), and cant really state that it supports all this; but everything here is pretty standard and recommended best practice, if your crypto library doesnt support it, I would suggest finding one that does ;-).\n",
"\nI haven't a clue about encryption at all. But I need it. How?\n\nDANGER! If you don't know much about cryptography, don't try to implement it yourself. Cryptography is hard to get right. There are many, many different ways to break the security of a cryptographic system beyond actually cracking the key (which is usually very hard).\nIf you just slap a cipher on your streaming data, without careful key management and other understanding of the subtleties of cryptographic systems, you will likely open yourself up to all kinds of vulnerabilities. For example, the scheme you describe will be vulnerable to man-in-the-middle attacks without some specific plan for key distribution among the nodes, and may be vulnerable to chosen-plaintext and/or known-plaintext attacks depending on how your distributed system communicates with the outside world, and the exact choice of cipher and mode of operation.\nSo... you will have to read up on crypto in general before you can use it securely.\n",
"Assuming the use of symmetric crypto, then AES should be your default choice, unless you have a good very reason to select otherwise. \nThere was a long, involved competition to select AES, and the winner was carefully chosen. Even Bruce Schneier, crypto god, has said that the AES winner is a better choice than the algorithm (TwoFish) that he submitted to the competition.\n",
"AES 256 is generally the preferred choice, but depending on your location (or your customers' location) you may have legal constraints, and will be forced to use something weaker.\nAlso note that you should use a random IV for each communication and pass it along with the message (this will also save the need for a timestamp).\nIf possible, try not to depend on the algorithm, and pass the algorithm along with the message. The node will then look at the header, and decide on the algorithm that will be used for decryption. That way you can easily switch algorithms when a certain deployment calls for it.\n",
"I'd probably go for AES.\n",
"Asymmetric encryption would work in this scenario as well. Simply have each node publish it's public key. Any node that wants to communicate with that node need only encrypt the message with that node's public key. One advantage of using asymmetric keys is that it becomes easier to change and distribute keys -- since the public keys can be distributed openly, each node need only update it's public-private key pair and republish. You don't need some protocol for the entire network (or each node pair) to agree on a new symmetric key.\n",
"Why not create a VPN among the nodes that must communicate securely? \nThen, you don't have to bother coding up your own security solution, and you're not restricted to a static, shared key (which, if compromised, will allow all captured traffic to be decrypted after the fact).\n"
] | [
7,
7,
4,
3,
1,
1,
1
] | [] | [] | [
"encryption",
"python",
"security"
] | stackoverflow_0000172392_encryption_python_security.txt |
Q:
Python, unit-testing and mocking imports
I am in a project where we are starting to refactor some massive code base. One problem that immediately sprang up is that each file imports a lot of other files. How do I mock this in my unit tests in an elegant way, without having to alter the actual code, so that I can start to write unit-tests?
As an example: the file with the functions I want to test imports ten other files, which are part of our software and not python core libs.
I want to be able to run the unit tests as separately as possible, and for now I am only going to test functions that do not depend on things from the files that are being imported.
Thanks for all the answers.
I didn't really know what I wanted to do from the start but now I think I know.
The problem was that some imports were only possible when the whole application was running, because of some third-party auto-magic. So I had to make some stubs for these modules in a directory that I added to sys.path.
Now I can import the file which contains the functions I want to write tests for in my unit-test file without complaints about missing modules.
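A minimal sketch of that stub trick, in case it helps someone else (directory and module names here are placeholders):
import sys, os

# tests/stubs/ holds empty stand-in modules for the third-party imports
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'stubs'))

import module_under_test   # its imports now resolve against the stubs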
A:
If you want to import a module while at the same time ensuring that it doesn't import anything, you can replace the __import__ builtin function.
For example, use this class:
class ImportWrapper(object):
def __init__(self, real_import):
self.real_import = real_import
def wrapper(self, wantedModules):
def inner(moduleName, *args, **kwargs):
if moduleName in wantedModules:
print "IMPORTING MODULE", moduleName
self.real_import(*args, **kwargs)
else:
print "NOT IMPORTING MODULE", moduleName
return inner
def mock_import(self, moduleName, wantedModules):
__builtins__.__import__ = self.wrapper(wantedModules)
try:
__import__(moduleName, globals(), locals(), [], -1)
finally:
__builtins__.__import__ = self.real_import
And in your test code, instead of writing import myModule, write:
wrapper = ImportWrapper(__import__)
wrapper.mock_import('myModule', [])
The second argument to mock_import is a list of module names you do want to import in inner module.
This example can be modified further to e.g. import other module than desired instead of just not importing it, or even mocking the module object with some custom object of your own.
A:
"imports a lot of other files"? Imports a lot of other files that are part of your customized code base? Or imports a lot of other files that are part of the Python distribution? Or imports a lot of other open source project files?
If your imports don't work, you have a "simple" PYTHONPATH problem. Get all of your various project directories onto a PYTHONPATH that you can use for testing. We have a rather complex path, in Windows we manage it like this
@set Part1=c:\blah\blah\blah
@set Part2=c:\some\other\path
@set that=g:\shared\stuff
set PYTHONPATH=%part1%;%part2%;%that%
We keep each piece of the path separate so that we (a) know where things come from and (b) can manage change when we move things around.
Since the PYTHONPATH is searched in order, we can control what gets used by adjusting the order on the path.
Once you have "everything", it becomes a question of trust.
Either
you trust something (i.e., the Python code base) and just import it.
Or
You don't trust something (i.e., your own code) and you
test it separately and
mock it for stand-alone testing.
Would you test the Python libraries? If so, you've got a lot of work. If not, then, you should perhaps only mock out the things you're actually going to test.
A:
If you really want to muck around with the python import mechanism, take a look at the ihooks module. It provides tools for changing the behavior of the __import__ built-in. But it's not clear from your question why you need to do this.
A:
No difficult manipulation is necessary if you want a quick-and-dirty fix before your unit-tests.
If the unit tests are in the same file as the code you wish to test, simply delete the unwanted module from the globals() dictionary.
Here is a rather lengthy example: suppose you have a module impp.py with contents:
value = 5
Now, in your test file you can write:
>>> import impp
>>> def printVal():
...     print impp.value
>>> print globals().keys()
['printVal', '__builtins__', '__file__', 'impp', '__name__', '__doc__']
Note that impp is among the globals, because it was imported. Calling the function printVal, which uses the impp module, still works:
>>> printVal()
5
But now, if you remove impp key from globals()...
>>> del globals()['impp']
>>> print globals().keys()
['printVal', '__builtins__', '__file__', '__name__', '__doc__']
...and try to call printVal(), you'll get:
>>> printVal()
Traceback (most recent call last):
File "test_imp.py", line 13, in <module>
printVal()
File "test_imp.py", line 5, in printVal
print impp.value
NameError: global name 'impp' is not defined
...which is probably exactly what you're trying to achieve.
To use it in your unit-tests, you can delete the globals just before running the test suite, e.g. in __main__:
if __name__ == '__main__':
del globals()['impp']
unittest.main()
A:
In your comment above, you say you want to convince python that certain modules have already been imported. This still seems like a strange goal, but if that's really what you want to do, in principle you can sneak around behind the import mechanism's back, and change sys.modules. Not sure how this'd work for package imports, but should be fine for absolute imports.
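A tiny illustration of that trick (the module name is hypothetical):
import sys, types

sys.modules['heavy_dependency'] = types.ModuleType('heavy_dependency')

import heavy_dependency    # no real import happens; you get the stub back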
| Python, unit-testing and mocking imports | I am in a project where we are starting to refactor some massive code base. One problem that immediately sprang up is that each file imports a lot of other files. How do I mock this in my unit tests in an elegant way, without having to alter the actual code, so that I can start to write unit-tests?
As an example: the file with the functions I want to test imports ten other files, which are part of our software and not python core libs.
I want to be able to run the unit tests as separately as possible, and for now I am only going to test functions that do not depend on things from the files that are being imported.
Thanks for all the answers.
I didn't really know what I wanted to do from the start but now I think I know.
The problem was that some imports were only possible when the whole application was running, because of some third-party auto-magic. So I had to make some stubs for these modules in a directory that I added to sys.path.
Now I can import the file which contains the functions I want to write tests for in my unit-test file without complaints about missing modules.
| [
"If you want to import a module while at the same time ensuring that it doesn't import anything, you can replace the __import__ builtin function.\nFor example, use this class:\nclass ImportWrapper(object):\n def __init__(self, real_import):\n self.real_import = real_import\n\n def wrapper(self, wantedModules):\n def inner(moduleName, *args, **kwargs):\n if moduleName in wantedModules:\n print \"IMPORTING MODULE\", moduleName\n self.real_import(*args, **kwargs)\n else:\n print \"NOT IMPORTING MODULE\", moduleName\n return inner\n\n def mock_import(self, moduleName, wantedModules):\n __builtins__.__import__ = self.wrapper(wantedModules)\n try:\n __import__(moduleName, globals(), locals(), [], -1)\n finally:\n __builtins__.__import__ = self.real_import\n\nAnd in your test code, instead of writing import myModule, write:\nwrapper = ImportWrapper(__import__)\nwrapper.mock_import('myModule', [])\n\nThe second argument to mock_import is a list of module names you do want to import in inner module.\nThis example can be modified further to e.g. import other module than desired instead of just not importing it, or even mocking the module object with some custom object of your own.\n",
"\"imports a lot of other files\"? Imports a lot of other files that are part of your customized code base? Or imports a lot of other files that are part of the Python distribution? Or imports a lot of other open source project files?\nIf your imports don't work, you have a \"simple\" PYTHONPATH problem. Get all of your various project directories onto a PYTHONPATH that you can use for testing. We have a rather complex path, in Windows we manage it like this\n@set Part1=c:\\blah\\blah\\blah\n@set Part2=c:\\some\\other\\path\n@set that=g:\\shared\\stuff\nset PYTHONPATH=%part1%;%part2%;%that%\n\nWe keep each piece of the path separate so that we (a) know where things come from and (b) can manage change when we move things around.\nSince the PYTHONPATH is searched in order, we can control what gets used by adjusting the order on the path.\nOnce you have \"everything\", it becomes a question of trust.\nEither\n\nyou trust something (i.e., the Python code base) and just import it.\n\nOr\n\nYou don't trust something (i.e., your own code) and you\n\ntest it separately and\nmock it for stand-alone testing.\n\n\n\nWould you test the Python libraries? If so, you've got a lot of work. If not, then, you should perhaps only mock out the things you're actually going to test.\n",
"If you really want to muck around with the python import mechanism, take a look at the ihooks module. It provides tools for changing the behavior of the __import__ built-in. But it's not clear from your question why you need to do this.\n",
"No difficult manipulation is necessary if you want a quick-and-dirty fix before your unit-tests.\nIf the unit tests are in the same file as the code you wish to test, simply delete unwanted module from the globals() dictionary.\nHere is a rather lengthy example: suppose you have a module impp.py with contents:\nvalue = 5\n\nNow, in your test file you can write:\n>>> import impp\n>>> print globals().keys()\n>>> def printVal():\n>>> print impp.value\n['printVal', '__builtins__', '__file__', 'impp', '__name__', '__doc__']\n\nNote that impp is among the globals, because it was imported. Calling the function printVal that uses impp module still works:\n>>> printVal()\n5\n\nBut now, if you remove impp key from globals()...\n>>> del globals()['impp']\n>>> print globals().keys()\n['printVal', '__builtins__', '__file__', '__name__', '__doc__']\n\n...and try to call printVal(), you'll get:\n>>> printVal()\nTraceback (most recent call last):\n File \"test_imp.py\", line 13, in <module>\n printVal()\n File \"test_imp.py\", line 5, in printVal\n print impp.value\nNameError: global name 'impp' is not defined\n\n...which is probably exactly what you're trying to achieve.\nTo use it in your unit-tests, you can delete the globals just before running the test suite, e.g. in __main__:\nif __name__ == '__main__':\n del globals()['impp']\n unittest.main()\n\n",
"In your comment above, you say you want to convince python that certain modules have already been imported. This still seems like a strange goal, but if that's really what you want to do, in principle you can sneak around behind the import mechanism's back, and change sys.modules. Not sure how this'd work for package imports, but should be fine for absolute imports.\n"
] | [
8,
1,
1,
0,
0
] | [] | [] | [
"python",
"python_import",
"refactoring",
"unit_testing"
] | stackoverflow_0000178458_python_python_import_refactoring_unit_testing.txt |
Q:
Python class factory ... or?
We have a database library in C# that we can use like this:
DatabaseConnection conn = DatabaseConnection.FromConnectionString("...");
This library hides many of the differences between different database engines, like SQL function names, parameter names and specifications, etc.
Internally, the DatabaseConnection class is an abstract class implementing some of the basic methods, but the FromConnectionString method runs through a list of registered specialized types that handles the actual differences, and constructs an object of the right class. In other words, I don't get a DatabaseConnection object back, I get a MSSQLDatabaseConnection or OracleDatabaseConnection object back instead, which of course inherit from DatabaseConnection.
The connection string contains information about what kind of database engine and version this connection is for.
I'd like to create a similar library in Python. Is the right approach to make something that can be constructed like this?
conn = DatabaseConnection("...")
or using a class method?
conn = DatabaseConnection.FromConnectionString("...")
Is the first even possible, that is... constructing an object like this and getting back something else, a specialized object, depending on data in the passed string?
Ok, let me ask a different question... What is the pythonic way of doing this?
I basically want to have the DatabaseConnection base class in Python as well, implementing the common methods, and specialize in derived classes, and have a method or function somewhere that based on the connection string constructs and returns the right type of object.
A:
This is possible in Python, but is probably not the best way to do it. The class factory pattern is essentially a workaround for languages that don't have first class classes. Since Python does have first class classes, you can store a class in a variable, and use that class directly to create instances. To change what class is created, store a different class in the variable.
For example:
class class1:
def greet(self):
print "hi"
class class2:
def greet(self):
print "hello"
maker = class1
obj1 = maker()
maker = class2
obj2 = maker()
obj1.greet() # prints "hi"
obj2.greet() # prints "hello"
A:
Python doesn't care what type you return.
def DatabaseConnection( str ):
if ( IsOracle( str ) ):
return OracleConnection( str )
else:
return SomeOtherConnection( str )
A:
The first one is absolutely possible, and preferable in my opinion. In python, there's really not a whole lot of magic behind constructors. For all intents and purposes, they're just like any other function. I've used this design pattern a few times to indicate that a class shouldn't be instantiated directly, for example:
def DatabaseConnectionFromString(connection_string):
return _DatabaseConnection(connection_string)
def DatabaseConnectionFromSomethingElse(something_else):
connection_string = convert_something_else_into_string(something_else)
return _DatabaseConnection(connection_string)
class _DatabaseConnection(object):
def __init__(self, connection_string):
self.connection_string = connection_string
Of course, that's a contrived example, but that should give you a general idea.
EDIT: This is also one of the areas where inheritance isn't quite as frowned upon in python. You can also do this:
class DatabaseConnection(object):
def __init__(self, connection_string):
self.connection_string = connection_string
class DatabaseConnectionFromSomethingElse(object):
def __init__(self, something_else):
self.connection_string = convert_something_else_into_string(something_else)
Sorry that's so verbose, but I wanted to make it clear.
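For completeness, here is a sketch of the classmethod flavour from the question, using a small registry instead of an if/else chain (the scheme names and the parsing step are invented for illustration):
class DatabaseConnection(object):
    _registry = {}

    @classmethod
    def register(cls, scheme, subclass):
        cls._registry[scheme] = subclass

    @classmethod
    def from_connection_string(cls, connection_string):
        scheme = connection_string.split(':', 1)[0]
        return cls._registry[scheme](connection_string)

    def __init__(self, connection_string):
        self.connection_string = connection_string

class MSSQLDatabaseConnection(DatabaseConnection):
    pass

DatabaseConnection.register('mssql', MSSQLDatabaseConnection)
conn = DatabaseConnection.from_connection_string('mssql:Server=...;')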
| Python class factory ... or? | We have a database library in C# that we can use like this:
DatabaseConnection conn = DatabaseConnection.FromConnectionString("...");
This library hides many of the differences between different database engines, like SQL function names, parameter names and specifications, etc.
Internally, the DatabaseConnection class is an abstract class implementing some of the basic methods, but the FromConnectionString method runs through a list of registered specialized types that handles the actual differences, and constructs an object of the right class. In other words, I don't get a DatabaseConnection object back, I get a MSSQLDatabaseConnection or OracleDatabaseConnection object back instead, which of course inherit from DatabaseConnection.
The connection string contains information about what kind of database engine and version this connection is for.
I'd like to create a similar library in Python. Is the right approach to make something that can be constructed like this?
conn = DatabaseConnection("...")
or using a class method?
conn = DatabaseConnection.FromConnectionString("...")
Is the first even possible, that is... constructing an object like this and getting back something else, a specialized object, depending on data in the passed string?
Ok, let me ask a different question... What is the pythonic way of doing this?
I basically want to have the DatabaseConnection base class in Python as well, implementing the common methods, and specialize in derived classes, and have a method or function somewhere that based on the connection string constructs and returns the right type of object.
| [
"This is possible in Python, but is probably not the best way to do it. The class factory pattern is essentially a workaround for languages that don't have first class classes. Since Python does have first class classes, you can store a class in a variable, and use that class directly to create instances. To change what class is created, store a different class in the variable.\nFor example:\nclass class1:\n def greet(self):\n print \"hi\"\n\nclass class2:\n def greet(self):\n print \"hello\"\n\nmaker = class1\nobj1 = maker()\n\nmaker = class2\nobj2 = maker()\n\nobj1.greet() # prints \"hi\"\nobj2.greet() # prints \"hello\"\n\n",
"Python doesn't care why type you return.\ndef DatabaseConnection( str ): \n if ( IsOracle( str ) ): \n return OracleConnection( str ) \n else: \n return SomeOtherConnection( str )\n\n",
"The first one is absolutely possible, and preferable in my opinion. In python, there's really not a whole lot of magic behind constructors. For all intents and purposes, they're just like any other function. I've used this design pattern a few times to indicate that a class shouldn't be instantiated directly, for example:\ndef DatabaseConnectionFromString(connection_string)\n return _DatabaseConnection(connection_string)\n\ndef DatabaseConnectionFromSomethingElse(something_else)\n connection_string = convert_something_else_into_string(something_else)\n return _DatabaseConnection(connection_string)\n\nclass _DatabaseConnection(object):\n def __init__(self, connection_string):\n self.connection_string = connection_string\n\nOf course, that's a contrived example, but that should give you a general idea.\nEDIT: This is also one of the areas where inheritance isn't quite as frowned upon in python as well. You can also do this:\nDatabaseConnection(object):\n def __init__(self, connection_string):\n self.connection_string = connection_string\n\nDatabaseConnectionFromSomethingElse(object)\n def __init__(self, something_else):\n self.connection_string = convert_something_else_into_string(something_else)\n\nSorry that's so verbose, but I wanted to make it clear.\n"
] | [
5,
3,
1
] | [] | [] | [
"c#",
"python"
] | stackoverflow_0000179985_c#_python.txt |
Q:
What is the best way to change text contained in an XML file using Python?
Let's say I have an existing trivial XML file named 'MyData.xml' that contains the following:
<?xml version="1.0" encoding="utf-8" ?>
<myElement>foo</myElement>
I want to change the text value of 'foo' to 'bar' resulting in the following:
<?xml version="1.0" encoding="utf-8" ?>
<myElement>bar</myElement>
Once I am done, I want to save the changes.
What is the easiest and simplest way to accomplish all this?
A:
Use Python's minidom
Basically you will take the following steps:
Read XML data into DOM object
Use DOM methods to modify the document
Save new DOM object to new XML document
The python spec should hold your hand rather nicely though this process.
A:
For quick, non-critical XML manipulations, I really like P4X. It lets you write like this:
import p4x
doc = p4x.P4X(open(file).read())
doc.myElement = 'bar'
A:
This is what I wrote based on @Ryan's answer:
from xml.dom.minidom import parse
import os
# create a backup of original file
new_file_name = 'MyData.xml'
old_file_name = new_file_name + "~"
os.rename(new_file_name, old_file_name)
# change text value of element
doc = parse(old_file_name)
node = doc.getElementsByTagName('myElement')
node[0].firstChild.nodeValue = 'bar'
# persist changes to new file
xml_file = open(new_file_name, "w")
doc.writexml(xml_file, encoding="utf-8")
xml_file.close()
Not sure if this was the easiest and simplest approach but it does work. (@Javier's answer has fewer lines of code but requires a non-standard library)
A:
You also might want to check out Uche Ogbuji's excellent XML Data Binding Library, Amara:
http://uche.ogbuji.net/tech/4suite/amara
(Documentation here:
http://platea.pntic.mec.es/~jmorilla/amara/manual/)
The cool thing about Amara is that it turns an XML document in to a Python object, so you can just do stuff like:
record = doc.xml_create_element(u'Record')
nameElem = doc.xml_create_element(u'Name', content=unicode(name))
record.xml_append(nameElem)
valueElem = doc.xml_create_element(u'Value', content=unicode(value))
record.xml_append(valueElem)
(which creates a Record element that contains Name and Value elements (which in turn contain the values of the name and value variables)).
| What is the best way to change text contained in an XML file using Python? | Let's say I have an existing trivial XML file named 'MyData.xml' that contains the following:
<?xml version="1.0" encoding="utf-8" ?>
<myElement>foo</myElement>
I want to change the text value of 'foo' to 'bar' resulting in the following:
<?xml version="1.0" encoding="utf-8" ?>
<myElement>bar</myElement>
Once I am done, I want to save the changes.
What is the easiest and simplest way to accomplish all this?
| [
"Use Python's minidom\nBasically you will take the following steps:\n\nRead XML data into DOM object\nUse DOM methods to modify the document\nSave new DOM object to new XML document\n\nThe python spec should hold your hand rather nicely though this process. \n",
"For quick, non-critical XML manipulations, i really like P4X. It let's you write like this:\nimport p4x\ndoc = p4x.P4X (open(file).read)\ndoc.myElement = 'bar'\n\n",
"This is what I wrote based on @Ryan's answer:\nfrom xml.dom.minidom import parse\nimport os\n\n# create a backup of original file\nnew_file_name = 'MyData.xml'\nold_file_name = new_file_name + \"~\"\nos.rename(new_file_name, old_file_name)\n\n# change text value of element\ndoc = parse(old_file_name)\nnode = doc.getElementsByTagName('myElement')\nnode[0].firstChild.nodeValue = 'bar'\n\n# persist changes to new file\nxml_file = open(new_file_name, \"w\")\ndoc.writexml(xml_file, encoding=\"utf-8\")\nxml_file.close()\n\nNot sure if this was the easiest and simplest approach but it does work. (@Javier's answer has less lines of code but requires non-standard library)\n",
"You also might want to check out Uche Ogbuji's excellent XML Data Binding Library, Amara:\nhttp://uche.ogbuji.net/tech/4suite/amara\n(Documentation here:\nhttp://platea.pntic.mec.es/~jmorilla/amara/manual/)\nThe cool thing about Amara is that it turns an XML document in to a Python object, so you can just do stuff like:\nrecord = doc.xml_create_element(u'Record')\n\nnameElem = doc.xml_create_element(u'Name', content=unicode(name))\n\nrecord.xml_append(nameElem)\n\nvalueElem = doc.xml_create_element(u'Value', content=unicode(value))\n\nrecord.xml_append(valueElem\n\n(which creates a Record element that contains Name and Value elements (which in turn contain the values of the name and value variables)).\n"
] | [
4,
3,
3,
1
] | [] | [] | [
"python",
"text",
"xml"
] | stackoverflow_0000179287_python_text_xml.txt |
Q:
Django UserProfile... without a password
I'd like to create a subset of Users that don't have a login... basically as a way to add a photographer field to photos without having a full blown account associated with that person (since in many cases, they'll never actually log in to the site). A caveat is that I'd also like to be able to enable an account for them later.
So, I think the question becomes what's the best way to set up a "People" table that ties to the User table without actually extending the User table with UserProfile.
A:
A user profile (as returned by django.contrib.auth.models.User.get_profile) doesn't extend the User table - the model you specify as the profile model with the AUTH_PROFILE_MODULE setting is just a model which has a ForeignKey to User. get_profile and the setting are really just a convenience API for accessing an instance of a specific model which has a ForeignKey to a specific User instance.
As such, one option is to create a profile model in which the ForeignKey to User can be null and associate your Photo model with this profile model instead of the User model. This would allow you to create a profile for a non-existent user and attach a registered User to the profile at a later date.
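A minimal sketch of that idea (the model and field names here are illustrative assumptions, not from the original answer):
from django.db import models
from django.contrib.auth.models import User

class Person(models.Model):
    # stays null until a real account is enabled for this person
    user = models.ForeignKey(User, null=True, blank=True)
    name = models.CharField(max_length=100)

class Photo(models.Model):
    photographer = models.ForeignKey(Person)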
A:
Users that can't login? Just give them a totally random password.
import random
user.set_password( str(random.random()) )
They'll never be able to log on.
A:
Supply your own authentication routine, then you can check (or not check) anything you like. We do this so if they fail on normal username, we can also let them in on email/password (although that's not what I'm showing below).
in settings.py:
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'userprofile.my_authenticate.MyLoginBackend', # if they fail the normal test
)
in userprofile/my_authenticate.py:
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User
class MyLoginBackend(ModelBackend):
"""Return User record if username + (some test) is valid.
Return None if no match.
"""
def authenticate(self, username=None, password=None, request=None):
try:
user = User.objects.get(username=username)
# plus any other test of User/UserProfile, etc.
return user # indicates success
except User.DoesNotExist:
return None
# authenticate
# class MyLoginBackend
A:
From the documentation on django auth, if you want to use the User model, it's mandatory to have a username and password, there are no "anonymous accounts". I guess you could create accounts with a default password and then give the opportunity for people to enable a "real" account (by setting a password themselves).
To set up a "People" table that ties to the User table you just have to use a ForeignKey field (that's actually the recommended way of adding additional info to the User model, and not inheritance)
A:
Using a model with a ForeignKey field linking to User might not work as you want because you need anonymous access. I'm not sure if that's going to work, but you might try what happens if you let it have a ForeignKey to AnonymousUser (whose id is always None!) instead.
If you try it, post your results here, I'd be curious.
A:
The django.contrib.auth.models.User exists solely for the purpose of using default authentication backend (database-based). If you write your own backend, you can make some accounts passwordless, while keeping normal accounts with passwords. Django documentation has a chapter on this.
A:
Another upvote for insin's answer: handle this through a UserProfile. James Bennett has a great article about extending django.contrib.auth.models.User. He walks through a couple methods, explains their pros/cons and lands on the UserProfile way as ideal.
| Django UserProfile... without a password | I'd like to create a subset of Users that don't have a login... basically as a way to add a photographer field to photos without having a full blown account associated with that person (since in many cases, they'll never actually log in to the site). A caveat is that I'd also like to be able to enable an account for them later.
So, I think the question becomes what's the best way to set up a "People" table that ties to the User table without actually extending the User table with UserProfile.
| [
"A user profile (as returned by django.contrib.auth.models.User.get_profile) doesn't extend the User table - the model you specify as the profile model with the AUTH_PROFILE_MODULE setting is just a model which has a ForeignKey to User. get_profile and the setting are really just a convenience API for accessing an instance of a specific model which has a ForeignKey to a specific User instance.\nAs such, one option is to create a profile model in which the ForeignKey to User can be null and associate your Photo model with this profile model instead of the User model. This would allow you to create a profile for a non-existent user and attach a registered User to the profile at a later date.\n",
"Users that can't login? Just given them a totally random password.\nimport random\nuser.set_password( str(random.random()) )\n\nThey'll never be able to log on.\n",
"Supply your own authentication routine, then you can check (or not check) anything you like. We do this so if they fail on normal username, we can also let them in on email/password (although that's not what I'm showing below).\nin settings.py:\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n 'userprofile.my_authenticate.MyLoginBackend', # if they fail the normal test\n )\n\nin userprofile/my_authenticate.py:\nfrom django.contrib.auth.backends import ModelBackend\nfrom django.contrib.auth.models import User\n\nclass MyLoginBackend(ModelBackend):\n \"\"\"Return User record if username + (some test) is valid.\n Return None if no match.\n \"\"\"\n\n def authenticate(self, username=None, password=None, request=None):\n try:\n user = User.objects.get(username=username)\n # plus any other test of User/UserProfile, etc.\n return user # indicates success\n except User.DoesNotExist:\n return None\n # authenticate\n# class MyLoginBackend\n\n",
"From the documentation on django auth, if you want to use the User model, it's mandatory to have a username and password, there are no \"anonymous accounts\". I guess you could create accounts with a default password and then give the opportunity for people to enable a \"real\" account (by setting a password themselves).\nTo set up a \"People\" table that ties to the User table you just have to use a ForeignKey field (that's actually the recommended way of adding additional info to the User model, and not inheritance)\n",
"Using a model with a ForeignKey field linking to User might not work as you want because you need anonymous access. I'm not sure if that's going to work, but you might try what happens if you let it have a ForeignKey to AnonymousUser (whose id is always None!) instead.\nIf you try it, post your results here, I'd be curious.\n",
"The django.contrib.auth.models.User exists solely for the purpose of using default authentication backend (database-based). If you write your own backend, you can make some accounts passwordless, while keeping normal accounts with passwords. Django documentation has a chapter on this.\n",
"Another upvote for insin's answer: handle this through a UserProfile. James Bennett has a great article about extending django.contrib.auth.models.User. He walks through a couple methods, explains their pros/cons and lands on the UserProfile way as ideal.\n"
] | [
8,
3,
3,
0,
0,
0,
0
] | [] | [] | [
"database",
"django",
"django_authentication",
"django_users",
"python"
] | stackoverflow_0000172066_database_django_django_authentication_django_users_python.txt |
Q:
WSGI Middleware recommendations
I have heard that there is lots of interesting and useful WSGI middleware around. However, I am not sure which ones (apart from the ones that are part of pylons) are useful and stable. What is your favourite WSGI middleware?
A:
WSGI.org has a fairly comprehensive list of WSGI Middleware & Utilities.
| WSGI Middleware recommendations | I have heard that there is lots of interesting and useful WSGI middleware around. However, I am not sure which ones (apart from the ones that are part of pylons) are useful and stable. What is your favourite WSGI middleware?
| [
"WSGI.org has a fairly comprehensive list of WSGI Middleware & Utilities.\n"
] | [
2
] | [] | [] | [
"pylons",
"python",
"wsgi"
] | stackoverflow_0000178001_pylons_python_wsgi.txt |
Q:
Python, beyond the basics
I've gotten to grips with the basics of Python and I've got a small holiday which I want to use some of to learn a little more Python. The problem is that I have no idea what to learn or where to start. I'm primarily web development but in this case I don't know how much difference it will make.
A:
Well, there are great resources for advanced Python programming :
Dive Into Python (read it for free)
Online python cookbooks (e.g. here and there)
O'Reilly's Python Cookbook (see amazon)
A funny riddle game : Python Challenge
Here is a list of subjects you must master if you want to write "Python" on your resume :
list comprehensions
iterators and generators
decorators
They are what make Python such a cool language (with the standard library of course, which I keep discovering every day).
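For a quick, illustrative taste of all three (a sketch of my own, not taken from those resources):
# list comprehension: squares of the even numbers below 10
squares = [n * n for n in range(10) if n % 2 == 0]

# generator: lazily yields only the lines containing a word
def matching_lines(lines, word):
    for line in lines:
        if word in line:
            yield line

# decorator: wraps a function to print a trace of each call
def traced(func):
    def wrapper(*args, **kwargs):
        print 'calling', func.__name__
        return func(*args, **kwargs)
    return wrapper

@traced
def greet(name):
    return 'Hello, ' + name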
A:
Depending on exactly what you mean by "gotten to grips with the basics", I'd suggest reading through Dive Into Python and typing/executing all the chapter code, then get something like Programming Collective Intelligence and working through it - you'll learn python quite well, not to mention some quite excellent algorithms that'll come in handy to a web developer.
A:
Something great to play around with, though not a project, is The Python Challenge. I've found it quite useful in improving my python skills, and it gives your brain a good workout at the same time.
A:
I honestly loved the book Programming Python. It has a large assortment of small projects, most of which can be completed in an evening at a leisurely pace. They get you acquainted with most of the standard library and will likely hold your interest. Most importantly these small projects are actually useful in a "day to day" sense. The book pretty much only assumes you know and understand the bare essentials of Python as a language, rather than knowledge of its huge API library.
I think you'll find it'll be well worth working through.
A:
I'll plug Building Skills in Python. Plus, if you want something more challenging, Building Skills in OO Design is a rather large and complex series of exercises.
A:
The Python Cookbook is absolutely essential if you want to master idiomatic Python. Besides, that's the book that made me fall in love with the language.
A:
I'd suggest writing a non-trivial webapp using either Django or Pylons, something that does some number crunching.
No better way to learn a new language than committing yourself to a problem and learning as you go!
A:
Write a web app, likely in Django - the docs will teach you a lot of good Python style.
Use some of the popular libraries like Pygments or the Universal Feed Parser. Both of these make extremely useful functions, which are hard to get right, available in a well-documented API.
In general, I'd stay away from libs that aren't well documented - you'll bang your head on the wall trying to reverse-engineer them - and libraries that are wrappers around C libraries, if you don't have any C experience. I worked on wxPython code when I was still learning Python, which was my first language, and at the time it was little more than a wrapper around wxWidgets. That code was easily the ugliest I've ever written.
I didn't get that much out of Dive Into Python, except for the dynamic import chapter - that's not really well-documented elsewhere.
A:
People tend to say something along the lines of "The best way to learn is by doing" but I've always found that unless you're specifically learning a language to contribute to some project it's difficult to actually find little problems to tackle to keep yourself going.
A good solution to this is Project Euler, which has a list of various programming\mathematics challenges ranging from simple to quite brain-taxing. As an example, the first challenge is:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
And by problem #50 it's already getting a little tougher
Which prime, below one-million, can be written as the sum of the most consecutive primes
There are 208 in total, but I think some new ones get added here and there.
While I already knew python fairly well before starting Project Euler, I found that I learned some cool tricks purely through using the language so much. Good luck!
A:
Search "Alex Martelli", "Alex Martelli patterns" and "Thomas Wouters" on Google video. There's plenty of interesting talks on advanced Python, design patterns in Python, and so on.
| Python, beyond the basics | I've gotten to grips with the basics of Python and I've got a small holiday which I want to use some of to learn a little more Python. The problem is that I have no idea what to learn or where to start. I'm primarily web development but in this case I don't know how much difference it will make.
| [
"Well, there are great ressources for advanced Python programming :\n\nDive Into Python (read it for free)\nOnline python cookbooks (e.g. here and there)\nO'Reilly's Python Cookbook (see amazon)\nA funny riddle game : Python Challenge \n\nHere is a list of subjects you must master if you want to write \"Python\" on your resume :\n\nlist comprehensions\niterators and generators\ndecorators\n\nThey are what make Python such a cool language (with the standard library of course, that I keep discovering everyday).\n",
"Depending on exactly what you mean by \"gotten to grips with the basics\", I'd suggest reading through Dive Into Python and typing/executing all the chapter code, then get something like Programming Collective Intelligence and working through it - you'll learn python quite well, not to mention some quite excellent algorithms that'll come in handy to a web developer.\n",
"Something great to play around with, though not a project, is The Python Challenge. I've found it quite useful in improving my python skills, and it gives your brain a good workout at the same time.\n",
"I honestly loved the book Programming Python. It has a large assortment of small projects, most of which can be completed in an evening at a leisurely pace. They get you acquainted with most of the standard library and will likely hold your interest. Most importantly these small projects are actually useful in a \"day to day\" sense. The book pretty much only assumes you know and understand the bare essentials of Python as a language, rather than knowledge of it's huge API library.\nI think you'll find it'll be well worth working through.\n",
"I'll plug Building Skills in Python. Plus, if you want something more challenging, Building Skills in OO Design is a rather large and complex series of exercises.\n",
"The Python Cookbook is absolutely essential if you want to master idiomatic Python. Besides, that's the book that made me fall in love with the language.\n",
"I'd suggest writing a non-trivial webapp using either Django or Pylons, something that does some number crunching.\nNo better way to learn a new language than commiting yourself to a problem and learning as you go!\n",
"Write a web app, likely in Django - the docs will teach you a lot of good Python style.\nUse some of the popular libraries like Pygments or the Universal Feed Parser. Both of these make extremely useful functions, which are hard to get right, available in a well-documented API.\nIn general, I'd stay away from libs that aren't well documented -\nyou'll bang your head on the wall trying to reverse-engineer them -\nand libraries that are wrappers around C libraries, if you don't have\nany C experience. I worked on wxPython code when I was still learning\nPython, which was my first language, and at the time it was little\nmore than a wrapper around wxWidgets. That code was easily the ugliest\nI've ever written.\nI didn't get that much out of Dive Into Python, except for the dynamic import chapter - that's not really well-documented elsewhere.\n",
"People tend to say something along the lines of \"The best way to learn is by doing\" but I've always found that unless you're specifically learning a language to contribute to some project it's difficult to actually find little problems to tackle to keep yourself going.\nA good solution to this is Project Euler, which has a list of various programming\\mathematics challenges ranging from simple to quite brain-taxing. As an example, the first challenge is:\n\nIf we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.\n\nAnd by problem #50 it's already getting a little tougher\n\nWhich prime, below one-million, can be written as the sum of the most consecutive primes\n\nThere are 208 in total, but I think some new ones get added here and there.\nWhile I already knew python fairly well before starting Project Euler, I found that I learned some cool tricks purely through using the language so much. Good luck!\n",
"Search \"Alex Martelli\", \"Alex Martelli patterns\" and \"Thomas Wouters\" on Google video. There's plenty of interesting talks on advanced Python, design patterns in Python, and so on.\n"
] | [
16,
5,
3,
2,
2,
2,
1,
1,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000092230_python.txt |
Q:
wxPython: displaying multiple widgets in same frame
I would like to be able to display a Notebook and a TextCtrl widget in a single frame. Below is an example adapted from the wxpython wiki; is it possible to change their layout (maybe with something like wx.SplitterWindow) to display the text box below the Notebook in the same frame?
import wx
import wx.lib.sheet as sheet
class MySheet(sheet.CSheet):
def __init__(self, parent):
sheet.CSheet.__init__(self, parent)
self.SetLabelBackgroundColour('#CCFF66')
self.SetNumberRows(50)
self.SetNumberCols(50)
class Notebook(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, size=(600, 600))
menubar = wx.MenuBar()
file = wx.Menu()
file.Append(101, 'Quit', '' )
menubar.Append(file, "&File")
self.SetMenuBar(menubar)
wx.EVT_MENU(self, 101, self.OnQuit)
nb = wx.Notebook(self, -1, style=wx.NB_BOTTOM)
self.sheet1 = MySheet(nb)
self.sheet2 = MySheet(nb)
self.sheet3 = MySheet(nb)
nb.AddPage(self.sheet1, "Sheet1")
nb.AddPage(self.sheet2, "Sheet2")
nb.AddPage(self.sheet3, "Sheet3")
self.sheet1.SetFocus()
self.StatusBar()
def StatusBar(self):
self.statusbar = self.CreateStatusBar()
def OnQuit(self, event):
self.Close()
class MyFrame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, wx.DefaultPosition, wx.Size(450, 400))
self.text = wx.TextCtrl(self, -1, style = wx.TE_MULTILINE)
self.Center()
class MyApp(wx.App):
def OnInit(self):
frame = Notebook(None, -1, 'notebook.py')
frame.Show(True)
frame.Center()
frame2 = MyFrame(None, -1, '')
frame2.Show(True)
self.SetTopWindow(frame2)
return True
app = MyApp(0)
app.MainLoop()
A:
Making two widgets appear on the same frame is easy, actually. You should use sizers to accomplish this.
In your example, you can change your Notebook class implementation to something like this:
class Notebook(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, size=(600, 600))
menubar = wx.MenuBar()
file = wx.Menu()
file.Append(101, 'Quit', '' )
menubar.Append(file, "&File")
self.SetMenuBar(menubar)
wx.EVT_MENU(self, 101, self.OnQuit)
nb = wx.Notebook(self, -1, style=wx.NB_BOTTOM)
self.sheet1 = MySheet(nb)
self.sheet2 = MySheet(nb)
self.sheet3 = MySheet(nb)
nb.AddPage(self.sheet1, "Sheet1")
nb.AddPage(self.sheet2, "Sheet2")
nb.AddPage(self.sheet3, "Sheet3")
self.sheet1.SetFocus()
self.StatusBar()
# new code begins here:
# add your text ctrl:
self.text = wx.TextCtrl(self, -1, style = wx.TE_MULTILINE)
# create a new sizer for both controls:
sizer = wx.BoxSizer(wx.VERTICAL)
# add notebook first, with size factor 2:
    sizer.Add(nb, 2, wx.EXPAND)
# then text, size factor 1, maximized
sizer.Add(self.text, 1, wx.EXPAND)
# assign the sizer to Frame:
self.SetSizerAndFit(sizer)
Only the __init__ method is changed. Note that you can manipulate the proportions between the notebook and text control by changing the second argument of the Add method.
You can learn more about sizers from the official Sizer overview article.
A:
You can use a splitter, yes.
Also, it makes sense to create a Panel, place your widgets in it (with sizers), and add this panel to the Frame.
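A bare-bones sketch of that Panel approach (an illustration with names of my own choosing, not from the answer above):
class PanelFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, 'panel demo')
        # the Panel owns the widgets; the Frame owns the Panel
        panel = wx.Panel(self, -1)
        text = wx.TextCtrl(panel, -1, style=wx.TE_MULTILINE)
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(text, 1, wx.EXPAND)
        panel.SetSizer(sizer)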
| wxPython: displaying multiple widgets in same frame | I would like to be able to display a Notebook and a TextCtrl widget in a single frame. Below is an example adapted from the wxpython wiki; is it possible to change their layout (maybe with something like wx.SplitterWindow) to display the text box below the Notebook in the same frame?
import wx
import wx.lib.sheet as sheet
class MySheet(sheet.CSheet):
def __init__(self, parent):
sheet.CSheet.__init__(self, parent)
self.SetLabelBackgroundColour('#CCFF66')
self.SetNumberRows(50)
self.SetNumberCols(50)
class Notebook(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, size=(600, 600))
menubar = wx.MenuBar()
file = wx.Menu()
file.Append(101, 'Quit', '' )
menubar.Append(file, "&File")
self.SetMenuBar(menubar)
wx.EVT_MENU(self, 101, self.OnQuit)
nb = wx.Notebook(self, -1, style=wx.NB_BOTTOM)
self.sheet1 = MySheet(nb)
self.sheet2 = MySheet(nb)
self.sheet3 = MySheet(nb)
nb.AddPage(self.sheet1, "Sheet1")
nb.AddPage(self.sheet2, "Sheet2")
nb.AddPage(self.sheet3, "Sheet3")
self.sheet1.SetFocus()
self.StatusBar()
def StatusBar(self):
self.statusbar = self.CreateStatusBar()
def OnQuit(self, event):
self.Close()
class MyFrame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, wx.DefaultPosition, wx.Size(450, 400))
self.text = wx.TextCtrl(self, -1, style = wx.TE_MULTILINE)
self.Center()
class MyApp(wx.App):
def OnInit(self):
frame = Notebook(None, -1, 'notebook.py')
frame.Show(True)
frame.Center()
frame2 = MyFrame(None, -1, '')
frame2.Show(True)
self.SetTopWindow(frame2)
return True
app = MyApp(0)
app.MainLoop()
| [
"Making two widgets appear on the same frame is easy, actually. You should use sizers to accomplish this.\nIn your example, you can change your Notebook class implementation to something like this:\nclass Notebook(wx.Frame):\n def __init__(self, parent, id, title):\n wx.Frame.__init__(self, parent, id, title, size=(600, 600))\n menubar = wx.MenuBar()\n file = wx.Menu()\n file.Append(101, 'Quit', '' )\n menubar.Append(file, \"&File\")\n self.SetMenuBar(menubar)\n wx.EVT_MENU(self, 101, self.OnQuit)\n nb = wx.Notebook(self, -1, style=wx.NB_BOTTOM)\n self.sheet1 = MySheet(nb)\n self.sheet2 = MySheet(nb)\n self.sheet3 = MySheet(nb)\n nb.AddPage(self.sheet1, \"Sheet1\")\n nb.AddPage(self.sheet2, \"Sheet2\")\n nb.AddPage(self.sheet3, \"Sheet3\")\n self.sheet1.SetFocus()\n self.StatusBar()\n # new code begins here:\n # add your text ctrl:\n self.text = wx.TextCtrl(self, -1, style = wx.TE_MULTILINE)\n # create a new sizer for both controls:\n sizer = wx.BoxSizer(wx.VERTICAL)\n # add notebook first, with size factor 2:\n sizer.Add(nb, 2)\n # then text, size factor 1, maximized\n sizer.Add(self.text, 1, wx.EXPAND)\n # assign the sizer to Frame:\n self.SetSizerAndFit(sizer)\n\nOnly the __init__ method is changed. Note that you can manipulate the proportions between the notebook and text control by changing the second argument of the Add method.\nYou can learn more about sizers from the official Sizer overview article.\n",
"You can use a splitter, yes.\nAlso, it makes sense to create a Panel, place your widgets in it (with sizers), and add this panel to the Frame.\n"
] | [
9,
1
] | [] | [] | [
"layout",
"python",
"user_interface",
"wxpython",
"wxwidgets"
] | stackoverflow_0000181573_layout_python_user_interface_wxpython_wxwidgets.txt |
Q:
Os.path : can you explain this behavior?
I love Python because it comes batteries included, and I use built-in functions, a lot, to do the dirty job for me.
I have always happily used the os.path module to deal with file paths, but recently I ended up with unexpected results on Python 2.5 under Ubuntu linux, while dealing with strings that represent windows file paths :
filepath = r"c:\ttemp\FILEPA~1.EXE"
print os.path.basename(filepath)
'c:\\ttemp\\FILEPA~1.EXE'
print os.path.splitdrive(filepath)
('', 'c:\\ttemp\\FILEPA~1.EXE')
WTF ?
It ends up the same way with filepath = u"c:\ttemp\FILEPA~1.EXE" and filepath = "c:\ttemp\FILEPA~1.EXE".
Do you have a clue ? Ubuntu uses UTF8 but I don't think it has anything to do with it. Maybe my Python install is messed up but I did not perform any particular tweak on it that I can remember.
A:
If you want to manipulate Windows paths on linux you should use the ntpath module (this is the module that is imported as os.path on windows - posixpath is imported as os.path on linux)
>>> import ntpath
>>> filepath = r"c:\ttemp\FILEPA~1.EXE"
>>> print ntpath.basename(filepath)
FILEPA~1.EXE
>>> print ntpath.splitdrive(filepath)
('c:', '\\ttemp\\FILEPA~1.EXE')
A:
From a os.path documentation:
os.path.splitdrive(path)
Split the pathname path into a pair (drive, tail) where drive is either a drive specification or the empty string. On systems which do not use drive specifications, drive will always be the empty string. In all cases, drive + tail will be the same as path.
If you're running this on unix, it doesn't use drive specifications, hence drive will be an empty string.
If you want to split windows paths on any platform, you can just use a simple regexp:
import re
(drive, tail) = re.compile('([a-zA-Z]\:){0,1}(.*)').match(filepath).groups()
drive will be a drive letter followed by : (e.g. c:, u:) or None, and tail is the whole rest :)
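For instance, applied to the path from the question, this gives:
>>> import re
>>> filepath = r"c:\ttemp\FILEPA~1.EXE"
>>> re.compile('([a-zA-Z]\:){0,1}(.*)').match(filepath).groups()
('c:', '\\ttemp\\FILEPA~1.EXE')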
A:
See the documentation here, specifically:
splitdrive(p) Split a pathname into
drive and path. On Posix, drive is
always empty.
So this won't work on a Linux box.
| Os.path : can you explain this behavior? | I love Python because it comes batteries included, and I use built-in functions, a lot, to do the dirty job for me.
I have always happily used the os.path module to deal with file paths, but recently I ended up with unexpected results on Python 2.5 under Ubuntu linux, while dealing with strings that represent windows file paths :
filepath = r"c:\ttemp\FILEPA~1.EXE"
print os.path.basename(filepath)
'c:\\ttemp\\FILEPA~1.EXE'
print os.path.splitdrive(filepath)
('', 'c:\\ttemp\\FILEPA~1.EXE')
WTF ?
It ends up the same way with filepath = u"c:\ttemp\FILEPA~1.EXE" and filepath = "c:\ttemp\FILEPA~1.EXE".
Do you have a clue ? Ubuntu uses UTF8 but I don't think it has anything to do with it. Maybe my Python install is messed up but I did not perform any particular tweak on it that I can remember.
| [
"If you want to manipulate Windows paths on linux you should use the ntpath module (this is the module that is imported as os.path on windows - posixpath is imported as os.path on linux)\n>>> import ntpath\n>>> filepath = r\"c:\\ttemp\\FILEPA~1.EXE\"\n>>> print ntpath.basename(filepath)\nFILEPA~1.EXE\n>>> print ntpath.splitdrive(filepath)\n('c:', '\\\\ttemp\\\\FILEPA~1.EXE')\n\n",
"From a os.path documentation:\nos.path.splitdrive(path)\nSplit the pathname path into a pair (drive, tail) where drive is either a drive specification or the empty string. On systems which do not use drive specifications, drive will always be the empty string. In all cases, drive + tail will be the same as path.\nIf you running this on unix, it doesnt use drive specifications, hence - drive will be empty string. \nIf you want to solve windows paths on any platform, you can just use a simple regexp:\nimport re\n(drive, tail) = re.compile('([a-zA-Z]\\:){0,1}(.*)').match(filepath).groups() \n\ndrive will be a drive letter followed by : (eg. c:, u:) or None, and tail the whole rest :)\n",
"See the documentation here, specifically: \n\nsplitdrive(p) Split a pathname into\n drive and path. On Posix, drive is\n always empty.\n\nSo this won't work on a Linux box.\n"
] | [
26,
4,
1
] | [] | [] | [
"path",
"python"
] | stackoverflow_0000182253_path_python.txt |
Q:
Setup Python environment on Windows
How do I set up a Python environment on a Windows computer so I can start writing and running Python scripts? Is there an install bundle? Also, which database should I use?
I should have mentioned that I am using this for web-based applications. Does it require Apache? Or does it use another HTTP server? What is the standard setup for Python running web apps?
A:
Download the Python 2.6 Windows installer from python.org (direct link). If you're just learning, use the included SQLite library so you don't have to fiddle with database servers.
Most web development frameworks (Django, Turbogears, etc) come with a built-in webserver command that runs on the local computer without Apache.
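If you go the SQLite route, here is a minimal taste of the bundled sqlite3 module (available since Python 2.5; the table and file names are just examples of mine):
import sqlite3

# connect creates the file if it doesn't exist yet
conn = sqlite3.connect('example.db')
conn.execute('CREATE TABLE IF NOT EXISTS notes (body TEXT)')
conn.execute('INSERT INTO notes VALUES (?)', ('hello',))
conn.commit()
print conn.execute('SELECT body FROM notes').fetchall()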
A:
Bundle: go with Activestate's Python, which bundles many useful win32-related libraries. It has no version 2.6 yet, but most code you'll find online refers to 2.5 and lower anyway.
Database: any of the popular open-source DBs are simple to configure. But as John already suggested, for simple beginning stuff just use SQLite which already comes bundled with Python.
Web server: depends on the scale. You can configure Apache, yes, but for trying simple things the following is a quite complete web server in Python that will also serve CGI scripts written in Python:
import CGIHTTPServer
import BaseHTTPServer
class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
cgi_directories = ["/cgi"]
PORT = 9999
httpd = BaseHTTPServer.HTTPServer(("", PORT), Handler)
print "serving at port", PORT
httpd.serve_forever()
A:
I strongly recommend ActiveState Python for python on windows development. It comes with Win32Com and various other goodies, has a mature and clean installer, a chm version of the docs and works really well. I use this all of the time.
As for a database, Activestate comes with odbc support, which plays very nicely with SQL server. I've also had it working with Sybase and DB2/400 (although the connection strings for the latter tend to be rather convoluted). For Oracle, I recommend CX_Oracle as the best interface library. Native drivers for most proprietary and open-source databases (such as MySQL and PostGreSQL) also exist. Recent versions of Python (from 2.5 onwards IIRC) come with SQLite bundled as standard.
A:
Don't forget to install pywin32 after installing the official (command line) installer. This will define additional start menu items and the highly useful PythonWin IDE.
An installer for both is available at Activestate (no 2.6 yet). The Activestate distribution contains additional documentation.
A:
Might I suggest taking a look at Karrigell? It's really a nice Python web framework if you don't require everything Django and Turbogears offers. It might be easier for you to work with web frameworks until you get comfortable with them.
For development, I recommend downloading the latest SPE IDE. It should provide you nearly all the tools you will need, plus it includes wxGlade for GUI development.
A:
Django tutorial How to install Django provides a good example of how a web-development Python environment may look.
| Setup Python environment on Windows | How do I set up a Python environment on a Windows computer so I can start writing and running Python scripts? Is there an install bundle? Also, which database should I use?
I should have mentioned that I am using this for web-based applications. Does it require Apache? Or does it use another HTTP server? What is the standard setup for Python running web apps?
| [
"Download the Python 2.6 Windows installer from python.org (direct link). If you're just learning, use the included SQLite library so you don't have to fiddle with database servers.\n\nMost web development frameworks (Django, Turbogears, etc) come with a built-in webserver command that runs on the local computer without Apache.\n",
"Bundle: go with Activestate's Python, which bundles many useful win32-related libraries. It has no version 2.6 yet, but most code you'll find online refers to 2.5 and lower anyway.\nDatabase: any of the popular open-source DBs are simple to configure. But as John already suggested, for simple beginning stuff just use SQLite which already comes bundled with Python.\nWeb server: depends on the scale. You can configure Apache, yes, but for trying simple things the following is a quite complete web server in Python that will also serve CGI scripts writte in Python:\nimport CGIHTTPServer\nimport BaseHTTPServer\n\nclass Handler(CGIHTTPServer.CGIHTTPRequestHandler):\n cgi_directories = [\"/cgi\"]\n\nPORT = 9999\n\nhttpd = BaseHTTPServer.HTTPServer((\"\", PORT), Handler)\nprint \"serving at port\", PORT\nhttpd.serve_forever()\n\n",
"I strongly recommend ActiveState Python for python on windows development. It comes with Win32Com and various other goodies, has a mature and clean installer, a chm version of the docs and works really well. I use this all of the time.\nAs for a database, Activestate comes with odbc support, which plays very nicely with SQL server. I've also had it working with Sybase and DB2/400 (although the connection strings for the latter tend to be rather convoluted). For Oracle, I recommend CX_Oracle as the best interface library. Native drivers for most proprietary and open-source databases (such as MySQL and PostGreSQL) also exist. Recent versions of Python (from 2.5 onwards IIRC) come with SQLite bundled as standard.\n",
"Don't forget to install pywin32 after installing the official (command line) installer. This will define additional start menu items and the highly useful PythonWin IDE.\nAn installer for both is available at Activestate (no 2.6 yet). The Activestate distribution contains additional documentation.\n",
"Might I suggest taking a look at Karrigell? It's really a nice Python web framework if you don't require everything Django and Turbogears offers. It might be easier for you to work with web frameworks until you get comfortable with them.\nFor development, I recommend downloading the latest SPE IDE. It should provide you nearly all the tools you will need, plus it includes wxGlade for GUI development.\n",
"Django tutorial How to install Django provides a good example how a web-development Python environment may look.\n"
] | [
7,
4,
2,
1,
1,
0
] | [] | [] | [
"database",
"development_environment",
"installation",
"python",
"windows"
] | stackoverflow_0000182053_database_development_environment_installation_python_windows.txt |
Q:
Code to verify updates from the Google Safe Browsing API
In order to verify the data coming from the Google Safe Browsing API, you can calculate a Message Authentication Code (MAC) for each update. The instructions to do this (from Google) are:
The MAC is computed from an MD5 Digest
over the following information:
client_key|separator|table
data|separator|client_key. The
separator is the string :coolgoog: -
that is a colon followed by "coolgoog"
followed by a colon. The resulting
128-bit MD5 digest is websafe base-64
encoded.
There's also example data to check against:
client key: "8eirwN1kTwCzgWA2HxTaRQ=="
response:
[goog-black-hash 1.180 update][mac=dRalfTU+bXwUhlk0NCGJtQ==]
+8070465bdf3b9c6ad6a89c32e8162ef1
+86fa593a025714f89d6bc8c9c5a191ac
+bbbd7247731cbb7ec1b3a5814ed4bc9d
*Note that there are tabs at the end of each line.
I'm unable to get a match. Please either point out where I'm going wrong, or just write the couple of lines of Python code necessary to do this!
FWIW, I expected to be able to do something like this:
>>> s = "+8070465bdf3b9c6ad6a89c32e8162ef1\t\n+86fa593a025714f89d6bc8c9c5a191ac\t\n+bbbd7247731cbb7ec1b3a5814ed4bc9d\t"
>>> c = "8eirwN1kTwCzgWA2HxTaRQ=="
>>> hashlib.md5("%s%s%s%s%s" % (c, ":coolgoog:", s, ":coolgoog:", c)).digest().encode("base64")
'qfb50mxpHrS82yTofPkcEg==\n'
But as you can see, 'qfb50mxpHrS82yTofPkcEg==\n' != 'dRalfTU+bXwUhlk0NCGJtQ=='.
A:
Anders' answer gives the necessary information, but isn't that clear: the client key needs to be decoded before it is combined. (The example above is also missing a newline at the end of the final table data).
So the working code is:
>>> s = "+8070465bdf3b9c6ad6a89c32e8162ef1\t\n+86fa593a025714f89d6bc8c9c5a191ac\t\n+bbbd7247731cbb7ec1b3a5814ed4bc9d\t\n"
>>> c = "8eirwN1kTwCzgWA2HxTaRQ==".decode('base64')
>>> hashlib.md5("%s%s%s%s%s" % (c, ":coolgoog:", s, ":coolgoog:", c)).digest().encode("base64")
'dRalfTU+bXwUhlk0NCGJtQ==\n'
A:
c="8eirwN1kTwCzgWA2HxTaRQ==".decode('base64')
| Code to verify updates from the Google Safe Browsing API | In order to verify the data coming from the Google Safe Browsing API, you can calculate a Message Authentication Code (MAC) for each update. The instructions to do this (from Google) are:
The MAC is computed from an MD5 Digest
over the following information:
client_key|separator|table
data|separator|client_key. The
separator is the string:coolgoog: -
that is a colon followed by "coolgoog"
followed by a colon. The resulting
128-bit MD5 digest is websafe base-64
encoded.
There's also example data to check against:
client key: "8eirwN1kTwCzgWA2HxTaRQ=="
response:
[goog-black-hash 1.180 update][mac=dRalfTU+bXwUhlk0NCGJtQ==]
+8070465bdf3b9c6ad6a89c32e8162ef1
+86fa593a025714f89d6bc8c9c5a191ac
+bbbd7247731cbb7ec1b3a5814ed4bc9d
*Note that there are tabs at the end of each line.
I'm unable to get a match. Please either point out where I'm going wrong, or just write the couple of lines of Python code necessary to do this!
FWIW, I expected to be able to do something like this:
>>> s = "+8070465bdf3b9c6ad6a89c32e8162ef1\t\n+86fa593a025714f89d6bc8c9c5a191ac\t\n+bbbd7247731cbb7ec1b3a5814ed4bc9d\t"
>>> c = "8eirwN1kTwCzgWA2HxTaRQ=="
>>> hashlib.md5("%s%s%s%s%s" % (c, ":coolgoog:", s, ":coolgoog:", c)).digest().encode("base64")
'qfb50mxpHrS82yTofPkcEg==\n'
But as you can see, 'qfb50mxpHrS82yTofPkcEg==\n' != 'dRalfTU+bXwUhlk0NCGJtQ=='.
| [
"Anders' answer gives the necessary information, but isn't that clear: the client key needs to be decoded before it is combined. (The example above is also missing a newline at the end of the final table data).\nSo the working code is:\n>>> s = \"+8070465bdf3b9c6ad6a89c32e8162ef1\\t\\n+86fa593a025714f89d6bc8c9c5a191ac\\t\\n+bbbd7247731cbb7ec1b3a5814ed4bc9d\\t\\n\"\n>>> c = \"8eirwN1kTwCzgWA2HxTaRQ==\".decode('base64') \n>>> hashlib.md5(\"%s%s%s%s%s\" % (c, \":coolgoog:\", s, \":coolgoog:\", c)).digest().encode(\"base64\")\n'dRalfTU+bXwUhlk0NCGJtQ==\\n'\n\n",
"c=\"8eirwN1kTwCzgWA2HxTaRQ==\".decode('base64')\n\n"
] | [
2,
1
] | [] | [] | [
"api",
"python",
"safe_browsing",
"verification"
] | stackoverflow_0000181994_api_python_safe_browsing_verification.txt |
Q:
MVC model structure in Python
I'm having problems structuring classes in the Model part of an MVC pattern in my Python app. No matter how I turn things, I keep running into circular imports. Here's what I have:
Model/__init__.py
should hold all Model class names so
I can do a "from Model import User"
e.g. from a Controller or a unit
test case
Model/Database.py
holds Database class
needs to import all Model classes to do ORM
initialization should be performed on first module import, i.e. no extra init calls or instantiations (all methods on Database class are @classmethods)
Model/User.py
contains User model class
needs access to Database class to do queries
should inherit from base class common to all Model classes to share functionality (database persistency methods, parameter validation code etc.)
I have yet to see a real world Python app employing MVC, so my approach is probably un-Pythonic (and possibly a language-agnostic mess on top of that...) - any suggestions on how to solve this?
Thanks, Simon
A:
There is an inconsistency in your specification. You say Database.py needs to import all Model classes to do ORM but then you say the User class needs access to the Database to do queries.
Think of these as layers of an API. The Database class provides an API (maybe object-oriented) to some physical persistence layer (such as DB-API 2.0). The Model classes, like User, use the Database layer to load and save their state. There is no reason for the Database.py class to import all the Model classes, and in fact you wouldn't want that because you'd have to modify Database.py each time you created a new Model class - which is a code smell.
A:
Generally, we put it all in one file. This isn't Java or C++.
Start with a single file until you get some more experience with Python. Unless your files are gargantuan, it will work fine.
For example, Django encourages this style, so copy their formula for success. One module for the model. A module for each application; each application imports a common model.
Your Database and superclass stuff can be in your __init__.py file, since it applies to the entire package. That may reduce some of the circularity.
A:
I think you have one issue that should be straightened. Circular references often result from a failure to achieve separation of concerns. In my opinion, the database and model modules shouldn't know much about each other, working against an API instead. In this case the database shouldn't directly reference any specific model classes but instead provide the functionality the model classes will need to function. The model in turn, should get a database reference (injected or requested) that it would use to query and persist itself.
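A bare-bones sketch of that injection idea (the names are illustrative, not a full ORM):
class Database(object):
    def query(self, sql, params=()):
        pass  # talk to the actual persistence layer here

class User(object):
    def __init__(self, db):
        # the database is injected, so this module never imports Database's module
        self.db = db

    def load(self, user_id):
        return self.db.query("SELECT * FROM users WHERE id = ?", (user_id,))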
| MVC model structure in Python | I'm having problems structuring classes in the Model part of an MVC pattern in my Python app. No matter how I turn things, I keep running into circular imports. Here's what I have:
Model/__init__.py
should hold all Model class names so
I can do a "from Model import User"
e.g. from a Controller or a unit
test case
Model/Database.py
holds Database class
needs to import all Model classes to do ORM
initialization should be performed on first module import, i.e. no extra init calls or instantiations (all methods on Database class are @classmethods)
Model/User.py
contains User model class
needs access to Database class to do queries
should inherit from base class common to all Model classes to share functionality (database persistency methods, parameter validation code etc.)
I have yet to see a real world Python app employing MVC, so my approach is probably un-Pythonic (and possibly a language-agnostic mess on top of that...) - any suggestions on how to solve this?
Thanks, Simon
| [
"There is an inconsistency in your specification. You say Database.py needs to import all Model classes to do ORM but then you say the User class need access to the Database to do queries.\nThink of these as layers of an API. The Database class provides an API (maybe object-oriented) to some physical persistence layer (such as DB-API 2.0). The Model classes, like User, use the Database layer to load and save their state. There is no reason for the Database.py class to import all the Model classes, and in fact you wouldn't want that because you'd have to modify Database.py each time you created a new Model class - which is a code smell.\n",
"Generally, we put it all in one file. This isn't Java or C++.\nStart with a single file until you get some more experience with Python. Unless your files are gargantuan, it will work fine. \nFor example, Django encourages this style, so copy their formula for success. One module for the model. A module for each application; each application imports a common model.\nYour Database and superclass stuff can be in your __init__.py file, since it applies to the entire package. That may reduce some of the circularity.\n",
"I think you have one issue that should be straightened. Circular references often result from a failure to achieve separation of concerns. In my opinion, the database and model modules shouldn't know much about each other, working against an API instead. In this case the database shouldn't directly reference any specific model classes but instead provide the functionality the model classes will need to function. The model in turn, should get a database reference (injected or requested) that it would use to query and persist itself.\n"
] | [
7,
3,
1
] | [] | [] | [
"model",
"model_view_controller",
"python",
"structure"
] | stackoverflow_0000185389_model_model_view_controller_python_structure.txt |
Q:
Regular expression to match start of filename and filename extension
What is the regular expression to match strings (in this case, file names) that start with 'Run' and have a filename extension of '.py'?
The regular expression should match any of the following:
RunFoo.py
RunBar.py
Run42.py
It should not match:
myRunFoo.py
RunBar.py1
Run42.txt
The SQL equivalent of what I am looking for is ... LIKE 'Run%.py' ....
A:
For a regular expression, you would use:
re.match(r'Run.*\.py$', filename)
A quick explanation:
. means match any character.
* means match any repetition of the previous character (hence .* means any sequence of chars)
\ is an escape to escape the explicit dot
$ indicates "end of the string", so we don't match "Run_foo.py.txt"
However, for this task, you're probably better off using simple string methods. ie.
filename.startswith("Run") and filename.endswith(".py")
Note: if you want case insensitivity (ie. matching "run.PY" as well as "Run.py", use the re.I option to the regular expression, or convert to a specific case (eg filename.lower()) before using string methods.
A:
I don't really understand why you're after a regular expression to solve this 'problem'. You're just after a way to find all .py files that start with 'Run'. So this is a simple solution that will work, without resorting to compiling an running a regular expression:
import os
for filename in os.listdir(dirname):
root, ext = os.path.splitext(filename)
if root.startswith('Run') and ext == '.py':
print filename
A:
Warning:
jobscry's answer ("^Run.?.py$") is incorrect (will not match "Run123.py", for example).
orlandu63's answer ("/^Run[\w]*?.py$/") will not match "RunFoo.Bar.py".
(I don't have enough reputation to comment, sorry.)
A:
/^Run.*\.py$/
Or, in python specifically:
import re
re.match(r"^Run.*\.py$", stringtocheck)
This will match "Runfoobar.py", but not "runfoobar.PY". To make it case insensitive, instead use:
re.match(r"^Run.*\.py$", stringtocheck, re.I)
A:
You don't need a regular expression, you can use glob, which takes wildcards e.g. Run*.py
For example, to get those files in your current directory...
import os, glob
files = glob.glob(os.path.join(os.getcwd(), "Run*.py"))
A:
If you write a slightly more complex regular expression, you can get an extra feature: extract the bit between "Run" and ".py":
>>> import re
>>> regex = '^Run(?P<name>.*)\.py$'
>>> m = re.match(regex, 'RunFoo.py')
>>> m.group('name')
'Foo'
(the extra bit is the parentheses and everything between them, except for '.*' which is as in Rob Howard's answer)
A:
This probably doesn't fully comply with file-naming standards, but here it goes:
/^Run[\w]*?\.py$/
A:
mabye:
^Run.*\.py$
just a quick try
| Regular expression to match start of filename and filename extension | What is the regular expression to match strings (in this case, file names) that start with 'Run' and have a filename extension of '.py'?
The regular expression should match any of the following:
RunFoo.py
RunBar.py
Run42.py
It should not match:
myRunFoo.py
RunBar.py1
Run42.txt
The SQL equivalent of what I am looking for is ... LIKE 'Run%.py' ....
| [
"For a regular expression, you would use:\nre.match(r'Run.*\\.py$')\n\nA quick explanation:\n\n. means match any character.\n* means match any repetition of the previous character (hence .* means any sequence of chars)\n\\ is an escape to escape the explicit dot\n$ indicates \"end of the string\", so we don't match \"Run_foo.py.txt\"\n\nHowever, for this task, you're probably better off using simple string methods. ie.\nfilename.startswith(\"Run\") and filename.endswith(\".py\")\n\nNote: if you want case insensitivity (ie. matching \"run.PY\" as well as \"Run.py\", use the re.I option to the regular expression, or convert to a specific case (eg filename.lower()) before using string methods.\n",
"I don't really understand why you're after a regular expression to solve this 'problem'. You're just after a way to find all .py files that start with 'Run'. So this is a simple solution that will work, without resorting to compiling an running a regular expression:\nimport os\nfor filename in os.listdir(dirname):\n root, ext = os.path.splitext(filename)\n if root.startswith('Run') and ext == '.py':\n print filename\n\n",
"Warning:\n\njobscry's answer (\"^Run.?.py$\") is incorrect (will not match \"Run123.py\", for example).\norlandu63's answer (\"/^Run[\\w]*?.py$/\") will not match \"RunFoo.Bar.py\".\n\n(I don't have enough reputation to comment, sorry.)\n",
"/^Run.*\\.py$/\n\nOr, in python specifically:\nimport re\nre.match(r\"^Run.*\\.py$\", stringtocheck)\n\nThis will match \"Runfoobar.py\", but not \"runfoobar.PY\". To make it case insensitive, instead use:\nre.match(r\"^Run.*\\.py$\", stringtocheck, re.I)\n\n",
"You don't need a regular expression, you can use glob, which takes wildcards e.g. Run*.py\nFor example, to get those files in your current directory...\nimport os, glob\nfiles = glob.glob( \"\".join([ os.getcwd(), \"\\\\Run*.py\"]) )\n\n",
"If you write a slightly more complex regular expression, you can get an extra feature: extract the bit between \"Run\" and \".py\":\n>>> import re\n>>> regex = '^Run(?P<name>.*)\\.py$'\n>>> m = re.match(regex, 'RunFoo.py')\n>>> m.group('name')\n'Foo'\n\n(the extra bit is the parentheses and everything between them, except for '.*' which is as in Rob Howard's answer)\n",
"This probably doesn't fully comply with file-naming standards, but here it goes:\n/^Run[\\w]*?\\.py$/\n\n",
"mabye:\n^Run.*\\.py$\n\njust a quick try\n"
] | [
56,
16,
15,
10,
6,
4,
0,
0
] | [] | [] | [
"python",
"regex",
"sql",
"sql_like"
] | stackoverflow_0000185378_python_regex_sql_sql_like.txt |
Q:
Extracting unique items from a list of mappings
Here's an interesting problem that calls for the most Pythonic solution. Suppose I have a list of mappings {'id': id, 'url': url}. Some ids in the list are duplicated, and I want to create a new list, with all the duplicates removed. I came up with the following function:
def unique_mapping(map):
d = {}
for res in map:
d[res['id']] = res['url']
return [{'id': id, 'url': d[id]} for id in d]
I suppose it's quite efficient. But is there a "more Pythonic" way ? Or perhaps a more efficient way ?
A:
Your example can be rewritten slightly to construct the first dictionary using a generator expression and to remove necessity of construction of another mappings. Just reuse the old ones:
def unique_mapping(mappings):
return dict((m['id'], m) for m in mappings).values()
Although this came out as a one-liner, I still think it's quite readable.
There are two things you have to keep in mind when using your original solution and mine:
the items will not always be returned in the same order they were originally
the later entry will overwrite previous entries with the same id
If you don't mind, then I suggest the solution above. In other case, this function preserves order and treats first-encountered ids with priority:
def unique_mapping(mappings):
addedIds = set()
for m in mappings:
mId = m['id']
if mId not in addedIds:
addedIds.add(mId)
yield m
You might need to call it with list(unique_mappings(mappings)) if you need a list and not a generator.
A:
There are a couple of things you could improve.
You're performing two loops, one over the original list, and then again over the result dict. You could build up your results in one step instead.
You could change to use a generator, to avoid constructing the whole list up-front. (Use list(unique_mapping(items)) to convert to a full list if you need it)
There's no need to store the value when just checking for duplicates, you can use a set instead.
You're recreating a dictionary for each element, rather than returning the original. This may actually be needed (eg. you're modifying them, and don't want to touch the original), but if not, its more efficient to use the dictionaries already created.
Here's an implementation:
def unique_mapping(items):
s = set()
for res in items:
if res['id'] not in s:
yield res
s.add(res['id'])
A:
I think this can be made simpler still. Dictionaries don't tolerate duplicate keys. Make your list of mappings into a dictionary of mappings. This will remove duplicates.
>>> someListOfDicts= [
{'url': 'http://a', 'id': 'a'},
{'url': 'http://b', 'id': 'b'},
{'url': 'http://c', 'id': 'a'}]
>>> dict( [(x['id'],x) for x in someListOfDicts ] ).values()
[{'url': 'http://c', 'id': 'a'}, {'url': 'http://b', 'id': 'b'}]
| Extracting unique items from a list of mappings | Here's an interesting problem that calls for the most Pythonic solution. Suppose I have a list of mappings {'id': id, 'url': url}. Some ids in the list are duplicated, and I want to create a new list, with all the duplicates removed. I came up with the following function:
def unique_mapping(map):
d = {}
for res in map:
d[res['id']] = res['url']
return [{'id': id, 'url': d[id]} for id in d]
I suppose it's quite efficient. But is there a "more Pythonic" way ? Or perhaps a more efficient way ?
| [
"Your example can be rewritten slightly to construct the first dictionary using a generator expression and to remove necessity of construction of another mappings. Just reuse the old ones:\ndef unique_mapping(mappings):\n return dict((m['id'], m) for m in mappings).values()\n\nAlthough this came out as a one-liner, I still think it's quite readable.\nThere are two things you have to keep in mind when using your original solution and mine:\n\nthe items will not always be returned in the same order they were originally\nthe later entry will overwrite previous entries with the same id\n\nIf you don't mind, then I suggest the solution above. In other case, this function preserves order and treats first-encountered ids with priority:\ndef unique_mapping(mappings):\n addedIds = set()\n for m in mappings:\n mId = m['id']\n if mId not in addedIds:\n addedIds.add(mId)\n yield m\n\nYou might need to call it with list(unique_mappings(mappings)) if you need a list and not a generator.\n",
"There are a couple of things you could improve.\n\nYou're performing two loops, one over the original dict, and then again over the result dict. You could build up your results in one step instead.\nYou could change to use a generator, to avoid constructing the whole list up-front. (Use list(unique_mapping(items)) to convert to a full list if you need it)\nThere's no need to store the value when just checking for duplicates, you can use a set instead.\nYou're recreating a dictionary for each element, rather than returning the original. This may actually be needed (eg. you're modifying them, and don't want to touch the original), but if not, its more efficient to use the dictionaries already created.\n\nHere's an implementation:\ndef unique_mapping(items):\n s = set()\n for res in items:\n if res['id'] not in s:\n yield res\n s.add(res['id'])\n\n",
"I think this can be made simpler still. Dictionaries don't tolerate duplicate keys. Make your list of mappings into a dictionary of mappings. This will remove duplicates.\n>>> someListOfDicts= [\n {'url': 'http://a', 'id': 'a'}, \n {'url': 'http://b', 'id': 'b'}, \n {'url': 'http://c', 'id': 'a'}]\n\n>>> dict( [(x['id'],x) for x in someListOfDicts ] ).values()\n\n[{'url': 'http://c', 'id': 'a'}, {'url': 'http://b', 'id': 'b'}]\n\n"
] | [
4,
2,
1
] | [] | [] | [
"duplicate_data",
"python",
"unique"
] | stackoverflow_0000186131_duplicate_data_python_unique.txt |
Q:
How to make parts of a website under SSL and the rest not?
I need to create a cherrypy main page that has a login area. I want the login area to be secure, but not the rest of the page. How can I do this in CherryPy?
Ideally, any suggestions will be compatible with http://web.archive.org/web/20170210040849/http://tools.cherrypy.org:80/wiki/AuthenticationAndAccessRestrictions
A:
This is commonly considered a bad idea. The primary reason is that it confuses most people, since the website identity markers appear in just about every current browser's URL area.
A:
Assuming you only want parts of the actual page to be secure, you should create an iframe pointing to a HTTPS source. However, this shows a "secure and non-secure items on page" warning to the user.
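To make the iframe idea concrete, here is a minimal CherryPy 3 sketch; the HTTPS login URL is an assumption, and in practice the login handler would be served by a separate SSL-enabled listener:
import cherrypy

class Root(object):
    def index(self):
        # Page served over plain HTTP; only the login form inside the
        # iframe comes from an HTTPS source (hypothetical URL below).
        return ('<html><body><h1>Public page</h1>'
                '<iframe src="https://secure.example.com/login"></iframe>'
                '</body></html>')
    index.exposed = True

cherrypy.quickstart(Root())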
| How to make parts of a website under SSL and the rest not? | I need to create a cherrypy main page that has a login area. I want the login area to be secure, but not the rest of the page. How can I do this in CherryPy?
Ideally, any suggestions will be compatible with http://web.archive.org/web/20170210040849/http://tools.cherrypy.org:80/wiki/AuthenticationAndAccessRestrictions
| [
"This is commonly considered a bad idea. The primary reason is that it confuses most people due to the website identity markers appearing in just about every current browsers url area.\n",
"Assuming you only want parts of the actual page to be secure, you should create an iframe pointing to a HTTPS source. However, this shows a \"secure and non-secure items on page\" warning to the user.\n"
] | [
1,
1
] | [] | [] | [
"cherrypy",
"python",
"ssl"
] | stackoverflow_0000187434_cherrypy_python_ssl.txt |
Q:
Configuration file with list of key-value pairs in python
I have a python script that analyzes a set of error messages and checks for each message whether it matches a certain pattern (regular expression) in order to group these messages. For example, "file x does not exist" and "file y does not exist" would match "file .* does not exist" and be counted as two occurrences of the "file not found" category.
As the number of patterns and categories is growing, I'd like to put these "regular expression/display string" pairs in a configuration file, basically a dictionary serialization of some sort.
I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...).
Do you have any idea of what could be a good way of accomplishing this?
Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code.
A:
I sometimes just write a python module (i.e. file) called config.py or something with following contents:
config = {
'name': 'hello',
'see?': 'world'
}
this can then be 'read' like so:
from config import config
config['name']
config['see?']
easy.
A:
You have two decent options:
Python standard config file format
using ConfigParser
YAML using a library like PyYAML
The standard Python configuration files look like INI files with [sections] and key : value or key = value pairs. The advantages to this format are:
No third-party libraries necessary
Simple, familiar file format.
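As a rough sketch of what the ConfigParser route might look like for the regexp/message pairs (the file name and section name here are made up; note that ConfigParser lowercases option names by default):
# messages.ini:
# [patterns]
# file .* does not exist = file not found
# user .* not found = authorization error

from ConfigParser import ConfigParser

parser = ConfigParser()
parser.read('messages.ini')
patterns = parser.items('patterns')  # list of (regexp, message) tuples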
YAML is different in that it is designed to be a human friendly data serialization format rather than specifically designed for configuration. It is very readable and gives you a couple different ways to represent the same data. For your problem, you could create a YAML file that looks like this:
file .* does not exist : file not found
user .* not found : authorization error
Or like this:
{ file .* does not exist: file not found,
user .* not found: authorization error }
Using PyYAML couldn't be simpler:
import yaml
errors = yaml.load(open('my.yaml'))
At this point errors is a Python dictionary with the expected format. YAML is capable of representing more than dictionaries: if you prefer a list of pairs, use this format:
-
- file .* does not exist
- file not found
-
- user .* not found
- authorization error
Or
[ [file .* does not exist, file not found],
[user .* not found, authorization error]]
Which will produce a list of lists when yaml.load is called.
One advantage of YAML is that you could use it to export your existing, hard-coded data out to a file to create the initial version, rather than cut/paste plus a bunch of find/replace to get the data into the right format.
The YAML format will take a little more time to get familiar with, but using PyYAML is even simpler than using ConfigParser, with the advantage that you have more options regarding how your data is represented using YAML.
Either one sounds like it will fit your current needs; ConfigParser will be easier to start with, while YAML gives you more flexibility in the future if your needs expand.
Best of luck!
A:
I've heard that ConfigObj is easier to work with than ConfigParser. It is used by a lot of big projects, IPython, Trac, Turbogears, etc...
From their introduction:
ConfigObj is a simple but powerful config file reader and writer: an ini file round tripper. Its main feature is that it is very easy to use, with a straightforward programmer's interface and a simple syntax for config files. It has lots of other features though :
Nested sections (subsections), to any level
List values
Multiple line values
String interpolation (substitution)
Integrated with a powerful validation system
including automatic type checking/conversion
repeated sections
and allowing default values
When writing out config files, ConfigObj preserves all comments and the order of members and sections
Many useful methods and options for working with configuration files (like the 'reload' method)
Full Unicode support
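A minimal sketch of reading the same kind of regexp/message pairs with ConfigObj (assumes ConfigObj is installed; the file name is invented):
from configobj import ConfigObj

# patterns.ini holds lines like: file .* does not exist = file not found
config = ConfigObj('patterns.ini')
for regexp, message in config.items():
    print regexp, '->', message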
A:
I think you want the ConfigParser module in the standard library. It reads and writes INI style files. The examples and documentation in the standard documentation I've linked to are very comprehensive.
A:
If you are the only one that has access to the configuration file, you can use a simple, low-level solution. Keep the "dictionary" in a text file as a list of tuples (regexp, message) exactly as if it was a python expression:
[
("file .* does not exist", "file not found"),
("user .* not authorized", "authorization error")
]
In your code, load it, then eval it, and compile the regexps in the result:
f = open("messages.py")
messages = eval(f.read()) # caution: you must be sure of what's in that file
f.close()
messages = [(re.compile(r), m) for (r,m) in messages]
and you end up with a list of tuples (compiled_regexp, message).
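Given the update about not wanting a file with possibly arbitrary code, a safer variant on Python 2.6+ is ast.literal_eval, which only accepts literal structures (strings, numbers, tuples, lists, dicts) and raises an error on anything else:
import ast
import re

f = open("messages.py")
messages = ast.literal_eval(f.read())  # rejects anything but literals
f.close()
messages = [(re.compile(r), m) for (r, m) in messages]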
A:
I typically do as Daren suggested, just make your config file a Python script:
patterns = {
'file .* does not exist': 'file not found',
'user .* not found': 'authorization error',
}
Then you can use it as:
import config
for pattern in config.patterns:
if re.search(pattern, log_message):
print config.patterns[pattern]
This is what Django does with their settings file, by the way.
| Configuration file with list of key-value pairs in python | I have a python script that analyzes a set of error messages and checks for each message if it matches a certain pattern (regular expression) in order to group these messages. For example "file x does not exist" and "file y does not exist" would match "file .* does not exist" and be accounted as two occurrences of "file not found" category.
As the number of patterns and categories is growing, I'd like to put these couples "regular expression/display string" in a configuration file, basically a dictionary serialization of some sort.
I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...).
Do you have any idea of what could be a good way of accomplishing this?
Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code.
| [
"I sometimes just write a python module (i.e. file) called config.py or something with following contents:\nconfig = {\n 'name': 'hello',\n 'see?': 'world'\n}\n\nthis can then be 'read' like so:\nfrom config import config\nconfig['name']\nconfig['see?']\n\neasy.\n",
"You have two decent options:\n\nPython standard config file format\nusing ConfigParser\nYAML using a library like PyYAML\n\nThe standard Python configuration files look like INI files with [sections] and key : value or key = value pairs. The advantages to this format are:\n\nNo third-party libraries necessary\nSimple, familiar file format.\n\nYAML is different in that it is designed to be a human friendly data serialization format rather than specifically designed for configuration. It is very readable and gives you a couple different ways to represent the same data. For your problem, you could create a YAML file that looks like this:\nfile .* does not exist : file not found\nuser .* not found : authorization error\n\nOr like this:\n{ file .* does not exist: file not found,\n user .* not found: authorization error }\n\nUsing PyYAML couldn't be simpler:\nimport yaml\n\nerrors = yaml.load(open('my.yaml'))\n\nAt this point errors is a Python dictionary with the expected format. YAML is capable of representing more than dictionaries: if you prefer a list of pairs, use this format:\n-\n - file .* does not exist \n - file not found\n-\n - user .* not found\n - authorization error\n\nOr\n[ [file .* does not exist, file not found],\n [user .* not found, authorization error]]\n\nWhich will produce a list of lists when yaml.load is called.\nOne advantage of YAML is that you could use it to export your existing, hard-coded data out to a file to create the initial version, rather than cut/paste plus a bunch of find/replace to get the data into the right format.\nThe YAML format will take a little more time to get familiar with, but using PyYAML is even simpler than using ConfigParser with the advantage is that you have more options regarding how your data is represented using YAML.\nEither one sounds like it will fit your current needs, ConfigParser will be easier to start with while YAML gives you more flexibilty in the future, if your needs expand.\nBest of luck!\n",
"I've heard that ConfigObj is easier to work with than ConfigParser. It is used by a lot of big projects, IPython, Trac, Turbogears, etc... \nFrom their introduction:\nConfigObj is a simple but powerful config file reader and writer: an ini file round tripper. Its main feature is that it is very easy to use, with a straightforward programmer's interface and a simple syntax for config files. It has lots of other features though :\n\nNested sections (subsections), to any level\nList values\nMultiple line values\nString interpolation (substitution)\nIntegrated with a powerful validation system\n\n\nincluding automatic type checking/conversion\nrepeated sections\nand allowing default values\n\nWhen writing out config files, ConfigObj preserves all comments and the order of members and sections\nMany useful methods and options for working with configuration files (like the 'reload' method)\nFull Unicode support\n\n",
"I think you want the ConfigParser module in the standard library. It reads and writes INI style files. The examples and documentation in the standard documentation I've linked to are very comprehensive.\n",
"If you are the only one that has access to the configuration file, you can use a simple, low-level solution. Keep the \"dictionary\" in a text file as a list of tuples (regexp, message) exactly as if it was a python expression:\n[\n(\"file .* does not exist\", \"file not found\"),\n(\"user .* not authorized\", \"authorization error\")\n]\n\nIn your code, load it, then eval it, and compile the regexps in the result:\nf = open(\"messages.py\")\nmessages = eval(f.read()) # caution: you must be sure of what's in that file\nf.close()\nmessages = [(re.compile(r), m) for (r,m) in messages]\n\nand you end up with a list of tuples (compiled_regexp, message).\n",
"I typically do as Daren suggested, just make your config file a Python script:\npatterns = {\n 'file .* does not exist': 'file not found',\n 'user .* not found': 'authorization error',\n}\n\nThen you can use it as:\nimport config\n\nfor pattern in config.patterns:\n if re.search(pattern, log_message):\n print config.patterns[pattern]\n\nThis is what Django does with their settings file, by the way.\n"
] | [
38,
36,
8,
4,
4,
3
] | [] | [] | [
"configuration",
"python",
"serialization"
] | stackoverflow_0000186916_configuration_python_serialization.txt |
Q:
Django admin interface inlines placement
I want to be able to place an inline in between two different fields in a fieldset. You can already do this with foreign keys; I figured that inlining the class I wanted and defining it to get extra forms would do the trick, but apparently I get a:
"class x" has no ForeignKey to "class y"
error. Is this not something that is supported in Django 1.0? If so, how would I go about fixing the problem, if there isn't a pre-existing solution?
in models.py
class Place(models.Model):
name = models.CharField(max_length=50)
address = models.CharField(max_length=80)
class Owner(models.Model):
name = models.CharField(max_length=100)
place = models.ForeignKey(Place)
background = models.TextField()
license_expiration = models.DateTimeField('license expiration')
in admin.py
class PlaceInline(admin.TabularInline):
model = Place
extra = 5
class OwnerAdmin(admin.ModelAdmin):
fieldsets = [
(None, {'fields': ['background','place', 'license_expiration']}),
]
inlines = [PlaceInline]
A:
It seems to be impossible in the Django admin site itself (you should not include inlined fields in "fields" at all), but you can use JS to move inlined fields wherever you want.
| Django admin interface inlines placement | I want to be able to place an inline in between two different fields in a fieldset. You can already do this with foreign keys; I figured that inlining the class I wanted and defining it to get extra forms would do the trick, but apparently I get a:
"class x" has no ForeignKey to "class y"
error. Is this not something that is supported in Django 1.0? If so, how would I go about fixing the problem, if there isn't a pre-existing solution?
in models.py
class Place(models.Model):
name = models.CharField(max_length=50)
address = models.CharField(max_length=80)
class Owner(models.Model):
name = models.CharField(max_length=100)
place = models.ForeignKey(Place)
background = models.TextField()
license_expiration = models.DateTimeField('license expiration')
in admin.py
class PlaceInline(admin.TabularInline):
model = Place
extra = 5
class OwnerAdmin(admin.ModelAdmin):
fieldsets = [
(None, {'fields': ['background','place', 'license_expiration']}),
]
inlines = [PlaceInline]
| [
"It seems to be impossible in Django admin site itself (you should not include inlined fields in \"fields\" at all) but you can use JS to move inlined fields wherever you want.\n"
] | [
3
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0000188451_django_django_admin_python.txt |
Q:
OCSP libraries for python / java / c?
Going back to my previous question on OCSP, does anybody know of "reliable" OCSP libraries for Python, Java and C?
I need "client" OCSP functionality, as I'll be checking the status of Certs against an OCSP responder, so responder functionality is not that important.
Thanks
A:
Java 5 has support for revocation checking via OCSP built in. If you want to build an OCSP responder, or have finer control over revocation checking, check out Bouncy Castle. You can use this to implement your own CertPathChecker that, for example, uses non-blocking I/O in its status checks.
A:
Have you checked pyOpenSSL? I am sure OpenSSL supports OCSP, and the Python binding may support it too.
A:
OpenSSL is the most widely used product for OCSP in C. It's quite reliable, although incredibly obtuse. I'd recommend looking at apps/ocsp.c for a pretty good example of how to make OCSP requests and validate responses.
Vista and Server 2008 have built-in OCSP support in CAPI; check out CertVerifyRevocation.
| OCSP libraries for python / java / c? | Going back to my previous question on OCSP, does anybody know of "reliable" OCSP libraries for Python, Java and C?
I need "client" OCSP functionality, as I'll be checking the status of Certs against an OCSP responder, so responder functionality is not that important.
Thanks
| [
"Java 5 has support of revocation checking via OCSP built in. If you want to build an OCSP responder, or have finer control over revocation checking, check out Bouncy Castle. You can use this to implement your own CertPathChecker that, for example, uses non-blocking I/O in its status checks.\n",
"Have you check pyOpenSSL.. am sure openssl supports ocsp and python binding may support it\n",
"OpenSSL is the most widely used product for OCSP in C. It's quite reliable, although incredibly obtuse. I'd recommend looking at apps/ocsp.c for a pretty good example of how to make OCSP requests and validate responses.\nVista and Server 2008 have built-in OCSP support in CAPI; check out CertVerifyRevocation.\n"
] | [
3,
1,
1
] | [] | [] | [
"c",
"java",
"ocsp",
"python"
] | stackoverflow_0000143515_c_java_ocsp_python.txt |
Q:
Resources for Python Programmer
I have written a lot of code in Python, and I am very used to the syntax, object structure, and so forth of Python because of it.
What is the best online guide or resource site to provide me with the basics, as well as a comparison or lookup guide with equivalent functions/features in VBA versus Python.
For example, I am having trouble equating a simple List in Python to VBA code. I am also having issues with data structures, such as dictionaries, and so forth.
What resources or tutorials are available that will provide me with a guide to porting python functionality to VBA, or just adapting to the VBA syntax from a strong OOP language background?
A:
VBA is quite different from Python, so you should read at least the "Microsoft Visual Basic Help" as provided by the application you are going to use (Excel, Access…).
Generally speaking, VBA has the equivalent of Python modules; they're called "Libraries", and they are not as easy to create as Python modules. I mention them because Libraries will provide you with higher-level types that you can use.
As a start-up nudge, there are two types that can be substituted for list and dict.
list
VBA has the type Collection. It's available by default (it's in the library VBA). So you just do a
dim alist as New Collection
and from then on, you can use its methods/properties:
.Add(item) ( list.append(item) ),
.Count ( len(list) ),
.Item(i) ( list[i] ) and
.Remove(i) ( del list[i] ). Very primitive, but it's there.
You can also use the VBA Array type, which like python arrays are lists of same-type items, and unlike python arrays, you need to do ReDim to change their size (i.e. you can't just append and remove items)
dict
To have a dictionary-like object, you should add the Scripting library to your VBA project¹. Afterwards, you can
Dim adict As New Dictionary
and then use its properties/methods:
.Add(key, item) ( dict[key] = item ),
.Exists(key) ( dict.has_key[key] ),
.Items() ( dict.values() ),
.Keys() ( dict.keys() ),
and others which you will find in the Object Browser².
¹ Open VBA editor (Alt+F11). Go to Tools→References, and check the "Microsoft Scripting Runtime" in the list.
² To see the Object Browser, in VBA editor press F2 (or View→Object Browser).
A:
This tutorial isn't 'for python programmers' but I think it's a pretty good VBA resource:
http://www.vbtutor.net/VBA/vba_tutorial.html
This site goes over a real-world example using lists:
http://www.ozgrid.com/VBA/count-of-list.htm
A:
Probably not exactly what you are looking for but this is a decent VBA site if you have some programming background. It's not a list of this = that but more of a problem/solution
http://www.mvps.org/access/toc.htm
A:
VBA, as in what was implemented as part of Office 2000, 2003 and VB6, has been deprecated in favor of .Net technologies. Unless you are maintaining old code, stick to Python, or maybe even go with IronPython for .Net. If you go IronPython, you may have to write some C#/VB.Net helper classes here and there when working with various COM objects such as the ones in Office, but otherwise it is supposed to be pretty functional and nice. Just about all of the Python goodness is over in IronPython. If you are just doing some COM scripting, take a look at what ActiveState puts out. I've used it in the past to do some COM work, specifically using Python as an Active Scripting language (classic ASP).
A:
While I'm not a Python programmer, you might be able to run VSTO with Iron Python and Visual Studio. At least that way, you won't have to learn VBA syntax.
A:
I think the equivalent of lists would be arrays in terms of common usage.
Where it is common to use a list in Python you would normally use an array in VB.
However, VB arrays are very inflexible compared to Python lists and are more like arrays in C.
' An array with 3 elements
'' The number inside the brackets represents the upper bound index
'' ie. the last index you can access
'' So a(2) means you can access a(0), a(1), and a(2) '
Dim a(2) As String
a(0) = "a"
a(1) = "b"
a(2) = "c"
Dim i As Integer
For i = 0 To UBound(a)
MsgBox a(i)
Next
Note that arrays in VB cannot be resized if you declare the initial number of elements.
' Declare a "dynamic" array '
Dim a() As Variant
' Set the array size to 3 elements '
ReDim a(2)
a(0) = 1
a(1) = 2
' Set the array size to 2 elements
'' If you don't use Preserve then you will lose
'' the existing data in the array '
ReDim Preserve a(1)
You will also come across various collections in VB.
eg. http://devguru.com/technologies/vbscript/14045.asp
Dictionaries in VB can be created like this:
Set cars = CreateObject("Scripting.Dictionary")
cars.Add "a", "Alvis"
cars.Add "b", "Buick"
cars.Add "c", "Cadillac"
http://devguru.com/technologies/vbscript/13992.asp
A:
"I'm having trouble equating a simple
List in Python with something in
VBA..."
This isn't the best way to learn the language. In a way, you're giving up large pieces of Python because there isn't something like it in VBA.
If there's nothing like a Python list in VBA, then -- well -- it's something new. And that newness is precisely where much of Python's value lies.
The first parts of the Python Built-in Types may not map well to VBA. That makes learning appear daunting. But limiting yourself to just things that appear in VBA tends to prevent learning.
A:
This may sound weird, but since I learned data structures in C++, I had a really hard time figuring out how to create them without pointers. There's something about VB6/VBA that makes them feel unnatural to me. Anyway, I came across this section of MSDN that has several data structure examples written in VBA. I found it useful.
Creating Dynamic Data Structures Using Class Modules
| Resources for Python Programmer | I have written a lot of code in Python, and I am very used to the syntax, object structure, and so forth of Python because of it.
What is the best online guide or resource site to provide me with the basics, as well as a comparison or lookup guide with equivalent functions/features in VBA versus Python.
For example, I am having trouble equating a simple List in Python to VBA code. I am also having issues with data structures, such as dictionaries, and so forth.
What resources or tutorials are available that will provide me with a guide to porting python functionality to VBA, or just adapting to the VBA syntax from a strong OOP language background?
| [
"VBA is quite different from Python, so you should read at least the \"Microsoft Visual Basic Help\" as provided by the application you are going to use (Excel, Access…).\nGenerally speaking, VBA has the equivalent of Python modules; they're called \"Libraries\", and they are not as easy to create as Python modules. I mention them because Libraries will provide you with higher-level types that you can use.\nAs a start-up nudge, there are two types that can be substituted for list and dict.\nlist\nVBA has the type Collection. It's available by default (it's in the library VBA). So you just do a\ndim alist as New Collection\nand from then on, you can use its methods/properties:\n\n.Add(item) ( list.append(item) ),\n.Count ( len(list) ),\n.Item(i) ( list[i] ) and\n.Remove(i) ( del list[i] ). Very primitive, but it's there.\n\nYou can also use the VBA Array type, which like python arrays are lists of same-type items, and unlike python arrays, you need to do ReDim to change their size (i.e. you can't just append and remove items)\ndict\nTo have a dictionary-like object, you should add the Scripting library to your VBA project¹. Afterwards, you can\nDim adict As New Dictionary\nand then use its properties/methods:\n\n.Add(key, item) ( dict[key] = item ),\n.Exists(key) ( dict.has_key[key] ),\n.Items() ( dict.values() ),\n.Keys() ( dict.keys() ),\nand others which you will find in the Object Browser².\n\n¹ Open VBA editor (Alt+F11). Go to Tools→References, and check the \"Microsoft Scripting Runtime\" in the list.\n² To see the Object Browser, in VBA editor press F2 (or View→Object Browser).\n",
"This tutorial isn't 'for python programmers' but I thinkit's a pretty good vba resource:\nhttp://www.vbtutor.net/VBA/vba_tutorial.html\nThis site goes over a real-world example using lists:\nhttp://www.ozgrid.com/VBA/count-of-list.htm\n",
"Probably not exactly what you are looking for but this is a decent VBA site if you have some programming background. It's not a list of this = that but more of a problem/solution\nhttp://www.mvps.org/access/toc.htm\n",
"VBA as in what was implemented as part of Office 2000, 2003 and VB6 have been deprecated in favor of .Net technologies. Unless you are maintaining old code stick to python or maybe even go with IronPython for .Net. If you go IronPython, you may have to write some C#/VB.Net helper classes here and there when working with various COM objects such as ones in Office but otherwise it is supposed to be pretty functional and nice. Just about all of the Python goodness is over in IronPython. If you are just doing some COM scripting take a look at what ActiveState puts out. I've used it in the past to do some COM work. Specifically using Python as an Active Scripting language (classic ASP).\n",
"While I'm not a Python programmer, you might be able to run VSTO with Iron Python and Visual Studio. At least that way, you won't have to learn VBA syntax.\n",
"I think the equivalent of lists would be arrays in terms of common usage.\nWhere it is common to use a list in Python you would normally use an array in VB.\nHowever, VB arrays are very inflexible compared to Python lists and are more like arrays in C.\n' An array with 3 elements\n'' The number inside the brackets represents the upper bound index\n'' ie. the last index you can access\n'' So a(2) means you can access a(0), a(1), and a(2) '\n\nDim a(2) As String\na(0) = \"a\"\na(1) = \"b\"\na(2) = \"c\"\n\nDim i As Integer\nFor i = 0 To UBound(a)\n MsgBox a(i)\nNext\n\nNote that arrays in VB cannot be resized if you declare the initial number of elements.\n' Declare a \"dynamic\" array '\n\nDim a() As Variant\n\n' Set the array size to 3 elements '\n\nReDim a(2)\na(0) = 1\na(1) = 2\n\n' Set the array size to 2 elements\n'' If you dont use Preserve then you will lose\n'' the existing data in the array '\n\nReDim Preserve a(1)\n\nYou will also come across various collections in VB.\neg. http://devguru.com/technologies/vbscript/14045.asp\nDictionaries in VB can be created like this:\nSet cars = CreateObject(\"Scripting.Dictionary\")\ncars.Add \"a\", \"Alvis\"\ncars.Add \"b\", \"Buick\"\ncars.Add \"c\", \"Cadillac\" \n\nhttp://devguru.com/technologies/vbscript/13992.asp\n",
"\n\"I'm having trouble equating a simple\n List in Python with something in\n VBA...\"\n\nThis isn't the best way to learn the language. In a way, you're giving up large pieces of Python because there isn't something like it in VBA.\nIf there's nothing like a Python list in VBA, then -- well -- it's something new. And new would be the significant value in parts of Python.\nThe first parts of the Python Built-in Types may not map well to VBA. That makes learning appear daunting. But limiting yourself to just things that appear in VBA tends to prevent learning.\n",
"This may sound weird, but since I learned data structures in C++, I had a really hard time figuring out how to create them without pointers. There's something about VB6/VBA that makes them feel unnatural to me. Anyway, I came across this section of MSDN that has several data structure examples written in VBA. I found it useful.\nCreating Dynamic Data Structures Using Class Modules\n"
] | [
25,
2,
2,
2,
2,
1,
0,
0
] | [] | [] | [
"porting",
"python",
"vba"
] | stackoverflow_0000076882_porting_python_vba.txt |
Q:
Python - How to use Conch to create a Virtual SSH server
I'm looking at creating a server in python that I can run, and will work as an SSH server. This will then let different users login, and act as if they'd logged in normally, but only had access to one command.
I want to do this so that I can have a system that I can add users to without having to create a system-wide account, so that they can then, for example, commit to a VCS branch, or similar.
While I can work out how to do this with Conch to get it to a "custom" shell... I can't figure out how to make it so that the SSH stream works as if it were a real one (I'd prefer to limit it to /bin/bzr so that bzr+ssh will work).
It needs to be in Python (which I can get to do the authorisation), but I don't know how to do the linking to the app.
This needs to be in Python to work within the app it's designed for, and to be usable by those without access to add new users.
A:
When you write a Conch server, you can control what happens when the client makes a shell request by implementing ISession.openShell. The Conch server will request IConchUser from your realm and then adapt the resulting avatar to ISession to call openShell on it if necessary.
ISession.openShell's job is to take the transport object passed to it and associate it with a protocol to interpret the bytes received from it and, if desired, to write bytes to it to be sent to the client.
In an unfortunate twist, the object passed to openShell which represents the transport is actually an IProcessProtocol provider. This means that you need to call makeConnection on it, passing an IProcessTransport provider. When data is received from the client, the IProcessProtocol will call writeToChild on the transport you pass to makeConnection. When you want to send data to the client, you should call childDataReceived on it.
To see the exact behavior, I suggest reading the implementation of the IProcessProtocol that is passed in. Don't depend on anything that's not part of IProcessProtocol, but seeing the implementation can make it easier to understand what's going on.
You may also want to look at the implementation of the normal shell-creation to get a sense of what you're aiming for. This will give you a clue about how to associate the stdio of the bzr child process you launch with the SSH channel.
| Python - How to use Conch to create a Virtual SSH server | I'm looking at creating a server in python that I can run, and will work as an SSH server. This will then let different users login, and act as if they'd logged in normally, but only had access to one command.
I want to do this so that I can have a system where I can add users to without having to create a system wide account, so that they can then, for example, commit to a VCS branch, or similar.
While I can work out how to do this with Conch to get it to a "custom" shell... I can't figure out how to make it so that the SSH stream works as if it were a real one (I'd prefer to limit it to /bin/bzr so that bzr+ssh will work).
It needs to be in python (which i can get to do the authorisation) but don't know how to do the linking to the app.
This needs to be in python to work within the app its designed for, and to be able to be used for those without access to add new users
| [
"When you write a Conch server, you can control what happens when the client makes a shell request by implementing ISession.openShell. The Conch server will request IConchUser from your realm and then adapt the resulting avatar to ISession to call openShell on it if necessary.\nISession.openShell's job is to take the transport object passed to it and associate it with a protocol to interpret the bytes received from it and, if desired, to write bytes to it to be sent to the client.\nIn an unfortunate twist, the object passed to openShell which represents the transport is actually an IProcessProtocol provider. This means that you need to call makeConnection on it, passing an IProcessTransport provider. When data is received from the client, the IProcessProtocol will call writeToChild on the transport you pass to makeConnection. When you want to send data to the client, you should call childDataReceived on it.\nTo see the exact behavior, I suggest reading the implementation of the IProcessProtocol that is passed in. Don't depend on anything that's not part of IProcessProtocol, but seeing the implementation can make it easier to understand what's going on.\nYou may also want to look at the implementation of the normal shell-creation to get a sense of what you're aiming for. This will give you a clue about how to associate the stdio of the bzr child process you launch with the SSH channel.\n"
] | [
7
] | [
"While Python really is my favorite language, I think you need not create you own server for this. When you look at the OpenSSH Manualpage for sshd you'll find the \"command\" options for the authorized keys file that lets you define a specific command to run on login.\nUsing keys, you can use one system account to allow many user to log in, just put their public keys in the account's authorized keys file.\nWe are using this to create SSH tunnels for SVN and it works just great.\n"
] | [
-2
] | [
"python",
"twisted"
] | stackoverflow_0000186316_python_twisted.txt |
Q:
Base-2 (Binary) Representation Using Python
Building on How Do You Express Binary Literals in Python, I was thinking about sensible, intuitive ways to do that Programming 101 chestnut of displaying integers in base-2 form. This is the best I came up with, but I'd like to replace it with a better algorithm, or at least one that should have screaming-fast performance.
def num_bin(N, places=8):
def bit_at_p(N, p):
''' find the bit at place p for number n '''
two_p = 1 << p # 2 ^ p, using bitshift, will have exactly one
# bit set, at place p
x = N & two_p # binary composition, will be one where *both* numbers
# have a 1 at that bit. this can only happen
# at position p. will yield two_p if N has a 1 at
# bit p
return int(x > 0)
bits = ( bit_at_p(N,x) for x in xrange(places))
return "".join( (str(x) for x in bits) )
# or, more consisely
# return "".join([str(int((N & 1 << x)>0)) for x in xrange(places)])
A:
For best efficiency, you generally want to process more than a single bit at a time.
You can use a simple method to get a fixed width binary representation. eg.
def _bin(x, width):
return ''.join(str((x>>i)&1) for i in xrange(width-1,-1,-1))
_bin(x, 8) will now give a zero padded representation of x's lower 8 bits. This can be used to build a lookup table, allowing your converter to process 8 bits at a time (or more if you want to devote the memory to it).
_conv_table = [_bin(x,8) for x in range(256)]
Then you can use this in your real function, stripping off leading zeroes when returning it. I've also added handling for signed numbers, as without it you will get an infinite loop (Negative integers conceptually have an infinite number of set sign bits.)
def bin(x):
if x == 0:
return '0' #Special case: Don't strip leading zero if no other digits
elif x < 0:
sign='-'
x*=-1
else:
sign = ''
l=[]
while x:
l.append(_conv_table[x & 0xff])
x >>= 8
return sign + ''.join(reversed(l)).lstrip("0")
[Edit] Changed code to handle signed integers.
[Edit2] Here are some timing figures of the various solutions. bin is the function above, constantin_bin is from Constantin's answer and num_bin is the original version. Out of curiosity, I also tried a 16 bit lookup table variant of the above (bin16 below), and tried out Python3's builtin bin() function. All timings were for 100000 runs using an 01010101 bit pattern.
Num Bits: 8 16 32 64 128 256
---------------------------------------------------------------------
bin 0.544 0.586 0.744 1.942 1.854 3.357
bin16 0.542 0.494 0.592 0.773 1.150 1.886
constantin_bin 2.238 3.803 7.794 17.869 34.636 94.799
num_bin 3.712 5.693 12.086 32.566 67.523 128.565
Python3's bin 0.079 0.045 0.062 0.069 0.212 0.201
As you can see, when processing long values, using large chunks really pays off, but nothing beats the low-level C code of python3's builtin (which bizarrely seems consistently faster at 256 bits than 128!). Using a 16 bit lookup table improves things, but probably isn't worth it unless you really need it, as it uses up a large chunk of memory, and can introduce a small but noticeable startup delay to precompute the table.
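As a footnote, Python 2.6 also gained a builtin bin() that returns a '0b'-prefixed string, so on 2.6+ a thin wrapper gives you the same kind of output with no lookup table at all:
def builtin_bin(x):
    # bin(10) -> '0b1010', bin(-10) -> '-0b1010'
    return bin(x).replace('0b', '', 1)

print builtin_bin(0x55)   # 1010101
print builtin_bin(-0x55)  # -1010101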
A:
Not screaming-fast, but straightforward:
>>> def bin(x):
... sign = '-' if x < 0 else ''
... x = abs(x)
... bits = []
... while x:
... x, rmost = divmod(x, 2)
... bits.append(rmost)
... return sign + ''.join(str(b) for b in reversed(bits or [0]))
It is also faster than num_bin:
>>> import timeit
>>> t_bin = timeit.Timer('bin(0xf0)', 'from __main__ import bin')
>>> print t_bin.timeit(number=100000)
4.19453350997
>>> t_num_bin = timeit.Timer('num_bin(0xf0)', 'from __main__ import num_bin')
>>> print t_num_bin.timeit(number=100000)
4.70694716882
Even more, it actually works correctly (for my definition of "correctness" :)):
>>> bin(1)
'1'
>>> num_bin(1)
'10000000'
| Base-2 (Binary) Representation Using Python | Building on How Do You Express Binary Literals in Python, I was thinking about sensible, intuitive ways to do that Programming 101 chestnut of displaying integers in base-2 form. This is the best I came up with, but I'd like to replace it with a better algorithm, or at least one that should have screaming-fast performance.
def num_bin(N, places=8):
def bit_at_p(N, p):
''' find the bit at place p for number n '''
two_p = 1 << p # 2 ^ p, using bitshift, will have exactly one
# bit set, at place p
x = N & two_p # binary composition, will be one where *both* numbers
# have a 1 at that bit. this can only happen
# at position p. will yield two_p if N has a 1 at
# bit p
return int(x > 0)
bits = ( bit_at_p(N,x) for x in xrange(places))
return "".join( (str(x) for x in bits) )
# or, more consisely
# return "".join([str(int((N & 1 << x)>0)) for x in xrange(places)])
| [
"For best efficiency, you generally want to process more than a single bit at a time.\nYou can use a simple method to get a fixed width binary representation. eg.\ndef _bin(x, width):\n return ''.join(str((x>>i)&1) for i in xrange(width-1,-1,-1))\n\n_bin(x, 8) will now give a zero padded representation of x's lower 8 bits. This can be used to build a lookup table, allowing your converter to process 8 bits at a time (or more if you want to devote the memory to it).\n_conv_table = [_bin(x,8) for x in range(256)]\n\nThen you can use this in your real function, stripping off leading zeroes when returning it. I've also added handling for signed numbers, as without it you will get an infinite loop (Negative integers conceptually have an infinite number of set sign bits.)\ndef bin(x):\n if x == 0: \n return '0' #Special case: Don't strip leading zero if no other digits\n elif x < 0:\n sign='-'\n x*=-1\n else:\n sign = ''\n l=[]\n while x:\n l.append(_conv_table[x & 0xff])\n x >>= 8\n return sign + ''.join(reversed(l)).lstrip(\"0\")\n\n[Edit] Changed code to handle signed integers.\n[Edit2] Here are some timing figures of the various solutions. bin is the function above, constantin_bin is from Constantin's answer and num_bin is the original version. Out of curiosity, I also tried a 16 bit lookup table variant of the above (bin16 below), and tried out Python3's builtin bin() function. All timings were for 100000 runs using an 01010101 bit pattern.\nNum Bits: 8 16 32 64 128 256\n---------------------------------------------------------------------\nbin 0.544 0.586 0.744 1.942 1.854 3.357 \nbin16 0.542 0.494 0.592 0.773 1.150 1.886\nconstantin_bin 2.238 3.803 7.794 17.869 34.636 94.799\nnum_bin 3.712 5.693 12.086 32.566 67.523 128.565\nPython3's bin 0.079 0.045 0.062 0.069 0.212 0.201 \n\nAs you can see, when processing long values using large chunks really pays off, but nothing beats the low-level C code of python3's builtin (which bizarrely seems consistently faster at 256 bits than 128!). Using a 16 bit lookup table improves things, but probably isn't worth it unless you really need it, as it uses up a large chunk of memory, and can introduce a small but noticalbe startup delay to precompute the table.\n",
"Not screaming-fast, but straightforward:\n>>> def bin(x):\n... sign = '-' if x < 0 else ''\n... x = abs(x)\n... bits = []\n... while x:\n... x, rmost = divmod(x, 2)\n... bits.append(rmost)\n... return sign + ''.join(str(b) for b in reversed(bits or [0]))\n\nIt is also faster than num_bin:\n>>> import timeit\n>>> t_bin = timeit.Timer('bin(0xf0)', 'from __main__ import bin')\n>>> print t_bin.timeit(number=100000)\n4.19453350997\n>>> t_num_bin = timeit.Timer('num_bin(0xf0)', 'from __main__ import num_bin')\n>>> print t_num_bin.timeit(number=100000)\n4.70694716882\n\nEven more, it actually works correctly (for my definition of \"correctness\" :)):\n>>> bin(1)\n'1'\n>>> num_bin(1)\n'10000000'\n\n"
] | [
14,
3
] | [] | [] | [
"python"
] | stackoverflow_0000187273_python.txt |
Q:
What property returns the regular expression when re.compile is called?
def foo ():
x = re.compile('^abc')
foo2(x)
def foo2(x):
How do I get x to return '^abc' within the following code?
logging.info('x is ' + x.???)
A:
pattern
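In other words, the compiled object keeps the original string on its pattern attribute, so foo2 could simply do:
import logging
import re

def foo2(x):
    logging.info('x is ' + x.pattern)  # logs: x is ^abc

foo2(re.compile('^abc'))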
| What property returns the regular expression when re.compile is called? | def foo ():
x = re.compile('^abc')
foo2(x)
def foo2(x):
How do I get x to return '^abc' within the following code?
logging.info('x is ' + x.???)
| [
"pattern\n"
] | [
3
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000189861_python_regex.txt |
Q:
Getting the pattern back from a compiled re?
Assume I have created a compiled re:
x = re.compile('^\d+$')
Is there a way to extract the pattern string (^\d+$) back from the x?
A:
You can get it back with
x.pattern
from the Python documentation on Regular Expression Objects
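For example, in an interactive session:
>>> import re
>>> x = re.compile('^\d+$')
>>> x.pattern
'^\\d+$'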
| Getting the pattern back from a compiled re? | Assume I have created a compiled re:
x = re.compile('^\d+$')
Is there a way to extract the pattern string (^\d+$) back from the x?
| [
"You can get it back with\nx.pattern\n\nfrom the Python documentation on Regular Expression Objects\n"
] | [
31
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000190967_python_regex.txt |
Q:
How to make Ruby or Python web sites to use multiple cores?
Even though Python and Ruby have one kernel thread per interpreter thread, they have a global interpreter lock (GIL) that is used to protect potentially shared data structures, so this inhibits multi-processor execution. Even though the portions in those languages that are written in C or C++ can be free-threaded, that's not possible with pure interpreted code unless you use multiple processes. What's the best way to achieve this? Using FastCGI? Creating a cluster or a farm of virtualized servers? Using their Java equivalents, JRuby and Jython?
A:
I'm not totally sure which problem you want to solve, but if you deploy your python/django application via an apache prefork MPM using mod_python, apache will start several worker processes for handling different requests.
If one request needs so many resources that you want to use multiple cores, have a look at pyprocessing. But I don't think that would be wise.
A:
The 'standard' way to do this with rails is to run a "pack" of Mongrel instances (ie: 4 copies of the rails application) and then use apache or nginx or some other piece of software to sit in front of them and act as a load balancer.
This is probably how it's done with other ruby frameworks such as merb etc, but I haven't used those personally.
The OS will take care of running each mongrel on it's own CPU.
If you install mod_rails aka phusion passenger it will start and stop multiple copies of the rails process for you as well, so it will end up spreading the load across multiple CPUs/cores in a similar way.
A:
Use an interface that runs each response in a separate interpreter, such as mod_wsgi for Python. This lets multi-threading be used without encountering the GIL.
EDIT: Apparently, mod_wsgi no longer supports multiple interpreters per process because idiots couldn't figure out how to properly implement extension modules. It still supports running requests in separate processes FastCGI-style, though, so that's apparently the current accepted solution.
A:
In Python and Ruby, the only way to use multiple cores is to spawn new (heavyweight) processes.
The Java counterparts inherit the capabilities of the Java platform. You could simply use Java threads. That is, for example, a reason why Java application servers like Glassfish are sometimes (often) used for Ruby on Rails applications.
For Python, the PyProcessing project allows you to program with processes much like you would use threads. It is included in the standard library of the recently released 2.6 version as multiprocessing. The module has many features for establishing and controlling access to shared data structures (queues, pipes, etc.) and support for common idioms (i.e. managers and worker pools).
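A minimal sketch of the worker-pool idiom with the 2.6 multiprocessing module (square() is just a stand-in for real work):
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    pool = Pool(processes=4)           # roughly one worker per core
    print pool.map(square, range(10))  # work is spread across the processes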
| How to make Ruby or Python web sites to use multiple cores? | Even though Python and Ruby have one kernel thread per interpreter thread, they have a global interpreter lock (GIL) that is used to protect potentially shared data structures, so this inhibits multi-processor execution. Even though the portions in those languajes that are written in C or C++ can be free-threaded, that's not possible with pure interpreted code unless you use multiple processes. What's the best way to achieve this? Using FastCGI? Creating a cluster or a farm of virtualized servers? Using their Java equivalents, JRuby and Jython?
| [
"I'm not totally sure which problem you want so solve, but if you deploy your python/django application via an apache prefork MPM using mod_python apache will start several worker processes for handling different requests.\nIf one request needs so much resources, that you want to use multiple cores have a look at pyprocessing. But I don't think that would be wise.\n",
"The 'standard' way to do this with rails is to run a \"pack\" of Mongrel instances (ie: 4 copies of the rails application) and then use apache or nginx or some other piece of software to sit in front of them and act as a load balancer. \nThis is probably how it's done with other ruby frameworks such as merb etc, but I haven't used those personally.\nThe OS will take care of running each mongrel on it's own CPU.\nIf you install mod_rails aka phusion passenger it will start and stop multiple copies of the rails process for you as well, so it will end up spreading the load across multiple CPUs/cores in a similar way.\n",
"Use an interface that runs each response in a separate interpreter, such as mod_wsgi for Python. This lets multi-threading be used without encountering the GIL.\nEDIT: Apparently, mod_wsgi no longer supports multiple interpreters per process because idiots couldn't figure out how to properly implement extension modules. It still supports running requests in separate processes FastCGI-style, though, so that's apparently the current accepted solution.\n",
"In Python and Ruby it is only possible to use multiple cores, is to spawn new (heavyweight) processes.\nThe Java counterparts inherit the possibilities of the Java platform. You could imply use Java threads. That is for example a reason why sometimes (often) Java Application Server like Glassfish are used for Ruby on Rails applications.\n",
"For Python, the PyProcessing project allows you to program with processes much like you would use threads. It is included in the standard library of the recently released 2.6 version as multiprocessing. The module has many features for establishing and controlling access to shared data structures (queues, pipes, etc.) and support for common idioms (i.e. managers and worker pools).\n"
] | [
4,
4,
1,
1,
0
] | [] | [] | [
"multicore",
"multithreading",
"python",
"ruby"
] | stackoverflow_0000037142_multicore_multithreading_python_ruby.txt |
Q:
Configuring python
I am new to Python and struggling to find out how to control the amount of memory a Python process can take. I am running Python on a CentOS machine with more than 2 GB of main memory. Python is taking up only 128 MB of this and I want to allocate it more. I tried to search all over the internet on this for the last half an hour and found absolutely nothing! Why is it so difficult to find information on python related stuff :(
I would be happy if someone could throw some light on how to configure python for various things like allowed memory size, number of threads etc.
A link to a site where most controllable parameters of python are described would be appreciated well.
A:
Forget all that, python just allocates more memory as needed; there isn't a myriad of command-line arguments for the VM as in Java, just let it run. For all command-line switches you can just run python -h or read man python.
A:
Are you sure that the machine does not have a 128M process limit? If you are running the python script as a CGI inside a web server, it is quite likely that there is a process limit set - you will need to look at the web server configuration.
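One way to check for such a limit from inside the Python process itself is the stdlib resource module (Unix only); a quick sketch:
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)  # address-space limit in bytes
print soft, hard  # resource.RLIM_INFINITY (-1) means no limit is set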
| Configuring python | I am new to python and struggling to find how to control the amount of memory a python process can take? I am running python on a Cento OS machine with more than 2 GB of main memory size. Python is taking up only 128mb of this and I want to allocate it more. I tried to search all over the internet on this for last half an hour and found absolutely nothing! Why is it so difficult to find information on python related stuff :(
I would be happy if someone could throw some light on how to configure python for various things like allowed memory size, number of threads etc.
A link to a site where most controllable parameters of python are described would be appreciated well.
| [
"Forget all that, python just allocates more memory as needed, there is not a myriad of comandline arguments for the VM as in java, just let it run. For all comandline switches you can just run python -h or read man python.\n",
"Are you sure that the machine does not have a 128M process limit? If you are running the python script as a CGI inside a web server, it is quite likely that there is a process limit set - you will need to look at the web server configuration.\n"
] | [
11,
2
] | [] | [] | [
"memory",
"python"
] | stackoverflow_0000191700_memory_python.txt |
Q:
How do I turn an RSS feed back into RSS?
According to the feedparser documentation, I can turn an RSS feed into a parsed object like this:
import feedparser
d = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
but I can't find anything showing how to go the other way; I'd like to be able do manipulate 'd' and then output the result as XML:
print d.toXML()
but there doesn't seem to be anything in feedparser for going in that direction. Am I going to have to loop through d's various elements, or is there a quicker way?
A:
Appended is a not hugely-elegant, but working solution - it uses feedparser to parse the feed, you can then modify the entries, and it passes the data to PyRSS2Gen. It preserves most of the feed info (the important bits anyway, there are somethings that will need extra conversion, the parsed_feed['feed']['image'] element for example).
I put this together as part of a little feed-processing framework I'm fiddling about with.. It may be of some use (it's pretty short - should be less than 100 lines of code in total when done..)
#!/usr/bin/env python
import datetime
# http://www.feedparser.org/
import feedparser
# http://www.dalkescientific.com/Python/PyRSS2Gen.html
import PyRSS2Gen
# Get the data
parsed_feed = feedparser.parse('http://reddit.com/.rss')
# Modify the parsed_feed data here
items = [
PyRSS2Gen.RSSItem(
title = x.title,
link = x.link,
description = x.summary,
guid = x.link,
pubDate = datetime.datetime(
x.modified_parsed[0],
x.modified_parsed[1],
x.modified_parsed[2],
x.modified_parsed[3],
x.modified_parsed[4],
x.modified_parsed[5])
)
for x in parsed_feed.entries
]
# make the RSS2 object
# Try to grab the title, link, language etc from the orig feed
rss = PyRSS2Gen.RSS2(
title = parsed_feed['feed'].get("title"),
link = parsed_feed['feed'].get("link"),
description = parsed_feed['feed'].get("description"),
language = parsed_feed['feed'].get("language"),
copyright = parsed_feed['feed'].get("copyright"),
managingEditor = parsed_feed['feed'].get("managingEditor"),
webMaster = parsed_feed['feed'].get("webMaster"),
pubDate = parsed_feed['feed'].get("pubDate"),
lastBuildDate = parsed_feed['feed'].get("lastBuildDate"),
categories = parsed_feed['feed'].get("categories"),
generator = parsed_feed['feed'].get("generator"),
docs = parsed_feed['feed'].get("docs"),
items = items
)
print rss.to_xml()
A:
If you're looking to read in an XML feed, modify it and then output it again, there's a page on the main python wiki indicating that the RSS.py library might support what you're after (it reads most RSS and is able to output RSS 1.0). I've not looked at it in much detail though..
A:
from xml.dom import minidom
doc = minidom.parse('./your/file.xml')
print doc.toxml()
The only problem is that it does not download feeds from the internet.
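Since minidom.parse() accepts any file-like object, you can pair it with urllib to fetch a remote feed; a small sketch using the feed URL from the question:
import urllib
from xml.dom import minidom

feed = urllib.urlopen('http://feedparser.org/docs/examples/atom10.xml')
doc = minidom.parse(feed)
print doc.toxml()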
A:
As a method of making a feed, how about PyRSS2Gen? :)
I've not played with FeedParser, but have you tried just doing str(yourFeedParserObject)? I've often been surprised by various modules that have str methods to just output the object as text.
[Edit] Just tried the str() method and it doesn't work on this one. Worth a shot though ;-)
| How do I turn an RSS feed back into RSS? | According to the feedparser documentation, I can turn an RSS feed into a parsed object like this:
import feedparser
d = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
but I can't find anything showing how to go the other way; I'd like to be able to manipulate 'd' and then output the result as XML:
print d.toXML()
but there doesn't seem to be anything in feedparser for going in that direction. Am I going to have to loop through d's various elements, or is there a quicker way?
| [
"Appended is a not hugely-elegant, but working solution - it uses feedparser to parse the feed, you can then modify the entries, and it passes the data to PyRSS2Gen. It preserves most of the feed info (the important bits anyway, there are somethings that will need extra conversion, the parsed_feed['feed']['image'] element for example).\nI put this together as part of a little feed-processing framework I'm fiddling about with.. It may be of some use (it's pretty short - should be less than 100 lines of code in total when done..)\n#!/usr/bin/env python\nimport datetime\n\n# http://www.feedparser.org/\nimport feedparser\n# http://www.dalkescientific.com/Python/PyRSS2Gen.html\nimport PyRSS2Gen\n\n# Get the data\nparsed_feed = feedparser.parse('http://reddit.com/.rss')\n\n# Modify the parsed_feed data here\n\nitems = [\n PyRSS2Gen.RSSItem(\n title = x.title,\n link = x.link,\n description = x.summary,\n guid = x.link,\n pubDate = datetime.datetime(\n x.modified_parsed[0],\n x.modified_parsed[1],\n x.modified_parsed[2],\n x.modified_parsed[3],\n x.modified_parsed[4],\n x.modified_parsed[5])\n )\n\n for x in parsed_feed.entries\n]\n\n# make the RSS2 object\n# Try to grab the title, link, language etc from the orig feed\n\nrss = PyRSS2Gen.RSS2(\n title = parsed_feed['feed'].get(\"title\"),\n link = parsed_feed['feed'].get(\"link\"),\n description = parsed_feed['feed'].get(\"description\"),\n\n language = parsed_feed['feed'].get(\"language\"),\n copyright = parsed_feed['feed'].get(\"copyright\"),\n managingEditor = parsed_feed['feed'].get(\"managingEditor\"),\n webMaster = parsed_feed['feed'].get(\"webMaster\"),\n pubDate = parsed_feed['feed'].get(\"pubDate\"),\n lastBuildDate = parsed_feed['feed'].get(\"lastBuildDate\"),\n\n categories = parsed_feed['feed'].get(\"categories\"),\n generator = parsed_feed['feed'].get(\"generator\"),\n docs = parsed_feed['feed'].get(\"docs\"),\n\n items = items\n)\n\n\nprint rss.to_xml()\n\n",
"If you're looking to read in an XML feed, modify it and then output it again, there's a page on the main python wiki indicating that the RSS.py library might support what you're after (it reads most RSS and is able to output RSS 1.0). I've not looked at it in much detail though..\n",
"from xml.dom import minidom\n\ndoc= minidom.parse('./your/file.xml')\nprint doc.toxml()\n\nThe only problem is that it do not download feeds from the internet.\n",
"As a method of making a feed, how about PyRSS2Gen? :)\nI've not played with FeedParser, but have you tried just doing str(yourFeedParserObject)? I've often been suprised by various modules that have str methods to just output the object as text.\n[Edit] Just tried the str() method and it doesn't work on this one. Worth a shot though ;-)\n"
] | [
7,
1,
0,
0
] | [] | [] | [
"python",
"rss"
] | stackoverflow_0000181818_python_rss.txt |
Q:
In Django how do I notify a parent when a child is saved in a foreign key relationship?
I have the following two models:
class Activity(models.Model):
name = models.CharField(max_length=50, help_text='Some help.')
entity = models.ForeignKey(CancellationEntity)
...
class Cancellation(models.Model):
activity = models.ForeignKey(Activity)
date = models.DateField(default=datetime.now().date())
description = models.CharField(max_length=250)
...
I would like the Activity model to be aware when a Cancellation related to it is saved (whether inserted or updated).
What is the best way to go about this?
A:
What you want to look into is Django's signals (check out this page too), specifically the model signals--more specifically, the post_save signal. Signals are Django's version of a plugin/hook system. The post_save signal gets sent every time a model is saved, whether it's updated or created (and it'll let you know if it was created). This is how you'd use signals to get notified when an Activity has a Cancellation
from django.db.models.signals import post_save
class Activity(models.Model):
name = models.CharField(max_length=50, help_text='Some help.')
entity = models.ForeignKey(CancellationEntity)
    @staticmethod
    def cancellation_occurred(sender, instance, created, raw, **kwargs):
        # grab the Activity this Cancellation points at
        self = instance.activity
        # do something
...
class Cancellation(models.Model):
activity = models.ForeignKey(Activity)
date = models.DateField(default=datetime.now().date())
description = models.CharField(max_length=250)
...
post_save.connect(Activity.cancellation_occurred, sender=Cancellation)
A:
What's wrong with the following?
class Cancellation( models.Model ):
blah
blah
def save( self, **kw ):
        self.activity.somethingChanged( self )
super( Cancellation, self ).save( **kw )
It would allow you to control the notification among models very precisely. In a way, this is the canonical "Why is OO so good?" question. I think OO is good precisely because your collection of Cancellation and Activity objects can cooperate fully.
| In Django how do I notify a parent when a child is saved in a foreign key relationship? | I have the following two models:
class Activity(models.Model):
name = models.CharField(max_length=50, help_text='Some help.')
entity = models.ForeignKey(CancellationEntity)
...
class Cancellation(models.Model):
activity = models.ForeignKey(Activity)
date = models.DateField(default=datetime.now().date())
description = models.CharField(max_length=250)
...
I would like the Activity model to be aware when a Cancellation related to it is saved (both inserted or updated).
What is the best way to go about this?
| [
"What you want to look into is Django's signals (check out this page too), specifically the model signals--more specifically, the post_save signal. Signals are Django's version of a plugin/hook system. The post_save signal gets sent every time a model is saved, whether it's updated or created (and it'll let you know if it was created). This is how you'd use signals to get notified when an Activity has a Cancellation\nfrom django.db.models.signals import post_save\n\nclass Activity(models.Model):\n name = models.CharField(max_length=50, help_text='Some help.')\n entity = models.ForeignKey(CancellationEntity)\n\n @classmethod\n def cancellation_occurred (sender, instance, created, raw):\n # grab the current instance of Activity\n self = instance.activity_set.all()[0]\n # do something\n ...\n\n\nclass Cancellation(models.Model):\n activity = models.ForeignKey(Activity)\n date = models.DateField(default=datetime.now().date())\n description = models.CharField(max_length=250)\n ...\n\npost_save.connect(Activity.cancellation_occurred, sender=Cancellation)\n",
"What's wrong with the following?\nclass Cancellation( models.Model ):\n blah\n blah\n def save( self, **kw ):\n for a in self.activity_set.all():\n a.somethingChanged( self )\n super( Cancellation, self ).save( **kw )\n\nIt would allow you to to control the notification among models very precisely. In a way, this is the canonical \"Why is OO so good?\" question. I think OO is good precisely because your collection of Cancellation and Activity objects can cooperate fully.\n"
] | [
17,
5
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000192367_django_django_models_python.txt |
Q:
Read colors of image with Python (GAE)
How can I read the colors of an image with Python using Google App Engine?
Example: I'd like to build a function to determine the most striking colors of an image in order to set a harmonious background color for it.
A:
The Images API does not (currently) contain pixel-level functions. To quote the overview document:
Note: In order to use the Images API in your local environment you must first download and install PIL, the Python Imaging Library. PIL is not available on App Engine; it is only used as a stub for the Images API in your local environment. Only the transforms provided in the images API are available on App Engine.
The community has been asking for full PIL support for some time, but it looks like we'll have to wait.
A:
For PNG you can use PyPNG module - lightweight pure-Python PNG decoder/encoder.
import png
point = (10, 20) # coordinates of pixel to read
reader = png.Reader(filename='image.png') # streams are also accepted
w, h, pixels, metadata = reader.read_flat() # read() yields rows; read_flat() gives one flat pixel sequence
pixel_byte_width = 4 if metadata['alpha'] else 3 # check your pypng version for the exact metadata key
pixel_position = point[0] + point[1] * w
print pixels[
pixel_position * pixel_byte_width :
(pixel_position + 1) * pixel_byte_width]
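Building on that, here is a hedged sketch of the "most striking colors" idea from the question: tally the most frequent colors in the flat pixel stream. It assumes read_flat() and an 'alpha' metadata flag (check your pypng version), and it ignores the alpha byte when counting.
import collections
import png

def dominant_colors(path, top=5):
    reader = png.Reader(filename=path)
    w, h, pixels, meta = reader.read_flat()
    step = 4 if meta['alpha'] else 3   # bytes per pixel
    counts = collections.defaultdict(int)
    for i in xrange(0, len(pixels), step):
        counts[tuple(pixels[i:i + 3])] += 1   # skip the alpha byte
    return sorted(counts, key=counts.get, reverse=True)[:top]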
A:
If you are willing to put Flash or a Java applet on the page, you might be able to do it on the client. I'm not sure if anything like canvas or SVG supports pixel-level manipulation, but if they do, you might be able to get it to work in some browsers with JavaScript.
The Flash or Java applet can be invisible and optional -- you can use JavaScript to detect whether the user has the plugin and only support this feature when they do.
| Read colors of image with Python (GAE) | How can I read the colors of an image with python using google app engine?
Example: I like to build a function to determine the most striking colors of an image to set a harmonic background color for it.
| [
"The Images API does not (currently) contain pixel-level functions. To quote the overview document:\n\nNote: In order to use the Images API in your local environment you must first download and install PIL, the Python Imaging Library. PIL is not available on App Engine; it is only used as a stub for the Images API in your local environment. Only the transforms provided in the images API are available on App Engine.\n\nThe community has been asking for full PIL support for some time, but it looks like we'll have to wait.\n",
"For PNG you can use PyPNG module - lightweight pure-Python PNG decoder/encoder.\nimport png\n\npoint = (10, 20) # coordinates of pixel to read\n\nreader = png.Reader(filename='image.png') # streams are also accepted\nw, h, pixels, metadata = reader.read()\npixel_byte_width = 4 if metadata['has_alpha'] else 3\npixel_position = point[0] + point[1] * w\nprint pixels[\n pixel_position * pixel_byte_width :\n (pixel_position + 1) * pixel_byte_width]\n\n",
"If you are willing to put Flash or a Java applet on the page, you might be able to do it on the client. I'm not sure if anything like canvas or SVG supports pixel-level manipulation, but if they do, you might be able to get it to work in some browsers with JavaScript.\nThe Flash or Java Applet, can be invisible and optional -- you can use JavaScript to detect if the user has the plugin and only support this feature when they do.\n"
] | [
2,
2,
0
] | [] | [] | [
"analysis",
"google_app_engine",
"image",
"python"
] | stackoverflow_0000190675_analysis_google_app_engine_image_python.txt |
Q:
How can I, in python, iterate over multiple 2d lists at once, cleanly?
If I'm making a simple grid-based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this.
for i in range(len(alist)):
for j in range(len(alist[i])):
if alist[i][j].isWhatever:
blist[i][j].doSomething()
Is there a nicer way to do something like this?
A:
If anyone is interested in the performance of the above solutions, here they are for 4000x4000 grids, from fastest to slowest:
Brian: 1.08s (modified, with izip instead of zip)
John: 2.33s
DzinX: 2.36s
ΤΖΩΤΖΙΟΥ: 2.41s (but object initialization took 62s)
Eugene: 3.17s
Robert: 4.56s
Brian: 27.24s (original, with zip)
EDIT: Added Brian's scores with izip modification and it won by a large amount!
John's solution is also very fast, although it uses indices (I was really surprised to see this!), whereas Robert's and Brian's (with zip) are slower than the question creator's initial solution.
So let's present Brian's winning function, as it is not shown in proper form anywhere in this thread:
from itertools import izip
for a_row,b_row in izip(alist, blist):
for a_item, b_item in izip(a_row,b_row):
if a_item.isWhatever:
b_item.doSomething()
A:
I'd start by writing a generator method:
def grid_objects(alist, blist):
for i in range(len(alist)):
for j in range(len(alist[i])):
yield(alist[i][j], blist[i][j])
Then whenever you need to iterate over the lists your code looks like this:
for (a, b) in grid_objects(alist, blist):
if a.is_whatever():
b.do_something()
A:
You could zip them, i.e.:
for a_row,b_row in zip(alist, blist):
for a_item, b_item in zip(a_row,b_row):
if a_item.isWhatever:
b_item.doSomething()
However, the overhead of zipping and iterating over the items may be higher than your original method if you rarely actually use the b_item (i.e. a_item.isWhatever is usually False). You could use itertools.izip instead of zip to reduce the memory impact of this, but it's still probably going to be slightly slower unless you always need the b_item.
Alternatively, consider using a 3D list instead, so terrain for cell i,j is at l[i][j][0], objects at l[i][j][1] etc, or even combine the objects so you can do a[i][j].terrain, a[i][j].object etc.
[Edit] DzinX's timings actually show that the impact of the extra check for b_item isn't really significant, next to the performance penalty of re-looking up by index, so the above (using izip) seems to be fastest.
I've now given a quick test for the 3d approach as well, and it seems faster still, so if you can store your data in that form, it could be both simpler and faster to access. Here's an example of using it:
# Initialise 3d list:
alist = [ [[A(a_args), B(b_args)] for i in xrange(WIDTH)] for j in xrange(HEIGHT)]
# Process it:
for row in alist:
for a,b in row:
if a.isWhatever():
b.doSomething()
Here are my timings for 10 loops using a 1000x1000 array, with various proportions of isWhatever being true:
( Chance isWhatever is True )
Method 100% 50% 10% 1%
3d 3.422 2.151 1.067 0.824
izip 3.647 2.383 1.282 0.985
original 5.422 3.426 1.891 1.534
A:
When you are operating with grids of numbers and want really good performance, you should consider using Numpy. It's surprisingly easy to use and lets you think in terms of operations with grids instead of loops over grids. The performance comes from the fact that the operations are then run over whole grids with optimised SSE code.
For example, here is some numpy-based code that I wrote that does a brute-force numerical simulation of charged particles connected by springs. This code calculates a timestep for a 3d system with 100 nodes and 99 edges in 31ms. That is over 10x faster than the best pure Python code I could come up with.
from numpy import array, sqrt, float32, newaxis
from itertools import izip  # used in the spring loop below
def evolve(points, velocities, edges, timestep=0.01, charge=0.1, mass=1., edgelen=0.5, dampen=0.95):
"""Evolve a n body system of electrostatically repulsive nodes connected by
springs by one timestep."""
velocities *= dampen
# calculate matrix of distance vectors between all points and their lengths squared
dists = array([[p2 - p1 for p2 in points] for p1 in points])
l_2 = (dists*dists).sum(axis=2)
# make the diagonal 1's to avoid division by zero
for i in xrange(points.shape[0]):
l_2[i,i] = 1
l_2_inv = 1/l_2
l_3_inv = l_2_inv*sqrt(l_2_inv)
# repulsive force: distance vectors divided by length cubed, summed and multiplied by scale
scale = timestep*charge*charge/mass
velocities -= scale*(l_3_inv[:,:,newaxis].repeat(points.shape[1], axis=2)*dists).sum(axis=1)
# calculate spring contributions for each point
for idx, (point, outedges) in enumerate(izip(points, edges)):
edgevecs = point - points.take(outedges, axis=0)
edgevec_lens = sqrt((edgevecs*edgevecs).sum(axis=1))
scale = timestep/mass
velocities[idx] += (edgevecs*((((edgelen*scale)/edgevec_lens - scale))[:,newaxis].repeat(points.shape[1],axis=1))).sum(axis=0)
# move points to new positions
points += velocities*timestep
A:
Generator expressions and izip from itertools module will do very nicely here:
from itertools import izip
for a, b in (pair for (aline, bline) in izip(alist, blist)
for pair in izip(aline, bline)):
if a.isWhatever:
b.doSomething()
The line in the for statement above means:
take each line from combined grids alist and blist and make a tuple from them (aline, bline)
now combine these lists with izip again and take each element from them (pair).
This method has two advantages:
there are no indices used anywhere
you don't have to create lists with zip; you use more efficient generators with izip instead.
A:
As a slight style change, you could use enumerate:
for i, arow in enumerate(alist):
for j, aval in enumerate(arow):
if aval.isWhatever():
blist[i][j].doSomething()
I don't think you'll get anything significantly simpler unless you rearrange your data structures as Federico suggests, so that you could turn the last line into something like "aval.b.doSomething()".
A:
Are you sure that the objects in the two matrices you are iterating in parallel are instances of conceptually distinct classes? What about merging the two classes, ending up with a matrix of objects that contain both isWhatever() and doSomething()?
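A hedged sketch of what that merge could look like (the Cell class and its attributes are purely illustrative, not anything from the question):
class Cell(object):
    def __init__(self, a_item, b_item):
        self.a_item = a_item   # e.g. terrain
        self.b_item = b_item   # e.g. object

grid = [[Cell(a, b) for a, b in zip(a_row, b_row)]
        for a_row, b_row in zip(alist, blist)]

for row in grid:
    for cell in row:
        if cell.a_item.isWhatever:
            cell.b_item.doSomething()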
A:
If the two 2D-lists remain constant during the lifetime of your game and you can't enjoy Python's multiple inheritance to join the alist[i][j] and blist[i][j] object classes (as others have suggested), you could add a pointer to the corresponding b item in each a item after the lists are created, like this:
for a_row, b_row in itertools.izip(alist, blist):
for a_item, b_item in itertools.izip(a_row, b_row):
a_item.b_item= b_item
Various optimisations can apply here, like your classes having __slots__ defined, or the initialization code above being merged with your own initialization code, etc. After that, your loop will become:
for a_row in alist:
for a_item in a_row:
if a_item.isWhatever():
a_item.b_item.doSomething()
That should be more efficient.
A:
If a.isWhatever is rarely true you could build an "index" once:
a_index = set((i,j)
for i,arow in enumerate(a)
for j,a in enumerate(arow)
              if a.isWhatever())
and each time you want something to be done:
for (i,j) in a_index:
b[i][j].doSomething()
If a changes over time, then you will need to keep the index up-to-date. That's why I used a set, so items can be added and removed fast.
| How can I, in python, iterate over multiple 2d lists at once, cleanly? | If I'm making a simple grid based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this.
for i in range(len(alist)):
for j in range(len(alist[i])):
if alist[i][j].isWhatever:
blist[i][j].doSomething()
Is there a nicer way to do something like this?
| [
"If anyone is interested in performance of the above solutions, here they are for 4000x4000 grids, from fastest to slowest:\n\nBrian: 1.08s (modified, with izip instead of zip)\nJohn: 2.33s\nDzinX: 2.36s\nΤΖΩΤΖΙΟΥ: 2.41s (but object initialization took 62s)\nEugene: 3.17s\nRobert: 4.56s\nBrian: 27.24s (original, with zip)\n\nEDIT: Added Brian's scores with izip modification and it won by a large amount!\nJohn's solution is also very fast, although it uses indices (I was really surprised to see this!), whereas Robert's and Brian's (with zip) are slower than the question creator's initial solution.\nSo let's present Brian's winning function, as it is not shown in proper form anywhere in this thread:\nfrom itertools import izip\nfor a_row,b_row in izip(alist, blist):\n for a_item, b_item in izip(a_row,b_row):\n if a_item.isWhatever:\n b_item.doSomething()\n\n",
"I'd start by writing a generator method:\ndef grid_objects(alist, blist):\n for i in range(len(alist)):\n for j in range(len(alist[i])):\n yield(alist[i][j], blist[i][j])\n\nThen whenever you need to iterate over the lists your code looks like this:\nfor (a, b) in grid_objects(alist, blist):\n if a.is_whatever():\n b.do_something()\n\n",
"You could zip them. ie:\nfor a_row,b_row in zip(alist, blist):\n for a_item, b_item in zip(a_row,b_row):\n if a_item.isWhatever:\n b_item.doSomething()\n\nHowever the overhead of zipping and iterating over the items may be higher than your original method if you rarely actually use the b_item (ie a_item.isWhatever is usually False). You could use itertools.izip instead of zip to reduce the memory impact of this, but its still probably going to be slightly slower unless you always need the b_item.\nAlternatively, consider using a 3D list instead, so terrain for cell i,j is at l[i][j][0], objects at l[i][j][1] etc, or even combine the objects so you can do a[i][j].terrain, a[i][j].object etc.\n[Edit] DzinX's timings actually show that the impact of the extra check for b_item isn't really significant, next to the performance penalty of re-looking up by index, so the above (using izip) seems to be fastest. \nI've now given a quick test for the 3d approach as well, and it seems faster still, so if you can store your data in that form, it could be both simpler and faster to access. Here's an example of using it:\n# Initialise 3d list:\nalist = [ [[A(a_args), B(b_args)] for i in xrange(WIDTH)] for j in xrange(HEIGHT)]\n\n# Process it:\nfor row in xlist:\n for a,b in row:\n if a.isWhatever(): \n b.doSomething()\n\nHere are my timings for 10 loops using a 1000x1000 array, with various proportions of isWhatever being true are:\n ( Chance isWhatever is True )\nMethod 100% 50% 10% 1%\n\n3d 3.422 2.151 1.067 0.824\nizip 3.647 2.383 1.282 0.985\noriginal 5.422 3.426 1.891 1.534\n\n",
"When you are operating with grids of numbers and want really good performance, you should consider using Numpy. It's surprisingly easy to use and lets you think in terms of operations with grids instead of loops over grids. The performance comes from the fact that the operations are then run over whole grids with optimised SSE code.\nFor example here is some numpy using code that I wrote that does brute force numerical simulation of charged particles connected by springs. This code calculates a timestep for a 3d system with 100 nodes and 99 edges in 31ms. That is over 10x faster than the best pure python code I could come up with.\nfrom numpy import array, sqrt, float32, newaxis\ndef evolve(points, velocities, edges, timestep=0.01, charge=0.1, mass=1., edgelen=0.5, dampen=0.95):\n \"\"\"Evolve a n body system of electrostatically repulsive nodes connected by\n springs by one timestep.\"\"\"\n velocities *= dampen\n\n # calculate matrix of distance vectors between all points and their lengths squared\n dists = array([[p2 - p1 for p2 in points] for p1 in points])\n l_2 = (dists*dists).sum(axis=2)\n\n # make the diagonal 1's to avoid division by zero\n for i in xrange(points.shape[0]):\n l_2[i,i] = 1\n\n l_2_inv = 1/l_2\n l_3_inv = l_2_inv*sqrt(l_2_inv)\n\n # repulsive force: distance vectors divided by length cubed, summed and multiplied by scale\n scale = timestep*charge*charge/mass\n velocities -= scale*(l_3_inv[:,:,newaxis].repeat(points.shape[1], axis=2)*dists).sum(axis=1)\n\n # calculate spring contributions for each point\n for idx, (point, outedges) in enumerate(izip(points, edges)):\n edgevecs = point - points.take(outedges, axis=0)\n edgevec_lens = sqrt((edgevecs*edgevecs).sum(axis=1))\n scale = timestep/mass\n velocities[idx] += (edgevecs*((((edgelen*scale)/edgevec_lens - scale))[:,newaxis].repeat(points.shape[1],axis=1))).sum(axis=0)\n\n # move points to new positions\n points += velocities*timestep\n\n",
"Generator expressions and izip from itertools module will do very nicely here:\nfrom itertools import izip\nfor a, b in (pair for (aline, bline) in izip(alist, blist) \n for pair in izip(aline, bline)):\n if a.isWhatever:\n b.doSomething()\n\nThe line in for statement above means:\n\ntake each line from combined grids alist and blist and make a tuple from them (aline, bline)\nnow combine these lists with izip again and take each element from them (pair).\n\nThis method has two advantages:\n\nthere are no indices used anywhere\nyou don't have to create lists with zip and use more efficient generators with izip instead.\n\n",
"As a slight style change, you could use enumerate:\nfor i, arow in enumerate(alist):\n for j, aval in enumerate(arow):\n if aval.isWhatever():\n blist[i][j].doSomething()\n\nI don't think you'll get anything significantly simpler unless you rearrange your data structures as Federico suggests. So that you could turn the last line into something like \"aval.b.doSomething()\".\n",
"Are you sure that the objects in the two matrices you are iterating in parallel are instances of conceptually distinct classes? What about merging the two classes ending up with a matrix of objects that contain both isWhatever() and doSomething()?\n",
"If the two 2D-lists remain constant during the lifetime of your game and you can't enjoy Python's multiple inheritance to join the alist[i][j] and blist[i][j] object classes (as others have suggested), you could add a pointer to the corresponding b item in each a item after the lists are created, like this:\nfor a_row, b_row in itertools.izip(alist, blist):\n for a_item, b_item in itertools.izip(a_row, b_row):\n a_item.b_item= b_item\n\nVarious optimisations can apply here, like your classes having __slots__ defined, or the initialization code above could be merged with your own initialization code e.t.c. After that, your loop will become:\nfor a_row in alist:\n for a_item in a_row:\n if a_item.isWhatever():\n a_item.b_item.doSomething()\n\nThat should be more efficient.\n",
"If a.isWhatever is rarely true you could build an \"index\" once:\na_index = set((i,j) \n for i,arow in enumerate(a) \n for j,a in enumerate(arow) \n if a.IsWhatever())\n\nand each time you want something to be done:\nfor (i,j) in a_index:\n b[i][j].doSomething()\n\nIf a changes over time, then you will need to\nkeep the index up-to-date. That's why I used\na set, so items can be added and removed fast.\n"
] | [
33,
15,
10,
4,
3,
3,
2,
1,
0
] | [
"for d1 in alist\n for d2 in d1\n if d2 = \"whatever\"\n do_my_thing()\n\n"
] | [
-4
] | [
"python"
] | stackoverflow_0000189087_python.txt |
Q:
Problem opening berkeley db in python
I have problems opening a Berkeley DB in Python using dbtables. As dbtables is used by the library I am using to access the database, I need it to work.
The problem seems to be that the db environment I am trying to open (I got a copy of the database to open), is version 4.4 while libdb is version 4.6. I get the following error using bsddb.dbtables.bsdTableDB([dbname],[folder]):
(-30972, "DB_VERSION_MISMATCH: Database environment version mismatch -- Program version 4.6 doesn't match environment version 4.4")
However, bsddb.btopen([dbname]) works.
I have also tried installing db4.4-util, db4.5-util and db4.6-util. Trying to use db4.6_verify results in:
db4.6_verify: Program version 4.6 doesn't match environment version 4.4
db4.6_verify: DB_ENV->open: DB_VERSION_MISMATCH: Database environment version mismatchs
db4.4_verify results in the computer just hanging, and nothing happening.
Finally, if I run db4.4_recover on the database, that works. However, afterwards I get the following error 'No such file or directory' in python.
A:
I think answers should go in the "answer" section rather than as an addendum to the question since that marks the question as having an answer on the various question-list pages. I'll do that for you but, if you also get around to doing it, leave a comment on my answer so I can delete it.
Quoting "answer in question":
Verifying everything in this question, I eventually solved the problem. The 'No such file or directory' are caused by some __db.XXX files missing. Using
bsddb.dbtables.bsdTableDB([dbname],[folder], create=1)
after db4.4_recover, these files got created and everything is now working.
Still, it was a bit of an obscure problem, and initially hard to figure out. But thanks to the question Examining Berkeley DB files from the CLI, I got the tools I needed. I'll just post it here if someone ends up with the same problem in the future and end up at stackoverflow.com
A:
Damn, verifying everything in this question I eventually solved the problem. The 'No such file or directory' are caused by some __db.XXX files missing. Using bsddb.dbtables.bsdTableDB([dbname],[folder], create=1) after db4.4_recover, these files got created and everything is now working.
Still, it was a bit of an obscure problem, and initially hard to figure out. But thanks to the question Examining Berkeley DB files from the CLI I got the tools I needed. I'll just post it here if someone ends up with the same problem in the future and end up at stackoverflow.com
| Problem opening berkeley db in python | I have problems opening a berkeley db in python using bdtables. As bdtables is used by the library I am using to access the database, I need it to work.
The problem seems to be that the db environment I am trying to open (I got a copy of the database to open), is version 4.4 while libdb is version 4.6. I get the following error using bsddb.dbtables.bsdTableDB([dbname],[folder]):
(-30972, "DB_VERSION_MISMATCH: Database environment version mismatch -- Program version 4.6 doesn't match environment version 4.4")
However, bsddb.btopen([dbname]) works.
I have also tried installing db4.4-util, db4.5-util and db4.6-util. Trying to use db4.6_verify results in:
db4.6_verify: Program version 4.6 doesn't match environment version 4.4
db4.6_verify: DB_ENV->open: DB_VERSION_MISMATCH: Database environment version mismatchs
db4.4_verify results in the computer just hanging, and nothing happening.
Finally, if I run db4.4_recover on the database, that works. However, afterwards I get the following error 'No such file or directory' in python.
| [
"I think answers should go in the \"answer\" section rather than as an addendum to the question since that marks the question as having an answer on the various question-list pages. I'll do that for you but, if you also get around to doing it, leave a comment on my answer so I can delete it.\nQuoting \"answer in question\":\nVerifying everything in this question, I eventually solved the problem. The 'No such file or directory' are caused by some __db.XXX files missing. Using\nbsddb.dbtables.bsdTableDB([dbname],[folder], create=1)\n\nafter db4.4_recover, these files got created and everything is now working.\nStill, it was a bit of an obscure problem, and initially hard to figure out. But thanks to the question Examining Berkeley DB files from the CLI, I got the tools I needed. I'll just post it here if someone ends up with the same problem in the future and end up at stackoverflow.com\n",
"Damn, verifying everything in this question I eventually solved the problem. The 'No such file or directory' are caused by some __db.XXX files missing. Using bsddb.dbtables.bsdTableDB([dbname],[folder], create=1) after db4.4_recover, these files got created and everything is now working.\nStill, it was a bit of an obscure problem, and initially hard to figure out. But thanks to the question Examining Berkeley DB files from the CLI I got the tools I needed. I'll just post it here if someone ends up with the same problem in the future and end up at stackoverflow.com\n"
] | [
3,
0
] | [] | [] | [
"berkeley_db",
"database",
"python"
] | stackoverflow_0000181648_berkeley_db_database_python.txt |
Q:
How can I find the full path to a font from its display name on a Mac?
I am using the Photoshop's javascript API to find the fonts in a given PSD.
Given a font name returned by the API, I want to find the actual physical font file that font name corresponds to on the disc.
This is all happening in a python program running on OSX so I guess I'm looking for one of:
Some Photoshop javascript
A Python function
An OSX API that I can call from python
A:
Unfortunately the only API that isn't deprecated is located in the ApplicationServices framework, which doesn't have a bridge support file, and thus isn't available in the bridge. If you're wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.
Cocoa doesn't have any native support, at least as of 10.5, for getting the location of a font.
A:
open up a terminal (Applications->Utilities->Terminal) and type this in:
locate InsertFontHere
This will spit out every file that has the name you want.
Warning: there may be a lot to wade through.
A:
I haven't been able to find anything that does this directly. I think you'll have to iterate through the various font folders on the system: /System/Library/Fonts, /Library/Fonts, and there is probably a user-level directory as well, ~/Library/Fonts.
A:
There must be a method in Cocoa to get a list of fonts; then you would have to use the PyObjC bindings to call it.
Depending on what you need them for, you could probably just use something like the following..
import os
def get_font_list():
fonts = []
for font_path in ["/Library/Fonts", os.path.expanduser("~/Library/Fonts")]:
if os.path.isdir(font_path):
fonts.extend(
[os.path.join(font_path, cur_font)
for cur_font in os.listdir(font_path)
]
)
return fonts
| How can I find the full path to a font from its display name on a Mac? | I am using the Photoshop's javascript API to find the fonts in a given PSD.
Given a font name returned by the API, I want to find the actual physical font file that font name corresponds to on the disc.
This is all happening in a python program running on OSX so I guess I'm looking for one of:
Some Photoshop javascript
A Python function
An OSX API that I can call from python
| [
"Unfortunately the only API that isn't deprecated is located in the ApplicationServices framework, which doesn't have a bridge support file, and thus isn't available in the bridge. If you're wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.\nCocoa doesn't have any native support, at least as of 10.5, for getting the location of a font.\n",
"open up a terminal (Applications->Utilities->Terminal) and type this in:\nlocate InsertFontHere\n\nThis will spit out every file that has the name you want.\nWarning: there may be alot to wade through.\n",
"I haven't been able to find anything that does this directly. I think you'll have to iterate through the various font folders on the system: /System/Library/Fonts, /Library/Fonts, and there can probably be a user-level directory as well ~/Library/Fonts.\n",
"There must be a method in Cocoa to get a list of fonts, then you would have to use the PyObjC bindings to call it..\nDepending on what you need them for, you could probably just use something like the following..\nimport os\ndef get_font_list():\n fonts = []\n for font_path in [\"/Library/Fonts\", os.path.expanduser(\"~/Library/Fonts\")]:\n if os.path.isdir(font_path):\n fonts.extend(\n [os.path.join(font_path, cur_font) \n for cur_font in os.listdir(font_path)\n ]\n )\n return fonts\n\n"
] | [
22,
7,
6,
5
] | [] | [] | [
"fonts",
"macos",
"photoshop",
"python"
] | stackoverflow_0000000469_fonts_macos_photoshop_python.txt |
Q:
How do you design data models for Bigtable/Datastore (GAE)?
Since the Google App Engine Datastore is based on Bigtable and we know that's not a relational database, how do you design a database schema/data model for applications that use this type of database system?
A:
Designing a bigtable schema is an open process, and basically requires you to think about:
The access patterns you will be using and how often each will be used
The relationships between your types
What indices you are going to need
The write patterns you will be using (in order to effectively spread load)
GAE's datastore automatically denormalizes your data. That is, each index contains a (mostly) complete copy of the data, and thus every index adds significantly to the time taken to perform a write, and to the storage space used.
If this were not the case, designing a Datastore schema would be a lot more work: You would have to think carefully about the primary key for each type, and consider the effect of your decision on the locality of data. For example, when rendering a blog post you would probably need to display the comments to go along with it, so each comment's key would probably begin with the associated post's key.
With Datastore, this is not such a big deal: The query you use will look something like "Select * FROM Comment WHERE post_id = N." (If you want to page the comments, you would also have a limit clause, and a possible suffix of " AND comment_id > last_comment_id".) Once you add such a query, Datastore will build the index for you, and your reads will be magically fast.
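As a rough sketch of that comment lookup with the google.appengine.ext.db API (the model and property names here are assumptions for illustration):
from google.appengine.ext import db

class Post(db.Model):
    title = db.StringProperty()

class Comment(db.Model):
    post = db.ReferenceProperty(Post)
    body = db.TextProperty()

some_post = Post.all().get()   # any Post entity
# Datastore builds the index for this query automatically
comments = Comment.all().filter('post =', some_post).fetch(20)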
Something to keep in mind is that each additional index creates some additional cost: it is best if you can use as few access patterns as possible, since it will reduce the number of indices GAE will construct, and thus the total storage required by your data.
Reading over this answer, I find it a little vague. Maybe a hands-on design question would help to scope this down? :-)
A:
You can use www.web2py.com. You build the model and the application once, and it works on GAE but also with SQLite, MySQL, Postgres, Oracle, MSSQL, and Firebird.
| How do you design data models for Bigtable/Datastore (GAE)? | Since the Google App Engine Datastore is based on Bigtable and we know that's not a relational database, how do you design a database schema/data model for applications that use this type of database system?
| [
"Designing a bigtable schema is an open process, and basically requires you to think about:\n\nThe access patterns you will be using and how often each will be used\nThe relationships between your types\nWhat indices you are going to need\nThe write patterns you will be using (in order to effectively spread load)\n\nGAE's datastore automatically denormalizes your data. That is, each index contains a (mostly) complete copy of the data, and thus every index adds significantly to time taken to perform a write, and the storage space used.\nIf this were not the case, designing a Datastore schema would be a lot more work: You would have to think carefully about the primary key for each type, and consider the effect of your decision on the locality of data. For example, when rendering a blog post you would probably need to display the comments to go along with it, so each comment's key would probably begin with the associated post's key.\nWith Datastore, this is not such a big deal: The query you use will look something like \"Select * FROM Comment WHERE post_id = N.\" (If you want to page the comments, you would also have a limit clause, and a possible suffix of \" AND comment_id > last_comment_id\".) Once you add such a query, Datastore will build the index for you, and your reads will be magically fast.\nSomething to keep in mind is that each additional index creates some additional cost: it is best if you can use as few access patterns as possible, since it will reduce the number of indices GAE will construct, and thus the total storage required by your data.\nReading over this answer, I find it a little vague. Maybe a hands-on design question would help to scope this down? :-)\n",
"You can use www.web2py.com. You build the model and the application once and it works on GAE but also witl SQLite, MySQL, Posgres, Oracle, MSSQL, FireBird\n"
] | [
19,
1
] | [
"As GAE builds on how data is managed in Django there is a lot of info on how to address similar questions in the Django documentation (for example see here, scroll down to 'Your first model').\nIn short you design you db model as a regular object model and let GAE sort out all of the object-relational mappings. \n"
] | [
-2
] | [
"bigtable",
"database",
"google_app_engine",
"python"
] | stackoverflow_0000079850_bigtable_database_google_app_engine_python.txt |
Q:
How can I search through Stack Overflow questions from a script?
Given a string of keywords, such as "Python best practices", I would like to obtain the first 10 Stack Overflow questions that contain that keywords, sorted by relevance (?), say from a Python script. My goal is to end up with a list of tuples (title, URL).
How can I accomplish this? Would you consider querying Google instead? (How would you do it from Python?)
A:
>>> from urllib import urlencode
>>> params = urlencode({'q': 'python best practices', 'sort': 'relevance'})
>>> params
'q=python+best+practices&sort=relevance'
>>> from urllib2 import urlopen
>>> html = urlopen("http://stackoverflow.com/search?%s" % params).read()
>>> import re
>>> links = re.findall(r'<h3><a href="([^"]*)" class="answer-title">([^<]*)</a></h3>', html)
>>> links
[('/questions/5119/what-are-the-best-rss-feeds-for-programmersdevelopers#5150', 'What are the best RSS feeds for programmers/developers?'), ('/questions/3088/best-ways-to-teach-a-beginner-to-program#13185', 'Best ways to teach a beginner to program?'), ('/questions/13678/textual-versus-graphical-programming-languages#13886', 'Textual versus Graphical Programming Languages'), ('/questions/58968/what-defines-pythonian-or-pythonic#59877', 'What defines “pythonian” or “pythonic”?'), ('/questions/592/cxoracle-how-do-i-access-oracle-from-python#62392', 'cx_Oracle - How do I access Oracle from Python? '), ('/questions/7170/recommendation-for-straight-forward-python-frameworks#83608', 'Recommendation for straight-forward python frameworks'), ('/questions/100732/why-is-if-not-someobj-better-than-if-someobj-none-in-python#100903', 'Why is if not someobj: better than if someobj == None: in Python?'), ('/questions/132734/presentations-on-switching-from-perl-to-python#134006', 'Presentations on switching from Perl to Python'), ('/questions/136977/after-c-python-or-java#138442', 'After C++ - Python or Java?')]
>>> from urlparse import urljoin
>>> links = [(urljoin('http://stackoverflow.com/', url), title) for url,title in links]
>>> links
[('http://stackoverflow.com/questions/5119/what-are-the-best-rss-feeds-for-programmersdevelopers#5150', 'What are the best RSS feeds for programmers/developers?'), ('http://stackoverflow.com/questions/3088/best-ways-to-teach-a-beginner-to-program#13185', 'Best ways to teach a beginner to program?'), ('http://stackoverflow.com/questions/13678/textual-versus-graphical-programming-languages#13886', 'Textual versus Graphical Programming Languages'), ('http://stackoverflow.com/questions/58968/what-defines-pythonian-or-pythonic#59877', 'What defines “pythonian” or “pythonic”?'), ('http://stackoverflow.com/questions/592/cxoracle-how-do-i-access-oracle-from-python#62392', 'cx_Oracle - How do I access Oracle from Python? '), ('http://stackoverflow.com/questions/7170/recommendation-for-straight-forward-python-frameworks#83608', 'Recommendation for straight-forward python frameworks'), ('http://stackoverflow.com/questions/100732/why-is-if-not-someobj-better-than-if-someobj-none-in-python#100903', 'Why is if not someobj: better than if someobj == None: in Python?'), ('http://stackoverflow.com/questions/132734/presentations-on-switching-from-perl-to-python#134006', 'Presentations on switching from Perl to Python'), ('http://stackoverflow.com/questions/136977/after-c-python-or-java#138442', 'After C++ - Python or Java?')]
Converting this to a function should be trivial.
EDIT: Heck, I'll do it...
def get_stackoverflow(query):
import urllib, urllib2, re, urlparse
params = urllib.urlencode({'q': query, 'sort': 'relevance'})
html = urllib2.urlopen("http://stackoverflow.com/search?%s" % params).read()
links = re.findall(r'<h3><a href="([^"]*)" class="answer-title">([^<]*)</a></h3>', html)
links = [(urlparse.urljoin('http://stackoverflow.com/', url), title) for url,title in links]
return links
A:
Since Stack Overflow already has this feature, you just need to get the contents of the search results page and scrape the information you need. Here is the URL for a search by relevance:
https://stackoverflow.com/search?q=python+best+practices&sort=relevance
If you View Source, you'll see that the information you need for each question is on a line like this:
<h3><a href="/questions/5119/what-are-the-best-rss-feeds-for-programmersdevelopers#5150" class="answer-title">What are the best RSS feeds for programmers/developers?</a></h3>
So you should be able to get the first ten by doing a regex search for a string of that form.
A:
Suggest that a REST API be added to SO. http://stackoverflow.uservoice.com/
A:
You could screen scrape the returned HTML from a valid HTTP request. But that would result in bad karma, and the loss of the ability to enjoy a good night's sleep.
A:
I would just use PycURL to concatenate the search terms onto the query URI.
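A rough sketch of that approach (pycurl collects the response body through a write callback; the query string is just an example):
import pycurl
import urllib
import StringIO

buf = StringIO.StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://stackoverflow.com/search?' +
         urllib.urlencode({'q': 'python best practices'}))
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()
c.close()
html = buf.getvalue()   # scrape titles/URLs from this as shown above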
| How can I search through Stack Overflow questions from a script? | Given a string of keywords, such as "Python best practices", I would like to obtain the first 10 Stack Overflow questions that contain that keywords, sorted by relevance (?), say from a Python script. My goal is to end up with a list of tuples (title, URL).
How can I accomplish this? Would you consider querying Google instead? (How would you do it from Python?)
| [
">>> from urllib import urlencode\n>>> params = urlencode({'q': 'python best practices', 'sort': 'relevance'})\n>>> params\n'q=python+best+practices&sort=relevance'\n>>> from urllib2 import urlopen\n>>> html = urlopen(\"http://stackoverflow.com/search?%s\" % params).read()\n>>> import re\n>>> links = re.findall(r'<h3><a href=\"([^\"]*)\" class=\"answer-title\">([^<]*)</a></h3>', html)\n>>> links\n[('/questions/5119/what-are-the-best-rss-feeds-for-programmersdevelopers#5150', 'What are the best RSS feeds for programmers/developers?'), ('/questions/3088/best-ways-to-teach-a-beginner-to-program#13185', 'Best ways to teach a beginner to program?'), ('/questions/13678/textual-versus-graphical-programming-languages#13886', 'Textual versus Graphical Programming Languages'), ('/questions/58968/what-defines-pythonian-or-pythonic#59877', 'What defines “pythonian” or “pythonic”?'), ('/questions/592/cxoracle-how-do-i-access-oracle-from-python#62392', 'cx_Oracle - How do I access Oracle from Python? '), ('/questions/7170/recommendation-for-straight-forward-python-frameworks#83608', 'Recommendation for straight-forward python frameworks'), ('/questions/100732/why-is-if-not-someobj-better-than-if-someobj-none-in-python#100903', 'Why is if not someobj: better than if someobj == None: in Python?'), ('/questions/132734/presentations-on-switching-from-perl-to-python#134006', 'Presentations on switching from Perl to Python'), ('/questions/136977/after-c-python-or-java#138442', 'After C++ - Python or Java?')]\n>>> from urlparse import urljoin\n>>> links = [(urljoin('http://stackoverflow.com/', url), title) for url,title in links]\n>>> links\n[('http://stackoverflow.com/questions/5119/what-are-the-best-rss-feeds-for-programmersdevelopers#5150', 'What are the best RSS feeds for programmers/developers?'), ('http://stackoverflow.com/questions/3088/best-ways-to-teach-a-beginner-to-program#13185', 'Best ways to teach a beginner to program?'), ('http://stackoverflow.com/questions/13678/textual-versus-graphical-programming-languages#13886', 'Textual versus Graphical Programming Languages'), ('http://stackoverflow.com/questions/58968/what-defines-pythonian-or-pythonic#59877', 'What defines “pythonian” or “pythonic”?'), ('http://stackoverflow.com/questions/592/cxoracle-how-do-i-access-oracle-from-python#62392', 'cx_Oracle - How do I access Oracle from Python? '), ('http://stackoverflow.com/questions/7170/recommendation-for-straight-forward-python-frameworks#83608', 'Recommendation for straight-forward python frameworks'), ('http://stackoverflow.com/questions/100732/why-is-if-not-someobj-better-than-if-someobj-none-in-python#100903', 'Why is if not someobj: better than if someobj == None: in Python?'), ('http://stackoverflow.com/questions/132734/presentations-on-switching-from-perl-to-python#134006', 'Presentations on switching from Perl to Python'), ('http://stackoverflow.com/questions/136977/after-c-python-or-java#138442', 'After C++ - Python or Java?')]\n\nConverting this to a function should be trivial.\nEDIT: Heck, I'll do it...\ndef get_stackoverflow(query):\n import urllib, urllib2, re, urlparse\n params = urllib.urlencode({'q': query, 'sort': 'relevance'})\n html = urllib2.urlopen(\"http://stackoverflow.com/search?%s\" % params).read()\n links = re.findall(r'<h3><a href=\"([^\"]*)\" class=\"answer-title\">([^<]*)</a></h3>', html)\n links = [(urlparse.urljoin('http://stackoverflow.com/', url), title) for url,title in links]\n\n return links\n\n",
"Since Stackoverflow already has this feature you just need to get the contents of the search results page and scrape the information you need. Here is the URL for a search by relevance:\n\nhttps://stackoverflow.com/search?q=python+best+practices&sort=relevance\n\nIf you View Source, you'll see that the information you need for each question is on a line like this:\n<h3><a href=\"/questions/5119/what-are-the-best-rss-feeds-for-programmersdevelopers#5150\" class=\"answer-title\">What are the best RSS feeds for programmers/developers?</a></h3>\n\nSo you should be able to get the first ten by doing a regex search for a string of that form.\n",
"Suggest that a REST API be added to SO. http://stackoverflow.uservoice.com/\n",
"You could screen scrape the returned HTML from a valid HTTP request. But that would result in bad karma, and the loss of the ability to enjoy a good night's sleep.\n",
"I would just use Pycurl to concatenate the search terms onto the query uri.\n"
] | [
6,
5,
2,
1,
0
] | [] | [] | [
"python",
"scripting",
"search",
"stackexchange_api"
] | stackoverflow_0000196755_python_scripting_search_stackexchange_api.txt |
Q:
MySQLdb execute timeout
Sometimes in our production environment a situation occurs where the connection between the service (which is a Python program that uses MySQLdb) and the MySQL server is flaky: some packets are lost, some black magic happens, and .execute() of a MySQLdb.Cursor object never ends (or takes a great amount of time to end).
This is very bad because it wastes service worker threads. Sometimes it leads to exhaustion of the worker pool and the service stops responding at all.
So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
A:
If the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
A:
You need to analyse exactly what the problem is. MySQL connections should eventually time out if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.
If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is.
If you are using (some) stateful firewalls of any kind, it's possible that they're losing some of the state, thus causing otherwise good long-lived connections to go dead.
You might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale", where the server and client both think it's still alive, but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a "stale" connection will have a long wait before receiving an error (but it should eventually).
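If all you need is a crude client-side time budget, one Unix-only workaround is SIGALRM. Whether the alarm actually interrupts the blocking C-level read depends on the platform and the client library's handling of EINTR, so treat this as a sketch, not a guarantee:
import signal

class QueryTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise QueryTimeout("query exceeded its time budget")

def execute_with_timeout(cursor, sql, seconds):
    # install a one-shot alarm around the blocking execute()
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        cursor.execute(sql)
    finally:
        signal.alarm(0)                        # cancel the alarm
        signal.signal(signal.SIGALRM, old_handler)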
| MySQLdb execute timeout | Sometimes in our production environment occurs situation when connection between service (which is python program that uses MySQLdb) and mysql server is flacky, some packages are lost, some black magic happens and .execute() of MySQLdb.Cursor object never ends (or take great amount of time to end).
This is very bad because it is waste of service worker threads. Sometimes it leads to exhausting of workers pool and service stops responding at all.
So the question is: Is there a way to interrupt MySQLdb.Connection.execute operation after given amount of time?
| [
"if the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.\n",
"You need to analyse exactly what the problem is. MySQL connections should eventually timeout if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.\nIf the database is \"flaky\", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is.\nIf you are using (some) stateful firewalls of any kind, it's possible that they're losing some of the state, thus causing otherwise good long-lived connections to go dead.\nYou might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go \"stale\", where the server and client both think it's still alive, but some stateful network element in between has \"forgotten\" about the TCP connection. An application trying to use such a \"stale\" connection will have a long wait before receiving an error (but it should eventually).\n"
] | [
2,
1
] | [] | [] | [
"mysql",
"python",
"timeout"
] | stackoverflow_0000196217_mysql_python_timeout.txt |
Q:
what is the best/easiest to use encryption library in python
I want to encrypt a few files using Python. What is the best way?
Can I use GPG/PGP via any standard/famous Python libraries?
A:
PyCrypto seems to be the best one around.
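A minimal PyCrypto sketch (the key and IV are 16-byte placeholders, and the naive space padding assumes your plaintext can tolerate trailing spaces):
from Crypto.Cipher import AES

key = '0123456789abcdef'
iv = 'fedcba9876543210'

data = 'attack at dawn'
padded = data + ' ' * ((16 - len(data) % 16) % 16)   # AES works on 16-byte blocks

ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(padded)
plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext).rstrip(' ')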
A:
Try Keyczar.
Very easy to implement.
A:
I use GPGme. The main strength of GPGme is that it reads and writes files in the OpenPGP standard (RFC 4880), which can be important if you want to interoperate with other PGP programs.
It has a Python interface. Warning: it is a low-level interface, not very Pythonic.
If you read French, see examples.
Here is one, to check a signature:
signed = core.Data(sys.stdin.read())
plain = core.Data()
context = core.Context()
context.op_verify(signed, None, plain)
result = context.op_verify_result()
sign = result.signatures
while sign:
if sign.status != 0:
print "BAD signature from:"
else:
print "Good signature from:"
print " uid: ", context.get_key(sign.fpr, 0).uids.uid
print " timestamp: ", sign.timestamp
print " fingerprint:", sign.fpr
sign = sign.next
A:
I use pyOpenSSL, it's a Python binding for OpenSSL which has been around for a long time and is very well tested. I did some benchmarks for my application, which is very crypto intensive, and it won hands down against pyCrypto. YMMV.
A:
See Google's Keyczar project, which provides a nice set of interfaces to PyCrypto's functionality.
A:
I like pyDes (http://twhiteman.netfirms.com/des.html). It's not the quickest, but it's pure Python and works very well for small amounts of encrypted data.
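A small pyDes sketch (the key and IV values are placeholders; triple_des expects a 16- or 24-byte key):
from pyDes import triple_des, CBC, PAD_PKCS5

k = triple_des('sixteen byte key', CBC, '\0' * 8, padmode=PAD_PKCS5)
ciphertext = k.encrypt('secret data')
assert k.decrypt(ciphertext) == 'secret data'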
| what is the best/easiest to use encryption library in python | I want to encrypt few files using python what is the best way
I can use gpg/pgp using any standard/famous python libraries?
| [
"PyCrypto seems to be the best one around.\n",
"Try KeyCzar \nVery easy to implement. \n",
"I use GPGme The main strength of GPGme is that it read and writes files at the OpenPGP standard (RFC 4880) which can be important if you want to interoperate with other PGP programs. \nIt has a Python interface. Warning: it is a low-level interface, not very Pythonic.\nIf you read French, see examples.\nHere is one, to check a signature:\nsigned = core.Data(sys.stdin.read())\nplain = core.Data()\ncontext = core.Context()\n\ncontext.op_verify(signed, None, plain)\nresult = context.op_verify_result()\n\nsign = result.signatures\nwhile sign:\n if sign.status != 0:\n print \"BAD signature from:\"\n else:\n print \"Good signature from:\"\n print \" uid: \", context.get_key(sign.fpr, 0).uids.uid\n print \" timestamp: \", sign.timestamp\n print \" fingerprint:\", sign.fpr\n sign = sign.next\n\n",
"I use pyOpenSSL, its a python binding for OpenSSL which has been around for a long time and is very well tested. I did some benchmarks for my application, which is very crypto intensive and it won hands down against pyCrypto. YMMV.\n",
"See Google's Keyczar project, which provides a nice set of interfaces to PyCrypto's functionality.\n",
"I like pyDes (http://twhiteman.netfirms.com/des.html). It's not the quickest, but it's pure Python and works very well for small amounts of encrypted data.\n"
] | [
12,
7,
6,
5,
4,
0
] | [] | [] | [
"encryption",
"gnupg",
"pgp",
"python"
] | stackoverflow_0000090413_encryption_gnupg_pgp_python.txt |
Q:
Which AES library to use in Ruby/Python?
I need to be able to send encrypted data between a Ruby client and a Python server (and vice versa) and have been having trouble with the ruby-aes gem/library. The library is very easy to use, but we've had trouble passing data between it and the pyCrypto AES library for Python. These libraries seem to be fine when they're the only one being used, but they don't seem to play well across language boundaries. Any ideas?
Edit: We're doing the communication over SOAP and have also tried converting the binary data to base64, to no avail. Also, it's more that the encryption/decryption is almost but not exactly the same between the two (e.g., the lengths differ by one, or there are extra garbage characters on the end of the decrypted string).
A:
(e.g., the lengths differ by one or there is extra garbage characters on the end of the decrypted string)
I missed that bit. There's nothing wrong with your encryption/decryption. It sounds like a padding problem. AES always encodes data in blocks of 128 bits. If the length of your data isn't a multiple of 128 bits, the data should be padded before encryption and the padding needs to be removed/ignored after decryption.
A:
Turns out what happened was that ruby-aes automatically pads data to fill 16-character blocks and sticks a null character on the end of the final string as a delimiter. PyCrypto requires you to supply multiples of 16 chars, so that was how we figured out what ruby-aes was doing.
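Given that behaviour, a hedged sketch of matching it on the Python side (this assumes ruby-aes really does NUL-pad to a 16-byte boundary as described above):
BLOCK = 16

def pad_for_ruby(data):
    # pad with NULs up to the next 16-byte boundary
    return data + '\0' * ((BLOCK - len(data) % BLOCK) % BLOCK)

def unpad_from_ruby(data):
    # strip the NUL padding (and the delimiter) after decryption
    return data.rstrip('\0')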
A:
It's hard to even guess at what's happening without more information ...
If I were you, I'd check that in your Python and Ruby programs:
The keys are the same (obviously). Dump them as hex and compare each byte.
The initialization vectors are the same. This is the parameter IV in AES.new() in pyCrypto. Dump them as hex too.
The modes are the same. The parameter mode in AES.new() in pyCrypto.
There are defaults for IV and mode in pyCrypto, but don't trust that they are the same as in the Ruby implementation. Use one of the simpler modes, like CBC. I've found that different libraries have different interpretations of how the more complex modes, such as CTR, work.
Wikipedia has a great article about how block cipher modes.
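For illustration, a minimal pyCrypto sketch that pins all three down explicitly; the key and IV here are placeholders, and both programs must use the same values:
from Crypto.Cipher import AES

key = '0123456789abcdef'   # placeholder 16-byte key -- must match on both sides
iv = '\x00' * 16           # placeholder IV -- never rely on a library default
cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = cipher.encrypt('16-byte payload!')  # input length must be a multiple of 16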
A:
Kind of depends on how you are transferring the encrypted data. It is possible that you are writing a file in one language and then trying to read it in from the other. Python (especially on Windows) requires that you specify binary mode for binary files. So in Python, assuming you want to decrypt there, you should open the file like this:
f = open('/path/to/file', 'rb')
The "b" indicates binary. And if you are writing the encrypted data to file from Python:
f = open('/path/to/file', 'wb')
f.write(encrypted_data)
A:
Basically what Hugh said above: check the IV's, key sizes and the chaining modes to make sure everything is identical.
Test both sides independently: encode some information and check that Ruby and Python encoded it identically. You're assuming that the problem has to do with encryption, but it may just be something as simple as sending the encrypted data with puts, which throws random newlines into the data. Once you're sure they encrypt the data correctly, check that you receive exactly what you think you sent. Keep going step by step until you find the stage that corrupts the data.
Also, I'd suggest using the openssl library that's included in ruby's standard library instead of using an external gem.
| Which AES library to use in Ruby/Python? | I need to be able to send encrypted data between a Ruby client and a Python server (and vice versa) and have been having trouble with the ruby-aes gem/library. The library is very easy to use but we've been having trouble passing data between it and the pyCrypto AES library for Python. These libraries seem to be fine when they're the only one being used, but they don't seem to play well across language boundaries. Any ideas?
Edit: We're doing the communication over SOAP and have also tried converting the binary data to base64 to no avail. Also, it's more that the encryption/decryption is almost but not exactly the same between the two (e.g., the lengths differ by one or there are extra garbage characters on the end of the decrypted string)
| [
"\n(e.g., the lengths differ by one or there is extra garbage characters on the end of the decrypted string)\n\nI missed that bit. There's nothing wrong with your encryption/decryption. It sounds like a padding problem. AES always encodes data in blocks of 128 bits. If the length of your data isn't a multiple of 128 bits the data should be padded before encryption and the padding needs to be removed/ignored after encryption.\n",
"Turns out what happened was that ruby-aes automatically pads data to fill up 16 chars and sticks a null character on the end of the final string as a delimiter. PyCrypto requires you to do multiples of 16 chars so that was how we figured out what ruby-aes was doing.\n",
"It's hard to even guess at what's happening without more information ... \nIf I were you, I'd check that in your Python and Ruby programs:\n\nThe keys are the same (obviously). Dump them as hex and compare each byte.\nThe initialization vectors are the same. This is the parameter IV in AES.new() in pyCrypto. Dump them as hex too.\nThe modes are the same. The parameter mode in AES.new() in pyCrypto.\n\nThere are defaults for IV and mode in pyCrypto, but don't trust that they are the same as in the Ruby implementation. Use one of the simpler modes, like CBC. I've found that different libraries have different interpretations of how the mode complex modes, such as PTR, work.\nWikipedia has a great article about how block cipher modes.\n",
"Kind of depends on how you are transferring the encrypted data. It is possible that you are writing a file in one language and then trying to read it in from the other. Python (especially on Windows) requires that you specify binary mode for binary files. So in Python, assuming you want to decrypt there, you should open the file like this:\nf = open('/path/to/file', 'rb')\n\nThe \"b\" indicates binary. And if you are writing the encrypted data to file from Python:\nf = open('/path/to/file', 'wb')\nf.write(encrypted_data)\n\n",
"Basically what Hugh said above: check the IV's, key sizes and the chaining modes to make sure everything is identical.\nTest both sides independantly, encode some information and check that Ruby and Python endoded it identically. You're assuming that the problem has to do with encryption, but it may just be something as simple as sending the encrypted data with puts which throws random newlines into the data. Once you're sure they encrypt the data correctly, check that you receive exactly what you think you sent. Keep going step by step until you find the stage that corrupts the data.\nAlso, I'd suggest using the openssl library that's included in ruby's standard library instead of using an external gem.\n"
] | [
5,
3,
2,
1,
1
] | [] | [] | [
"aes",
"encryption",
"python",
"ruby"
] | stackoverflow_0000196776_aes_encryption_python_ruby.txt |
Q:
What are the advantages of packaging your python library/application as an .egg file?
I've read some about .egg files and I've noticed them in my lib directory, but what are the advantages/disadvantages of using them as a developer?
A:
From the Python Enterprise Application Kit community:
"Eggs are to Pythons as Jars are to Java..."
Python eggs are a way of bundling
additional information with a Python
project, that allows the project's
dependencies to be checked and
satisfied at runtime, as well as
allowing projects to provide plugins
for other projects. There are several
binary formats that embody eggs, but
the most common is '.egg' zipfile
format, because it's a convenient one
for distributing projects. All of the
formats support including
package-specific data, project-wide
metadata, C extensions, and Python
code.
The primary benefits of Python Eggs
are:
They enable tools like the "Easy Install" Python package manager
.egg files are a "zero installation" format for a Python
package; no build or install step is
required, just put them on PYTHONPATH
or sys.path and use them (may require
the runtime installed if C extensions
or data files are used)
They can include package metadata, such as the other eggs they depend on
They allow "namespace packages" (packages that just contain other
packages) to be split into separate
distributions (e.g. zope., twisted.,
peak.* packages can be distributed as
separate eggs, unlike normal packages
which must always be placed under the
same parent directory. This allows
what are now huge monolithic packages
to be distributed as separate
components.)
They allow applications or libraries to specify the needed
version of a library, so that you can
e.g. require("Twisted-Internet>=2.0")
before doing an import
twisted.internet.
They're a great format for distributing extensions or plugins to
extensible applications and frameworks
(such as Trac, which uses eggs for
plugins as of 0.9b1), because the egg
runtime provides simple APIs to locate
eggs and find their advertised entry
points (similar to Eclipse's
"extension point" concept).
There are also other benefits that may come from having a standardized
format, similar to the benefits of
Java's "jar" format.
-Adam
A:
One egg by itself is not better than a proper source release. The good part is the dependency handling. Like debian or rpm packages, you can say you depend on other eggs and they'll be installed automatically (through pypi.python.org).
A second comment: the egg format itself is a binary packaged format. Normal python packages that consist of just python code are best distributed as "source releases", so "python setup.py sdist", which results in a .tar.gz. These are also commonly called "eggs" when uploaded to pypi.
Where you need binary eggs: when you're bundling some C code extension. You'll need several binary eggs (a 32bit unix one, a windows one, etc.) then.
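For reference, a minimal setuptools setup.py sketch (the project name and version are placeholders) that can build either kind of distribution:
# setup.py -- hypothetical minimal project
from setuptools import setup, find_packages

setup(
    name='myproject',   # placeholder
    version='0.1',
    packages=find_packages(),
)
With that in place, python setup.py sdist produces the source .tar.gz and python setup.py bdist_egg produces the binary egg.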
A:
Eggs are a pretty good way to distribute python apps. Think of it as a platform independent .deb file that will install all dependencies and whatnot. The advantage is that it's easy to use for the end user. The disadvantage is that it can be cumbersome to package your app up as a .egg file.
You should also offer an alternative means of installation in addition to .eggs. There are some people who don't like using eggs because they don't like the idea of a software program installing whatever software it wants. These usually tend to be sysadmin types.
A:
.egg files are basically a nice way to deploy your python application. You can think of it as something like .jar files for Java.
More info here.
A:
Whatever you do, do not stop distributing your application also as a tarball, as that is the most easily packageable format for operating systems with a package system.
A:
For simple Python programs, you probably don't need to use eggs. Distributing the raw .py files should suffice; it's like distributing source files for GNU/Linux. You can also use the various OS "packagers" (like py2exe or py2app) to create .exe, .dmg, or other files for different operating systems.
More complex programs, e.g. Django, pretty much require eggs due to the various modules and dependencies required.
| What are the advantages of packaging your python library/application as an .egg file? | I've read some about .egg files and I've noticed them in my lib directory but what are the advantages/disadvantages of using them as a developer?
| [
"From the Python Enterprise Application Kit community:\n\n\"Eggs are to Pythons as Jars are to Java...\"\nPython eggs are a way of bundling\n additional information with a Python\n project, that allows the project's\n dependencies to be checked and\n satisfied at runtime, as well as\n allowing projects to provide plugins\n for other projects. There are several\n binary formats that embody eggs, but\n the most common is '.egg' zipfile\n format, because it's a convenient one\n for distributing projects. All of the\n formats support including\n package-specific data, project-wide\n metadata, C extensions, and Python\n code.\nThe primary benefits of Python Eggs\n are:\n\nThey enable tools like the \"Easy Install\" Python package manager\n.egg files are a \"zero installation\" format for a Python\n package; no build or install step is\n required, just put them on PYTHONPATH\n or sys.path and use them (may require\n the runtime installed if C extensions\n or data files are used)\nThey can include package metadata, such as the other eggs they depend on\nThey allow \"namespace packages\" (packages that just contain other\n packages) to be split into separate\n distributions (e.g. zope., twisted.,\n peak.* packages can be distributed as\n separate eggs, unlike normal packages\n which must always be placed under the\n same parent directory. This allows\n what are now huge monolithic packages\n to be distributed as separate\n components.)\nThey allow applications or libraries to specify the needed\n version of a library, so that you can\n e.g. require(\"Twisted-Internet>=2.0\")\n before doing an import\n twisted.internet.\nThey're a great format for distributing extensions or plugins to\n extensible applications and frameworks\n (such as Trac, which uses eggs for\n plugins as of 0.9b1), because the egg\n runtime provides simple APIs to locate\n eggs and find their advertised entry\n points (similar to Eclipse's\n \"extension point\" concept).\nThere are also other benefits that may come from having a standardized\n format, similar to the benefits of\n Java's \"jar\" format.\n\n\n-Adam\n",
"One egg by itself is not better than a proper source release. The good part is the dependency handling. Like debian or rpm packages, you can say you depend on other eggs and they'll be installed automatically (through pypi.python.org).\nA second comment: the egg format itself is a binary packaged format. Normal python packages that consist of just python code are best distributed as \"source releases\", so \"python setup.py sdist\" which result in a .tar.gz. These are also commonly called \"eggs\" when uploaded to pypi.\nWhere you need binary eggs: when you're bundling some C code extension. You'll need several binary eggs (a 32bit unix one, a windows one, etc.) then.\n",
"Eggs are a pretty good way to distribute python apps. Think of it as a platform independent .deb file that will install all dependencies and whatnot. The advantage is that it's easy to use for the end user. The disadvantage are that it can be cumbersome to package your app up as a .egg file.\nYou should also offer an alternative means of installation in addition to .eggs. There are some people who don't like using eggs because they don't like the idea of a software program installing whatever software it wants. These usually tend to be sysadmin types.\n",
".egg files are basically a nice way to deploy your python application. You can think of it as something like .jar files for Java. \nMore info here.\n",
"Whatever you do, do not stop distributing your application, also, as a tarball, as that is the easiest packagable format for operating systems with a package sysetem.\n",
"For simple Python programs, you probably don't need to use eggs. Distributing the raw .py files should suffice; it's like distributing source files for GNU/Linux. You can also use the various OS \"packagers\" (like py2exe or py2app) to create .exe, .dmg, or other files for different operating systems.\nMore complex programs, e.g. Django, pretty much require eggs due to the various modules and dependencies required.\n"
] | [
32,
5,
4,
3,
1,
1
] | [] | [] | [
"egg",
"packaging",
"python",
"software_distribution",
"zip"
] | stackoverflow_0000047953_egg_packaging_python_software_distribution_zip.txt |
Q:
Dealing with a string containing multiple character encodings
I'm not exactly sure how to ask this question really, and I'm nowhere close to finding an answer, so I hope someone can help me.
I'm writing a Python app that connects to a remote host and receives back byte data, which I unpack using Python's built-in struct module. My problem is with the strings, as they include multiple character encodings. Here is an example of such a string:
"^LThis is an example ^Gstring with multiple ^Jcharacter encodings"
Where the different encoding starts and ends is marked using special escape chars:
^L - Latin1
^E - Central Europe
^T - Turkish
^B - Baltic
^J - Japanese
^C - Cyrillic
^G - Greek
And so on... I need a way to convert this sort of string into Unicode, but I'm really not sure how to do it. I've read up on Python's codecs and string.encode/decode, but I'm none the wiser really. I should mention as well, that I have no control over how the strings are outputted by the host.
I hope someone can help me with how to get started on this.
A:
Here's a relatively simple example of how to do it...
# -*- coding: utf-8 -*-
import re
# Test Data
ENCODING_RAW_DATA = (
('latin_1', 'L', u'Hello'), # Latin 1
('iso8859_2', 'E', u'dobrý večer'), # Central Europe
('iso8859_9', 'T', u'İyi akşamlar'), # Turkish
('iso8859_13', 'B', u'Į sveikatą!'), # Baltic
('shift_jis', 'J', u'今日は'), # Japanese
('iso8859_5', 'C', u'Здравствуйте'), # Cyrillic
('iso8859_7', 'G', u'Γειά σου'), # Greek
)
CODE_TO_ENCODING = dict([(chr(ord(code)-64), encoding) for encoding, code, text in ENCODING_RAW_DATA])
EXPECTED_RESULT = u''.join([line[2] for line in ENCODING_RAW_DATA])
ENCODED_DATA = ''.join([chr(ord(code)-64) + text.encode(encoding) for encoding, code, text in ENCODING_RAW_DATA])
FIND_RE = re.compile('[\x00-\x1A][^\x00-\x1A]*')
def decode_single(bytes):
return bytes[1:].decode(CODE_TO_ENCODING[bytes[0]])
result = u''.join([decode_single(bytes) for bytes in FIND_RE.findall(ENCODED_DATA)])
assert result==EXPECTED_RESULT, u"Expected %s, but got %s" % (EXPECTED_RESULT, result)
A:
There's no built-in functionality for decoding a string like this, since it is really its own custom codec. You simply need to split up the string on those control characters and decode it accordingly.
Here's a (very slow) example of such a function that handles latin1 and shift-JIS:
latin1 = "latin-1"
japanese = "Shift-JIS"
control_l = "\x0c"
control_j = "\n"
encodingMap = {
control_l: latin1,
control_j: japanese}
def funkyDecode(s, initialCodec=latin1):
output = u""
accum = ""
currentCodec = initialCodec
for ch in s:
if ch in encodingMap:
output += accum.decode(currentCodec)
currentCodec = encodingMap[ch]
accum = ""
else:
accum += ch
output += accum.decode(currentCodec)
return output
A faster version might use str.split, or regular expressions.
(Also, as you can see in this example, "^J" is the control character for "newline", so your input data is going to have some interesting restrictions.)
A:
I would write a codec that incrementally scanned the string and decoded the bytes as they came along. Essentially, you would have to separate strings into chunks with a consistent encoding and decode those and append them to the strings that followed them.
A:
You definitely have to split the string first into the substrings with different encodings, and decode each one separately. Just for fun, the obligatory "one-line" version:
import re
encs = {
'L': 'latin1',
'G': 'iso8859-7',
...
}
decoded = ''.join(substr[2:].decode(encs[substr[1]])
for substr in re.findall('\^[%s][^^]*' % ''.join(encs.keys()), st))
(no error checking, and also you'll want to decide how to handle '^' characters in substrings)
A:
I don't suppose you have any way of convincing the person who hosts the other machine to switch to unicode?
This is one of the reasons Unicode was invented, after all.
| Dealing with a string containing multiple character encodings | I'm not exactly sure how to ask this question really, and I'm nowhere close to finding an answer, so I hope someone can help me.
I'm writing a Python app that connects to a remote host and receives back byte data, which I unpack using Python's built-in struct module. My problem is with the strings, as they include multiple character encodings. Here is an example of such a string:
"^LThis is an example ^Gstring with multiple ^Jcharacter encodings"
Where the different encoding starts and ends is marked using special escape chars:
^L - Latin1
^E - Central Europe
^T - Turkish
^B - Baltic
^J - Japanese
^C - Cyrillic
^G - Greek
And so on... I need a way to convert this sort of string into Unicode, but I'm really not sure how to do it. I've read up on Python's codecs and string.encode/decode, but I'm none the wiser really. I should mention as well, that I have no control over how the strings are outputted by the host.
I hope someone can help me with how to get started on this.
| [
"Here's a relatively simple example of how do it...\n# -*- coding: utf-8 -*-\nimport re\n\n# Test Data\nENCODING_RAW_DATA = (\n ('latin_1', 'L', u'Hello'), # Latin 1\n ('iso8859_2', 'E', u'dobrý večer'), # Central Europe\n ('iso8859_9', 'T', u'İyi akşamlar'), # Turkish\n ('iso8859_13', 'B', u'Į sveikatą!'), # Baltic\n ('shift_jis', 'J', u'今日は'), # Japanese\n ('iso8859_5', 'C', u'Здравствуйте'), # Cyrillic\n ('iso8859_7', 'G', u'Γειά σου'), # Greek\n)\n\nCODE_TO_ENCODING = dict([(chr(ord(code)-64), encoding) for encoding, code, text in ENCODING_RAW_DATA])\nEXPECTED_RESULT = u''.join([line[2] for line in ENCODING_RAW_DATA])\nENCODED_DATA = ''.join([chr(ord(code)-64) + text.encode(encoding) for encoding, code, text in ENCODING_RAW_DATA])\n\nFIND_RE = re.compile('[\\x00-\\x1A][^\\x00-\\x1A]*')\n\ndef decode_single(bytes):\n return bytes[1:].decode(CODE_TO_ENCODING[bytes[0]])\n\nresult = u''.join([decode_single(bytes) for bytes in FIND_RE.findall(ENCODED_DATA)])\n\nassert result==EXPECTED_RESULT, u\"Expected %s, but got %s\" % (EXPECTED_RESULT, result)\n\n",
"There's no built-in functionality for decoding a string like this, since it is really its own custom codec. You simply need to split up the string on those control characters and decode it accordingly.\nHere's a (very slow) example of such a function that handles latin1 and shift-JIS:\nlatin1 = \"latin-1\"\njapanese = \"Shift-JIS\"\n\ncontrol_l = \"\\x0c\"\ncontrol_j = \"\\n\"\n\nencodingMap = {\n control_l: latin1,\n control_j: japanese}\n\ndef funkyDecode(s, initialCodec=latin1):\n output = u\"\"\n accum = \"\"\n currentCodec = initialCodec\n for ch in s:\n if ch in encodingMap:\n output += accum.decode(currentCodec)\n currentCodec = encodingMap[ch]\n accum = \"\"\n else:\n accum += ch\n output += accum.decode(currentCodec)\n return output\n\nA faster version might use str.split, or regular expressions.\n(Also, as you can see in this example, \"^J\" is the control character for \"newline\", so your input data is going to have some interesting restrictions.)\n",
"I would write a codec that incrementally scanned the string and decoded the bytes as they came along. Essentially, you would have to separate strings into chunks with a consistent encoding and decode those and append them to the strings that followed them.\n",
"You definitely have to split the string first into the substrings wih different encodings, and decode each one separately. Just for fun, the obligatory \"one-line\" version:\nimport re\n\nencs = {\n 'L': 'latin1',\n 'G': 'iso8859-7',\n ...\n}\n\ndecoded = ''.join(substr[2:].decode(encs[substr[1]])\n for substr in re.findall('\\^[%s][^^]*' % ''.join(encs.keys()), st))\n\n(no error checking, and also you'll want to decide how to handle '^' characters in substrings)\n",
"I don't suppose you have any way of convincing the person who hosts the other machine to switch to unicode?\nThis is one of the reasons Unicode was invented, after all.\n"
] | [
7,
4,
3,
2,
1
] | [] | [] | [
"encoding",
"python",
"string",
"unicode"
] | stackoverflow_0000197759_encoding_python_string_unicode.txt |
Q:
A python web application framework for tight DB/GUI coupling?
I'm a firm believer of the heretic thought of tight coupling between the backend and frontend: I want existing, implied knowledge about a backend to be automatically made use of when generating user interfaces. E.g., if a VARCHAR column has a maximum width of 20 characters, the GUIs should automatically constrain the user from typing more than 20 characters in a related form field.
And I have strong antipathy to ORMs which want to define my database tables, or are based on some hack where every table needs to have extra numeric ID columns because of the ORM.
I've looked a bit into Python database frameworks and I think I can conclude the SQLAlchemy fits best to my mentality.
Now, I need to find a web application framework which fits naturally with SQLAlchemy (or an equivalent) and perhaps even with my appetite for coupling. With "web application framework", I mean products/projects such as Pylons, Django, TurboGears, web2py, etc.
E.g., it should ideally be able to:
automatically select a suitable form widget for data entering a given column if told to do so; e.g., if the column has a foreign key to a column with 10 different values, widget should display the 10 possible values as a dropdown
auto-generate javascript form validation code which gives the end-user quick error feedback if a string is entered into a field which is about to end up in an INTEGER column, etc
auto-generate a calendar widget for data which will end up in a DATE column
hint NOT NULL constraints as javascript which complains about empty or whitespace-only data in a related input field
generate javascript validation code which matches relevant (simple) CHECK-constraints
make it easy to avoid SQL injection, by using prepared statements and/or validation of externally derived data
make it easy to avoid cross site scripting by automatically escape outgoing strings when appropriate
make use of constraint names to generate somewhat user friendly error messages in case a constraint is violated
All this should happen dynamically, so table adjustments are automatically reflected on the frontend - probably with a caching mechanism, so that all the model introspection wouldn't kill performance. In other words, I don't want to repeat my model definition in an XML file (or alike) when it has already been carefully been defined in my database.
Does such a framework exist for Python (or for any language, for that matter)? If not: Which of the several Python web application frameworks will be least in the way if I were to add parts of the above features myself?
A:
web2py does most of what you ask:
Based on a field type and its validators it will render the field with the appropriate widget. You can override with
db.table.field.widget=...
and use a third party widget.
web2py has js that blocks the user from entering a non-integer in an integer field or a non-double in a double field. time, date and datetime fields have their own pickers. These js validations work with (not instead of) server-side validation.
There is an IS_EMPTY_OR(...) validator.
The DAL prevents SQL injections since everything is escaped when it goes into the DB.
web2py prevents XSS because in {{=variable}}, 'variable' is escaped unless specified otherwise {{=XML(variable)}} or {{=XML(variable,sanitize=True)}}
Error messages are arguments of validators for example
db.table.field.requires=IS_NOT_EMPTY(error_message=T('hey! write something in here'))
T is for internationalization.
A:
You should have a look at django and especially its newforms and admin modules. The newforms module provides a nice way to do server-side validation with automated generation of error messages/pages for the user. Adding ajax validation is also possible.
A:
I believe that Django models do not support composite primary keys (see documentation). But perhaps you can use SQLAlchemy in Django? A google search indicates that you can. I have not used Django, so I don't know.
I suggest you take a look at:
ToscaWidgets
DBSprockets, including DBMechanic
Catwalk. Catwalk is an application for TurboGears 1.0 that uses SQLObject, not SQLAlchemy. Also check out this blog post and screencast.
FastData. Also uses SQLObject.
formalchemy
Rum
I do not have any deep knowledge of any of the projects above. I am just in the process of trying to add something similar to one of my own applications as what the original question mentions. The above list is simply a list of interesting projects that I have stumbled across.
As to web application frameworks for Python, I recommend TurboGears 2. Not that I have any experience with any of the other frameworks, I just like TurboGears...
If the original question's author finds a solution that works well, please update or answer this thread.
A:
TurboGears currently uses SQLObject by default but you can use it with SQLAlchemy. They are saying that the next major release of TurboGears (1.1) will use SQLAlchemy by default.
A:
I know that you specifically ask for a framework, but I thought I would let you know what I get up to here. I have just undergone converting my company's web application from a custom in-house ORM layer into sqlAlchemy, so I am far from an expert, but something that occurred to me was that sqlAlchemy has types for all of the attributes it maps from the database, so why not use that to help output the right html onto the page. So we use sqlAlchemy for the back end and Cheetah templates for the front end, but everything in between is basically our own still.
We have never managed to find a framework that does exactly what we want without compromise and prefer to get all the bits that work right for us and write the glue ourselves.
Step 1. For each data type (sqlAlchemy.types.INTEGER etc.) add an extra function toHtml (or maybe several: toHTMLReadOnly, toHTMLAdminEdit, whatever) and just have that return the template for the html. Now you don't even have to care what data type you're displaying; if you just want to spit out a whole table you can just do the following (as a Cheetah template or whatever your templating engine is). A rough sketch of step 1 itself appears after the step 2 template below.
Step 2
<table>
<tr>
#for $field in $dbObject.c:
<th>$field.name</th>
#end for
</tr>
<tr>
#for $field in dbObject.c:
<td>$field.type.toHtml($field.name, $field.value)</td>
#end for
</tr>
</table>
Using this basic method and stretching Python's introspection to its potential, in an afternoon I managed to make create, read, update and delete code for the whole admin section of our database, not yet with the polish of django but more than good enough for my needs.
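To make step 1 concrete, here is a rough, untested sketch of the idea; the helper names are made up, and SQLAlchemy itself ships no toHtml:
import sqlalchemy.types

def _int_to_html(self, name, value):
    # numeric column -> text box tagged for client-side integer validation
    return '<input type="text" name="%s" value="%s" class="int" />' % (name, value)

def _str_to_html(self, name, value):
    return '<input type="text" name="%s" value="%s" />' % (name, value)

# attach the helpers so templates can call field.type.toHtml(name, value)
sqlalchemy.types.Integer.toHtml = _int_to_html
sqlalchemy.types.String.toHtml = _str_to_html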
Step 3. Discovered the need for a third step just on Friday: I wanted to upload files, which as you know needs more than just the varchar data type's default text box. No sweat, I just overrode the row's class in my table definition from VARCHAR to FilePath(VARCHAR), where the only difference was that FilePath had a different toHtml method. Worked flawlessly.
All that said, if there is a shrink wrapped one out there that does just what you want, use that.
Disclaimer: This code was written from memory after midnight and probably won't produce a functioning web page.
| A python web application framework for tight DB/GUI coupling? | I'm a firm believer of the heretic thought of tight coupling between the backend and frontend: I want existing, implied knowledge about a backend to be automatically made use of when generating user interfaces. E.g., if a VARCHAR column has a maximum width of 20 characters, the GUIs should automatically constrain the user from typing more than 20 characters in a related form field.
And I have strong antipathy to ORMs which want to define my database tables, or are based on some hack where every table needs to have extra numeric ID columns because of the ORM.
I've looked a bit into Python database frameworks and I think I can conclude the SQLAlchemy fits best to my mentality.
Now, I need to find a web application framework which fits naturally with SQLAlchemy (or an equivalent) and perhaps even with my appetite for coupling. With "web application framework", I mean products/projects such as Pylons, Django, TurboGears, web2py, etc.
E.g., it should ideally be able to:
automatically select a suitable form widget for data entering a given column if told to do so; e.g., if the column has a foreign key to a column with 10 different values, widget should display the 10 possible values as a dropdown
auto-generate javascript form validation code which gives the end-user quick error feedback if a string is entered into a field which is about to end up in an INTEGER column, etc
auto-generate a calendar widget for data which will end up in a DATE column
hint NOT NULL constraints as javascript which complains about empty or whitespace-only data in a related input field
generate javascript validation code which matches relevant (simple) CHECK-constraints
make it easy to avoid SQL injection, by using prepared statements and/or validation of externally derived data
make it easy to avoid cross site scripting by automatically escape outgoing strings when appropriate
make use of constraint names to generate somewhat user friendly error messages in case a constraint is violated
All this should happen dynamically, so table adjustments are automatically reflected on the frontend - probably with a caching mechanism, so that all the model introspection wouldn't kill performance. In other words, I don't want to repeat my model definition in an XML file (or alike) when it has already been carefully been defined in my database.
Does such a framework exist for Python (or for any language, for that matter)? If not: Which of the several Python web application frameworks will be least in the way if I were to add parts of the above features myself?
| [
"web2py does most of what you ask:\nBased on a field type and its validators it will render the field with the appropriate widget. You can override with\ndb.table.field.widget=...\n\nand use a third party widget.\nweb2py has js to blocks the user from entering a non-integer in a integer field or a non-double in a double field. time, date and datetime fields have their own pickers. These js validation work with (not instead) of server side validation.\nThere is IS_EMPTY_OR(...) validator.\nThe DAL prevents SQL injections since everthing is escaped when goes in the DB.\nweb2py prevents XSS because in {{=variable}}, 'variable' is escaped unless specified otherwise {{=XML(variable)}} or {{=XML(variable,sanitize=True)}}\nError messages are arguments of validators for example\ndb.table.field.requires=IS_NOT_EMPTY(error_message=T('hey! write something in here'))\n\nT is for internationalization.\n",
"You should have a look at django and especially its newforms and admin modules. The newforms module provides a nice possibility to do server side validation with automated generation of error messages/pages for the user. Adding ajax validation is also possible \n",
"I believe that Django models does not support composite primary keys (see documentation). But perhaps you can use SQLAlchemy in Django? A google search indicates that you can. I have not used Django, so I don't know.\nI suggest you take a look at:\n\nToscaWidgets\nDBSprockets, including DBMechanic\nCatwalk. Catwalk is an application for TurboGears 1.0 that uses SQLObject, not SQLAlchemy. Also check out this blog post and screencast.\nFastData. Also uses SQLObject.\nformalchemy\nRum\n\nI do not have any deep knowledge of any of the projects above. I am just in the process of trying to add something similar to one of my own applications as what the original question mentions. The above list is simply a list of interesting projects that I have stumbled across.\nAs to web application frameworks for Python, I recommend TurboGears 2. Not that I have any experience with any of the other frameworks, I just like TurboGears...\nIf the original question's author finds a solution that works well, please update or answer this thread.\n",
"TurboGears currently uses SQLObject by default but you can use it with SQLAlchemy. They are saying that the next major release of TurboGears (1.1) will use SQLAlchemy by default.\n",
"I know that you specificity ask for a framework but I thought I would let you know about what I get up to here. I have just undergone converting my company's web application from a custom in-house ORM layer into sqlAlchemy so I am far from an expert but something that occurred to me was that sqlAlchemy has types for all of the attributes it maps from the database so why not use that to help output the right html onto the page. So we use sqlAlchemy for the back end and Cheetah templates for the front end but everything in between is basically our own still.\nWe have never managed to find a framework that does exactly what we want without compromise and prefer to get all the bits that work right for us and write the glue our selves. \nStep 1. For each data type sqlAlchemy.types.INTEGER etc. Add an extra function toHtml (or many maybe toHTMLReadOnly, toHTMLAdminEdit whatever) and just have that return the template for the html, now you don't even have to care what data type your displaying if you just want to spit out a whole table you can just do (as a cheetah template or what ever your templating engine is).\nStep 2\n<table>\n<tr>\n#for $field in $dbObject.c:\n<th>$field.name</th>\n#end for\n</tr>\n<tr>\n#for $field in dbObject.c:\n<td>$field.type.toHtml($field.name, $field.value)</td>\n#end for\n</tr>\n</table>\nUsing this basic method and stretching pythons introspection to its potential, in an afternoon I managed to make create read update and delete code for our whole admin section of out database, not yet with the polish of django but more then good enough for my needs.\nStep 3 Discovered the need for a third step just on Friday, wanted to upload files which as you know needs more then just the varchar data types default text box. No sweat, I just overrode the rows class in my table definition from VARCHAR to FilePath(VARCHAR) where the only difference was FilePath had a different toHtml method. Worked flawlessly.\nAll that said, if there is a shrink wrapped one out there that does just what you want, use that.\nDisclaimer: This code was written from memory after midnight and probably wont produce a functioning web page.\n"
] | [
5,
3,
1,
1,
1
] | [] | [] | [
"coupling",
"data_driven",
"metadata",
"python",
"sql"
] | stackoverflow_0000043368_coupling_data_driven_metadata_python_sql.txt |
Q:
Python Regex vs PHP Regex
Not a competition; instead I am trying to find out why a certain regex works in one but not the other.
(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
That's my Regex and I'm trying to run it on
127.255.0.0
Using Python's regex I get nothing; using PHP I match it. Below are the two calls I am making (just in case it's something to do with that). Essentially I am trying to work out why it works in PHP but not Python.
re.findall(regex, string)
preg_match_all($regex, $string, $matches);
Solution found: it was due to the way that I was iterating through the results; this regex turned them into groups, which then didn't print out in the same way, etc. Thank you all for your help, it's really appreciated.
A:
It works for me. You must be doing something wrong.
>>> re.match(r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)', '127.255.0.0').groups()
('127', '255', '0', '0')
Don't forget to escape the regex using raw strings: r'regex_here' as stated in the Regex Howto
A:
I would suggest that using a regex for decimal range validation is not necessarily the correct answer for this problem. This is far more readable:
def valid_ip(s):
m = re.match(r"(\d+)\.(\d+)\.(\d+)\.(\d+)$", s)
if m is None:
return False
parts = [int(m.group(1+x)) for x in range(4)]
if max(parts) > 255:
return False
return True
A:
Just because you can do it with regex, doesn't mean you should. It would be much better to write instructions like: split the string on the period, make sure each group is numeric and within a certain range of numbers.
If you want to use a regex, just verify that it kind of "looks like" an IP address, as with Greg's regex.
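A sketch of that split-and-check approach:
def looks_like_ip(s):
    parts = s.split('.')
    return (len(parts) == 4 and
            all(p.isdigit() and 0 <= int(p) <= 255 for p in parts))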
A:
Without further details, I'd guess it's quote escaping of some kind. Both PHP and python's RegEX objects take strings as arguments. These strings will be escaped by the language before being passed on to the RegEx engine.
I always use Python's "raw" string format when working with regular expressions. It ensures that "backslashes are not handled in any special way"
r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
A:
That regular expression matches here, no idea what you are doing wrong:
>>> import re
>>> x = re.compile(r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|'
... r'2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9]'
... r'[0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)')
>>> x.match("127.0.0.1")
<_sre.SRE_Match object at 0x5a8860>
>>> x.match("127.255.0.1")
<_sre.SRE_Match object at 0x5a8910>
>>> x.match("127.255.0.0")
<_sre.SRE_Match object at 0x5a8860>
Please note that preg_match translates to re.search in Python and not re.match. re.match is useful for lexing because it's anchored.
A:
PHP uses 3 different flavors of regex, while python uses only one. I don't code in python, so I make no expert claims on how it uses REGEX. O'Reilly Mastering Regular Expressions is a great book, as most of their works are.
| Python Regex vs PHP Regex | Not a competition; instead I am trying to find out why a certain regex works in one but not the other.
(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
That's my Regex and I'm trying to run it on
127.255.0.0
Using Python's regex I get nothing; using PHP I match it. Below are the two calls I am making (just in case it's something to do with that). Essentially I am trying to work out why it works in PHP but not Python.
re.findall(regex, string)
preg_match_all($regex, $string, $matches);
Solution found: it was due to the way that I was iterating through the results; this regex turned them into groups, which then didn't print out in the same way, etc. Thank you all for your help, it's really appreciated.
| [
"It works for me. You must be doing something wrong.\n>>> re.match(r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)', '127.255.0.0').groups()\n('127', '255', '0', '0')\n\nDon't forget to escape the regex using raw strings: r'regex_here' as stated in the Regex Howto\n",
"I would suggest that using a regex for decimal range validation is not necessarily the correct answer for this problem. This is far more readable:\ndef valid_ip(s):\n m = re.match(r\"(\\d+)\\.(\\d+)\\.(\\d+)\\.(\\d+)$\", s)\n if m is None:\n return False\n parts = [int(m.group(1+x)) for x in range(4)]\n if max(parts) > 255:\n return False\n return True\n\n",
"Just because you can do it with regex, doesn't mean you should. It would be much better to write instructions like: split the string on the period, make sure each group is numeric and within a certain range of numbers.\nIf you want to use a regex, just verify that it kind of \"looks like\" an IP address, as with Greg's regex.\n",
"Without further details, I'd guess it's quote escaping of some kind. Both PHP and python's RegEX objects take strings as arguments. These strings will be escaped by the languge before being passed on to the RegEx engine.\nI always using Python's \"raw\" string format when working with regular expressions. It ensure that \"backslashes are not handled in any special way\"\nr'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'\n\n",
"That regular expression matches here, no idea what you are doing wrong:\n>>> import re\n>>> x = re.compile(r'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|'\n... r'2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9]'\n... r'[0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)')\n>>> x.match(\"127.0.0.1\")\n<_sre.SRE_Match object at 0x5a8860>\n>>> x.match(\"127.255.0.1\")\n<_sre.SRE_Match object at 0x5a8910>\n>>> x.match(\"127.255.0.0\")\n<_sre.SRE_Match object at 0x5a8860>\n\nPlease note that preg_match translates to re.search in Python and not re.match. re.match is for useful for lexing because it's anchored.\n",
"PHP uses 3 different flavors of regex, while python uses only one. I don't code in python, so I make no expert claims on how it uses REGEX. O'Reilly Mastering Regular Expressions is a great book, as most of their works are.\n"
] | [
7,
4,
3,
2,
1,
1
] | [] | [] | [
"php",
"python",
"regex"
] | stackoverflow_0000118143_php_python_regex.txt |
Q:
Docstrings for data?
Is there a way to describe the module's data in a similar way that a docstring describes a module or a function?
class MyClass(object):
def my_function():
"""This docstring works!"""
return True
my_list = []
"""This docstring does not work!"""
A:
To my knowledge, it is not possible to assign docstrings to module data members.
PEP 224 suggests this feature, but the PEP was rejected.
I suggest you document the data members of a module in the module's docstring:
# module.py:
"""About the module.
module.data: contains the word "spam"
"""
data = "spam"
A:
It is possible to document a module's data with the use of epydoc syntax. Epydoc is one of the most frequently used documentation tools for Python.
The syntax for documenting is #: above the variable initialization line, like this:
# module.py:
#: Very important data.
#: Use with caution.
#: @type: C{str}
data = "important data"
Now when you generate your documentation, data will be described as a module variable with the given description and type str. You can omit the @type line.
A:
As codeape explains, it's not possible to document general data members.
However, it is possible to document property data members:
class Foo:
def get_foo(self): ...
def set_foo(self, val): ...
def del_foo(self): ...
foo = property(get_foo, set_foo, del_foo, '''Doc string here''')
This will give a docstring to the foo attribute, obviously.
| Docstrings for data? | Is there a way to describe the module's data in a similar way that a docstring describes a module or a function?
class MyClass(object):
def my_function():
"""This docstring works!"""
return True
my_list = []
"""This docstring does not work!"""
| [
"To my knowledge, it is not possible to assign docstrings to module data members.\nPEP 224 suggests this feature, but the PEP was rejected.\nI suggest you document the data members of a module in the module's docstring:\n# module.py:\n\"\"\"About the module.\n\nmodule.data: contains the word \"spam\"\n\n\"\"\"\n\ndata = \"spam\"\n\n",
"It is possible to make documentation of module's data, with use of epydoc syntax. Epydoc is one of the most frequently used documentation tools for Python.\nThe syntax for documenting is #: above the variable initialization line, like this:\n# module.py:\n\n#: Very important data.\n#: Use with caution.\n#: @type: C{str}\ndata = \"important data\"\n\nNow when you generate your documentation, data will be described as module variable with given description and type str. You can omit the @type line.\n",
"As codeape explains, it's not possible to document general data members.\nHowever, it is possible to document property data members:\nclass Foo:\n def get_foo(self): ...\n\n def set_foo(self, val): ...\n\n def del_foo(self): ...\n\n foo = property(get_foo, set_foo, del_foo, '''Doc string here''')\n\nThis will give a docstring to the foo attribute, obviously.\n"
] | [
15,
11,
10
] | [] | [] | [
"docstring",
"python"
] | stackoverflow_0000197387_docstring_python.txt |
Q:
How to find out whether subversion working directory is locked by svn?
A python script is running two parallel python processes (created via os.fork()), each of which eventually tries to check out a subversion repository leaf into the same working copy dir.
Before running the 'svn co ...' command in a sub-process (via the python subprocess module), the parent python code checks if the working copy dir already exists.
if os.path.isdir(checkout_dir):
# working copy dir already exists
return checkout_dir
So that if it does there shouldn't be any 'svn co' running, but rather immediate return from the parent function.
Nevertheless some collision happened and one of the python processes failed on 'svn co ..' with the following error.
checked-out failed: svn: Working copy '/tmp/qm_23683' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
So the first question is why the working copy dir existence check didn't work and the second - is there a way to find out that a working copy dir is locked by svn and loop until it is unlocked?
Thanks.
A:
Within the directory, there should be a directory called '.svn'. Within this, a file named 'locked' indicates that the directory is locked.
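So one way to answer the second question is to poll for that file; this sketch assumes each working-copy directory carries its own .svn subdirectory, as in the svn versions of this era:
import os, time

def wait_until_unlocked(wc_dir, poll_interval=1.0):
    lock_file = os.path.join(wc_dir, '.svn', 'locked')
    while os.path.exists(lock_file):
        time.sleep(poll_interval)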
A:
This sounds like a potential race condition, in that something like the following can happen:
Process A checks to see if the directory exists (it doesn't yet).
Process B checks to see if the directory exists (it doesn't yet).
Process A invokes svn, which creates the directory.
Process B invokes svn, which subsequently fails.
An easy way to avoid this is to have each process attempt to create the directory rather than checking for its existence. If the other process has already created the directory, the other process is guaranteed to get a well-defined error code under a very wide variety of platforms and filesystems. For instance, this is one of the only reliable ways to do synchronization on many implementations of NFS. Luckily, svn won't care if the working directory already exists.
The Python code would look something like this:
import os, errno
# ...
try:
os.mkdir(dirName)
except OSError, e:
if e.errno != errno.EEXIST: raise # some other error
print 'Directory already exists.'
else:
print 'Successfully created new directory.'
This technique is easy to implement, very reliable, and useful in a wide variety of situations.
| How to find out whether subversion working directory is locked by svn? | A python script is running two parallel python processes ( created via os.fork() ) each of which eventually tries to check out a subversion repository leaf into the same working copy dir.
Before running 'svn co ...' command in a sub-process ( via python subprocess module ) the parent python code checks if the working copy dir already exists.
if os.path.isdir(checkout_dir):
# working copy dir already exists
return checkout_dir
So that if it does there shouldn't be any 'svn co' running, but rather immediate return from the parent function.
Nevertheless some collision happened and one of the python processes failed on 'svn co ..' with the following error.
checked-out failed: svn: Working copy '/tmp/qm_23683' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
So the first question is why the working copy dir existence check didn't work and the second - is there a way to find out that a working copy dir is locked by svn and loop until it is unlocked?
Thanks.
| [
"Within the directory, there should be a directory called '.svn'. Within this, a file named 'locked' indicates that the directory is locked.\n",
"This sounds like a potential race condition, in that something like the following can happen:\n\nProcess A checks to see if the directory exists (it doesn't yet).\nProcess B checks to see if the directory exists (it doesn't yet).\nProcess A invokes svn, which creates the directory.\nProcess B invokes svn, which subsequently fails.\n\nAn easy way to avoid this is to have each process attempt to create the directory rather than checking for its existence. If the other process has already created the directory, the other process is guaranteed to get a well-defined error code under a very wide variety of platforms and filesystems. For instance, this is one of the only reliable ways to do synchronization on many implementations of NFS. Luckily, svn won't care if the working directory already exists.\nThe Python code would look something like this:\nimport os, errno\n\n# ...\n\ntry:\n os.mkdir(dirName)\nexcept OSError, e:\n if e.errno != errno.EEXIST: raise # some other error\n print 'Directory already exists.'\nelse:\n print 'Successfully created new directory.'\n\nThis technique is easy to implement, very reliable, and useful in a wide variety of situations.\n"
] | [
4,
2
] | [] | [] | [
"python",
"svn"
] | stackoverflow_0000197009_python_svn.txt |
Q:
How do I remove a cookie that I've set on someone's computer?
I've got a web system where users log in, and it stores a cookie of their session. When they log in as someone else or log out I want to remove that original cookie that I stored. What's the best way to do that?
I'm using Python and Apache, though I suppose the answer will remain the same for most languages.
A:
Set the cookie again, as if you hadn't set it the first time, but specify an expiration date that is in the past.
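A sketch of what that looks like in Python with the standard library's Cookie module; the cookie name here is a placeholder and must match the one you originally set:
import Cookie

c = Cookie.SimpleCookie()
c['session'] = ''                                         # placeholder name
c['session']['path'] = '/'
c['session']['expires'] = 'Thu, 01-Jan-1970 00:00:10 GMT'
print c.output()   # emits the Set-Cookie header that clears the cookie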
A:
I guess the best way is to set the cookie's expiration date to some date in the past.
A:
Return the header
Set-Cookie: token=opaque; Domain=.your.domain; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
The Domain and Path must match the original attributes that the cookie was issued under.
| How do I remove a cookie that I've set on someone's computer? | I've got a web system where users log in, and it stores a cookie of their session. When they log in as someone else or log out I want to remove that original cookie that I stored. What's the best way to do that?
I'm using Python and Apache, though I suppose the answer will remain the same for most languages.
| [
"Set the cookie again, as if you hadn't set it the first time, but specify an expiration date that is in the past.\n",
"I guess the best way is to set the expiration to a date of the cookie to some date in the past.\n",
"Return the header\n\nSet-Cookie: token=opaque; Domain=.your.domain; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/ \n\nThe Domain and Path must match the original attributes that the cookie was issued under.\n"
] | [
7,
4,
1
] | [] | [] | [
"apache",
"cookies",
"http",
"python"
] | stackoverflow_0000200250_apache_cookies_http_python.txt |
Q:
fast filter method in python
I want to filter two lists with the fastest method available in a Python script. I have used the built-in filter() method for this purpose, but it is quite slow and takes too much time because I have very big lists: I think more than 5 million items in each list, maybe more.
I do not know how to make it faster. Please, if anybody has an idea, share it or write a small function for it.
A:
Maybe your lists are too large and do not fit in memory, and you experience thrashing.
If the sources are in files, you do not need the whole list in memory all at once. Try using itertools, e.g.:
from itertools import ifilter
def is_important(s):
return len(s)>10
filtered_list = ifilter(is_important, open('mylist.txt'))
Note that ifilter returns an iterator that is fast and memory efficient.
Generator Tricks is a tutorial by David M. Beazley that teaches some interesting uses for generators.
A:
If you can avoid creating the lists in the first place, you'll be happier.
Rather than
aBigList = someListMakingFunction()
filter( lambda x:x>10, aBigList )
You might want to look at your function that makes the list.
def someListMakingGenerator( ):
    for x in some_source:   # some_source is a placeholder for your real data source
        yield x
Then your filter doesn't involve a giant tract of memory
def myFilter( aGenerator ):
for x in aGenerator:
if x > 10:
yield x
By using generators, you don't keep much stuff in memory.
A:
I guess filter() is as fast as you can possibly get without having to code the filtering function in C (and in that case, you better code the whole filtering process in C).
Why don't you paste the function you are filtering on? That might lead to easier optimizations.
Read this about optimization in Python. And this about the Python/C API.
A:
Before doing it in C, you could try numpy. Perhaps you can turn your filtering into number crunching.
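For example, a sketch assuming the items are numbers and NumPy is installed:
import numpy

a = numpy.array([3, 12, 7, 25, 8])
big = a[a > 10]   # boolean-mask indexing runs in C; big is array([12, 25])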
A:
Filter will create a new list, so if your original is very big, you could end up using up to twice as much memory.
If you only need to process the results iteratively, rather than use it as a real random-access list, you are probably better off using
ifilter instead. ie.
for x in itertools.ifilter(condition_func, my_really_big_list):
do_something_with(x)
Other speed tips are to use a python builtin, rather than a function you write yourself. There's an itertools.ifilterfalse specifically for the
case where you would otherwise need to introduce a lambda to negate your check. (eg "ifilter(lambda x: not x.isalpha(), l)" should be written "ifilterfalse(str.isalpha, l)")
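For instance, a quick sketch:
from itertools import ifilterfalse

words = ['abc', '123', 'x9']
non_alpha = list(ifilterfalse(str.isalpha, words))   # ['123', 'x9']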
A:
It may be useful to know that generally a conditional list comprehension is much faster than the corresponding lambda:
>>> import timeit
>>> timeit.Timer('[x for x in xrange(10) if (x**2 % 4) == 1]').timeit()
2.0544309616088867
>>> timeit.f = lambda x: (x**2 % 4) == 1
>>> timeit.Timer('[x for x in xrange(10) if f(x)]').timeit()
3.4280929565429688
(Not sure why I needed to put f in the timeit namespace, there. Haven't really used the module much.)
| fast filter method in python | I want to filter two lists with the fastest method available in a Python script. I have used the built-in filter() method for this purpose, but it is quite slow and takes too much time because I have very big lists: I think more than 5 million items in each list, maybe more.
I do not know how to make it faster. Please, if anybody has an idea, share it or write a small function for it.
| [
"Maybe your lists are too large and do not fit in memory, and you experience thrashing.\nIf the sources are in files, you do not need the whole list in memory all at once. Try using itertools, e.g.:\nfrom itertools import ifilter\n\ndef is_important(s):\n return len(s)>10\n\nfiltered_list = ifilter(is_important, open('mylist.txt'))\n\nNote that ifilter returns an iterator that is fast and memory efficient.\nGenerator Tricks is a tutorial by David M. Beazley that teaches some interesting uses for generators.\n",
"If you can avoid creating the lists in the first place, you'll be happier.\nRather than\naBigList = someListMakingFunction()\nfilter( lambda x:x>10, aBigList )\n\nYou might want to look at your function that makes the list.\ndef someListMakingGenerator( ):\n for x in some source:\n yield x\n\nThen your filter doesn't involve a giant tract of memory\ndef myFilter( aGenerator ):\n for x in aGenerator:\n if x > 10: \n yield x\n\nBy using generators, you don't keep much stuff in memory.\n",
"I guess filter() is as fast as you can possibly get without having to code the filtering function in C (and in that case, you better code the whole filtering process in C).\nWhy don't you paste the function you are filtering on? That might lead to easier optimizations.\nRead this about optimization in Python. And this about the Python/C API.\n",
"Before doing it in C, you could try numpy. Perhaps you can turn your filtering into number crunching.\n",
"Filter will create a new list, so if your original is very big, you could end up using up to twice as much memory.\nIf you only need to process the results iteratively, rather than use it as a real random-access list, you are probably better off using\nifilter instead. ie.\nfor x in itertools.ifilter(condition_func, my_really_big_list):\n do_something_with(x)\n\nOther speed tips are to use a python builtin, rather than a function you write yourself. There's a itertools.ifilterfalse specifically for the\ncase where you would otherwise need to introduce a lambda to negate your check. (eg \"ifilter(lambda x: not x.isalpha(), l)\" should be written \"ifilterfalse(str.isalpha, l)\")\n",
"It may be useful to know that generally a conditional list comprehension is much faster than the corresponding lambda:\n>>> import timeit\n>>> timeit.Timer('[x for x in xrange(10) if (x**2 % 4) == 1]').timeit()\n2.0544309616088867\n>>> timeit.f = lambda x: (x**2 % 4) == 1\ntimeit.Timer('[x for x in xrange(10) if f(x)]').timeit()\n>>> \n3.4280929565429688\n\n(Not sure why I needed to put f in the timeit namespace, there. Haven't really used the module much.)\n"
] | [
15,
5,
2,
2,
2,
2
] | [] | [] | [
"filter",
"list",
"python"
] | stackoverflow_0000200373_filter_list_python.txt |
Q:
How can I unpack binary hex formatted data in Python?
Using the PHP pack() function, I have converted a string into a binary hex representation:
$string = md5(time()); // 32 character length
$packed = pack('H*', $string);
The H* formatting means "Hex string, high nibble first".
To unpack this in PHP, I would simply use the unpack() function with the H* format flag.
How would I unpack this data in Python?
A:
There's an easy way to do this with the binascii module:
>>> import binascii
>>> print binascii.hexlify("ABCZ")
'4142435a'
>>> print binascii.unhexlify("4142435a")
'ABCZ'
Unless I'm misunderstanding something about the nibble ordering (high-nibble first is the default… anything different is insane), that should be perfectly sufficient!
Furthermore, Python's hashlib.md5 objects have a hexdigest() method to automatically convert the MD5 digest to an ASCII hex string, so that this method isn't even necessary for MD5 digests. Hope that helps.
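To mirror the PHP snippet from the question, a minimal sketch (the time-based seed matches the question's intent):
import time, hashlib, binascii

hexstr = hashlib.md5(str(int(time.time()))).hexdigest()  # like PHP's md5(time())
packed = binascii.unhexlify(hexstr)                      # like pack('H*', $string)
assert binascii.hexlify(packed) == hexstr                # round-trips cleanly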
A:
There's no corresponding "hex nibble" code for struct.pack, so you'll either need to manually pack into bytes first, like:
hex_string = 'abcdef12'
hexdigits = [int(x, 16) for x in hex_string]
data = ''.join(struct.pack('B', (high << 4) + low)
               for high, low in zip(hexdigits[::2], hexdigits[1::2]))
Or better, you can just use the hex codec. ie.
>>> data = hex_string.decode('hex')
>>> data
'\xab\xcd\xef\x12'
To unpack, you can encode the result back to hex similarly
>>> data.encode('hex')
'abcdef12'
However, note that for your example, there's probably no need to take the round-trip through a hex representation at all when encoding. Just use the md5 binary digest directly. ie.
>>> x = md5.md5('some string')
>>> x.digest()
'Z\xc7I\xfb\xee\xc96\x07\xfc(\xd6f\xbe\x85\xe7:'
This is equivalent to your pack()ed representation. To get the hex representation back, use the same encode method as above:
>>> x.digest().encode('hex')
'5ac749fbeec93607fc28d666be85e73a'
>>> x.hexdigest()
'5ac749fbeec93607fc28d666be85e73a'
[Edit]: Updated to use better method (hex codec)
A:
In Python you use the struct module for this.
>>> from struct import *
>>> pack('hhl', 1, 2, 3)
'\x00\x01\x00\x02\x00\x00\x00\x03'
>>> unpack('hhl', '\x00\x01\x00\x02\x00\x00\x00\x03')
(1, 2, 3)
>>> calcsize('hhl')
8
HTH
| How can I unpack binary hex formatted data in Python? | Using the PHP pack() function, I have converted a string into a binary hex representation:
$string = md5(time); // 32 character length
$packed = pack('H*', $string);
The H* formatting means "Hex string, high nibble first".
To unpack this in PHP, I would simply use the unpack() function with the H* format flag.
How would I unpack this data in Python?
| [
"There's an easy way to do this with the binascii module:\n>>> import binascii\n>>> print binascii.hexlify(\"ABCZ\")\n'4142435a'\n>>> print binascii.unhexlify(\"4142435a\")\n'ABCZ'\n\nUnless I'm misunderstanding something about the nibble ordering (high-nibble first is the default… anything different is insane), that should be perfectly sufficient!\nFurthermore, Python's hashlib.md5 objects have a hexdigest() method to automatically convert the MD5 digest to an ASCII hex string, so that this method isn't even necessary for MD5 digests. Hope that helps.\n",
"There's no corresponding \"hex nibble\" code for struct.pack, so you'll either need to manually pack into bytes first, like:\nhex_string = 'abcdef12'\n\nhexdigits = [int(x, 16) for x in hex_string]\ndata = ''.join(struct.pack('B', (high <<4) + low) \n for high, low in zip(hexdigits[::2], hexdigits[1::2]))\n\nOr better, you can just use the hex codec. ie.\n>>> data = hex_string.decode('hex')\n>>> data\n'\\xab\\xcd\\xef\\x12'\n\nTo unpack, you can encode the result back to hex similarly\n>>> data.encode('hex')\n'abcdef12'\n\nHowever, note that for your example, there's probably no need to take the round-trip through a hex representation at all when encoding. Just use the md5 binary digest directly. ie.\n>>> x = md5.md5('some string')\n>>> x.digest()\n'Z\\xc7I\\xfb\\xee\\xc96\\x07\\xfc(\\xd6f\\xbe\\x85\\xe7:'\n\nThis is equivalent to your pack()ed representation. To get the hex representation, use the same unpack method above:\n>>> x.digest().decode('hex')\n'acbd18db4cc2f85cedef654fccc4a4d8'\n>>> x.hexdigest()\n'acbd18db4cc2f85cedef654fccc4a4d8'\n\n[Edit]: Updated to use better method (hex codec)\n",
"In Python you use the struct module for this.\n>>> from struct import *\n>>> pack('hhl', 1, 2, 3)\n'\\x00\\x01\\x00\\x02\\x00\\x00\\x00\\x03'\n>>> unpack('hhl', '\\x00\\x01\\x00\\x02\\x00\\x00\\x00\\x03')\n(1, 2, 3)\n>>> calcsize('hhl')\n8\n\nHTH\n"
] | [
13,
11,
8
] | [] | [] | [
"binary",
"hex",
"python"
] | stackoverflow_0000200738_binary_hex_python.txt |
Q:
urllib.urlopen works but urllib2.urlopen doesn't
I have a simple website I'm testing. It's running on localhost and I can access it in my web browser. The index page is simply the word "running". urllib.urlopen will successfully read the page but urllib2.urlopen will not. Here's a script which demonstrates the problem (this is the actual script and not a simplification of a different test script):
import urllib, urllib2
print urllib.urlopen("http://127.0.0.1").read() # prints "running"
print urllib2.urlopen("http://127.0.0.1").read() # throws an exception
Here's the stack trace:
Traceback (most recent call last):
File "urltest.py", line 5, in <module>
print urllib2.urlopen("http://127.0.0.1").read()
File "C:\Python25\lib\urllib2.py", line 121, in urlopen
return _opener.open(url, data)
File "C:\Python25\lib\urllib2.py", line 380, in open
response = meth(req, response)
File "C:\Python25\lib\urllib2.py", line 491, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python25\lib\urllib2.py", line 412, in error
result = self._call_chain(*args)
File "C:\Python25\lib\urllib2.py", line 353, in _call_chain
result = func(*args)
File "C:\Python25\lib\urllib2.py", line 575, in http_error_302
return self.parent.open(new)
File "C:\Python25\lib\urllib2.py", line 380, in open
response = meth(req, response)
File "C:\Python25\lib\urllib2.py", line 491, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python25\lib\urllib2.py", line 418, in error
return self._call_chain(*args)
File "C:\Python25\lib\urllib2.py", line 353, in _call_chain
result = func(*args)
File "C:\Python25\lib\urllib2.py", line 499, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 504: Gateway Timeout
Any ideas? I might end up needing some of the more advanced features of urllib2, so I don't want to just resort to using urllib, plus I want to understand this problem.
A:
Sounds like you have proxy settings defined that urllib2 is picking up on. When it tries to proxy "127.0.0.1/", the proxy gives up and returns a 504 error.
From Obscure python urllib2 proxy gotcha:
proxy_support = urllib2.ProxyHandler({})
opener = urllib2.build_opener(proxy_support)
print opener.open("http://127.0.0.1").read()
# Optional - makes this opener default for urlopen etc.
urllib2.install_opener(opener)
print urllib2.urlopen("http://127.0.0.1").read()
A:
Does calling urllib2.urlopen first, followed by urllib.urlopen, have the same results? Just wondering if the first call to open is causing the HTTP server to get busy, causing the timeout?
A:
I don't know what's going on, but you may find this helpful in figuring it out:
>>> import urllib2
>>> urllib2.urlopen('http://mit.edu').read()[:10]
'<!DOCTYPE '
>>> urllib2._opener.handlers[1].set_http_debuglevel(100)
>>> urllib2.urlopen('http://mit.edu').read()[:10]
connect: (mit.edu, 80)
send: 'GET / HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: mit.edu\r\nConnection: close\r\nUser-Agent: Python-urllib/2.5\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Date: Tue, 14 Oct 2008 15:52:03 GMT
header: Server: MIT Web Server Apache/1.3.26 Mark/1.5 (Unix) mod_ssl/2.8.9 OpenSSL/0.9.7c
header: Last-Modified: Tue, 14 Oct 2008 04:02:15 GMT
header: ETag: "71d3f96-2895-48f419c7"
header: Accept-Ranges: bytes
header: Content-Length: 10389
header: Connection: close
header: Content-Type: text/html
'<!DOCTYPE '
A:
urllib.urlopen() throws the following request at the server:
GET / HTTP/1.0
Host: 127.0.0.1
User-Agent: Python-urllib/1.17
while urllib2.urlopen() throws this:
GET / HTTP/1.1
Accept-Encoding: identity
Host: 127.0.0.1
Connection: close
User-Agent: Python-urllib/2.5
So, your server either doesn't understand HTTP/1.1 or the extra header fields.
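If the server really is choking on HTTP/1.1, one blunt workaround is to flip httplib's version fields before calling urllib2. These are private internals (_http_vsn and _http_vsn_str), so treat this strictly as a diagnostic sketch:
import httplib, urllib2

# Make every HTTPConnection speak HTTP/1.0 (relies on private attributes).
httplib.HTTPConnection._http_vsn = 10
httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'

print urllib2.urlopen("http://127.0.0.1").read()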
| urllib.urlopen works but urllib2.urlopen doesn't | I have a simple website I'm testing. It's running on localhost and I can access it in my web browser. The index page is simply the word "running". urllib.urlopen will successfully read the page but urllib2.urlopen will not. Here's a script which demonstrates the problem (this is the actual script and not a simplification of a different test script):
import urllib, urllib2
print urllib.urlopen("http://127.0.0.1").read() # prints "running"
print urllib2.urlopen("http://127.0.0.1").read() # throws an exception
Here's the stack trace:
Traceback (most recent call last):
File "urltest.py", line 5, in <module>
print urllib2.urlopen("http://127.0.0.1").read()
File "C:\Python25\lib\urllib2.py", line 121, in urlopen
return _opener.open(url, data)
File "C:\Python25\lib\urllib2.py", line 380, in open
response = meth(req, response)
File "C:\Python25\lib\urllib2.py", line 491, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python25\lib\urllib2.py", line 412, in error
result = self._call_chain(*args)
File "C:\Python25\lib\urllib2.py", line 353, in _call_chain
result = func(*args)
File "C:\Python25\lib\urllib2.py", line 575, in http_error_302
return self.parent.open(new)
File "C:\Python25\lib\urllib2.py", line 380, in open
response = meth(req, response)
File "C:\Python25\lib\urllib2.py", line 491, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python25\lib\urllib2.py", line 418, in error
return self._call_chain(*args)
File "C:\Python25\lib\urllib2.py", line 353, in _call_chain
result = func(*args)
File "C:\Python25\lib\urllib2.py", line 499, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 504: Gateway Timeout
Any ideas? I might end up needing some of the more advanced features of urllib2, so I don't want to just resort to using urllib, plus I want to understand this problem.
| [
"Sounds like you have proxy settings defined that urllib2 is picking up on. When it tries to proxy \"127.0.0.01/\", the proxy gives up and returns a 504 error.\nFrom Obscure python urllib2 proxy gotcha:\nproxy_support = urllib2.ProxyHandler({})\nopener = urllib2.build_opener(proxy_support)\nprint opener.open(\"http://127.0.0.1\").read()\n\n# Optional - makes this opener default for urlopen etc.\nurllib2.install_opener(opener)\nprint urllib2.urlopen(\"http://127.0.0.1\").read()\n\n",
"Does calling urlib2.open first followed by urllib.open have the same results? Just wondering if the first call to open is causing the http server to get busy causing the timeout?\n",
"I don't know what's going on, but you may find this helpful in figuring it out:\n>>> import urllib2\n>>> urllib2.urlopen('http://mit.edu').read()[:10]\n'<!DOCTYPE '\n>>> urllib2._opener.handlers[1].set_http_debuglevel(100)\n>>> urllib2.urlopen('http://mit.edu').read()[:10]\nconnect: (mit.edu, 80)\nsend: 'GET / HTTP/1.1\\r\\nAccept-Encoding: identity\\r\\nHost: mit.edu\\r\\nConnection: close\\r\\nUser-Agent: Python-urllib/2.5\\r\\n\\r\\n'\nreply: 'HTTP/1.1 200 OK\\r\\n'\nheader: Date: Tue, 14 Oct 2008 15:52:03 GMT\nheader: Server: MIT Web Server Apache/1.3.26 Mark/1.5 (Unix) mod_ssl/2.8.9 OpenSSL/0.9.7c\nheader: Last-Modified: Tue, 14 Oct 2008 04:02:15 GMT\nheader: ETag: \"71d3f96-2895-48f419c7\"\nheader: Accept-Ranges: bytes\nheader: Content-Length: 10389\nheader: Connection: close\nheader: Content-Type: text/html\n'<!DOCTYPE '\n\n",
"urllib.urlopen() throws the following request at the server:\nGET / HTTP/1.0\nHost: 127.0.0.1\nUser-Agent: Python-urllib/1.17\n\nwhile urllib2.urlopen() throws this:\nGET / HTTP/1.1\nAccept-Encoding: identity\nHost: 127.0.0.1\nConnection: close\nUser-Agent: Python-urllib/2.5\n\nSo, your server either doesn't understand HTTP/1.1 or the extra header fields.\n"
] | [
16,
1,
1,
1
] | [] | [] | [
"python",
"urllib",
"urllib2"
] | stackoverflow_0000201515_python_urllib_urllib2.txt |
Q:
python name a file same as a lib
I have the following script:
import getopt, sys
opts, args = getopt.getopt(sys.argv[1:], "h:s")
for key,value in opts:
    print key, "=>", value
If I name this getopt.py and run it, it doesn't work, as the script tries to import itself.
Is there a way around this, so I can keep this filename but specify on import that I want the standard Python lib and not this file?
Solution based on Vinko's answer:
import sys
sys.path.reverse()
from getopt import getopt
opts, args = getopt(sys.argv[1:], "h:s")
for key,value in opts:
    print key, "=>", value
A:
You shouldn't give your scripts the same names as existing modules, especially standard ones.
That said, you can touch sys.path to modify the library loading order
~# cat getopt.py
print "HI"
~# python
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import getopt
HI
~# python
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path.remove('')
>>> import getopt
>>> dir(getopt)
['GetoptError', '__all__', '__builtins__', '__doc__', '__file__', '__name__', 'do_longs', 'do_shorts', 'error', 'getopt', 'gnu_getopt', 'long_has_args', 'os', 'short_has_arg']
In addition, you may wish to avoid the full import and do it differently, like this:
import sys
sys.path.remove('')
from getopt import getopt
sys.path.insert(0,'')
opts, args = getopt(sys.argv[1:], "h:s")
for key,value in opts:
    print key, "=>", value
A:
You should avoid naming your python files with standard library module names.
A:
Python doesn't give you a way to qualify modules. You might be able to accomplish this by removing the '' entry from sys.path or by moving it to the end. I wouldn't recommend it.
A:
Well, you could (re)move the current directory from sys.path, which contains the modifiable search path for libraries, to make it work, if you really need that.
| python name a file same as a lib | i have the following script
import getopt, sys
opts, args = getopt.getopt(sys.argv[1:], "h:s")
for key,value in opts:
print key, "=>", value
if i name this getopt.py and run it doesn't work as it tries to import itself
is there a way around this, so i can keep this filename but specify on import that i want the standard python lib and not this file?
Solution based on Vinko's answer:
import sys
sys.path.reverse()
from getopt import getopt
opts, args = getopt(sys.argv[1:], "h:s")
for key,value in opts:
print key, "=>", value
| [
"You shouldn't name your scripts like existing modules. Especially if standard. \nThat said, you can touch sys.path to modify the library loading order\n~# cat getopt.py\nprint \"HI\"\n~# python\nPython 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)\n[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys\n>>> import getopt\nHI\n\n~# python\nPython 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)\n[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys\n>>> sys.path.remove('')\n>>> import getopt\n>>> dir(getopt)\n['GetoptError', '__all__', '__builtins__', '__doc__', '__file__', '__name__', 'do_longs', 'do_shorts', 'error', 'getopt', 'gnu_getopt', 'long_has_args', 'os', 'short_has_arg']\n\nIn addition, you may wish to avoid the full import and do it differently, like this:\nimport sys\nsys.path.remove('')\nfrom getopt import getopt\nsys.path.insert(0,'')\nopts, args = getopt(sys.argv[1:], \"h:s\")\nfor key,value in opts:\n print key, \"=>\", value\n\n",
"You should avoid naming your python files with standard library module names.\n",
"Python doesn't give you a way to qualify modules. You might be able to accomplish this by removing the '' entry from sys.path or by moving it to the end. I wouldn't recommend it.\n",
"Well, you could (re)move the current diretory from sys.path, which contains the modifiable search path for libraries to make it work, if you really need that.\n"
] | [
7,
4,
0,
0
] | [
"import getopt as bettername\n\nThis should allow you to call getopt as bettername.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0000201846_python.txt |
Q:
Store simple user settings in Python
I am programming a website in which users will have a number of settings, such as their choice of colour scheme, etc. I'm happy to store these as plain text files, and security is not an issue.
The way I currently see it is: there is a dictionary, where all the keys are users and the values are dictionaries with the users' settings in them.
For example, userdb["bob"]["colour_scheme"] would have the value "blue".
What is the best way to store it on file? Pickling the dictionary?
Are there better ways of doing what I am trying to do?
A:
I would use the ConfigParser module, which produces some pretty readable and user-editable output for your example:
[bob]
colour_scheme: blue
british: yes
[joe]
color_scheme: that's 'color', silly!
british: no
The following code would produce the config file above, and then print it out:
import sys
from ConfigParser import *
c = ConfigParser()
c.add_section("bob")
c.set("bob", "colour_scheme", "blue")
c.set("bob", "british", str(True))
c.add_section("joe")
c.set("joe", "color_scheme", "that's 'color', silly!")
c.set("joe", "british", str(False))
c.write(sys.stdout) # this outputs the configuration to stdout
# you could put a file-handle here instead
for section in c.sections(): # this is how you read the options back in
    print section
    for option in c.options(section):
        print "\t", option, "=", c.get(section, option)
print c.get("bob", "british") # To access the "british" attribute for bob directly
Note that ConfigParser only supports strings, so you'll have to convert as I have above for the Booleans. See effbot for a good run-down of the basics.
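Note also that conversion on the way back out is less painful: ConfigParser has typed getters, so the str(True) dance is only needed when writing. A small sketch, continuing from the code above:
print c.getboolean("bob", "british")  # True ("yes", "on", "1", "true" all parse)
# c.getint() and c.getfloat() do the same for numbers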
A:
Using cPickle on the dictionary would be my choice. Dictionaries are a natural fit for these kind of data, so given your requirements I see no reason not to use them. That, unless you are thinking about reading them from non-python applications, in which case you'd have to use a language neutral text format. And even here you could get away with the pickle plus an export tool.
A:
I won't tackle the question of which one is best. If you want to handle text files, I'd consider the ConfigParser module. Others you could give a try are simplejson or YAML. You could also consider a real db table.
For instance, you could have a table called userattrs, with three columns:
Int user_id
String attribute_name
String attribute_value
If there are only a few, you could store them in cookies for quick retrieval.
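For the table variant, a minimal sqlite3 sketch (the file name prefs.db is arbitrary):
import sqlite3

conn = sqlite3.connect('prefs.db')
conn.execute("""CREATE TABLE IF NOT EXISTS userattrs (
                    user_id INTEGER,
                    attribute_name TEXT,
                    attribute_value TEXT)""")
conn.execute("INSERT INTO userattrs VALUES (?, ?, ?)",
             (1, 'colour_scheme', 'blue'))
conn.commit()
row = conn.execute("SELECT attribute_value FROM userattrs"
                   " WHERE user_id = ? AND attribute_name = ?",
                   (1, 'colour_scheme')).fetchone()
print row[0]  # 'blue'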
A:
Here's the simplest way. Use simple variables and import the settings file.
Call the file userprefs.py
# a user prefs file
color = 0x010203
font = "times new roman"
position = ( 12, 13 )
size = ( 640, 480 )
In your application, you need to be sure that you can import this file. You have many choices.
1. Using PYTHONPATH. Require PYTHONPATH be set to include the directory with the preferences files.
   a. An explicit command-line parameter to name the file (not the best, but simple).
   b. An environment variable to name the file.
2. Extending sys.path to include the user's home directory.
Example
import sys
import os
sys.path.insert(0,os.path.expanduser("~"))
import userprefs
print userprefs.color
A:
For a database-driven website, of course, your best option is a db table. I'm assuming that you are not doing the database thing.
If you don't care about human-readable formats, then pickle is a simple and straightforward way to go. I've also heard good reports about simplejson.
If human readability is important, two simple options present themselves:
Module: Just use a module. If all you need are a few globals and nothing fancy, then this is the way to go. If you really got desperate, you could define classes and class variables to emulate sections. The downside here: if the file will be hand-edited by a user, errors could be hard to catch and debug.
INI format: I've been using ConfigObj for this, with quite a bit of success. ConfigObj is essentially a replacement for ConfigParser, with support for nested sections and much more. Optionally, you can define expected types or values for a file and validate it, providing a safety net (and important error feedback) for users/administrators.
A:
I would use shelve or an sqlite database if I had to store these settings on the file system. Although, since you are building a website, you probably use some kind of database anyway, so why not just use that?
A:
The built-in sqlite3 module would probably be far simpler than most alternatives, and gets you ready to update to a full RDBMS should you ever want or need to.
A:
If human readability of config files matters, an alternative might be the ConfigParser module, which allows you to read and write .ini-like files. But then you are restricted to one nesting level.
A:
If you have a database, I might suggest storing the settings in the database. However, it sounds like ordinary files might suit your environment better.
You probably don't want to store all the users settings in the same file, because you might run into trouble with concurrent access to that one file. If you stored each user's settings as a dictionary in their own pickled file, then they would be able to act independently.
Pickling is a reasonable way to store such data, but unfortunately the pickle data format is notoriously not-human-readable. You might be better off storing it as repr(dictionary) which will be a more readable format. To reload the user settings, use eval(open("file").read()) or something like that.
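Putting that together, a minimal per-user sketch (the prefs directory is hypothetical, and this is Python 2.5-style file handling):
import os, pickle

def save_settings(username, settings, prefs_dir='prefs'):
    # one pickled dict per user avoids concurrent writes to a shared file
    f = open(os.path.join(prefs_dir, username + '.pkl'), 'wb')
    try:
        pickle.dump(settings, f)
    finally:
        f.close()

def load_settings(username, prefs_dir='prefs'):
    try:
        f = open(os.path.join(prefs_dir, username + '.pkl'), 'rb')
    except IOError:
        return {}  # no settings saved yet
    try:
        return pickle.load(f)
    finally:
        f.close()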
A:
Is there a particular reason you're not using the database for this? It seems the normal and natural thing to do - or store a pickle of the settings in the db keyed on user id or something.
You haven't described the usage patterns of the website, but just thinking of a general website - but I would think that keeping the settings in a database would cause much less disk I/O than using files.
OTOH, for settings that might be used by client-side code, storing them as javascript in a static file that can be cached would be handy - at the expense of having multiple places you might have settings. (I'd probably store those settings in the db, and rebuild the static files as necessary)
A:
I agree with the reply about using a pickled dictionary. Very simple and effective for storing simple data in a dictionary structure.
A:
If you don't care about being able to edit the file yourself, and want a quick way to persist python objects, go with pickle. If you do want the file to be readable by a human, or readable by some other app, use ConfigParser. If you need anything more complex, go with some sort of database, be it relational (sqlite), or object-oriented (axiom, zodb).
| Store simple user settings in Python | I am programming a website in which users will have a number of settings, such as their choice of colour scheme, etc. I'm happy to store these as plain text files, and security is not an issue.
The way I currently see it is: there is a dictionary, where all the keys are users and the values are dictionaries with the users' settings in them.
For example, userdb["bob"]["colour_scheme"] would have the value "blue".
What is the best way to store it on file? Pickling the dictionary?
Are there better ways of doing what I am trying to do?
| [
"I would use the ConfigParser module, which produces some pretty readable and user-editable output for your example:\n[bob]\ncolour_scheme: blue\nbritish: yes\n[joe]\ncolor_scheme: that's 'color', silly!\nbritish: no\nThe following code would produce the config file above, and then print it out:\nimport sys\nfrom ConfigParser import *\n\nc = ConfigParser()\n\nc.add_section(\"bob\")\nc.set(\"bob\", \"colour_scheme\", \"blue\")\nc.set(\"bob\", \"british\", str(True))\n\nc.add_section(\"joe\")\nc.set(\"joe\", \"color_scheme\", \"that's 'color', silly!\")\nc.set(\"joe\", \"british\", str(False))\n\nc.write(sys.stdout) # this outputs the configuration to stdout\n # you could put a file-handle here instead\n\nfor section in c.sections(): # this is how you read the options back in\n print section\n for option in c.options(section):\n print \"\\t\", option, \"=\", c.get(section, option)\n\nprint c.get(\"bob\", \"british\") # To access the \"british\" attribute for bob directly\n\nNote that ConfigParser only supports strings, so you'll have to convert as I have above for the Booleans. See effbot for a good run-down of the basics.\n",
"Using cPickle on the dictionary would be my choice. Dictionaries are a natural fit for these kind of data, so given your requirements I see no reason not to use them. That, unless you are thinking about reading them from non-python applications, in which case you'd have to use a language neutral text format. And even here you could get away with the pickle plus an export tool.\n",
"I don't tackle the question which one is best. If you want to handle text-files, I'd consider ConfigParser -module. Another you could give a try would be simplejson or yaml. You could also consider a real db table.\nFor instance, you could have a table called userattrs, with three columns:\n\nInt user_id\nString attribute_name\nString attribute_value\n\nIf there's only few, you could store them into cookies for quick retrieval.\n",
"Here's the simplest way. Use simple variables and import the settings file.\nCall the file userprefs.py\n# a user prefs file\ncolor = 0x010203\nfont = \"times new roman\"\nposition = ( 12, 13 )\nsize = ( 640, 480 )\n\nIn your application, you need to be sure that you can import this file. You have many choices.\n\nUsing PYTHONPATH. Require PYTHONPATH be set to include the directory with the preferences files.\na. An explicit command-line parameter to name the file (not the best, but simple)\nb. An environment variable to name the file.\nExtending sys.path to include the user's home directory\n\nExample\nimport sys\nimport os\nsys.path.insert(0,os.path.expanduser(\"~\"))\nimport userprefs \nprint userprefs.color\n\n",
"For a database-driven website, of course, your best option is a db table. I'm assuming that you are not doing the database thing.\nIf you don't care about human-readable formats, then pickle is a simple and straightforward way to go. I've also heard good reports about simplejson.\nIf human readability is important, two simple options present themselves:\nModule: Just use a module. If all you need are a few globals and nothing fancy, then this is the way to go. If you really got desperate, you could define classes and class variables to emulate sections. The downside here: if the file will be hand-edited by a user, errors could be hard to catch and debug.\nINI format: I've been using ConfigObj for this, with quite a bit of success. ConfigObj is essentially a replacement for ConfigParser, with support for nested sections and much more. Optionally, you can define expected types or values for a file and validate it, providing a safety net (and important error feedback) for users/administrators.\n",
"I would use shelve or an sqlite database if I would have to store these setting on the file system. Although, since you are building a website you probably use some kind of database so why not just use that?\n",
"The built-in sqlite3 module would probably be far simpler than most alternatives, and gets you ready to update to a full RDBMS should you ever want or need to.\n",
"If human readablity of configfiles matters an alternative might be the ConfigParser module which allows you to read and write .ini like files. But then you are restricted to one nesting level.\n",
"If you have a database, I might suggest storing the settings in the database. However, it sounds like ordinary files might suit your environment better.\nYou probably don't want to store all the users settings in the same file, because you might run into trouble with concurrent access to that one file. If you stored each user's settings as a dictionary in their own pickled file, then they would be able to act independently.\nPickling is a reasonable way to store such data, but unfortunately the pickle data format is notoriously not-human-readable. You might be better off storing it as repr(dictionary) which will be a more readable format. To reload the user settings, use eval(open(\"file\").read()) or something like that.\n",
"Is there are particular reason you're not using the database for this? it seems the normal and natural thing to do - or store a pickle of the settings in the db keyed on user id or something.\nYou haven't described the usage patterns of the website, but just thinking of a general website - but I would think that keeping the settings in a database would cause much less disk I/O than using files.\nOTOH, for settings that might be used by client-side code, storing them as javascript in a static file that can be cached would be handy - at the expense of having multiple places you might have settings. (I'd probably store those settings in the db, and rebuild the static files as necessary)\n",
"I agree with the reply about using Pickled Dictionary. Very simple and effective for storing simple data in a Dictionary structure.\n",
"If you don't care about being able to edit the file yourself, and want a quick way to persist python objects, go with pickle. If you do want the file to be readable by a human, or readable by some other app, use ConfigParser. If you need anything more complex, go with some sort of database, be it relational (sqlite), or object-oriented (axiom, zodb).\n"
] | [
10,
7,
6,
5,
3,
2,
2,
1,
0,
0,
0,
0
] | [] | [] | [
"database",
"python",
"settings",
"web"
] | stackoverflow_0000200599_database_python_settings_web.txt |
Q:
Creating self-contained python applications
I'm trying to create a self-contained version of pisa (html to pdf converter, latest version), but I can't succeed due to several errors. I've tried py2exe, bb-freeze and cxfreeze.
This has to be in Windows, which makes my life a bit harder. I remember that a couple of months ago the author had a zip file containing the install, but now it's gone, leaving me with only the Python-dependent way.
How would you work this out?
A:
Check out pyinstaller, it makes standalone executables (as in one .EXE file, and that's it).
| Creating self-contained python applications | I'm trying to create a self-contained version of pisa (html to pdf converter, latest version), but I can't succeed due to several errors. I've tried py2exe, bb-freeze and cxfreeze.
This has to be in windows, which makes my life a bit harder. I remember that a couple of months ago the author had a zip file containing the install, but now it's gone, leaving me only with the python dependent way.
How would you work this out?
| [
"Check out pyinstaller, it makes standalone executables (as in one .EXE file, and that's it).\n"
] | [
28
] | [] | [] | [
"executable",
"python",
"self_contained",
"windows"
] | stackoverflow_0000203487_executable_python_self_contained_windows.txt |
Q:
Alert Popups from service in Python
I have been using win32api.MessageBox to do alerts, and this works for apps running from the interactive prompt and normally executed code. However, when I build this as a Python service and a MessageBox is triggered, I can hear the 'beep' but the box does not display. Is it possible to display alerts from services?
A:
No, Windows services run on a completely separate hidden desktop and have no access to the logged-on user's desktop. There is no way around this from a service developer's perspective.
In previous versions of Windows, it was possible for a service to be marked as "allowed to interact with the user desktop", but this option was removed in XP or Vista (I forget which). Now, services cannot interact with the user desktop.
One solution to your problem might be to have a desktop application that communicates with the service through some IPC method. When the service wants to alert the user of some condition, it would notify the desktop application which would then display a regular message box.
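One minimal shape for that IPC is a localhost socket. The port number and one-shot message format below are arbitrary assumptions; this is a toy sketch, not a hardened protocol:
import socket
import win32api

# Desktop-side listener, running in the logged-on user's session.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 52000))
srv.listen(1)
while True:
    conn, addr = srv.accept()
    msg = conn.recv(1024)
    conn.close()
    win32api.MessageBox(0, msg, "Service alert")
The service side then replaces its MessageBox call with a few lines that connect to 127.0.0.1:52000 and sendall() the alert text.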
| Alert Popups from service in Python | I have been using win32api.MessageBox to do alerts, and this works for apps running from the interactive prompt and normally executed code, however when I built a Python service when a MessageBox is triggered I can hear the 'beep' but the box does not display. Is it possible to display alerts from services?
| [
"No, Windows services run on a completely separate hidden desktop and have no access to the logged-on user's desktop. There is no way around this from a service developer's perspective.\nIn previous versions of Windows, it was possible for a service to be marked as \"allowed to interact with the user desktop\", but this option was removed in XP or Vista (I forget which). Now, services cannot interact with the user desktop.\nOne solution to your problem might be to have a desktop application that communicates with the service through some IPC method. When the service wants to alert the user of some condition, it would notify the desktop application which would then display a regular message box.\n"
] | [
5
] | [] | [] | [
"alerts",
"python",
"service",
"winapi"
] | stackoverflow_0000204062_alerts_python_service_winapi.txt |
Q:
Can I implement a web user authentication system in python without POST?
My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?
If it's not, how would you do it with POST? Just out of curiosity.
Cheers!
A:
You can actually do it all with GET methods. However, you'll want to use a full challenge response protocol for the logins. (You can hash on the client side using javascript. You just need to send out a unique challenge each time.) You'll also want to use SSL to ensure that no one can see the strings as they go across.
In some senses there's no real security difference between GET and POST requests, as they both go across in plaintext; in other senses, and in practice... GET requests are a hell of a lot easier to intercept and end up all over most people's logs and your web browser's history. :)
(Or as suggested by the other posters, use a different method entirely like HTTP auth, digest auth or some higher level authentication scheme like AD, LDAP, kerberos or shib. However I kinda assumed that if you didn't have POST you wouldn't have these either.)
A:
You could use HTTP Authentication, if supported.
You'd have to add SSL, as all methods, POST, GET and HTTP Auth (well, except Digest HTTP authentication) send plaintext.
GET is basically just like POST; it just has a limit on the amount of data you can send (usually much smaller than POST's) and a semantic difference that makes GET a poor candidate from that point of view, even if technically both can do it.
As for examples, what are you using? There are many choices in Python, like the cgi module or some framework like Django, CherryPy, and so on
A:
With a bit of JavaScript, you could have the client hash the entered password and a server-generated nonce, and use that in an HTTP GET.
A:
A good choice: HTTP Digest authentication
Harder to pull off well, but an option: Client-side hashing with Javascript
A:
Javascript is the best option in this case.
Along with the request for the username and password, it sends a unique random string. You can then use a javascript md5 library to generate a hashed password, by combining the random string and the password [pwhash = md5(randomstring+password)]. The javascript then instantiates the call to http://SERVER/login.cgi?username=TheUsername&random=RANDOMSTRING&pwhash=0123456789abcdef0123456789abcdef
The server must then do two things:
1. Check if the random string has EVER been used before, and if it has, deny the request (very important for security).
2. Look up the plaintext password for username, and do md5(randomstring+password). If that matches what the user supplied in the URL as a pwhash, then you know it's the user.
The reason you check if the random string has ever been used before is to stop a repeat attack. If somebody is able to see the network traffic or the browser history or logs, then they could simply log in again using the same URL, and it doesn't matter whether they know the original password or not.
I also recommend putting "Pragma: no-cache" and "Cache-Control: no-cache" at the top of the headers returned by the CGI script, just so that the authenticated session is not stored in the browser's or your ISPs web cache.
An even more secure solution would be using proper encryption and Challenge-Response. You tell the server your username, the server sends back a Challenge (some random string encrypted with your password), and you tell the server what the random string was. If you're able to tell the server, then obviously you have the password and are who you say you are! Kerberos does it this way, but quite a lot more carefully to prevent all sorts of attacks.
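Putting the GET-based scheme together on the server side, a sketch in old-style CGI. Note that get_password(), seen_before(), mark_seen(), grant_session() and deny() are hypothetical helpers backed by whatever user and nonce storage you have; nothing here is a complete implementation:
import cgi, md5

form = cgi.FieldStorage()
username = form.getfirst("username")
random_str = form.getfirst("random")
pwhash = form.getfirst("pwhash")

if seen_before(random_str):
    deny()  # replayed nonce: reject even if the hash is right
elif md5.new(random_str + get_password(username)).hexdigest() == pwhash:
    mark_seen(random_str)
    grant_session(username)
else:
    deny()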
| Can I implement a web user authentication system in python without POST? | My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?
If it's not, how would you do it with POST? Just out of curiosity.
Cheers!
| [
"You can actually do it all with GET methods. However, you'll want to use a full challenge response protocol for the logins. (You can hash on the client side using javascript. You just need to send out a unique challenge each time.) You'll also want to use SSL to ensure that no one can see the strings as they go across.\nIn some senses there's no real security difference between GET and POST requests as they both go across in plaintext, in other senses and in practice... GET is are a hell of a lot easier to intercept and is all over most people's logs and your web browser's history. :)\n(Or as suggested by the other posters, use a different method entirely like HTTP auth, digest auth or some higher level authentication scheme like AD, LDAP, kerberos or shib. However I kinda assumed that if you didn't have POST you wouldn't have these either.)\n",
"You could use HTTP Authentication, if supported.\nYou'd have to add SSL, as all methods, POST, GET and HTTP Auth (well, except Digest HHTP authentication) send plaintext.\nGET is basically just like POST, it just has a limit on the amount of data you can send which is usually a lot smaller than POST and a semantic difference which makes GET not a good candidate from that point of view, even if technically they both can do it.\nAs for examples, what are you using? There are many choices in Python, like the cgi module or some framework like Django, CherryPy, and so on\n",
"With a bit of JavaScript, you could have the client hash the entered password and a server-generated nonce, and use that in an HTTP GET.\n",
"A good choice: HTTP Digest authentication\nHarder to pull off well, but an option: Client-side hashing with Javascript\n",
"Javascript is the best option in this case.\nAlong with the request for the username and password, it sends a unique random string. You can then use a javascript md5 library to generate a hashed password, by combining the random string and the password [pwhash = md5(randomstring+password)]. The javascript then instantiates the call to http://SERVER/login.cgi?username=TheUsername&random=RANDOMSTRING&pwhash=0123456789abcdef0123456789abcdef\nThe server must then do two things:\n Check if the random string has EVER been used before, and it if has, deny the request. (very important for security)\nLookup the plaintext password for username, and do md5(randomstring+password). If that matches what the user supplied in the URL as a pwhash, then you know it's the user.\nThe reason you check if the random string has ever been used before is to stop a repeat attack. If somebody is able to see the network traffic or the browser history or logs, then they could simply log in again using the same URL, and it doesn't matter whether they know the original password or not.\nI also recommend putting \"Pragma: no-cache\" and \"Cache-Control: no-cache\" at the top of the headers returned by the CGI script, just so that the authenticated session is not stored in the browser's or your ISPs web cache.\nAn even more secure solution would be using proper encryption and Challenge-Response. You tell the server your username, the server sends back a Challenge (some random string encrypted with your password), and you tell the server what the random string was. If you're able to tell the server, then obviously you have the password and are who you say you are! Kerberos does it this way, but quite a lot more carefully to prevent all sorts of attacks.\n"
] | [
5,
1,
0,
0,
0
] | [
"Logging in securely is very subjective. Full 'security' is not easy to achieve (if at all possible...debatable). However, you can come close. \nIf POST is not an option, maybe you can use a directory security method such as .htaccess or windows authentication depending on what system you're on.\nBoth of the above will get you the pop-up window that allows for a username and password to be entered.\nTo use POST as the method to send the login credentials, you'd just use an HTML form with method=\"post\" and retrieve the information from, say, a PHP or ASP page, using the $_POST['varname'] method in PHP or the request.form(\"varname\") method in ASP. From the PHP or ASP page, as an example, you can do a lookup in a database of users, to see if that username/password combination exists, and if so, redirect them to the appropriate page.\nAs reference, use http://www.w3schools.com/ASP/showasp.asp?filename=demo_simpleform for the HTML/ASP portion\n"
] | [
-1
] | [
"authentication",
"cgi",
"python"
] | stackoverflow_0000069979_authentication_cgi_python.txt |
Q:
Failed to get separate instances of a class under mod_python
I'm trying to run some python code under Apache 2.2 / mod_python 3.2.8. Eventually the code does os.fork() and spawns 2 separate long-running processes. Each of those processes has to create a separate instance of a class in order to avoid any possible collision in the parallel flow.
class Foo(object):
    pass

kidprocs = []

for kid in ('kid1', 'kid2'):
    pid = os.fork()
    if pid:
        # parent
        kidprocs.append(pid)
        time.sleep(5)
    else:
        # child
        fooobj = Foo()
        print "Starting %s in sub-process %s" % (kid, os.getpid())
        print "Kid fooobj: %s" % repr(fooobj)
        os._exit(0)

for kidproc in kidprocs:
    os.waitpid(kidproc, 0)
Those print outputs look like this:
Starting kid1 in sub-process 20906
foo obj: <__main__.Foo instance at 0xb7da5fec>
Starting kid2 in sub-process 20909
foo obj: <__main__.Foo instance at 0xb7da5fec>
As you can see I got the same object for both sub-processes.
Do you have an idea why it behaves like this under mod_python, and is there a way to get separate instances anyway?
Thanks a lot.
A:
The memory location given by the repr() function is an address in the process's virtual memory, not an address in physical memory. Each of your processes returned by fork() has its own virtual memory space which is completely distinct from other processes. They do not share memory.
Edit: Per brian's comments below, technically the memory is shared copy-on-write: the kernel shares the pages until a child writes to one, at which point the writing process gets its own private copy. The behavior, though, is effectively the same.
The structure of your programs is the same, so Python happens to place the identical objects at the same virtual address inside each child's distinct virtual memory space.
If you actually modify the content of the objects and test them, you will see that even though the memory location looks the same, the two are completely distinct objects, because they belong to two distinct processes. In reality you can't modify one from the other (without some kind of interprocess communication to mediate).
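A quick stand-alone demonstration (outside mod_python) that the matching repr() addresses are an illusion:
import os

class Foo(object):
    pass

obj = Foo()
obj.tag = "parent"

pid = os.fork()
if pid == 0:           # child: mutate its copy and exit
    obj.tag = "child"
    print "child sees:", obj.tag, repr(obj)
    os._exit(0)

os.waitpid(pid, 0)
print "parent still sees:", obj.tag   # "parent": the child's write never arrives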
| Failed to get separate instances of a class under mod_python | I'm trying to run some python code under Apache 2.2 / mod_python 3.2.8. Eventually the code does os.fork() and spawns 2 separate long-run processes. Each of those processes has to create a separate instance of a class in order to avoid any possible collision in the parallel flow.
class Foo(object):
pass
kidprocs = []
for kid in ('kid1', 'kid2'):
pid = os.fork()
if pid:
# parent
kidprocs.append(pid)
time.sleep(5)
else:
# child
fooobj = Foo()
print "Starting %s in sub-process %s" % (kid, os.getpid())
print "Kid fooobj: %s" % repr(fooobj)
os._exit(0)
for kidproc in kidprocs:
os.waitpid(kidproc, 0)
Those print outputs look like this:
Starting kid1 in sub-process 20906
foo obj: <__main__.Foo instance at 0xb7da5fec>
Starting kid2 in sub-process 20909
foo obj: <__main__.Foo instance at 0xb7da5fec>
As you can see I got the same object for both sub-processes.
Do you have an idea why it's going like this under mod_python and is there a way to get separate instances anyway?
Thanks a lot.
| [
"The memory location given by the repr() function is an address in virtual memory, not an address in the system's global memory. Each of your processes returned by fork() has its own virtual memory space which is completely distinct from other processes. They do not share memory.\nEdit: Per brian's comments below, technically they do share memory until the kernel decides to segregate them (when a child writes to a portion of shared memory). The behavior, though, is effectively the same.\nThe structure of your programs is the same, so python uses the same virtual memory location in each processes' distinct virtual memory store for each of your identical objects for each child.\nIf you actually modify the content of the objects and test them, you will see that even though the memory location looks the same, the two are completely distinct objects, because they belong to two distinct processes. In reality you can't modify one from the other (without some kind of interprocess communication to mediate).\n"
] | [
3
] | [] | [] | [
"apache",
"mod_python",
"python"
] | stackoverflow_0000204427_apache_mod_python_python.txt |
Q:
Checking for member existence in Python
I regularly want to check if an object has a member or not. An example is the creation of a singleton in a function. For that purpose, you can use hasattr like this:
class Foo(object):
    @classmethod
    def singleton(self):
        if not hasattr(self, 'instance'):
            self.instance = Foo()
        return self.instance
But you can also do this:
class Foo(object):
    @classmethod
    def singleton(self):
        try:
            return self.instance
        except AttributeError:
            self.instance = Foo()
            return self.instance
Is one method better than the other?
Edit: Added the @classmethod ... But note that the question is not about how to make a singleton but how to check the presence of a member in an object.
Edit: For that example, a typical usage would be:
s = Foo.singleton()
Then s is an object of type Foo, the same each time. And, typically, the method is called many times.
A:
These are two different methodologies: №1 is LBYL (look before you leap) and №2 is EAFP (easier to ask forgiveness than permission).
Pythonistas typically suggest that EAFP is better, with arguments in style of "what if a process creates the file between the time you test for it and the time you try to create it yourself?". This argument does not apply here, but it's the general idea. Exceptions should not be treated as too exceptional.
Performance-wise, in your case: setting up an exception manager (the try keyword) is very cheap in CPython, while actually creating and raising an exception (the raise keyword and the internal exception object) is relatively expensive. With method №2 the exception would be raised only once; afterwards, you just use the attribute.
A:
I just tried to measure times:
class Foo(object):
    @classmethod
    def singleton(self):
        if not hasattr(self, 'instance'):
            self.instance = Foo()
        return self.instance


class Bar(object):
    @classmethod
    def singleton(self):
        try:
            return self.instance
        except AttributeError:
            self.instance = Bar()
            return self.instance


from time import time

n = 1000000
foo = [Foo() for i in xrange(0,n)]
bar = [Bar() for i in xrange(0,n)]

print "Objs created."
print

for times in xrange(1,4):
    t = time()
    for d in foo: d.singleton()
    print "#%d Foo pass in %f" % (times, time()-t)

    t = time()
    for d in bar: d.singleton()
    print "#%d Bar pass in %f" % (times, time()-t)

    print
On my machine:
Objs created.
#1 Foo pass in 1.719000
#1 Bar pass in 1.140000
#2 Foo pass in 1.750000
#2 Bar pass in 1.187000
#3 Foo pass in 1.797000
#3 Bar pass in 1.203000
It seems that try/except is faster. It also seems more readable to me. In any case it depends on the situation; this test was very simple, and you might need a more complex one.
A:
It depends on which case is "typical", because exceptions should model, well, atypical conditions. So, if the typical case is that the instance attribute should exist, then use the second code style. If not having instance is as typical as having instance, then use the first style.
In the specific case of creating a singleton, I'm inclined to go with the first style, because creating a singleton the initial time is a typical use case. :-)
A:
A little off-topic, but on the subject of how you'd use it: singletons are overrated, and a "shared-state" approach is just as effective and, in Python, very clean. For example:
class Borg:
    __shared_state = {}
    def __init__(self):
        self.__dict__ = self.__shared_state
    # and whatever else you want in your class -- that's all!
Now every time you do:
obj = Borg()
it will carry the same state, effectively behaving like a single shared instance.
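For example, continuing the class above:
a = Borg()
a.colour_scheme = "blue"

b = Borg()
print b.colour_scheme  # "blue": state is shared
print a is b           # False:  the instances themselves are distinct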
A:
I have to agree with Chris. Remember, don't optimize until you actually need to do so. I really doubt checking for existence is going to be a bottleneck in any reasonable program.
I did see http://code.activestate.com/recipes/52558/ as a way to do this, too. Uncommented copy of that code ("spam" is just a random method the class interface has):
class Singleton:
    class __impl:
        def spam(self):
            return id(self)

    __instance = None

    def __init__(self):
        if Singleton.__instance is None:
            Singleton.__instance = Singleton.__impl()
        self.__dict__['_Singleton__instance'] = Singleton.__instance

    def __getattr__(self, attr):
        return getattr(self.__instance, attr)

    def __setattr__(self, attr, value):
        return setattr(self.__instance, attr, value)
| Checking for member existence in Python | I regularly want to check if an object has a member or not. An example is the creation of a singleton in a function. For that purpose, you can use hasattr like this:
class Foo(object):
@classmethod
def singleton(self):
if not hasattr(self, 'instance'):
self.instance = Foo()
return self.instance
But you can also do this:
class Foo(object):
@classmethod
def singleton(self):
try:
return self.instance
except AttributeError:
self.instance = Foo()
return self.instance
Is one method better of the other?
Edit: Added the @classmethod ... But note that the question is not about how to make a singleton but how to check the presence of a member in an object.
Edit: For that example, a typical usage would be:
s = Foo.singleton()
Then s is an object of type Foo, the same each time. And, typically, the method is called many times.
| [
"These are two different methodologies: №1 is LBYL (look before you leap) and №2 is EAFP (easier to ask forgiveness than permission).\nPythonistas typically suggest that EAFP is better, with arguments in style of \"what if a process creates the file between the time you test for it and the time you try to create it yourself?\". This argument does not apply here, but it's the general idea. Exceptions should not be treated as too exceptional.\nPerformance-wise in your case —since setting up exception managers (the try keyword) is very cheap in CPython while creating an exception (the raise keyword and internal exception creation) is what is relatively expensive— using method №2 the exception would be raised only once; afterwards, you just use the property.\n",
"I just tried to measure times:\nclass Foo(object):\n @classmethod\n def singleton(self):\n if not hasattr(self, 'instance'):\n self.instance = Foo()\n return self.instance\n\n\n\nclass Bar(object):\n @classmethod\n def singleton(self):\n try:\n return self.instance\n except AttributeError:\n self.instance = Bar()\n return self.instance\n\n\n\nfrom time import time\n\nn = 1000000\nfoo = [Foo() for i in xrange(0,n)]\nbar = [Bar() for i in xrange(0,n)]\n\nprint \"Objs created.\"\nprint\n\n\nfor times in xrange(1,4):\n t = time()\n for d in foo: d.singleton()\n print \"#%d Foo pass in %f\" % (times, time()-t)\n\n t = time()\n for d in bar: d.singleton()\n print \"#%d Bar pass in %f\" % (times, time()-t)\n\n print\n\nOn my machine:\nObjs created.\n\n#1 Foo pass in 1.719000\n#1 Bar pass in 1.140000\n\n#2 Foo pass in 1.750000\n#2 Bar pass in 1.187000\n\n#3 Foo pass in 1.797000\n#3 Bar pass in 1.203000\n\nIt seems that try/except is faster. It seems also more readable to me, anyway depends on the case, this test was very simple maybe you'd need a more complex one.\n",
"It depends on which case is \"typical\", because exceptions should model, well, atypical conditions. So, if the typical case is that the instance attribute should exist, then use the second code style. If not having instance is as typical as having instance, then use the first style.\nIn the specific case of creating a singleton, I'm inclined to go with the first style, because creating a singleton the initial time is a typical use case. :-)\n",
"A little off-topic in the way of using it. Singletons are overrated, and a \"shared-state\" method is as effective, and mostly, very clean in python, for example:\nclass Borg:\n __shared_state = {}\n def __init__(self):\n self.__dict__ = self.__shared_state\n # and whatever else you want in your class -- that's all!\n\nNow every time you do:\nobj = Borg()\n\nit will have the same information, or, be somewhat the same instance.\n",
"I have to agree with Chris. Remember, don't optimize until you actually need to do so. I really doubt checking for existence is going to be a bottleneck in any reasonable program.\nI did see http://code.activestate.com/recipes/52558/ as a way to do this, too. Uncommented copy of that code (\"spam\" is just a random method the class interface has):\nclass Singleton:\n class __impl:\n def spam(self):\n return id(self)\n __instance = None\n def __init__(self):\n if Singleton.__instance is None:\n Singleton.__instance = Singleton.__impl()\n self.__dict__['_Singleton__instance'] = Singleton.__instance\n def __getattr__(self, attr):\n return getattr(self.__instance, attr)\n def __setattr__(self, attr, value):\n return setattr(self.__instance, attr, value)\n\n"
] | [
22,
10,
5,
1,
0
] | [] | [] | [
"exception",
"hasattr",
"introspection",
"python"
] | stackoverflow_0000204308_exception_hasattr_introspection_python.txt |
Q:
Capture the contents of a regex and delete them, efficiently
Situation:
text: a string
R: a regex that matches part of the string. This might be expensive to calculate.
I want to both delete the R-matches from the text, and see what they actually contain. Currently, I do this like:
import re
ab_re = re.compile("[ab]")
text="abcdedfe falijbijie bbbb laifsjelifjl"
ab_re.findall(text)
# ['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']
ab_re.sub('',text)
# 'cdedfe flijijie lifsjelifjl'
This runs the regex twice, as near as I can tell. Is there a technique to do it all in one pass, perhaps using re.split? It seems like with split-based solutions I'd need to run the regex at least twice as well.
A:
import re
r = re.compile("[ab]")
text = "abcdedfe falijbijie bbbb laifsjelifjl"
matches = []
replaced = []
pos = 0
for m in r.finditer(text):
matches.append(m.group(0))
replaced.append(text[pos:m.start()])
pos = m.end()
replaced.append(text[pos:])
print matches
print ''.join(replaced)
Outputs:
['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']
cdedfe flijijie lifsjelifjl
A:
What about this:
import re
text = "abcdedfe falijbijie bbbb laifsjelifjl"
matches = []
ab_re = re.compile( "[ab]" )
def verboseTest( m ):
matches.append( m.group(0) )
return ''
textWithoutMatches = ab_re.sub( verboseTest, text )
print matches
# ['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']
print textWithoutMatches
# cdedfe flijijie lifsjelifjl
The 'repl' argument of the re.sub function can be a function so you can report or save the matches from there and whatever the function returns is what 'sub' will substitute.
The function could easily be modified to do a lot more too! Check out the re module documentation on docs.python.org for more information on what else is possible.
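As a side note, the same trick can be squeezed into a lambda if you prefer it inline -- list.append returns None, so the or '' makes the substitution text empty:
matches = []
textWithoutMatches = ab_re.sub(lambda m: matches.append(m.group(0)) or '', text)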
A:
My revised answer, using re.split(), which does things in one regex pass:
import re
text="abcdedfe falijbijie bbbb laifsjelifjl"
ab_re = re.compile("([ab])")
tokens = ab_re.split(text)
non_matches = tokens[0::2]
matches = tokens[1::2]
(edit: here is a complete function version)
def split_matches(text,compiled_re):
''' given a compiled re, split a text
into matching and nonmatching sections
returns m, n_m, two lists
'''
tokens = compiled_re.split(text)
matches = tokens[1::2]
non_matches = tokens[0::2]
return matches,non_matches
m,nm = split_matches(text,ab_re)
''.join(nm) # equivalent to ab_re.sub('',text)
A:
You could use split with capturing parentheses. If you do, then the text of all groups in the pattern are also returned as part of the resulting list (from the Python docs).
So the code would be
import re
ab_re = re.compile("([ab])")
text="abcdedfe falijbijie bbbb laifsjelifjl"
matches = ab_re.split(text)
# matches = ['', 'a', '', 'b', 'cdedfe f', 'a', 'lij', 'b', 'ijie ', 'b', '', 'b', '', 'b', '', 'b', ' l', 'a', 'ifsjelifjl']
# now extract the matches
Rmatches = []
remaining = []
for i in range(1, len(matches), 2):
Rmatches.append(matches[i])
# Rmatches = ['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']
for i in range(0, len(matches), 2):
remaining.append(matches[i])
remainingtext = ''.join(remaining)
# remainingtext = 'cdedfe flijijie lifsjelifjl'
| Capture the contents of a regex and delete them, efficiently | Situation:
text: a string
R: a regex that matches part of the string. This might be expensive to calculate.
I want to both delete the R-matches from the text, and see what they actually contain. Currently, I do this like:
import re
ab_re = re.compile("[ab]")
text="abcdedfe falijbijie bbbb laifsjelifjl"
ab_re.findall(text)
# ['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']
ab_re.sub('',text)
# 'cdedfe flijijie lifsjelifjl'
This runs the regex twice, as near as I can tell. Is there a technique to do it all in one pass, perhaps using re.split? It seems like with split-based solutions I'd need to run the regex at least twice as well.
| [
"import re\n\nr = re.compile(\"[ab]\")\ntext = \"abcdedfe falijbijie bbbb laifsjelifjl\"\n\nmatches = []\nreplaced = []\npos = 0\nfor m in r.finditer(text):\n matches.append(m.group(0))\n replaced.append(text[pos:m.start()])\n pos = m.end()\nreplaced.append(text[pos:])\n\nprint matches\nprint ''.join(replaced)\n\nOutputs:\n['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']\ncdedfe flijijie lifsjelifjl\n\n",
"What about this:\nimport re\n\ntext = \"abcdedfe falijbijie bbbb laifsjelifjl\"\nmatches = []\n\nab_re = re.compile( \"[ab]\" )\n\ndef verboseTest( m ):\n matches.append( m.group(0) )\n return ''\n\ntextWithoutMatches = ab_re.sub( verboseTest, text )\n\nprint matches\n# ['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']\nprint textWithoutMatches\n# cdedfe flijijie lifsjelifjl\n\nThe 'repl' argument of the re.sub function can be a function so you can report or save the matches from there and whatever the function returns is what 'sub' will substitute.\nThe function could easily be modified to do a lot more too! Check out the re module documentation on docs.python.org for more information on what else is possible.\n",
"My revised answer, using re.split(), which does things in one regex pass:\nimport re\ntext=\"abcdedfe falijbijie bbbb laifsjelifjl\"\nab_re = re.compile(\"([ab])\")\ntokens = ab_re.split(text)\nnon_matches = tokens[0::2]\nmatches = tokens[1::2]\n\n(edit: here is a complete function version)\ndef split_matches(text,compiled_re):\n ''' given a compiled re, split a text \n into matching and nonmatching sections\n returns m, n_m, two lists\n '''\n tokens = compiled_re.split(text)\n matches = tokens[1::2]\n non_matches = tokens[0::2]\n return matches,non_matches\n\nm,nm = split_matches(text,ab_re)\n''.join(nm) # equivalent to ab_re.sub('',text)\n\n",
"You could use split with capturing parantheses. If you do, then the text of all groups in the pattern are also returned as part of the resulting list (from python doc).\nSo the code would be \nimport re\nab_re = re.compile(\"([ab])\")\ntext=\"abcdedfe falijbijie bbbb laifsjelifjl\"\nmatches = ab_re.split(text)\n# matches = ['', 'a', '', 'b', 'cdedfe f', 'a', 'lij', 'b', 'ijie ', 'b', '', 'b', '', 'b', '', 'b', ' l', 'a', 'ifsjelifjl']\n\n# now extract the matches\nRmatches = []\nremaining = []\nfor i in range(1, len(matches), 2):\n Rmatches.append(matches[i])\n# Rmatches = ['a', 'b', 'a', 'b', 'b', 'b', 'b', 'b', 'a']\n\nfor i in range(0, len(matches), 2):\n remaining.append(matches[i])\nremainingtext = ''.join(remaining)\n# remainingtext = 'cdedfe flijijie lifsjelifjl'\n\n"
] | [
4,
4,
3,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000204829_python_regex.txt |
Q:
Sometimes can't delete an Oracle database row using Django
I have a unit test which contains the following line of code
Site.objects.get(name="UnitTest").delete()
and this has worked just fine until now. However, that statement is currently hanging. It'll sit there forever trying to execute the delete. If I just say
print Site.objects.get(name="UnitTest")
then it works, so I know that it can retrieve the site. No other program is connected to Oracle, so it's not like there are two developers stepping on each other somehow. I assume that some sort of table lock hasn't been released.
So short of shutting down the Oracle database and bringing it back up, how do I release that lock or whatever is blocking me? I'd like to not resort to a database shutdown because in the future that may be disruptive to some of the other developers.
EDIT: Justin suggested that I look at the DBA_BLOCKERS and DBA_WAITERS tables. Unfortunately, I don't understand these tables at all, and I'm not sure what I'm looking for. So here's the information that seemed relevant to me:
The DBA_WAITERS table has 182 entries with lock type "DML". The DBA_BLOCKERS table has 14 entries whose session ids all correspond to the username used by our application code.
Since this needs to get resolved, I'm going to just restart the web server, but I'd still appreciate any suggestions about what to do if this problem repeats itself. I'm a real novice when it comes to Oracle administration and have mostly just used MySQL in the past, so I'm definitely out of my element.
EDIT #2: It turns out that despite what I thought, another programmer was indeed accessing the database at the same time as me. So what's the best way to detect this in the future? Perhaps I should have shut down my program and then queried the DBA_WAITERS and DBA_BLOCKERS tables to make sure they were empty.
A:
From a separate session, can you query the DBA_BLOCKERS and DBA_WAITERS data dictionary tables and post the results? That will tell you if your session is getting blocked by a lock held by some other session, as well as what other session is holding the lock.
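For instance, from a Django shell (python manage.py shell), something like this might do it -- a minimal sketch that assumes your Django database user can SELECT from the DBA_* views (they are created by Oracle's catblock.sql script, so you may need a DBA's help for access):
from django.db import connection

cursor = connection.cursor()
cursor.execute("SELECT waiting_session, holding_session, lock_type"
               " FROM dba_waiters")
for waiting, holding, lock_type in cursor.fetchall():
    # each row is one session blocked by another session's lock
    print waiting, holding, lock_type
If that query returns rows, DBA_BLOCKERS will show which of the holding sessions are not themselves waiting on anything.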
| Sometimes can't delete an Oracle database row using Django | I have a unit test which contains the following line of code
Site.objects.get(name="UnitTest").delete()
and this has worked just fine until now. However, that statement is currently hanging. It'll sit there forever trying to execute the delete. If I just say
print Site.objects.get(name="UnitTest")
then it works, so I know that it can retrieve the site. No other program is connected to Oracle, so it's not like there are two developers stepping on each other somehow. I assume that some sort of table lock hasn't been released.
So short of shutting down the Oracle database and bringing it back up, how do I release that lock or whatever is blocking me? I'd like to not resort to a database shutdown because in the future that may be disruptive to some of the other developers.
EDIT: Justin suggested that I look at the DBA_BLOCKERS and DBA_WAITERS tables. Unfortunately, I don't understand these tables at all, and I'm not sure what I'm looking for. So here's the information that seemed relevant to me:
The DBA_WAITERS table has 182 entries with lock type "DML". The DBA_BLOCKERS table has 14 entries whose session ids all correspond to the username used by our application code.
Since this needs to get resolved, I'm going to just restart the web server, but I'd still appreciate any suggestions about what to do if this problem repeats itself. I'm a real novice when it comes to Oracle administration and have mostly just used MySQL in the past, so I'm definitely out of my element.
EDIT #2: It turns out that despite what I thought, another programmer was indeed accessing the database at the same time as me. So what's the best way to detect this in the future? Perhaps I should have shut down my program and then queried the DBA_WAITERS and DBA_BLOCKERS tables to make sure they were empty.
| [
"From a separate session, can you query the DBA_BLOCKERS and DBA_WAITERS data dictionary tables and post the results? That will tell you if your session is getting blocked by a lock held by some other session, as well as what other session is holding the lock.\n"
] | [
1
] | [] | [] | [
"django",
"oracle",
"python"
] | stackoverflow_0000205136_django_oracle_python.txt |
Q:
Any good team-chat websites?
Are there any good team-chat websites, preferably in Python, ideally with CherryPy or Trac?
This is similar to https://stackoverflow.com/questions/46612/whats-a-good-freeware-collaborative-ie-multiuser-instant-messenger#46660, but a few primary differences:
1) I very much want to host the server.
2) I don't care if Smileys are included or not in the client.
3) I'd like two options for the users:
a) Ability to host a private IRC-like chat on my Trac page (or link to such a page),
b) allow remote clients to also interact.
A:
Campfire from 37 signals - the rails guys.
Edit: It doesn't meet your requirements but it has some great features...
| Any good team-chat websites? | Are there any good team-chat websites, preferably in Python, ideally with CherryPy or Trac?
This is similar to https://stackoverflow.com/questions/46612/whats-a-good-freeware-collaborative-ie-multiuser-instant-messenger#46660, but a few primary differences:
1) I very much want to host the server.
2) I don't care if Smileys are included or not in the client.
3) I'd like two options for the users:
a) Ability to host a private IRC-like chat on my Trac page (or link to such a page),
b) allow remote clients to also interact.
| [
"Campfire from 37 signals - the rails guys.\nEdit: It doesn't meet your requirements but it has some great features...\n"
] | [
2
] | [] | [] | [
"collaboration",
"instant_messaging",
"python"
] | stackoverflow_0000206040_collaboration_instant_messaging_python.txt |
Q:
Why do attribute references act like this with Python inheritance?
The following seems strange.. Basically, the somedata attribute seems shared between all the classes that inherited from the_base_class.
class the_base_class:
somedata = {}
somedata['was_false_in_base'] = False
class subclassthing(the_base_class):
def __init__(self):
print self.somedata
first = subclassthing()
{'was_false_in_base': False}
first.somedata['was_false_in_base'] = True
second = subclassthing()
{'was_false_in_base': True}
>>> del first
>>> del second
>>> third = subclassthing()
{'was_false_in_base': True}
Defining self.somedata in the __init__ function is obviously the correct way to get around this (so each class has its own somedata dict) - but when is such behavior desirable?
A:
You are right, somedata is shared between all instances of the class and its subclasses, because it is created at class definition time. The lines
somedata = {}
somedata['was_false_in_base'] = False
are executed when the class is defined, i.e. when the interpreter encounters the class statement - not when the instance is created (think static initializer blocks in Java). If an attribute does not exist in a class instance, the class object is checked for the attribute.
At class definition time, you can run arbitrary code, like this:
import sys
class Test(object):
if sys.platform == "linux2":
def hello(self):
print "Hello Linux"
else:
def hello(self):
print "Hello ~Linux"
On a Linux system, Test().hello() will print Hello Linux, on all other systems the other string will be printed.
In contrast, objects in __init__ are created at instantiation time and belong to the instance only (when they are assigned to self):
class Test(object):
def __init__(self):
self.inst_var = [1, 2, 3]
Objects defined on a class object rather than instance can be useful in many cases. For instance, you might want to cache instances of your class, so that instances with the same member values can be shared (assuming they are supposed to be immutable):
class SomeClass(object):
__instances__ = {}
def __new__(cls, v1, v2, v3):
try:
            return cls.__instances__[(v1, v2, v3)]
except KeyError:
            return cls.__instances__.setdefault(
                (v1, v2, v3),
                object.__new__(cls))
Mostly, I use data in class bodies in conjunction with metaclasses or generic factory methods.
A:
Note that part of the behavior you’re seeing is due to somedata being a dict, as opposed to a simple data type such as a bool.
For instance, see this different example which behaves differently (although very similar):
class the_base_class:
somedata = False
class subclassthing(the_base_class):
def __init__(self):
print self.somedata
>>> first = subclassthing()
False
>>> first.somedata = True
>>> print first.somedata
True
>>> second = subclassthing()
False
>>> print first.somedata
True
>>> del first
>>> del second
>>> third = subclassthing()
False
The reason this example behaves differently from the one given in the question is because here first.somedata is being given a new value (the object True), whereas in the first example the dict object referenced by first.somedata (and also by the other subclass instances) is being modified.
See Torsten Marek’s comment to this answer for further clarification.
A:
I think the easiest way to understand this (so that you can predict the behavior) is to realize that your somedata is an attribute of the class, not of instances of that class, if you define it that way.
There is really only one somedata at all times, because in your example you didn't assign to that name but used it to look up a dict and then assign an item (key, value) into it. It's a gotcha that is a consequence of how the Python interpreter works, and it can be confusing at first.
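To make that concrete, here is a small sketch (fresh interpreter, using the classes from the question) showing the difference between mutating the shared dict and rebinding the name:
first = subclassthing()            # prints {'was_false_in_base': False}
first.somedata['extra'] = 1        # mutates the dict owned by the class
print the_base_class.somedata      # {'was_false_in_base': False, 'extra': 1}
first.somedata = {'mine': True}    # assignment creates an instance attribute
print the_base_class.somedata      # the class dict is untouched by the rebinding
print first.somedata               # {'mine': True} -- shadows the class attribute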
| Why do attribute references act like this with Python inheritance? | The following seems strange.. Basically, the somedata attribute seems shared between all the classes that inherited from the_base_class.
class the_base_class:
somedata = {}
somedata['was_false_in_base'] = False
class subclassthing(the_base_class):
def __init__(self):
print self.somedata
first = subclassthing()
{'was_false_in_base': False}
first.somedata['was_false_in_base'] = True
second = subclassthing()
{'was_false_in_base': True}
>>> del first
>>> del second
>>> third = subclassthing()
{'was_false_in_base': True}
Defining self.somedata in the __init__ function is obviously the correct way to get around this (so each class has its own somedata dict) - but when is such behavior desirable?
| [
"You are right, somedata is shared between all instances of the class and it's subclasses, because it is created at class definition time. The lines \nsomedata = {}\nsomedata['was_false_in_base'] = False\n\nare executed when the class is defined, i.e. when the interpreter encounters the class statement - not when the instance is created (think static initializer blocks in Java). If an attribute does not exist in a class instance, the class object is checked for the attribute.\nAt class definition time, you can run arbritrary code, like this:\n import sys\n class Test(object):\n if sys.platform == \"linux2\":\n def hello(self):\n print \"Hello Linux\"\n else:\n def hello(self):\n print \"Hello ~Linux\"\n\nOn a Linux system, Test().hello() will print Hello Linux, on all other systems the other string will be printed.\nIn constrast, objects in __init__ are created at instantiation time and belong to the instance only (when they are assigned to self):\nclass Test(object):\n def __init__(self):\n self.inst_var = [1, 2, 3]\n\nObjects defined on a class object rather than instance can be useful in many cases. For instance, you might want to cache instances of your class, so that instances with the same member values can be shared (assuming they are supposed to be immutable):\nclass SomeClass(object):\n __instances__ = {}\n\n def __new__(cls, v1, v2, v3):\n try:\n return cls.__insts__[(v1, v2, v3)]\n except KeyError:\n return cls.__insts__.setdefault(\n (v1, v2, v3), \n object.__new__(cls, v1, v2, v3))\n\nMostly, I use data in class bodies in conjunction with metaclasses or generic factory methods.\n",
"Note that part of the behavior you’re seeing is due to somedata being a dict, as opposed to a simple data type such as a bool.\nFor instance, see this different example which behaves differently (although very similar):\nclass the_base_class:\n somedata = False\n\nclass subclassthing(the_base_class):\n def __init__(self):\n print self.somedata\n\n\n>>> first = subclassthing()\nFalse\n>>> first.somedata = True\n>>> print first.somedata\nTrue\n>>> second = subclassthing()\nFalse\n>>> print first.somedata\nTrue\n>>> del first\n>>> del second\n>>> third = subclassthing()\nFalse\n\nThe reason this example behaves differently from the one given in the question is because here first.somedata is being given a new value (the object True), whereas in the first example the dict object referenced by first.somedata (and also by the other subclass instances) is being modified.\nSee Torsten Marek’s comment to this answer for further clarification.\n",
"I think the easiest way to understand this (so that you can predict behavior) is to realize that your somedata is an attribute of the class and not the instance of that class if you define it that way.\nThere is really only one somedata at all times because in your example you didn't assign to that name but used it to look up a dict and then assign an item (key, value) to it. It's a gotcha that is a consequence of how the python interpreter works and can be confusing at first.\n"
] | [
24,
12,
3
] | [] | [] | [
"class",
"inheritance",
"python"
] | stackoverflow_0000206734_class_inheritance_python.txt |
Q:
Receive socket size limits good?
I am writing a program in Python that will act as a server and accept data from a client, is it a good idea to impose a hard limit as to the amount of data, if so why?
More info:
So certain chat programs limit the amount of text one can send per send (i.e. per time user presses send) so the question comes down to is there a legit reason for this and if yes, what is it?
A:
Most likely you've seen code which protects against "extra" incoming data. This is often due to the possibility of buffer overruns, where the extra data being copied into memory overruns the pre-allocated array and overwrites executable code with attacker code. Code written in languages like C typically has a lot of length checking to prevent this type of attack. Functions such as gets, and strcpy are replaced with their safer counterparts like fgets and strncpy which have a length argument to prevent buffer overruns.
If you use a dynamic language like Python, your arrays resize so they won't overflow and clobber other memory, but you still have to be careful about sanitizing foreign data.
Chat programs likely limit the size of a message for reasons such as database field size. If 80% of your incoming messages are 40 characters or less, 90% are 60 characters or less, and 98% are 80 characters or less, why make your message text field allow 10k characters per message?
A:
What is your question exactly?
What happens when you do receive on a socket is that the current available data in the socket buffer is immediately returned. If you give receive (or read, I guess), a huge buffer size, such as 40000, it'll likely never return that much data at once. If you give it a tiny buffer size like 100, then it'll return the 100 bytes it has immediately and still have more available. Either way, you're not imposing a limit on how much data the client is sending you.
A:
I don't know what your actual application is, however, setting a hard limit on the total amount of data that a client can send could be useful in reducing your exposure to denial of service attacks, e.g. client connects and sends 100MB of data which could load your application unacceptably.
But it really depends on what you application is. Are you after a per line limit or a total per connection limit or what?
| Receive socket size limits good? | I am writing a program in Python that will act as a server and accept data from a client, is it a good idea to impose a hard limit as to the amount of data, if so why?
More info:
So certain chat programs limit the amount of text one can send per send (i.e. per time user presses send) so the question comes down to is there a legit reason for this and if yes, what is it?
| [
"Most likely you've seen code which protects against \"extra\" incoming data. This is often due to the possibility of buffer overruns, where the extra data being copied into memory overruns the pre-allocated array and overwrites executable code with attacker code. Code written in languages like C typically has a lot of length checking to prevent this type of attack. Functions such as gets, and strcpy are replaced with their safer counterparts like fgets and strncpy which have a length argument to prevent buffer overruns.\nIf you use a dynamic language like Python, your arrays resize so they won't overflow and clobber other memory, but you still have to be careful about sanitizing foreign data.\nChat programs likely limit the size of a message for reasons such as database field size. If 80% of your incoming messages are 40 characters or less, 90% are 60 characters or less, and 98% are 80 characters or less, why make your message text field allow 10k characters per message?\n",
"What is your question exactly? \nWhat happens when you do receive on a socket is that the current available data in the socket buffer is immediately returned. If you give receive (or read, I guess), a huge buffer size, such as 40000, it'll likely never return that much data at once. If you give it a tiny buffer size like 100, then it'll return the 100 bytes it has immediately and still have more available. Either way, you're not imposing a limit on how much data the client is sending you.\n",
"I don't know what your actual application is, however, setting a hard limit on the total amount of data that a client can send could be useful in reducing your exposure to denial of service attacks, e.g. client connects and sends 100MB of data which could load your application unacceptably.\nBut it really depends on what you application is. Are you after a per line limit or a total per connection limit or what?\n"
] | [
2,
1,
0
] | [] | [] | [
"python",
"sockets"
] | stackoverflow_0000203758_python_sockets.txt |
Q:
How to update a Tix.ComboBox's text?
I have a Tix.ComboBox with an editable text field. How do I force the variable holding the value for the text to update?
Let me give a more concrete explanation. I have a combo box and a button. When I click the button, it pops up a message box with the value of the combo box. Let's say the combo box text field currently has the value "thing1". If I type "new" into the box and then click on the button with my mouse, it pops up the message "thing1". If I type "new" in the box, tab focus away from the combo box, and then click the button, the pop-up message says "new".
How do I force the combo box to update its value to "new" without requiring that I tab away from the combo box?
I have included sample code.
import Tix
import tkMessageBox
class App(object):
def __init__(self, window):
window.winfo_toplevel().wm_title("test")
self.window = window
self.combo = Tix.ComboBox(window)
self.combo.insert(Tix.END, 'thing1')
self.combo.insert(Tix.END, 'thing2')
self.combo.entry['state'] = "normal"
self.combo['editable'] = True
self.combo.pack()
button = Tix.Button(window)
button['text'] = "Go"
button['command'] = self.go
button.pack()
def go(self):
tkMessageBox.showinfo('info', self.combo['value'])
if __name__ == '__main__':
root = Tix.Tk()
App(root)
root.mainloop()
A:
woo!
solved it on my own.
Use
self.combo['selection']
instead of
self.combo['value']
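Applied to the sample code in the question, that just means changing the go() method:
def go(self):
    tkMessageBox.showinfo('info', self.combo['selection'])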
A:
NOTE: copy of Moe's answer that can be selected as chosen answer
woo!
solved it on my own.
Use
self.combo['selection']
instead of
self.combo['value']
| How to update a Tix.ComboBox's text? | I have a Tix.ComboBox with an editable text field. How do I force the variable holding the value for the text to update?
Let me give a more concrete explanation. I have a combo box and a button. When I click the button, it pops up a message box with the value of the combo box. Let's say the combo box text field currently has the value "thing1". If I type "new" into the box and then click on the button with my mouse, it pops up the message "thing1". If I type "new" in the box, tab focus away from the combo box, and then click the button, the pop-up message says "new".
How do I force the combo box to update its value to "new" without requiring that I tab away from the combo box?
I have included sample code.
import Tix
import tkMessageBox
class App(object):
def __init__(self, window):
window.winfo_toplevel().wm_title("test")
self.window = window
self.combo = Tix.ComboBox(window)
self.combo.insert(Tix.END, 'thing1')
self.combo.insert(Tix.END, 'thing2')
self.combo.entry['state'] = "normal"
self.combo['editable'] = True
self.combo.pack()
button = Tix.Button(window)
button['text'] = "Go"
button['command'] = self.go
button.pack()
def go(self):
tkMessageBox.showinfo('info', self.combo['value'])
if __name__ == '__main__':
root = Tix.Tk()
App(root)
root.mainloop()
| [
"woo!\nsolved it on my own.\nUse \nself.combo['selection']\n\ninstead of\nself.combo['value']\n\n",
"NOTE: copy of Moe's answer that can be selected as chosen answer\nwoo!\nsolved it on my own.\nUse \nself.combo['selection']\n\ninstead of\nself.combo['value']\n\n"
] | [
5,
1
] | [] | [] | [
"combobox",
"python",
"tix",
"tkinter"
] | stackoverflow_0000117211_combobox_python_tix_tkinter.txt |
Q:
variables as parameters in field options
I want to create a model that will set editable=False on creation, and editable=True when editing an item. I thought it should be something like this:
home = models.ForeignKey(Team, editable=lambda self: True if self.id else False)
But it doesn't work. Maybe something with overriding __init__ can help me, but I'm not sure what would do the trick. I know I can check for self.id in the save() method, but that's too late; I want this kind of logic in the admin app when I'm filling in the fields.
A:
Add the following (a small extension of this code) to your admin.py:
from django import forms
class ReadOnlyWidget(forms.Widget):
def __init__(self, original_value, display_value):
self.original_value = original_value
self.display_value = display_value
super(ReadOnlyWidget, self).__init__()
def render(self, name, value, attrs=None):
if self.display_value is not None:
return unicode(self.display_value)
return unicode(self.original_value)
def value_from_datadict(self, data, files, name):
return self.original_value
class ReadOnlyAdminFields(object):
def get_form(self, request, obj=None):
form = super(ReadOnlyAdminFields, self).get_form(request, obj)
fields = getattr(self, 'readonly', [])
if obj is not None:
fields += getattr(self, 'readonly_on_edit', [])
for field_name in fields:
if field_name in form.base_fields:
if hasattr(obj, 'get_%s_display' % field_name):
display_value = getattr(obj, 'get_%s_display' % field_name)()
else:
display_value = None
form.base_fields[field_name].widget = ReadOnlyWidget(getattr(obj, field_name, ''), display_value)
form.base_fields[field_name].required = False
return form
You can then specify that certain fields should be readonly when the object is edited:
class PersonAdmin(ReadOnlyAdminFields, admin.ModelAdmin):
readonly_on_edit = ('home',)
admin.site.register(Person, PersonAdmin)
| variables as parameters in field options | I want to create a model that will set editable=False on creation, and editable=True when editing an item. I thought it should be something like this:
home = models.ForeignKey(Team, editable=lambda self: True if self.id else False)
But it doesn't work. Maybe something with overriding __init__ can help me, but I'm not sure what would do the trick. I know I can check for self.id in the save() method, but that's too late; I want this kind of logic in the admin app when I'm filling in the fields.
| [
"Add the following (a small extension of this code) to your admin.py:\nfrom django import forms\n\nclass ReadOnlyWidget(forms.Widget):\n def __init__(self, original_value, display_value):\n self.original_value = original_value\n self.display_value = display_value\n\n super(ReadOnlyWidget, self).__init__()\n\n def render(self, name, value, attrs=None):\n if self.display_value is not None:\n return unicode(self.display_value)\n return unicode(self.original_value)\n\n def value_from_datadict(self, data, files, name):\n return self.original_value\n\nclass ReadOnlyAdminFields(object):\n def get_form(self, request, obj=None):\n form = super(ReadOnlyAdminFields, self).get_form(request, obj)\n fields = getattr(self, 'readonly', [])\n if obj is not None:\n fields += getattr(self, 'readonly_on_edit', [])\n\n for field_name in fields:\n if field_name in form.base_fields:\n if hasattr(obj, 'get_%s_display' % field_name):\n display_value = getattr(obj, 'get_%s_display' % field_name)()\n else:\n display_value = None\n\n form.base_fields[field_name].widget = ReadOnlyWidget(getattr(obj, field_name, ''), display_value)\n form.base_fields[field_name].required = False\n\n return form\n\nYou can then specify that certain fields should by readonly when the object is edited:\nclass PersonAdmin(ReadOnlyAdminFields, admin.ModelAdmin):\n readonly_on_edit = ('home',)\n\nadmin.site.register(Person, PersonAdmin)\n\n"
] | [
4
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000206245_django_python.txt |
Q:
Howto do python command-line autocompletion but NOT only at the beginning of a string
Python, through its readline bindings, allows for great command-line autocompletion (as described here).
But the completion only seems to work at the beginning of strings. If you want to match the middle or end of a string, readline doesn't work.
I would like to autocomplete strings, in a command-line python program by matching what I type with any of the strings in a list of available strings.
A good example of the type of autocompletion I would like to have is the type that happens in GMail when you type in the To field. If you type one of your contacts' last names, it will come up just as well as if you typed her first name.
Some use of the up and down arrows or some other method to select from the matched strings may be needed (readline's prefix completion doesn't need this), and that is fine in my case.
My particular use case is a command-line program that sends emails.
Specific code examples would be very helpful.
Using terminal emulators like curses would be fine. It only has to run on linux, not Mac or Windows.
Here is an example:
Say I have the following three strings in a list
['Paul Eden <[email protected]>',
'Eden Jones <[email protected]>',
'Somebody Else <[email protected]>']
I would like some code that will autocomplete the first two items in the list after I type 'Eden' and then allow me to pick one of them (all through the command-line using the keyboard).
A:
I'm not sure I understand the problem. You could use readline.clear_history and readline.add_history to set up the completable strings you want, then control-r to search backward in the history (just as if you were at a shell prompt). For example:
#!/usr/bin/env python
import readline
readline.clear_history()
readline.add_history('foo')
readline.add_history('bar')
while 1:
print raw_input('> ')
Alternatively, you could write your own completer version and bind the appropriate key to it. This version uses caching in case your match list is huge:
#!/usr/bin/env python
import readline
values = ['Paul Eden <[email protected]>',
'Eden Jones <[email protected]>',
'Somebody Else <[email protected]>']
completions = {}
def completer(text, state):
try:
matches = completions[text]
except KeyError:
matches = [value for value in values
if text.upper() in value.upper()]
completions[text] = matches
try:
return matches[state]
except IndexError:
return None
readline.set_completer(completer)
readline.parse_and_bind('tab: menu-complete')
while 1:
a = raw_input('> ')
print 'said:', a
| Howto do python command-line autocompletion but NOT only at the beginning of a string | Python, through it's readline bindings allows for great command-line autocompletion (as described in here).
But, the completion only seems to work at the beginning of strings. If you want to match the middle or end of a string readline doesn't work.
I would like to autocomplete strings, in a command-line python program by matching what I type with any of the strings in a list of available strings.
A good example of the type of autocompletion I would like to have is the type that happens in GMail when you type in the To field. If you type one of your contacts' last names, it will come up just as well as if you typed her first name.
Some use of the up and down arrows or some other method to select from the matched strings may be needed (readline's prefix completion doesn't need this), and that is fine in my case.
My particular use case is a command-line program that sends emails.
Specific code examples would be very helpful.
Using terminal emulators like curses would be fine. It only has to run on linux, not Mac or Windows.
Here is an example:
Say I have the following three strings in a list
['Paul Eden <[email protected]>',
'Eden Jones <[email protected]>',
'Somebody Else <[email protected]>']
I would like some code that will autocomplete the first two items in the list after I type 'Eden' and then allow me to pick one of them (all through the command-line using the keyboard).
| [
"I'm not sure I understand the problem. You could use readline.clear_history and readline.add_history to set up the completable strings you want, then control-r to search backword in the history (just as if you were at a shell prompt). For example:\n#!/usr/bin/env python\n\nimport readline\n\nreadline.clear_history()\nreadline.add_history('foo')\nreadline.add_history('bar')\n\nwhile 1:\n print raw_input('> ')\n\nAlternatively, you could write your own completer version and bind the appropriate key to it. This version uses caching in case your match list is huge:\n#!/usr/bin/env python\n\nimport readline\n\nvalues = ['Paul Eden <[email protected]>', \n 'Eden Jones <[email protected]>', \n 'Somebody Else <[email protected]>']\ncompletions = {}\n\ndef completer(text, state):\n try:\n matches = completions[text]\n except KeyError:\n matches = [value for value in values\n if text.upper() in value.upper()]\n completions[text] = matches\n try:\n return matches[state]\n except IndexError:\n return None\n\nreadline.set_completer(completer)\nreadline.parse_and_bind('tab: menu-complete')\n\nwhile 1:\n a = raw_input('> ')\n print 'said:', a\n\n"
] | [
10
] | [] | [] | [
"autocomplete",
"command_line",
"linux",
"python",
"unix"
] | stackoverflow_0000209484_autocomplete_command_line_linux_python_unix.txt |
Q:
Advanced Python FTP - can I control how ftplib talks to a server?
I need to send a very specific (non-standard) string to an FTP server:
dir "SYS:\IC.ICAMA."
The case is critical, as are the style of quotes and their content.
Unfortunately, ftplib.dir() seems to use the 'LIST' command rather than 'dir' (and it uses the wrong case for this application).
The FTP server is actually a telephone switch and it's a very non-standard implementation.
I tried using ftplib.sendcmd(), but it also sends 'pasv' as part of the command sequence.
Is there an easy way of issuing specific commands to an FTP server?
A:
Try the following. It is a modification of the original FTP.dir command which uses "dir" instead of "LIST". It gives a "DIR not understood" error with the ftp server I tested it on, but it does send the command you're after. (You will want to remove the print command I used to check that.)
import ftplib
class FTP(ftplib.FTP):
def shim_dir(self, *args):
'''List a directory in long form.
By default list current directory to stdout.
Optional last argument is callback function; all
non-empty arguments before it are concatenated to the
LIST command. (This *should* only be used for a pathname.)'''
cmd = 'dir'
func = None
if args[-1:] and type(args[-1]) != type(''):
args, func = args[:-1], args[-1]
for arg in args:
if arg:
cmd = cmd + (' ' + arg)
print cmd
self.retrlines(cmd, func)
if __name__ == '__main__':
f = FTP('ftp.ncbi.nih.gov')
f.login()
f.shim_dir('"blast"')
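As a side note, if you only need the one command, retrlines itself already sends whatever command string you hand it verbatim (case, quotes and all), so subclassing may not even be necessary. A sketch with hypothetical host and credentials, assuming the switch returns the listing over the data connection the way LIST results are returned (the PASV/PORT negotiation still happens first -- any listing command needs a data connection):
import ftplib

ftp = ftplib.FTP('192.0.2.1')      # hypothetical switch address
ftp.login('user', 'password')      # hypothetical credentials
lines = []
ftp.retrlines('dir "SYS:\\IC.ICAMA."', lines.append)
print '\n'.join(lines)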
| Advanced Python FTP - can I control how ftplib talks to a server? | I need to send a very specific (non-standard) string to an FTP server:
dir "SYS:\IC.ICAMA."
The case is critical, as are the style of quotes and their content.
Unfortunately, ftplib.dir() seems to use the 'LIST' command rather than 'dir' (and it uses the wrong case for this application).
The FTP server is actually a telephone switch and it's a very non-standard implementation.
I tried using ftplib.sendcmd(), but it also sends 'pasv' as part of the command sequence.
Is there an easy way of issuing specific commands to an FTP server?
| [
"Try the following. It is a modification of the original FTP.dir command which uses \"dir\" instead of \"LIST\". It gives a \"DIR not understood\" error with the ftp server I tested it on, but it does send the command you're after. (You will want to remove the print command I used to check that.)\nimport ftplib\n\nclass FTP(ftplib.FTP):\n\n def shim_dir(self, *args):\n '''List a directory in long form.\n By default list current directory to stdout.\n Optional last argument is callback function; all\n non-empty arguments before it are concatenated to the\n LIST command. (This *should* only be used for a pathname.)'''\n cmd = 'dir'\n func = None\n if args[-1:] and type(args[-1]) != type(''):\n args, func = args[:-1], args[-1]\n for arg in args:\n if arg:\n cmd = cmd + (' ' + arg)\n print cmd\n self.retrlines(cmd, func)\n\nif __name__ == '__main__':\n f = FTP('ftp.ncbi.nih.gov')\n f.login()\n f.shim_dir('\"blast\"')\n\n"
] | [
4
] | [] | [] | [
"ftp",
"python"
] | stackoverflow_0000210067_ftp_python.txt |
Q:
py3k RC-1: "LookupError: unknown encoding: uft-8"
I just installed the first release candidate of Python 3.0 and got this error after typing:
>>> help('modules foo')
[...]
LookupError: unknown encoding: uft-8
Notice that it says uft-8 and not utf-8
Is this a py3k specific bug or a misconfiguration on my part? I do not have any other versions of Python installed on this French locale Windows XP SP3 machine.
Edit
A bug has been filled by Alex Coventry on October 16th.
A:
It's not a typo, it's a deliberate error in a test module.
met% pwd
/home/coventry/src/Python-3.0rc1
met% rgrep uft-8 .
./Lib/test/bad_coding.py:# -*- coding: uft-8 -*-
./py3k/Lib/test/bad_coding.py:# -*- coding: uft-8 -*-
Removing this module causes the help command to fall over in a different way.
It is a bug, however. Someone should file a report.
A:
Looks like a typo in a config file somewhere, whether in the Py3k package or on your machine. You might try installing the stable final Python 2.6 (which supports 3.0 syntax changes with imports from __future__), and if that works you should probably file a bug report.
| py3k RC-1: "LookupError: unknown encoding: uft-8" | I just installed the first release candidate of Python 3.0 and got this error after typing:
>>> help('modules foo')
[...]
LookupError: unknown encoding: uft-8
Notice that it says uft-8 and not utf-8
Is this a py3k specific bug or a misconfiguration on my part? I do not have any other versions of Python installed on this French locale Windows XP SP3 machine.
Edit
A bug has been filled by Alex Coventry on October 16th.
| [
"It's not a typo, it's a deliberate error in a test module.\nmet% pwd\n/home/coventry/src/Python-3.0rc1\nmet% rgrep uft-8 .\n./Lib/test/bad_coding.py:# -*- coding: uft-8 -*-\n./py3k/Lib/test/bad_coding.py:# -*- coding: uft-8 -*-\n\nRemoving this module causes the help command to fall over in a different way.\nIt is a bug, however. Someone should file a report.\n",
"Looks like a typo in a config file somewhere, whether in the Py3k package or on your machine. You might try installing the stable final Python 2.6 (which supports 3.0 syntax changes with imports from __future__), and if that works you should probably file a bug report.\n"
] | [
5,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0000210344_python_python_3.x.txt |
Q:
How to base64 encode a PDF file in Python
How should I base64 encode a PDF file for transport over XML-RPC in Python?
A:
If you don't want to use the xmlrpclib's Binary class, you can just use the .encode() method of strings:
a = open("pdf_reference.pdf", "rb").read().encode("base64")
A:
Actually, after some more digging, it looks like the xmlrpclib module may have the piece I need with it's Binary helper class:
binary_obj = xmlrpclib.Binary( open('foo.pdf', 'rb').read() )
Here's an example from the Trac XML-RPC documentation
import xmlrpclib
server = xmlrpclib.ServerProxy("http://athomas:password@localhost:8080/trunk/login/xmlrpc")
server.wiki.putAttachment('WikiStart/t.py', xmlrpclib.Binary(open('t.py').read()))
A:
You can do it with the base64 library's legacy interface.
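For example, a minimal sketch using that legacy interface (encodestring inserts a newline every 76 characters, which standard decoders ignore):
import base64

encoded = base64.encodestring(open('foo.pdf', 'rb').read())
raw = base64.decodestring(encoded)   # round-trips back to the original bytes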
A:
Looks like you might be able to use the binascii module
binascii.b2a_base64(data)
Convert binary data to a line of ASCII characters in base64 coding. The return value is the converted line, including a newline char. The length of data should be at most 57 to adhere to the base64 standard.
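A sketch of how that might look, feeding it 57-byte chunks as the documentation suggests so that each output line is a standard 76-character base64 line:
import binascii

handle = open('foo.pdf', 'rb')
chunks = []
data = handle.read(57)
while data:
    chunks.append(binascii.b2a_base64(data))
    data = handle.read(57)
handle.close()
encoded = ''.join(chunks)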
| How to base64 encode a PDF file in Python | How should I base64 encode a PDF file for transport over XML-RPC in Python?
| [
"If you don't want to use the xmlrpclib's Binary class, you can just use the .encode() method of strings:\na = open(\"pdf_reference.pdf\", \"rb\").read().encode(\"base64\")\n\n",
"Actually, after some more digging, it looks like the xmlrpclib module may have the piece I need with it's Binary helper class:\n\nbinary_obj = xmlrpclib.Binary( open('foo.pdf').read() )\n\nHere's an example from the Trac XML-RPC documentation\n\nimport xmlrpclib \nserver = xmlrpclib.ServerProxy(\"http://athomas:password@localhost:8080/trunk/login/xmlrpc\") \nserver.wiki.putAttachment('WikiStart/t.py', xmlrpclib.Binary(open('t.py').read())) \n\n",
"You can do it with the base64 library, legacy interface.\n",
"Looks like you might be able to use the binascii module\n\nbinascii.b2a_base64(data)\nConvert binary data to a line of ASCII characters in base64 coding. The return value is the converted line, including a newline char. The length of data should be at most 57 to adhere to the base64 standard.\n\n"
] | [
25,
5,
2,
0
] | [] | [] | [
"base64",
"encoding",
"python",
"xml_rpc"
] | stackoverflow_0000208894_base64_encoding_python_xml_rpc.txt |
Q:
Extracting a parenthesized Python expression from a string
I've been wondering about how hard it would be to write some Python code to search a string for the index of a substring of the form ${expr}, for example, where expr is meant to be a Python expression or something resembling one. Given such a thing, one could easily imagine going on to check the expression's syntax with compile(), evaluate it against a particular scope with eval(), and perhaps even substitute the result into the original string. People must do very similar things all the time.
I could imagine solving such a problem using a third-party parser generator [oof], or by hand-coding some sort of state machine [eek], or perhaps by convincing Python's own parser to do the heavy lifting somehow [hmm]. Maybe there's a third-party templating library somewhere that can be made to do exactly this. Maybe restricting the syntax of expr would be a worthwhile compromise in terms of simplicity, execution time, or cutting down on external dependencies -- for example, maybe all I really need is something that matches any expr that has balanced curly braces.
What's your sense?
Update:
Thanks very much for your responses so far! Looking back at what I wrote yesterday, I'm not sure I was sufficiently clear about what I'm asking. Template substitution is indeed an interesting problem, and probably much more useful to many more people than the expression extraction subproblem I'm wondering about, but I brought it up only as a simple example of how the answer to my question might be useful in real life. Some other potential applications might include passing the extracted expressions to a syntax highlighter; passing the result to a real Python parser and looking at or monkeying with the parse tree; or using the sequence of extracted expressions to build up a larger Python program, perhaps in conjunction with some information taken from the surrounding text.
The ${expr} syntax I mentioned is also intended as an example, and in fact I wonder if I shouldn't have used $(expr) as my example instead, because it makes the potential drawbacks of the obvious approach, along the lines of re.finditer(r'\$\{([^}]+)\}', s), a bit easier to see. Python expressions can (and often do) contain the ) (or }) character. It seems possible that handling any of those cases might be much more trouble than it's worth, but I'm not convinced of that yet. Please feel free to try to make this case!
Prior to posting this question, I spent quite a bit of time looking at Python template engines hoping that one might expose the sort of low-level functionality I'm asking about -- namely, something that can find expressions in a variety of contexts and tell me where they are rather than being limited to finding expressions embedded using a single hard-coded syntax, always evaluating them, and always substituting the results back into the original string. I haven't figured out how to use any of them to solve my problem yet, but I do very much appreciate the suggestions regarding more to look at (can't believe I missed that wonderful list on the wiki!). The API documentation for these things tends to be pretty high-level, and I'm not too familiar with the internals of any of them, so I'm sure I could use help looking at those and figuring out how to get them to do this sort of thing.
Thanks for your patience!
A:
I think what you're asking about is being able to insert Python code into text files to be evaluated. There are several modules that already exist to provide this kind of functionality. You can check the Python.org Templating wiki page for a comprehensive list.
Some google searching also turned up a few other modules you might be interested in:
texttemplate (part of py-templates project)
template module
If you're really looking just into writing this yourself for whatever reason, you can also dig into this Python cookbook solution Yet Another Python Templating Utility (YAPTU) :
"Templating" (copying an input file to output, on the fly inserting Python
expressions and statements) is a frequent need, and YAPTU is a small but
complete Python module for that; expressions and statements are identified
by arbitrary user-chosen regular-expressions.
EDIT: Just for the heck of it, I whipped up a severely simplistic code sample for this. I'm sure it has bugs but it illustrates a simplified version of the concept at least:
#!/usr/bin/env python
import sys
import re
FILE = sys.argv[1]
handle = open(FILE)
fcontent = handle.read()
handle.close()
for myexpr in re.finditer(r'\${([^}]+)}', fcontent, re.M|re.S):
text = myexpr.group(1)
try:
exec text
except SyntaxError:
print "ERROR: unable to compile expression '%s'" % (text)
Tested against the following text:
This is some random text, with embedded python like
${print "foo"} and some bogus python like
${any:thing}.
And a multiline statement, just for kicks:
${
def multiline_stmt(foo):
print foo
multiline_stmt("ahem")
}
More text here.
Output:
[user@host]$ ./exec_embedded_python.py test.txt
foo
ERROR: unable to compile expression 'any:thing'
ahem
A:
I think your best bet is to match for all curly braced entries, and then check against Python itself whether or not it's valid Python, for which compiler would be helpful.
A:
If you want to handle arbitrary expressions like {'{spam': 42}["spam}"], you can't get away without full-blown parser.
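A quick doctest-style illustration of why -- a naive pattern stops at the first closing brace, even one inside a string literal:
>>> import re
>>> re.findall(r'\$\{([^}]+)\}', 'v = ${ {"}": 42}["}"] }')
[' {"']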
A:
After posting this, reading the replies so far (thanks everyone!), and thinking about the problem for a while, here is the best approach I've been able to come up with:
Find the first ${.
Find the next } after that.
Feed whatever's in between to compile(). If it works, stick a fork in it and we're done.
Otherwise, keep extending the string by looking for subsequent occurences of }. As soon as something compiles, return it.
If we run out of } without being able to compile anything, use the results of the last compilation attempt to give information about where the problem lies.
Advantages of this approach:
The code is quite short and easy to understand.
It's pretty efficient -- optimal, even, in the case where the expression contains no }. Worst-case seems like it wouldn't be too bad either.
It works on many expressions that contain ${ and/or }.
No external dependencies. No need to import anything, in fact. (This surprised me.)
Disadvantages:
Sometimes it grabs too much or too little. See below for an example of the latter. I could imagine a scary example where you have two expressions and the first one is subtly wrong and the algorithm ends up mistakenly grabbing the whole thing and everything in between and returning it as valid, though I haven't been able to demonstrate this. Perhaps things are not so bad as I fear. I don't think misunderstandings can be avoided in general -- the problem definition is kind of slippery -- but it seems like it ought to be possible to do better, especially if one were willing to trade simplicity or execution time.
I haven't done any benchmarks, but I could imagine there being faster alternatives, especially in cases that involve lots of } in the expression. That could be a big deal if one wanted to apply this technique to sizable blocks of Python code rather than just very short expressions.
Here is my implementation.
def findExpr(s, i0=0, begin='${', end='}', compArgs=('<string>', 'eval')):
assert '\n' not in s, 'line numbers not implemented'
i0 = s.index(begin, i0) + len(begin)
i1 = s.index(end, i0)
code = errMsg = None
while code is None and errMsg is None:
expr = s[i0:i1]
try: code = compile(expr, *compArgs)
except SyntaxError, e:
i1 = s.find(end, i1 + 1)
if i1 < 0: errMsg, i1 = e.msg, i0 + e.offset
return i0, i1, code, errMsg
And here's the docstring with some illustrations in doctest format, which I didn't insert into the middle of the function above only because it's long and I feel like the code is easier to read without it.
'''
Search s for a (possibly invalid) Python expression bracketed by begin
and end, which default to '${' and '}'. Return a 4-tuple.
>>> s = 'foo ${a*b + c*d} bar'
>>> i0, i1, code, errMsg = findExpr(s)
>>> i0, i1, s[i0:i1], errMsg
(6, 15, 'a*b + c*d', None)
>>> ' '.join('%02x' % ord(byte) for byte in code.co_code)
'65 00 00 65 01 00 14 65 02 00 65 03 00 14 17 53'
>>> code.co_names
('a', 'b', 'c', 'd')
>>> eval(code, {'a': 1, 'b': 2, 'c': 3, 'd': 4})
14
>>> eval(code, {'a': 'a', 'b': 2, 'c': 'c', 'd': 4})
'aacccc'
>>> eval(code, {'a': None})
Traceback (most recent call last):
...
NameError: name 'b' is not defined
Expressions containing start and/or end are allowed.
>>> s = '{foo ${{"}": "${"}["}"]} bar}'
>>> i0, i1, code, errMsg = findExpr(s)
>>> i0, i1, s[i0:i1], errMsg
(7, 23, '{"}": "${"}["}"]', None)
If the first match is syntactically invalid Python, i0 points to the
start of the match, i1 points to the parse error, code is None and
errMsg contains a message from the compiler.
>>> s = '{foo ${qwerty asdf zxcvbnm!!!} ${7} bar}'
>>> i0, i1, code, errMsg = findExpr(s)
>>> i0, i1, s[i0:i1], errMsg
(7, 18, 'qwerty asdf', 'invalid syntax')
>>> print code
None
If a second argument is given, start searching there.
>>> i0, i1, code, errMsg = findExpr(s, i1)
>>> i0, i1, s[i0:i1], errMsg
(33, 34, '7', None)
Raise ValueError if there are no further matches.
>>> i0, i1, code, errMsg = findExpr(s, i1)
Traceback (most recent call last):
...
ValueError: substring not found
In ambiguous cases, match the shortest valid expression. This is not
always ideal behavior.
>>> s = '{foo ${x or {} # return {} instead of None} bar}'
>>> i0, i1, code, errMsg = findExpr(s)
>>> i0, i1, s[i0:i1], errMsg
(7, 25, 'x or {} # return {', None)
This implementation must not be used with multi-line strings. It does
not adjust line number information in the returned code object, and it
does not take the line number into account when computing the offset
of a parse error.
'''
| Extracting a parenthesized Python expression from a string | I've been wondering about how hard it would be to write some Python code to search a string for the index of a substring of the form ${expr}, for example, where expr is meant to be a Python expression or something resembling one. Given such a thing, one could easily imagine going on to check the expression's syntax with compile(), evaluate it against a particular scope with eval(), and perhaps even substitute the result into the original string. People must do very similar things all the time.
I could imagine solving such a problem using a third-party parser generator [oof], or by hand-coding some sort of state machine [eek], or perhaps by convincing Python's own parser to do the heavy lifting somehow [hmm]. Maybe there's a third-party templating library somewhere that can be made to do exactly this. Maybe restricting the syntax of expr is likely to be a worthwhile compromise in terms of simplicity or execution time or cutting down on external dependencies -- for example, maybe all I really need is something that matches any expr that has balanced curly braces.
What's your sense?
Update:
Thanks very much for your responses so far! Looking back at what I wrote yesterday, I'm not sure I was sufficiently clear about what I'm asking. Template substitution is indeed an interesting problem, and probably much more useful to many more people than the expression extraction subproblem I'm wondering about, but I brought it up only as a simple example of how the answer to my question might be useful in real life. Some other potential applications might include passing the extracted expressions to a syntax highlighter; passing the result to a real Python parser and looking at or monkeying with the parse tree; or using the sequence of extracted expressions to build up a larger Python program, perhaps in conjunction with some information taken from the surrounding text.
The ${expr} syntax I mentioned is also intended as an example, and in fact I wonder if I shouldn't have used $(expr) as my example instead, because it makes the potential drawbacks of the obvious approach, along the lines of re.finditer(r'\$\{([^}]+)\}', s), a bit easier to see. Python expressions can (and often do) contain the ) (or }) character. It seems possible that handling any of those cases might be much more trouble than it's worth, but I'm not convinced of that yet. Please feel free to try to make this case!
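To make that concrete (my own illustration, not from the original post), here is the naive pattern stopping at the first closing brace, even one that is nested inside the expression:
import re
s = 'foo ${ {"}": 1}["}"] } bar'
m = re.search(r'\$\{([^}]+)\}', s)
print repr(m.group(1))  # prints ' {"' -- truncated at the first '}'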
Prior to posting this question, I spent quite a bit of time looking at Python template engines hoping that one might expose the sort of low-level functionality I'm asking about -- namely, something that can find expressions in a variety of contexts and tell me where they are rather than being limited to finding expressions embedded using a single hard-coded syntax, always evaluating them, and always substituting the results back into the original string. I haven't figured out how to use any of them to solve my problem yet, but I do very much appreciate the suggestions regarding more to look at (can't believe I missed that wonderful list on the wiki!). The API documentation for these things tends to be pretty high-level, and I'm not too familiar with the internals of any of them, so I'm sure I could use help looking at those and figuring out how to get them to do this sort of thing.
Thanks for your patience!
| [
"I think what you're asking about is being able to insert Python code into text files to be evaluated. There are several modules that already exist to provide this kind of functionality. You can check the Python.org Templating wiki page for a comprehensive list.\nSome google searching also turned up a few other modules you might be interested in:\n\ntexttemplate (part of py-templates project)\ntemplate module\n\nIf you're really looking just into writing this yourself for whatever reason, you can also dig into this Python cookbook solution Yet Another Python Templating Utility (YAPTU) :\n\n\"Templating\" (copying an input file to output, on the fly inserting Python \n expressions and statements) is a frequent need, and YAPTU is a small but \n complete Python module for that; expressions and statements are identified \n by arbitrary user-chosen regular-expressions.\n\nEDIT: Just for the heck of it, I whipped up a severely simplistic code sample for this. I'm sure it has bugs but it illustrates a simplified version of the concept at least: \n#!/usr/bin/env python\n\nimport sys\nimport re\n\nFILE = sys.argv[1]\n\nhandle = open(FILE)\nfcontent = handle.read()\nhandle.close()\n\nfor myexpr in re.finditer(r'\\${([^}]+)}', fcontent, re.M|re.S):\n text = myexpr.group(1)\n try:\n exec text\n except SyntaxError:\n print \"ERROR: unable to compile expression '%s'\" % (text)\n\nTested against the following text: \nThis is some random text, with embedded python like \n${print \"foo\"} and some bogus python like\n\n${any:thing}.\n\nAnd a multiline statement, just for kicks: \n\n${\ndef multiline_stmt(foo):\n print foo\n\nmultiline_stmt(\"ahem\")\n}\n\nMore text here.\n\nOutput: \n[user@host]$ ./exec_embedded_python.py test.txt\nfoo\nERROR: unable to compile expression 'any:thing'\nahem\n\n",
"I think your best bet is to match for all curly braced entries, and then check against Python itself whether or not it's valid Python, for which compiler would be helpful.\n",
"If you want to handle arbitrary expressions like {'{spam': 42}[\"spam}\"], you can't get away without full-blown parser.\n",
"After posting this, reading the replies so far (thanks everyone!), and thinking about the problem for a while, here is the best approach I've been able to come up with:\n\nFind the first ${.\nFind the next } after that.\nFeed whatever's in between to compile(). If it works, stick a fork in it and we're done.\nOtherwise, keep extending the string by looking for subsequent occurences of }. As soon as something compiles, return it.\nIf we run out of } without being able to compile anything, use the results of the last compilation attempt to give information about where the problem lies.\n\nAdvantages of this approach:\n\nThe code is quite short and easy to understand.\nIt's pretty efficient -- optimal, even, in the case where the expression contains no }. Worst-case seems like it wouldn't be too bad either.\nIt works on many expressions that contain ${ and/or }.\nNo external dependencies. No need to import anything, in fact. (This surprised me.)\n\nDisadvantages:\n\nSometimes it grabs too much or too little. See below for an example of the latter. I could imagine a scary example where you have two expressions and the first one is subtly wrong and the algorithm ends up mistakenly grabbing the whole thing and everything in between and returning it as valid, though I haven't been able to demonstrate this. Perhaps things are not so bad as I fear. I don't think misunderstandings can be avoided in general -- the problem definition is kind of slippery -- but it seems like it ought to be possible to do better, especially if one were willing to trade simplicity or execution time.\nI haven't done any benchmarks, but I could imagine there being faster alternatives, especially in cases that involve lots of } in the expression. That could be a big deal if one wanted to apply this technique to sizable blocks of Python code rather than just very short expressions.\n\nHere is my implementation.\ndef findExpr(s, i0=0, begin='${', end='}', compArgs=('<string>', 'eval')):\n assert '\\n' not in s, 'line numbers not implemented'\n i0 = s.index(begin, i0) + len(begin)\n i1 = s.index(end, i0)\n code = errMsg = None\n while code is None and errMsg is None:\n expr = s[i0:i1]\n try: code = compile(expr, *compArgs)\n except SyntaxError, e:\n i1 = s.find(end, i1 + 1)\n if i1 < 0: errMsg, i1 = e.msg, i0 + e.offset\n return i0, i1, code, errMsg\n\nAnd here's the docstring with some illustrations in doctest format, which I didn't insert into the middle of the function above only because it's long and I feel like the code is easier to read without it.\n'''\nSearch s for a (possibly invalid) Python expression bracketed by begin\nand end, which default to '${' and '}'. 
Return a 4-tuple.\n\n>>> s = 'foo ${a*b + c*d} bar'\n>>> i0, i1, code, errMsg = findExpr(s)\n>>> i0, i1, s[i0:i1], errMsg\n(6, 15, 'a*b + c*d', None)\n>>> ' '.join('%02x' % ord(byte) for byte in code.co_code)\n'65 00 00 65 01 00 14 65 02 00 65 03 00 14 17 53'\n>>> code.co_names\n('a', 'b', 'c', 'd')\n>>> eval(code, {'a': 1, 'b': 2, 'c': 3, 'd': 4})\n14\n>>> eval(code, {'a': 'a', 'b': 2, 'c': 'c', 'd': 4})\n'aacccc'\n>>> eval(code, {'a': None})\nTraceback (most recent call last):\n ...\nNameError: name 'b' is not defined\n\nExpressions containing start and/or end are allowed.\n\n>>> s = '{foo ${{\"}\": \"${\"}[\"}\"]} bar}'\n>>> i0, i1, code, errMsg = findExpr(s)\n>>> i0, i1, s[i0:i1], errMsg\n(7, 23, '{\"}\": \"${\"}[\"}\"]', None)\n\nIf the first match is syntactically invalid Python, i0 points to the\nstart of the match, i1 points to the parse error, code is None and\nerrMsg contains a message from the compiler.\n\n>>> s = '{foo ${qwerty asdf zxcvbnm!!!} ${7} bar}'\n>>> i0, i1, code, errMsg = findExpr(s)\n>>> i0, i1, s[i0:i1], errMsg\n(7, 18, 'qwerty asdf', 'invalid syntax')\n>>> print code\nNone\n\nIf a second argument is given, start searching there.\n\n>>> i0, i1, code, errMsg = findExpr(s, i1)\n>>> i0, i1, s[i0:i1], errMsg\n(33, 34, '7', None)\n\nRaise ValueError if there are no further matches.\n\n>>> i0, i1, code, errMsg = findExpr(s, i1)\nTraceback (most recent call last):\n ...\nValueError: substring not found\n\nIn ambiguous cases, match the shortest valid expression. This is not\nalways ideal behavior.\n\n>>> s = '{foo ${x or {} # return {} instead of None} bar}'\n>>> i0, i1, code, errMsg = findExpr(s)\n>>> i0, i1, s[i0:i1], errMsg\n(7, 25, 'x or {} # return {', None)\n\nThis implementation must not be used with multi-line strings. It does\nnot adjust line number information in the returned code object, and it\ndoes not take the line number into account when computing the offset\nof a parse error.\n\n'''\n\n"
] | [
2,
1,
1,
0
] | [] | [] | [
"parsing",
"python"
] | stackoverflow_0000207290_parsing_python.txt |
Q:
Is it possible to compile Python natively (beyond pyc byte code)?
I wonder if it is possible to create an executable module from a Python script. I need the best possible performance along with the flexibility of a Python script, without having to run inside the Python environment. I would use this code to load user modules on demand to customize my application.
A:
There's Pyrex, which compiles Python-like source to Python extension modules.
There's RPython, which allows you to compile Python (with some restrictions) to various backends like C, LLVM, and .NET.
There's also Shed Skin, which translates Python to C++, but I can't say if it's any good.
PyPy implements a JIT compiler that optimizes a running program by translating the pieces being executed into machine code; if you can target the PyPy interpreter, that might be a feasible path.
The same author who is working on the JIT in PyPy previously wrote Psyco, which optimizes Python code running under the regular CPython interpreter.
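Of those, Psyco is the least invasive to try: enabling it is typically just a guarded import at the top of the program (a sketch, assuming the psyco package is installed):
try:
    import psyco
    psyco.full()  # JIT-specialize as much of the program as possible
except ImportError:
    pass          # psyco not available; run under plain CPython
psyco.profile() is a lighter-weight alternative that only compiles the functions that turn out to be hot.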
A:
You can use something like py2exe to turn your Python script into a Windows executable, or Freeze for a Linux binary.
See: How can I create a directly-executable cross-platform GUI app using Python?
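For py2exe specifically, the build is driven by a small distutils setup script along these lines ('myscript.py' is a placeholder for your entry point); running python setup.py py2exe then produces a dist/ directory containing the executable:
from distutils.core import setup
import py2exe  # importing this registers the 'py2exe' command with distutils
setup(console=['myscript.py'])  # use windows=[...] instead for a GUI app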
A:
I've had a lot of success using Cython, which is based on and extends pyrex:
Cython is a language that makes
writing C extensions for the Python
language as easy as Python itself.
Cython is based on the well-known
Pyrex, but supports more cutting edge
functionality and optimizations.
The Cython language is very close to
the Python language, but Cython
additionally supports calling C
functions and declaring C types on
variables and class attributes. This
allows the compiler to generate very
efficient C code from Cython code.
This makes Cython the ideal language
for wrapping external C libraries,
and for fast C modules that speed up
the execution of Python code.
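The usual workflow, sketched here with a placeholder module name, is to put your code in a .pyx file (optionally adding cdef type declarations for speed) and build it with a distutils setup script that uses Cython's build_ext command:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
    cmdclass={'build_ext': build_ext},  # let Cython handle .pyx files
    ext_modules=[Extension('fastmod', ['fastmod.pyx'])],
)
After python setup.py build_ext --inplace, the resulting fastmod module imports like any other Python module.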
A:
I think you can use Jython to compile Python to Java bytecode, and then compile that with GCJ.
| Is it possible to compile Python natively (beyond pyc byte code)? | I wonder if it is possible to create an executable module from a Python script. I need the best possible performance along with the flexibility of a Python script, without having to run inside the Python environment. I would use this code to load user modules on demand to customize my application.
| [
"\nThere's pyrex that compiles python like source to python extension modules \nrpython which allows you to compile python with some restrictions to various backends like C, LLVM, .Net etc. \nThere's also shed-skin which translates python to C++, but I can't say if it's any good. \nPyPy implements a JIT compiler which attempts to optimize runtime by translating pieces of what's running at runtime to machine code, if you write for the PyPy interpreter that might be a feasible path. \nThe same author that is working on JIT in PyPy wrote psyco previously which optimizes python in the CPython interpreter.\n\n",
"You can use something like py2exe to compile your python script into an exe, or Freeze for a linux binary.\nsee: How can I create a directly-executable cross-platform GUI app using Python?\n",
"I've had a lot of success using Cython, which is based on and extends pyrex:\n\nCython is a language that makes\n writing C extensions for the Python\n language as easy as Python itself.\n Cython is based on the well-known\n Pyrex, but supports more cutting edge\n functionality and optimizations.\nThe Cython language is very close to\n the Python language, but Cython\n additionally supports calling C\n functions and declaring C types on\n variables and class attributes. This\n allows the compiler to generate very\n efficient C code from Cython code.\nThis makes Cython the ideal language\n for wrapping for external C libraries,\n and for fast C modules that speed up\n the execution of Python code.\n\n",
"I think you can use jython to compile python to Java bytecode, and then compile that with GCJ.\n"
] | [
14,
7,
2,
1
] | [] | [] | [
"compilation",
"module",
"python"
] | stackoverflow_0000205062_compilation_module_python.txt |
Q:
arguments to cryptographic functions
I'm a bit confused that the argument to crypto functions is a string. Should I simply wrap non-string arguments with str() e.g.
hashlib.sha256(str(user_id)+str(expiry_time))
hmac.new(str(random.getrandbits(256)))
(ignore for the moment that random.getrandbits() might not be cryptographically good).
edit: I realise that the hmac example is silly because I'm not storing the key anywhere!
A:
Well, usually hash functions (and cryptographic functions generally) work on bytes. Python strings are basically byte strings. If you want to compute the hash of some object, you have to convert it to a string representation. Just make sure to apply the same operation later if you want to check if the hash is correct, and make sure that your string representation doesn't contain any changing data that you don't want to be checked.
Edit: Due to popular request, a short reminder that Python unicode strings don't contain bytes but unicode code points. Internally, each unicode code point takes up multiple bytes (2 or 4, depending on how the Python interpreter was compiled). Python byte strings only contain bytes. So Python strings (type str) are the type most similar to an array of bytes.
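Concretely, in Python 2 (a sketch matching the question's code): build one unambiguous byte string from your values, and encode unicode text explicitly before hashing:
import hashlib
user_id, expiry_time = 42, 1234567890
# A separator keeps e.g. (1, 23) and (12, 3) from producing the same input,
# which plain str(a) + str(b) concatenation would allow.
token = hashlib.sha256('%d:%d' % (user_id, expiry_time)).hexdigest()
text = u'caf\xe9'
digest = hashlib.sha256(text.encode('utf-8')).hexdigest()  # hash bytes, not unicode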
A:
You can.
However, for the HMAC, you actually want to store the key somewhere. Without the key, there is no way for you to verify the HMAC value later. :-)
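A sketch of that round trip: generate the key once, persist it somewhere safe, and reuse the same stored key when verifying (os.urandom stands in here for whatever key source you actually use):
import hashlib
import hmac
import os
key = os.urandom(32)  # generate once and store it; a new key invalidates old tags
def sign(message):
    return hmac.new(key, message, hashlib.sha256).hexdigest()
token = sign('user_id=42&expires=1234567890')
# Later, with the same stored key:
assert sign('user_id=42&expires=1234567890') == token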
| arguments to cryptographic functions | I'm a bit confused that the argument to crypto functions is a string. Should I simply wrap non-string arguments with str() e.g.
hashlib.sha256(str(user_id)+str(expiry_time))
hmac.new(str(random.getrandbits(256)))
(ignore for the moment that random.getrandbits() might not be cryptographically good).
edit: I realise that the hmac example is silly because I'm not storing the key anywhere!
| [
"Well, usually hash-functions (and cryptographic functions generally) work on bytes. The Python strings are basically byte-strings. If you want to compute the hash of some object you have to convert it to a string representation. Just make sure to apply the same operation later if you want to check if the hash is correct. And make sure that your string representation doesn't contain any changing data that you don't want to be checked.\nEdit: Due to popular request a short reminder that Python unicode strings don't contain bytes but unicode code points. Each unicode code point contains multiple bytes (2 or 4, depending on how the Python interpreter was compiled). Python strings only contain bytes. So Python strings (type str) are the type most similar to an array of bytes.\n",
"You can.\nHowever, for the HMAC, you actually want to store the key somewhere. Without the key, there is no way for you to verify the hash value later. :-)\n"
] | [
6,
1
] | [
"Oh and Sha256 isn't really an industrial strength cryptographic function (although unfortunately it's used quite commonly on many sites). It's not a real way to protect passwords or other critical data, but more than good enough for generating temporal tokens\nEdit: As mentioned Sha256 needs at least some salt. Without salt, Sha256 has a low barrier to being cracked with a dictionary attack (time-wise) and there are plenty of Rainbow tables to use as well. Personally I'd not use anything less than Blowfish for passwords (but that's because I'm paranoid) \n"
] | [
-1
] | [
"cryptography",
"python"
] | stackoverflow_0000211483_cryptography_python.txt |