GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 5,852,526 | 0 | 0 | 0 | 0 | 3 | false | 14 | 2011-05-01T20:23:00.000 | 1 | 10 | 0 | best algorithm for finding distance for all pairs where edges' weight is 1 | 5,851,154 | 0.019997 | python,algorithm,dijkstra,shortest-path,graph-algorithm | I would refer you to the following paper: "Sub-cubic Cost Algorithms for the All Pairs Shortest Path Problem" by Tadao Takaoka. There, a sequential algorithm with sub-cubic complexity for graphs with unit weights (actually max edge weight = O(n^0.624)) is available. | As the title says, I'm trying to implement an algorithm that finds the distances between all pairs of nodes in a given graph. But there is more: (Things that might help you)
The graph is unweighted, meaning that all the edges can be considered as having a weight of 1.
|E| <= 4*|V|
The graph is pretty big (at most ~144 depth)
The graph is directed
There might be cycles
I'm writing my code in python (please if you reference algorithms, code would be nice too :))
I know about Johnson's algorithm, Floyd-Warshall, and Dijkstra for all pairs. But these algorithms are good when the graph has weights.
I was wondering if there is a better algorithm for my case, because those algorithms are intended for weighted graphs.
Thanks! | 0 | 1 | 6,778 |
0 | 6,589,501 | 0 | 0 | 0 | 0 | 3 | false | 14 | 2011-05-01T20:23:00.000 | 1 | 10 | 0 | best algorithm for finding distance for all pairs where edges' weight is 1 | 5,851,154 | 0.019997 | python,algorithm,dijkstra,shortest-path,graph-algorithm | I'm assuming the graph is dynamic; otherwise, there's no reason not to use Floyd-Warshall to precompute all-pairs distances on such a small graph ;)
Suppose you have a grid of points (x, y) with 0 <= x <= n, 0 <= y <= n. Upon removing an edge E: (i, j) <-> (i+1, j), you partition row j into sets A = { (0, j), ..., (i, j) }, B = { (i+1, j), ..., (n, j) } such that points a in A, b in B are forced to route around E - so you need only recompute distance for all pairs (a, b) in (A, B).
Maybe you can precompute Floyd-Warshall, then, and use something like this to cut recomputation down to O(n^2) (or so) per graph modification... | As the title says, I'm trying to implement an algorithm that finds the distances between all pairs of nodes in a given graph. But there is more: (Things that might help you)
The graph is unweighted, meaning that all the edges can be considered as having a weight of 1.
|E| <= 4*|V|
The graph is pretty big (at most ~144 depth)
The graph is directed
There might be cycles
I'm writing my code in python (please if you reference algorithms, code would be nice too :))
I know about Johnson's algorithm, Floyd-Warshall, and Dijkstra for all pairs. But these algorithms are good when the graph has weights.
I was wondering if there is a better algorithm for my case, because those algorithms are intended for weighted graphs.
Thanks! | 0 | 1 | 6,778 |
0 | 5,851,436 | 0 | 0 | 0 | 0 | 3 | false | 14 | 2011-05-01T20:23:00.000 | 9 | 10 | 0 | best algorithm for finding distance for all pairs where edges' weight is 1 | 5,851,154 | 1 | python,algorithm,dijkstra,shortest-path,graph-algorithm | Run a breadth-first search from each node. Total time: O(|V| |E|) = O(|V|^2), which is optimal (a sketch follows this row). | As the title says, I'm trying to implement an algorithm that finds the distances between all pairs of nodes in a given graph. But there is more: (Things that might help you)
The graph is unweighted, meaning that all the edges can be considered as having a weight of 1.
|E| <= 4*|V|
The graph is pretty big (at most ~144 depth)
The graph is directed
There might be cycles
I'm writing my code in python (please if you reference algorithms, code would be nice too :))
I know about Johnson's algorithm, Floyd-Warshall, and Dijkstra for all pairs. But these algorithms are good when the graph has weights.
I was wondering if there is a better algorithm for my case, because those algorithms are intended for weighted graphs.
Thanks! | 0 | 1 | 6,778 |
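A minimal sketch of the BFS-per-node approach from the answer above, using only the stdlib; the adjacency-list input format is an assumption:

```python
from collections import deque

def all_pairs_distances(graph):
    """graph: dict mapping node -> iterable of successors (directed, unit weights)."""
    dist = {}
    for source in graph:
        d = {source: 0}
        queue = deque([source])
        while queue:                      # plain BFS from `source`
            u = queue.popleft()
            for v in graph[u]:
                if v not in d:            # first visit gives the shortest distance
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d                  # unreachable nodes are simply absent
    return dist

g = {1: [2], 2: [3], 3: [1], 4: [1]}      # directed, with a cycle
assert all_pairs_distances(g)[4][3] == 3
```

With |E| <= 4|V| as stated in the question, this runs in O(|V|^2) total.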
1 | 5,859,924 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2011-05-02T14:34:00.000 | 0 | 3 | 0 | IplImage 'None' error on CaptureFromFile() - Python 2.7.1 and OpenCV 2.2 WinXP | 5,858,446 | 0 | opencv,python-2.7,iplimage | This must be an issue with the default codecs. OpenCV uses brute-force methods to open video files or capture from a camera: it goes by trial and error through all the sources/codecs/APIs it can find, in some reasonable order (at least version 1.1 did so).
That means that on n different systems (or days) you may get n different ways of accessing the same video. The order of multiple webcams, for instance, is also non-deterministic and may depend on plugging order or butterflies.
Find out what your laptop uses, (re)install that on all the systems and retry.
Also, in the C version, you can look at the capture's properties:
look for cvGetCaptureProperty and cvSetCaptureProperty, where you might be able to hint at the format.
[EDIT]
Just looked it up in the docs; these functions are also available in Python. Take a look, it should help. | I am running Python 2.7.1 and OpenCV 2.2 without problems on my WinXP laptop and wrote a tracking program that is working without a glitch. But for some strange reason I cannot get the same program to run on any other computer where I tried to install OpenCV and Python (using the same binaries or appropriate 64 bit binaries). On those computers OpenCV seems to be correctly installed (although I have only tested CaptureFromCamera() with the webcam of the laptop), but CaptureFromFile() returns 'None' and gives "error: Array should be CvMat or IplImage" after a QueryFrame, for example.
This simple code:
import cv
videofile = cv.CaptureFromFile('a.avi')
frame = cv.QueryFrame(videofile)
print type(videofile)
print type(frame)
returns:
type 'cv.Capture'
type 'NoneType'
OpenCV and Python are in the windows PATH...
I have moved the OpenCV site-packages content back and forth to the Python27 Lib\Site-packages folder.
I tried different avi files (just in case it was some CODEC problem). This AVI uses MJPEG encoding (and GSpot reports that ffdshow Video Decoder is used for reading).
Images work fine (I think): the simple convert code:
im = cv.LoadImageM("c:\tests\colormap3.tif")
cv.SaveImage("c:\tests\colormap3-out.png", im)
opens, converts and saves the new image...
I have tested with AVI files in different folders, using "c:\", "c:/", "c:\\" and "c://".
I am lost here... Anyone has any idea of what stupid and noob mistake may be the cause of this? Thanks | 0 | 1 | 1,705 |
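For what it's worth, the newer cv2 bindings give more feedback than cv.CaptureFromFile does, which helps diagnose codec problems like this one (a sketch; the property name assumes a reasonably recent OpenCV):

```python
import cv2

cap = cv2.VideoCapture("a.avi")
if not cap.isOpened():
    raise IOError("could not open video, most likely a missing codec")

fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))      # which codec the backend picked
print("codec:", "".join(chr((fourcc >> 8 * i) & 0xFF) for i in range(4)))

ok, frame = cap.read()                          # ok is False instead of a silent None
print(ok, None if frame is None else frame.shape)
```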
0 | 5,891,605 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2011-05-04T23:13:00.000 | 2 | 4 | 0 | Acquiring basic skills working with visualizing/analyzing large data sets | 5,890,935 | 0.099668 | python,dataset,visualization,data-visualization | If you are looking for visualization rather than data mining and analysis, The Visual Display of Quantitative Information by Edward Tufte is considered one of the best books in the field. | I'm looking for a way to learn to be comfortable with large data sets. I'm a university student, so everything I do is of "nice" size and complexity. Working on a research project with a professor this semester, and I've had to visualize relationships between a somewhat large (in my experience) data set. It was a 15 MB CSV file.
I wrote most of my data wrangling in Python, visualized using GNUPlot.
Are there any accessible books or websites on the subject out there? Bonus points for using Python, more bonus points for a more "basic" visualization system than relying on gnuplot. Cairo or something, I suppose.
Looking for something that takes me from data mining, to processing, to visualization.
EDIT: I'm more looking for something that will teach me the "big ideas". I can write the code myself, but looking for techniques people use to deal with large data sets. I mean, my 15 MB is small enough where I can put everything I would ever need into memory and just start crunching. What do people do to visualize 5 GB data sets? | 0 | 1 | 2,645 |
0 | 5,908,938 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2011-05-04T23:13:00.000 | 1 | 4 | 0 | Acquiring basic skills working with visualizing/analyzing large data sets | 5,890,935 | 0.049958 | python,dataset,visualization,data-visualization | I like the book Data Analysis with Open Source Tools by Janert. It is a pretty broad survey of data analysis methods, focusing on how to understand the system that produced the data, rather than on sophisticated statistical methods. One caveat: while the mathematics used isn't especially advanced, I do think you will need to be comfortable with mathematical arguments to gain much from the book. | I'm looking for a way to learn to be comfortable with large data sets. I'm a university student, so everything I do is of "nice" size and complexity. Working on a research project with a professor this semester, and I've had to visualize relationships between a somewhat large (in my experience) data set. It was a 15 MB CSV file.
I wrote most of my data wrangling in Python, visualized using GNUPlot.
Are there any accessible books or websites on the subject out there? Bonus points for using Python, more bonus points for a more "basic" visualization system than relying on gnuplot. Cairo or something, I suppose.
Looking for something that takes me from data mining, to processing, to visualization.
EDIT: I'm more looking for something that will teach me the "big ideas". I can write the code myself, but looking for techniques people use to deal with large data sets. I mean, my 15 MB is small enough where I can put everything I would ever need into memory and just start crunching. What do people do to visualize 5 GB data sets? | 0 | 1 | 2,645 |
0 | 67,419,895 | 0 | 0 | 0 | 0 | 1 | false | 51 | 2011-05-08T11:36:00.000 | 0 | 7 | 0 | How do I remove all zero elements from a NumPy array? | 5,927,180 | 0 | python,arrays,numpy,filtering | [i for i in Array if i != 0.0] if the numbers are float
or [i for i in Array if i != 0] if the numbers are int (a vectorized alternative is sketched after this row). | I have a rank-1 numpy.array of which I want to make a boxplot. However, I want to exclude all values equal to zero in the array. Currently, I solved this by looping over the array and copying each value to a new array if not equal to zero. However, as the array consists of 86 000 000 values and I have to do this multiple times, this takes a lot of patience.
Is there a more intelligent way to do this? | 0 | 1 | 149,688 |
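The vectorized alternative mentioned above: a boolean mask stays inside NumPy, which matters at 86 million elements.

```python
import numpy as np

a = np.array([1.5, 0.0, 2.0, 0.0, 3.0])
nonzero = a[a != 0]          # copies only the non-zero entries
# equivalent: a[np.nonzero(a)]
```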
0 | 5,950,881 | 0 | 0 | 0 | 1 | 4 | false | 0 | 2011-05-10T13:02:00.000 | 0 | 4 | 0 | Insert performance with Cassandra | 5,950,427 | 0 | python,multithreading,insert,cassandra | It's possible you're hitting the python GIL but more likely you're doing something wrong.
For instance, putting 2M rows in a single batch would be Doing It Wrong. | sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database locally, on one node. Each row has 10 columns and I insert those into only one column family.
With one thread, that operation took around 3 min. But I would like to do the same operation with 2 million rows while keeping a good time, so I tried with 2 threads to insert 2 million rows, expecting a similar result of around 3-4 min. But I got a result like 7 min... twice the first result. As I checked on different forums, multithreading is recommended to improve performance.
That is why I am asking this question: is it useful to use multithreading to insert data on a local node (client and server are on the same computer), in only one column family?
Some information:
- I use pycassa
- I have separated the commitlog directory and the data directory onto different disks
- I use batch insert for each thread
- Consistency Level: ONE
- Replication factor: 1 | 0 | 1 | 1,686 |
0 | 5,956,519 | 0 | 0 | 0 | 1 | 4 | false | 0 | 2011-05-10T13:02:00.000 | 0 | 4 | 0 | Insert performance with Cassandra | 5,950,427 | 0 | python,multithreading,insert,cassandra | Try running multiple clients in multiple processes, NOT threads.
Then experiment with different insert sizes.
1M inserts in 3 mins is about 5500 inserts/sec, which is pretty good for a single local client. On a multi-core machine you should be able to get several times this amount provided that you use multiple clients, probably inserting small batches of rows, or individual rows. | sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database locally, on one node. Each row has 10 columns and I insert those into only one column family.
With one thread, that operation took around 3 min. But I would like to do the same operation with 2 million rows while keeping a good time, so I tried with 2 threads to insert 2 million rows, expecting a similar result of around 3-4 min. But I got a result like 7 min... twice the first result. As I checked on different forums, multithreading is recommended to improve performance.
That is why I am asking this question: is it useful to use multithreading to insert data on a local node (client and server are on the same computer), in only one column family?
Some information:
- I use pycassa
- I have separated the commitlog directory and the data directory onto different disks
- I use batch insert for each thread
- Consistency Level: ONE
- Replication factor: 1 | 0 | 1 | 1,686 |
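A rough sketch of the multiple-processes suggestion from the answer above, assuming pycassa and hypothetical keyspace/column-family names; each worker opens its own connection pool and sends modest batches rather than one huge one:

```python
import multiprocessing
import pycassa

def insert_chunk(rows):
    # one connection pool per process; connections don't survive fork well
    pool = pycassa.ConnectionPool('MyKeyspace')        # hypothetical keyspace
    cf = pycassa.ColumnFamily(pool, 'MyCF')            # hypothetical column family
    batch = cf.batch(queue_size=200)                   # small batches, auto-flushed
    for key, columns in rows:
        batch.insert(key, columns)
    batch.send()                                       # flush the remainder

rows = [('row%d' % i, {'c%d' % c: 'v' for c in range(10)}) for i in range(1000000)]
chunks = [rows[i::4] for i in range(4)]                # 4 worker processes
procs = [multiprocessing.Process(target=insert_chunk, args=(c,)) for c in chunks]
for p in procs:
    p.start()
for p in procs:
    p.join()
```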
0 | 6,078,703 | 0 | 0 | 0 | 1 | 4 | false | 0 | 2011-05-10T13:02:00.000 | 0 | 4 | 0 | Insert performance with Cassandra | 5,950,427 | 0 | python,multithreading,insert,cassandra | You might consider Redis. Its single-node throughput is supposed to be faster. It's different from Cassandra though, so whether or not it's an appropriate option would depend on your use case. | sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database locally, on one node. Each row has 10 columns and I insert those into only one column family.
With one thread, that operation took around 3 min. But I would like to do the same operation with 2 million rows while keeping a good time, so I tried with 2 threads to insert 2 million rows, expecting a similar result of around 3-4 min. But I got a result like 7 min... twice the first result. As I checked on different forums, multithreading is recommended to improve performance.
That is why I am asking this question: is it useful to use multithreading to insert data on a local node (client and server are on the same computer), in only one column family?
Some information:
- I use pycassa
- I have separated the commitlog directory and the data directory onto different disks
- I use batch insert for each thread
- Consistency Level: ONE
- Replication factor: 1 | 0 | 1 | 1,686 |
0 | 8,491,215 | 0 | 0 | 0 | 1 | 4 | false | 0 | 2011-05-10T13:02:00.000 | 0 | 4 | 0 | Insert performance with Cassandra | 5,950,427 | 0 | python,multithreading,insert,cassandra | The time taken doubled because you inserted twice as much data. Is it possible that you are I/O bound? | sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database locally, on one node. Each row has 10 columns and I insert those into only one column family.
With one thread, that operation took around 3 min. But I would like to do the same operation with 2 million rows while keeping a good time, so I tried with 2 threads to insert 2 million rows, expecting a similar result of around 3-4 min. But I got a result like 7 min... twice the first result. As I checked on different forums, multithreading is recommended to improve performance.
That is why I am asking this question: is it useful to use multithreading to insert data on a local node (client and server are on the same computer), in only one column family?
Some information:
- I use pycassa
- I have separated the commitlog directory and the data directory onto different disks
- I use batch insert for each thread
- Consistency Level: ONE
- Replication factor: 1 | 0 | 1 | 1,686 |
0 | 5,987,204 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2011-05-13T04:19:00.000 | 4 | 3 | 1 | does C has anything like python pickle for object serialisation? | 5,987,185 | 1.2 | python,c | An emphatic NO on that one, I'm afraid. C has basic file I/O. Any structuring of data is up to you. Make up a format, dump it out, read it in.
There may be libraries which can do this, but by itself, no, C doesn't do this. | I'm wondering if C has anything similar to the python pickle module that can dump some structured data on disk and then load it back later.
I know that I can write my structure byte by byte to a file on disk and then read it back later, but with this approach there's still quite some work to do. For example, if I have a singly linked list structure, I can traverse the list from head to tail and write each node's data to disk. When I read the list back from the on-disk file, I have to reconstruct all links
between each pair of nodes.
Please advise if there's an easier way.
Thanks heaps! | 0 | 1 | 207 |
0 | 5,987,230 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2011-05-13T04:19:00.000 | 2 | 3 | 1 | does C has anything like python pickle for object serialisation? | 5,987,185 | 0.132549 | python,c | The C library functions fread(3) and fwrite(3) will read and write 'elements of data', but that's a pretty fanciful way of saying "the C library will do some multiplication and pread(2) or pwrite(2) calls behind the scenes to fill your array".
You can use them on structs, but it is probably not a good idea:
holes in the structs get written and read
you're baking in the endianness of your integers
While you can make your own format for writing objects, you might want to see if your application could use SQLite3 for on-disk storage of objects. It's well-debugged, and if your application fits its abilities well, it might be just the ticket. (And a lot easier than writing all your own formatting code.) | I'm wondering if C has anything similar to the python pickle module that can dump some structured data on disk and then load it back later.
I know that I can write my structure byte by byte to a file on disk and then read it back later, but with this approach there's still quite some work to do. For example, if I have a singly linked list structure, I can traverse the list from head to tail and write each node's data to disk. When I read the list back from the on-disk file, I have to reconstruct all links
between each pair of nodes.
Please advise if there's an easier way.
Thanks heaps! | 0 | 1 | 207 |
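The padding/endianness concern from the answer above can be made concrete even from Python: the stdlib struct module reads and writes C-struct-style records with an explicit, portable layout (a sketch; the record format is made up):

```python
import struct

record = struct.Struct('<id')           # '<' = little-endian, no padding; int32 + double

blob = record.pack(42, 3.14)            # a fixed 12-byte on-disk representation
node_id, weight = record.unpack(blob)   # reads back identically on any platform
assert (node_id, weight) == (42, 3.14)
```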
0 | 6,022,144 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2011-05-16T18:13:00.000 | 2 | 1 | 0 | Purging numpy.memmap | 6,021,550 | 1.2 | python,numpy,mmap,memory-mapped-files,large-data | If you run "pmap SCRIPT-PID", the "real" memory shows as "[ anon ]" blocks, and all memory-mapped files show up with the file name in the last column.
Purging the pages is possible at C level, if you manage to get ahold of the pointer to the beginning of the mapping and call madvise(ptr, length, MADV_DONTNEED) on it, but it's going to be kludgy (a rough sketch follows this row).
In other words, I'd like the reference to the memmap instance to remain valid, but all physical memory that's being used to cache the on-disk data to be uncommitted. Any views onto to the memmap array must also remain valid.
I am hoping to use this as a diagnostic tool, to help separate "real" memory requirements of a script from "transient" requirements induced by the use of memmap.
I'm using Python 2.7 on RedHat. | 0 | 1 | 487 |
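A rough, Linux-only sketch of the madvise(MADV_DONTNEED) route from the answer above, via ctypes. The constant's value and the page arithmetic are assumptions to check against <sys/mman.h>; on a read-only file mapping this only drops the clean cached pages, so the memmap and its views stay valid and the pages fault back in on the next access:

```python
import ctypes
import ctypes.util
import mmap

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
MADV_DONTNEED = 4                        # assumed Linux value; verify on your system

def purge(memmapped_array):
    addr = memmapped_array.ctypes.data           # start of the array's buffer
    start = addr - addr % mmap.PAGESIZE          # madvise needs page alignment
    length = addr + memmapped_array.nbytes - start
    if libc.madvise(ctypes.c_void_p(start), ctypes.c_size_t(length), MADV_DONTNEED):
        raise OSError(ctypes.get_errno(), "madvise failed")
```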
0 | 6,046,352 | 0 | 1 | 0 | 0 | 1 | false | 10 | 2011-05-18T07:42:00.000 | 3 | 2 | 0 | Python memory serialisation | 6,041,395 | 0.291313 | python,class,serialization,memory-management,pickle | Do you construct your tree once and then use it without modifying it further? In that case you might want to consider using separate structures for the dynamic construction and the static usage.
Dicts and objects are very good for dynamic modification, but they are not very space efficient in a read-only scenario. I don't know exactly what you are using your suffix tree for, but you could let each node be represented by a 2-tuple of a sorted array.array('c') and an equally long tuple of subnodes (a tuple instead of a vector to avoid overallocation). You traverse the tree using the bisect-module for lookup in the array. The index of a character in the array will correspond to a subnode in the subnode-tuple. This way you avoid dicts, objects and vector.
You could do something similar during the construction process, perhaps using a subnode-vector instead of subnode-tuple. But this will of course make construction slower, since inserting new nodes in a sorted vector is O(N). | I was wondering whether someone might know the answer to the following.
I'm using Python to build a character-based suffix tree. There are over 11 million nodes in the tree, which fits into approximately 3GB of memory. This was down from 7GB by using __slots__ rather than the Dict method.
When I serialise the tree (using the highest protocol) the resulting file is more than a hundred times smaller.
When I load the pickled file back in, it again consumes 3GB of memory. Where does this extra overhead come from, is it something to do with Python's handling of memory references to class instances?
Update
Thank you larsmans and Gurgeh for your very helpful explanations and advice. I'm using the tree as part of an information retrieval interface over a corpus of texts.
I originally stored the children (max of 30) as a Numpy array, then tried the hardware version (ctypes.py_object*30), the Python array (ArrayType), as well as the dictionary and Set types.
Lists seemed to do better (using guppy to profile the memory, and __slots__['variable',...]), but I'm still trying to squash it down a bit more if I can. The only problem I had with arrays is having to specify their size in advance, which causes a bit of redundancy in terms of nodes with only one child, and I have quite a lot of them. ;-)
After the tree is constructed I intend to convert it to a probabilistic tree with a second pass, but may be I can do this as the tree is constructed. As construction time is not too important in my case, the array.array() sounds like something that would be useful to try, thanks for the tip, really appreciated.
I'll let you know how it goes. | 0 | 1 | 415 |
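A tiny sketch of the node layout proposed in the answer above: a sorted sequence of edge characters plus a parallel tuple of children, looked up with bisect.

```python
from bisect import bisect_left

def child(node, ch):
    keys, children = node                # node = (sorted chars, matching children)
    i = bisect_left(keys, ch)
    if i < len(keys) and keys[i] == ch:
        return children[i]
    return None

leaf = ((), ())
root = (('a', 'c'), (leaf, leaf))
assert child(root, 'c') is leaf and child(root, 'b') is None
```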
0 | 6,042,505 | 0 | 0 | 0 | 0 | 1 | true | 13 | 2011-05-18T09:09:00.000 | 7 | 1 | 0 | numpy: inverting an upper triangular matrix | 6,042,308 | 1.2 | python,matrix,numpy,scipy,matrix-inverse | There really isn't an inversion routine, per se. scipy.linalg.solve is the canonical way of solving a matrix-vector or matrix-matrix equation, and it can be given explicit information about the structure of the matrix which it will use to choose the correct routine (probably the equivalent of BLAS3 dtrsm in this case).
LAPACK does include dtrtri for this purpose, and scipy.linalg does expose a raw C lapack interface. If the inverse matrix is really what you want, then you could try using that.
The matrix is stored as 2D numpy array with zero sub-diagonal elements, and the result should also be stored as a 2D array.
edit The best I've found so far is scipy.linalg.solve_triangular(A, np.identity(n)). Is that it? | 0 | 1 | 6,594 |
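A quick check of the solve_triangular route from the edit; newer scipy versions also expose the raw LAPACK routine (as scipy.linalg.lapack.dtrtri), though availability depends on the scipy version:

```python
import numpy as np
from scipy.linalg import solve_triangular

n = 4
A = np.triu(np.random.rand(n, n)) + n * np.eye(n)   # well-conditioned upper triangular

A_inv = solve_triangular(A, np.identity(n), lower=False)
assert np.allclose(np.dot(A, A_inv), np.eye(n))
```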
0 | 40,091,714 | 0 | 0 | 0 | 0 | 1 | false | 711 | 2011-05-21T10:01:00.000 | 2 | 12 | 0 | Dump a NumPy array into a csv file | 6,081,008 | 0.033321 | python,arrays,csv,numpy | If you want to save your numpy array (e.g. your_array = np.array([[1,2],[3,4]])) to one cell, you could convert it first with your_array.tolist().
Then save it the normal way to one cell, with delimiter=';'
and the cell in the csv-file will look like this: [[1, 2], [3, 4]]
Then you could restore your array like this (using the stdlib ast module):
your_array = np.array(ast.literal_eval(cell_string)) | Is there a way to dump a NumPy array into a CSV file? I have a 2D NumPy array and need to dump it in human-readable format. | 0 | 1 | 1,040,617 |
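For the more common case of dumping a whole 2D array as CSV (rather than packing it into one cell), numpy's own savetxt/loadtxt pair does it directly:

```python
import numpy as np

a = np.asarray([[1, 2, 3], [4, 5, 6]])
np.savetxt("out.csv", a, delimiter=",", fmt="%g")   # human-readable CSV
b = np.loadtxt("out.csv", delimiter=",")
assert np.allclose(a, b)
```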
0 | 6,086,737 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-05-22T07:07:00.000 | 1 | 1 | 0 | Backward propagation - Character Recognizing - Seeking an example | 6,086,560 | 0.197375 | java,php,python,neural-network | Support Vector Machines tend to work much better for character recognition, with error rates around 2% generally reported. I suggest that as an alternative if you're just using the character recognition as a module in a larger project (a sketch follows this row). | I am looking for an example of character recognition (just one character, for example X or A)
using an MLP with backward propagation.
I want a simple example, not the entire library. Language does not matter; preferably one of Java, Python, or PHP. | 0 | 1 | 162 |
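A hedged sketch of the SVM alternative from the answer above, using scikit-learn's bundled digits dataset as a stand-in for single-character images (assumes scikit-learn is installed):

```python
from sklearn import datasets, svm

digits = datasets.load_digits()
n = len(digits.images)
X = digits.images.reshape(n, -1)               # flatten 8x8 images to 64 features

clf = svm.SVC(gamma=0.001)
clf.fit(X[:n // 2], digits.target[:n // 2])    # train on the first half
accuracy = clf.score(X[n // 2:], digits.target[n // 2:])   # held-out accuracy
```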
0 | 6,090,407 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2011-05-22T19:43:00.000 | 1 | 2 | 0 | python numpy array slicing | 6,090,288 | 0.099668 | python,arrays,indexing,numpy,slice | I don't think there is a better solution, unless you have some extra information about what's in those arrays. If they're just random numbers, you have to do (n^2)/2 calculations, and your algorithm is reflecting that, running in O((n^2)/2). | I have a 2d array, A, that is 6x6. I would like to take the first 2 values (index 0,0 and 0,1), take the average of the two, and insert the average into a new array that is half the column size of A (6x3), at index 0,0. Then I would get the next two values of A, take their average, and put it into the new array at index 0,1.
The only way I know how to do this is using a double for loop, but for performance purposes (I will be using arrays as big as 3000x3000) I know there is a better solution out there! Thanks! | 0 | 1 | 1,976 |
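For what it's worth, NumPy can express the pairwise column averaging without a Python-level double loop by giving each pair its own axis (a sketch; assumes the column count is even):

```python
import numpy as np

A = np.arange(36.0).reshape(6, 6)
B = A.reshape(A.shape[0], -1, 2).mean(axis=2)   # shape (6, 3): mean of each column pair
assert B[0, 0] == (A[0, 0] + A[0, 1]) / 2
```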
0 | 6,127,643 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2011-05-25T15:53:00.000 | 0 | 3 | 0 | Opencv... getting at the data in an IPLImage or CvMat | 6,127,314 | 0 | python,opencv,iplimage | I do not know opencv python bindings, but in C or C++ you have to get the buffer pointer stored in IplImage. This buffer is coded according to the image format (also stored in IplImage). For RGB you have a byte for R, a byte for G, a byte for B, and so on.
Look at the API of the Python bindings; you will find how to access the buffer, and then you can get to the pixel info.
my2c | I am doing some simple programs with opencv in python. I want to write a few algorithms myself, so need to get at the 'raw' image data inside an image. I can't just do image[i,j] for example, how can I get at the numbers?
Thanks | 0 | 1 | 7,276 |
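With the newer cv2 bindings, images load as plain NumPy arrays, so image[i, j] works directly (a sketch; the filename is hypothetical). Under the old cv API, cv.Get2D(img, i, j) serves a similar purpose.

```python
import cv2

img = cv2.imread("test.png")       # loads as an H x W x 3 uint8 array
b, g, r = img[10, 20]              # note: OpenCV stores channels in BGR order
img[10, 20] = (255, 0, 0)          # the raw data is writable, too
```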
0 | 6,213,966 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2011-06-02T11:24:00.000 | 3 | 2 | 0 | Finding the calculation that generates a NaN | 6,213,869 | 0.291313 | python,debugging,numpy,scipy,nan | You can use numpy.seterr to set floating point error handling behaviour globally for all numpy routines. That should let you pinpoint where in the code they are arising from (or at least where numpy sees them for the first time). | I have a moderately large piece (a few thousand lines) of Python/Numpy/Scipy code that is throwing up NaNs with certain inputs. I've looked for, and found, some of the usual suspects (log(0) and the like), but none of the obvious ones seem to be the culprits in this case.
Is there a relatively painless way (i.e., apart from putting exception handling code around each potential culprit), to find out where these NaNs are coming from? | 0 | 1 | 1,232 |
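A sketch of the seterr approach: turning the silent warnings into exceptions makes the traceback point at the exact line that first produced the NaN.

```python
import numpy as np

np.seterr(all="raise")                 # invalid operations now raise FloatingPointError
try:
    bad = np.zeros(3) / np.zeros(3)    # would otherwise silently yield NaNs
except FloatingPointError:
    pass   # in real code, let it propagate and read the traceback
```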
0 | 6,229,101 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2011-06-03T13:18:00.000 | 1 | 2 | 0 | Non sorted eigenvalues for finding features in Python | 6,227,589 | 0.099668 | python,pca | What Sven mentioned in his comments is correct. There is no "default" ordering of the eigenvalues. Each eigenvalue is associated with an eigenvector, and what is important is that the eigenvalue-eigenvector pair is matched correctly. You'll find that all languages and packages will do so.
So if R gives you eigenvalues [e1, e2, e3] and eigenvectors [v1, v2, v3], python will probably give you (say) [e3, e2, e1] and [v3, v2, v1].
Recall that an eigenvalue tells you how much of the variance in your data is explained by the eigenvector associated with it. So a natural sorting of the eigenvalues that is useful in PCA (and intuitive to us) is by size, either ascending or descending. That way, you can easily look at the eigenvalues and identify which ones to keep (large, as they explain most of the data) and which ones to throw away (small, which could be high-frequency features or just noise). | I am now trying some stuff with PCA, but it's very important for me to know which features are responsible for each eigenvalue.
numpy.linalg.eig gives us the diagonal matrix already sorted, but I want this matrix with the eigenvalues at their original positions. Does anybody know how I can do this? | 0 | 1 | 1,121 |
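A standard sketch of sorting while keeping each eigenvalue matched to its eigenvector, as the answer above describes:

```python
import numpy as np

C = np.cov(np.random.rand(5, 100))         # toy covariance matrix
vals, vecs = np.linalg.eig(C)

order = np.argsort(vals)[::-1]             # largest eigenvalue first
vals, vecs = vals[order], vecs[:, order]   # column i of vecs still matches vals[i]
```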
0 | 6,312,608 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2011-06-10T17:38:00.000 | 2 | 3 | 0 | sparse matrix from dictionaries | 6,310,087 | 0.132549 | python,scipy,sparse-matrix | No. Any matrix in Scipy, sparse or not, must be instantiated with a size. | I just started to learn to program in Python and I am trying to construct a sparse matrix using the Scipy package. I found that there are different types of sparse matrices, but all of them require storing the data using three vectors (row, col, data); or, if you want to add each new entry separately, like S(i,j) = s_ij, you need to initialize the matrix with a given size.
My question is if there is a way to store the matrix entrywise without needing the initial size, like a dictionary. | 0 | 1 | 4,150 |
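A common workaround, sketched below: grow a plain dict of (row, col) -> value entries while the final size is unknown, then build a COO matrix at the end (scipy infers the shape from the largest indices when none is given).

```python
import scipy.sparse as sp

entries = {}                 # behaves like S[i, j] = s_ij with no size declared
entries[0, 2] = 1.5
entries[3, 1] = -2.0

rows, cols = zip(*entries)
S = sp.coo_matrix((list(entries.values()), (rows, cols)))   # shape inferred as (4, 3)
S = S.tocsr()                # convert for efficient arithmetic and slicing
```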
0 | 6,320,431 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2011-06-12T05:49:00.000 | 2 | 1 | 0 | UnicodeDecodeError: 'gbk' codec can't decode bytes | 6,320,415 | 1.2 | python,unicode,python-3.x,decode,pickle | It's hard to say without you showing your code, but it looks like you opened the file in text mode with a "gbk" encoding. It should probably be opened in binary mode. If that doesn't fix it, make a small code example that fails and paste it in here. | I'm trying to load an object (of a custom class Area) from a file using pickle. I'm using Python 3.1.
The file was made with pickle.dump(area, f)
I get the following error, and I would like help trying to understand and fix it.
File "editIO.py", line 12, in load
area = pickle.load(f)
File "C:\Python31\lib\pickle.py", line 1356, in load
encoding=encoding, errors=errors).load()
UnicodeDecodeError: 'gbk' codec can't decode bytes in position 0-1: illegal multibyte sequence | 0 | 1 | 4,835 |
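The fix from the answer above, as a sketch (the filename is hypothetical):

```python
import pickle

# 'rb' is the crucial part: text mode would try to decode the raw pickle bytes
# with the locale's codec ('gbk' here), which is exactly what raises this error.
with open("area.pkl", "rb") as f:
    area = pickle.load(f)
```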
0 | 26,627,020 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2011-06-13T16:32:00.000 | 0 | 2 | 0 | python matplotlib -- regenerate graph? | 6,333,345 | 0 | python,matplotlib | from matplotlib.widgets import Button
import numpy as np
import matplotlib.pyplot as plt

xpts, ypts = np.random.rand(2, 50)   # some initial random data
real_points = plt.axes().scatter(x=xpts, y=ypts, alpha=.4, c='green', label='real data')
#Reset Button
#rect = [left, bottom, width, height]
reset_axis = plt.axes([0.4, 0.15, 0.1, 0.04])
button = Button(ax=reset_axis, label='Reset', color='lightblue', hovercolor='0.975')
def reset(event):
    # regenerate the data and redraw, instead of only removing the old points
    real_points.set_offsets(np.random.rand(50, 2))
    plt.draw()
button.on_clicked(reset)
plt.show() | I have a python function that generates a list with random values.
After I call this function, I call another function that plots the random values using matplotlib.
I want to be able to click some key on the keyboard / mouse, and have the following happen:
(1) a new list of random values will be re-generated
(2) the values from (1) will be plotted (replacing the current matplotlib chart)
Meaning, I want to be able to view new charts with a click of a button. How do I go about doing this in python? | 0 | 1 | 1,009 |
0 | 6,368,360 | 0 | 0 | 0 | 0 | 1 | false | 31 | 2011-06-15T23:28:00.000 | 2 | 6 | 0 | Improving FFT performance in Python | 6,365,623 | 0.066568 | python,numpy,scipy,fft,fftw | Where I work, some researchers have compiled this Fortran library which sets up and calls FFTW for a particular problem. This Fortran library (a module with some subroutines) expects some input data (2D lists) from my Python program.
What I did was to create a little C extension for Python wrapping the Fortran library, where I basically call "init" to set up an FFTW planner, another function to feed my 2D lists (arrays), and a "compute" function.
Creating a C extension is a small task, and there are a lot of good tutorials out there for that particular task.
The good thing about this approach is that we get speed... a lot of speed. The only drawback is in the C extension, where we must iterate over the Python list and extract all the Python data into a memory buffer. | What is the fastest FFT implementation in Python?
It seems numpy.fft and scipy.fftpack both are based on fftpack, and not FFTW. Is fftpack as fast as FFTW? What about using multithreaded FFT, or using distributed (MPI) FFT? | 0 | 1 | 27,277 |
0 | 34,919,615 | 0 | 0 | 0 | 0 | 2 | false | 421 | 2011-06-17T18:49:00.000 | 23 | 10 | 0 | Matplotlib make tick labels font size smaller | 6,390,393 | 1 | python,matplotlib | In current versions of Matplotlib, you can do axis.set_xticklabels(labels, fontsize='small'). | In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller?
Further, how can one rotate it from horizontal to vertical? | 0 | 1 | 870,128 |
0 | 37,869,225 | 0 | 0 | 0 | 0 | 2 | false | 421 | 2011-06-17T18:49:00.000 | 16 | 10 | 0 | Matplotlib make tick labels font size smaller | 6,390,393 | 1 | python,matplotlib | For smaller font, I use
ax1.set_xticklabels(xticklabels, fontsize=7)
and it works! | In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller?
Further, how can one rotate it from horizontal to vertical? | 0 | 1 | 870,128 |
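On reasonably recent Matplotlib, tick_params handles both the size and the rotation asked about here in one call (a sketch):

```python
import matplotlib.pyplot as plt

fig, ax1 = plt.subplots()
ax1.plot(range(10))
ax1.tick_params(axis="x", labelsize=7, labelrotation=90)   # smaller and vertical
plt.show()
```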
0 | 6,398,543 | 0 | 0 | 0 | 0 | 1 | true | 15 | 2011-06-18T16:54:00.000 | 15 | 1 | 0 | Unmap of NumPy memmap | 6,397,495 | 1.2 | python,numpy,mmap | Yes, it's only closed when the object is garbage-collected; memmap.close method does nothing.
You can call x._mmap.close(), but keep in mind that any further access to the x object will crash python. | I can't find any documentation on how numpy handles unmapping of previously memory mapped regions: munmap for numpy.memmap() and numpy.load(mmap_mode).
My guess is it's done only at garbage collection time, is that correct? | 0 | 1 | 2,924 |
0 | 44,827,155 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2011-06-21T17:54:00.000 | 0 | 5 | 0 | ML/Data Mining/Big Data : Popular language for programming and community support | 6,429,772 | 0 | java,python,hadoop,machine-learning,bigdata | Python is gaining in popularity, has a lot of libraries, and is very useful for prototyping. I find that due to the many versions of python and its dependencies on C libs to be difficult to deploy though.
R is also very popular, has a lot of libraries, and was designed for data science. However, the underlying language design tends to make things overcomplicated.
Personally, I prefer Clojure because it has great data manipulation support and can interoperate with the Java ecosystem. The downside currently is that there aren't too many data science libraries yet! | I am not sure if this question is correct, but I am asking to resolve the doubts I have.
For Machine Learning/Data Mining, we need to learn about data, which means you need to learn Hadoop, which has an implementation in Java for MapReduce (correct me if I am wrong).
Hadoop also provides a streaming API to support other languages (like Python).
Most grad students/researchers I know solve ML problems in Python.
We see job posts for the Hadoop and Java combination very often.
I observed that Java and Python are the most widely used languages for this domain.
My question is: what is the most popular language for working in this domain?
What factors are involved in deciding which language/framework one should choose?
I know both Java and Python but am always confused:
whether I should start programming in Java (because of the Hadoop implementation)
whether I should start programming in Python (because it's easier and quicker to write)
This is a very open-ended question; I am sure the advice might help me and people who have the same doubt.
Thanks a lot in advance | 0 | 1 | 1,400 |
0 | 6,436,938 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2011-06-21T17:54:00.000 | 0 | 5 | 0 | ML/Data Mining/Big Data : Popular language for programming and community support | 6,429,772 | 0 | java,python,hadoop,machine-learning,bigdata | I think the most popular combination in this field is Java/Hadoop. When vacancies also require python/perl/ruby, it usually means that they are migrating from those scripting languages (usually the main languages until then) to Java, due to moving from a startup code base to enterprise.
Also, in real-world data mining applications, Python is frequently used for prototyping and small-sized data processing tasks. | I am not sure if this question is correct, but I am asking to resolve the doubts I have.
For Machine Learning/Data Mining, we need to learn about data, which means you need to learn Hadoop, which has an implementation in Java for MapReduce (correct me if I am wrong).
Hadoop also provides a streaming API to support other languages (like Python).
Most grad students/researchers I know solve ML problems in Python.
We see job posts for the Hadoop and Java combination very often.
I observed that Java and Python are the most widely used languages for this domain.
My question is: what is the most popular language for working in this domain?
What factors are involved in deciding which language/framework one should choose?
I know both Java and Python but am always confused:
whether I should start programming in Java (because of the Hadoop implementation)
whether I should start programming in Python (because it's easier and quicker to write)
This is a very open-ended question; I am sure the advice might help me and people who have the same doubt.
Thanks a lot in advance | 0 | 1 | 1,400 |
0 | 6,432,586 | 0 | 0 | 0 | 0 | 2 | false | 29 | 2011-06-21T21:56:00.000 | 1 | 9 | 0 | How to do weighted random sample of categories in python | 6,432,499 | 0.022219 | python,statistics,numpy,probability,random-sample | How about creating 3 "a", 4 "b" and 3 "c" in a list and then just randomly selecting one? With enough iterations you will get the desired probability. | Given a list of tuples where each tuple consists of a probability and an item, I'd like to sample an item according to its probability. For example, given the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time.
What's the canonical way of doing this in python?
I've looked at the random module which doesn't seem to have an appropriate function and at numpy.random which although it has a multinomial function doesn't seem to return the results in a nice form for this problem. I'm basically looking for something like mnrnd in matlab.
Many thanks.
Thanks for all the answers so quickly. To clarify, I'm not looking for explanations of how to write a sampling scheme, but rather to be pointed to an easy way to sample from a multinomial distribution given a set of objects and weights, or to be told that no such function exists in a standard library and so one should write one's own. | 0 | 1 | 12,259 |
0 | 6,432,588 | 0 | 0 | 0 | 0 | 2 | false | 29 | 2011-06-21T21:56:00.000 | 0 | 9 | 0 | How to do weighted random sample of categories in python | 6,432,499 | 0 | python,statistics,numpy,probability,random-sample | I'm not sure if this is the pythonic way of doing what you ask, but you could use
random.sample(['a','a','a','b','b','b','b','c','c','c'],k)
where k is the number of samples you want.
For a more robust method, bisect the unit interval into sections based on the cumulative probability and draw from the uniform distribution (0,1) using random.random(). In this case the subintervals would be (0,.3), (.3,.7), (.7,1). You choose the element based on which subinterval it falls into. | Given a list of tuples where each tuple consists of a probability and an item, I'd like to sample an item according to its probability. For example, given the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time.
What's the canonical way of doing this in python?
I've looked at the random module which doesn't seem to have an appropriate function and at numpy.random which although it has a multinomial function doesn't seem to return the results in a nice form for this problem. I'm basically looking for something like mnrnd in matlab.
Many thanks.
Thanks for all the answers so quickly. To clarify, I'm not looking for explanations of how to write a sampling scheme, but rather to be pointed to an easy way to sample from a multinomial distribution given a set of objects and weights, or to be told that no such function exists in a standard library and so one should write one's own. | 0 | 1 | 12,259 |
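A sketch of the cumulative-probability approach from the second answer, using the stdlib bisect module; later numpy versions also provide numpy.random.choice(items, p=probabilities), which does this directly.

```python
import bisect
import random

def weighted_choice(pairs):
    """pairs: list of (probability, item) with probabilities summing to 1."""
    cumulative, total = [], 0.0
    for p, _ in pairs:
        total += p
        cumulative.append(total)
    r = random.random()                           # uniform in [0, 1)
    return pairs[bisect.bisect(cumulative, r)][1]

counts = {'a': 0, 'b': 0, 'c': 0}
for _ in range(10000):
    counts[weighted_choice([(.3, 'a'), (.4, 'b'), (.3, 'c')])] += 1
# counts come out near 3000 / 4000 / 3000
```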
0 | 37,608,130 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2011-06-26T21:03:00.000 | 0 | 4 | 0 | Clustering using Latent Dirichlet Allocation algo in gensim | 6,486,738 | 0 | python,algorithm,cluster-analysis,latent-semantic-indexing | The basic thing to understand here is that clustering requires your data to be present in some format and is not concerned with how you arrived at that data. So, whether you apply clustering to the term-document matrix or to the reduced-dimension LDA output matrix, clustering will work irrespective of that.
Just do the other things right, though; small mistakes in data formats can cost you a lot of research time. | Is it possible to do clustering in gensim for a given set of inputs using LDA? How can I go about it? | 0 | 1 | 13,867 |
0 | 6,525,268 | 0 | 0 | 0 | 0 | 2 | true | 10 | 2011-06-26T21:03:00.000 | 10 | 4 | 0 | Clustering using Latent Dirichlet Allocation algo in gensim | 6,486,738 | 1.2 | python,algorithm,cluster-analysis,latent-semantic-indexing | LDA produces a lower dimensional representation of the documents in a corpus. To this low-d representation you could apply a clustering algorithm, e.g. k-means. Since each axis corresponds to a topic, a simpler approach would be assigning each document to the topic onto which its projection is largest. | Is it possible to do clustering in gensim for a given set of inputs using LDA? How can I go about it? | 0 | 1 | 13,867 |
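A hedged sketch of the recipe in the accepted answer: project documents into LDA topic space with gensim, then cluster the low-dimensional vectors (assumes gensim and scipy; the toy corpus is made up):

```python
from gensim import corpora, models, matutils
from scipy.cluster.vq import kmeans2

texts = [["human", "interface"], ["user", "interface"],
         ["graph", "trees"], ["graph", "minors"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)
X = matutils.corpus2dense(lda[corpus], num_terms=2).T   # documents x topics
centroids, labels = kmeans2(X, 2, minit="points")       # k-means in topic space
```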
0 | 6,550,992 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2011-07-01T14:44:00.000 | 1 | 3 | 1 | Distributing Real-Time Market Data Using ZeroMQ / NFS? | 6,549,488 | 0.066568 | python,linux,zeromq | I'm pretty sure sending with ZeroMQ will be substantially quicker than saving and loading files.
There are other ways to send information over the network, such as raw sockets (lower level), AMQP implementations like RabbitMQ (more structured/complicated), HTTP requests/replies, and so on. ZeroMQ is a pretty good option, but it probably depends on your situation.
You could also look at frameworks for distributed computing, such as that in IPython. | Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data.
What would be the best / fastest way to send the data over to the other 50 machines?
I am looking for a solution that would work on linux, using python as the programming language. Some ideas that I had are:
(1) Send it to the other machines using python's ZeroMQ module
(2) Save the data to a shared folder and have the 50 machines read it using NFS
Any other ideas? | 0 | 1 | 1,280 |
0 | 6,552,072 | 0 | 0 | 0 | 0 | 3 | true | 1 | 2011-07-01T14:44:00.000 | 1 | 3 | 1 | Distributing Real-Time Market Data Using ZeroMQ / NFS? | 6,549,488 | 1.2 | python,linux,zeromq | I would go with zeromq with pub/sub sockets..
In your second option, your "clients" will have to refresh in order to see your file modifications (like polling), and if you get a write error you will have to handle it by hand, which won't be easy either.
ZeroMQ is simple, reliable and powerful; I think it fits your case perfectly. | Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data.
What would be the best / fastest way to send the data over to the other 50 machines?
I am looking for a solution that would work on linux, using python as the programming language. Some ideas that I had are:
(1) Send it to the other machines using python's ZeroMQ module
(2) Save the data to a shared folder and have the 50 machines read it using NFS
Any other ideas? | 0 | 1 | 1,280 |
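A minimal pub/sub sketch with pyzmq; both halves are shown in one process for brevity, but in this setup the PUB side would run on the feed machine and each of the 50 consumers would run the SUB side (the port number is arbitrary):

```python
import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)                 # feed machine
pub.bind("tcp://*:5556")

sub = ctx.socket(zmq.SUB)                 # each consumer machine
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"")        # empty prefix = receive everything

time.sleep(0.2)                           # let the subscription propagate (slow joiner)
pub.send_string("AAPL 389.80")
print(sub.recv_string())                  # -> "AAPL 389.80"
```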
0 | 6,643,883 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2011-07-01T14:44:00.000 | 0 | 3 | 1 | Distributing Real-Time Market Data Using ZeroMQ / NFS? | 6,549,488 | 0 | python,linux,zeromq | Definitely do NOT use the file system. ZeroMQ is a great solution with bindings in Python. I have some examples here: www.coastrd.com. Contact me if you need more help. | Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data.
What would be the best / fastest way to send the data over to the other 50 machines?
I am looking for a solution that would work on linux, using python as the programming language. Some ideas that I had are:
(1) Send it to the other machines using python's ZeroMQ module
(2) Save the data to a shared folder and have the 50 machines read it using NFS
Any other ideas? | 0 | 1 | 1,280 |
0 | 6,581,184 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2011-07-05T03:04:00.000 | 1 | 1 | 0 | Efficient Datatype Python (list or numpy array?) | 6,577,657 | 1.2 | python,arrays,performance,numpy | As I see it, if you were doing this in C or Fortran, you'd have to have an idea of the size of the array so that you can allocate the correct amount of memory (ignoring realloc!). So assuming you do know this, why do you need to append to the array?
In any case, numpy arrays have the resize method, which you can use to extend the size of the array. | I'm still confused whether to use list or numpy array.
I started with the latter, but since I have to do a lot of append
I ended up with many vstacks slowing my code down.
Using list would solve this problem, but I also need to delete elements
which again works well with delete on numpy array.
As it looks now I'll have to write my own data type (in a compiled language, and wrap).
I'm just curious if there isn't a way to get the job done using a python type.
To summarize, these are the criteria my data type would have to fulfill:
2d n (variable) rows, each row k (fixed) elements
in memory in one piece (would be nice for efficient operating)
append a row (with on-average constant time, like a C++ vector; always exactly k elements per row)
delete a set of elements (best: inplace, keep free space at the end for later append)
access an element given the row and column index (O(1), like data[row*k + column])
It appears generally useful to me to have a data type like this and not impossible to implement in C/Fortran.
What would be the closest I could get with python?
(Or maybe: do you think it would work to write a Python class for the datatype? What performance should I expect in this case?) | 0 | 1 | 762 |
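A minimal sketch of how amortized O(1) row appends can sit on top of one contiguous numpy buffer (the doubling strategy C++ vectors use); deletion could be handled the same way by compacting in place:

```python
import numpy as np

class RowBuffer(object):
    """Fixed k columns; amortized O(1) row append over one contiguous block."""
    def __init__(self, k):
        self._buf = np.empty((16, k))
        self._n = 0

    def append(self, row):
        if self._n == self._buf.shape[0]:            # full: double the allocation
            bigger = np.empty((2 * self._buf.shape[0], self._buf.shape[1]))
            bigger[:self._n] = self._buf
            self._buf = bigger
        self._buf[self._n] = row
        self._n += 1

    @property
    def data(self):
        return self._buf[:self._n]                   # O(1) view; data[row, column] works

rb = RowBuffer(3)
for i in range(100):
    rb.append([i, i, i])
assert rb.data.shape == (100, 3)
```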
0 | 6,580,497 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2011-07-05T03:49:00.000 | 0 | 2 | 0 | matplotlib.pyplot how to add labels with .clabel? | 6,577,807 | 0 | python,matplotlib | You may use plt.annotate or plt.text.
And, as an aside, 1) you probably want to use different variables for the file names and numpy arrays you're loading your data into (what is data in data=plb.loadtxt(data)),
2) you probably want to move the label positioning into the loop (in your code, what is data in the plt.clabel(data)). | How can I use pyplot.clabel to attach the file names to the lines being plotted?
plt.clabel(data) line gives the error | 0 | 1 | 1,086 |
0 | 6,585,193 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2011-07-05T15:29:00.000 | 4 | 2 | 0 | Python: 64bit Numpy? | 6,585,176 | 1.2 | python,numpy | NumPy has been used on 64-bit systems of all types for years now. I doubt you will find anything new that doesn't show up elsewhere as well. | I am currently working with numpy on a 32bit system (Ubuntu 10.04 LTS).
Can I expect my code to work fluently, in the same manner, on a 64bit (Ubuntu) system?
Does numpy have any compatibility issues with 64bit python? | 0 | 1 | 2,531 |
0 | 9,271,325 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2011-07-07T17:12:00.000 | 5 | 3 | 0 | Python random seed not working with Genetic Programming example code | 6,614,447 | 0.321513 | python | I had the same problem just now with some completely unrelated code. I believe my solution was similar to that in eryksun's answer, though I didn't have any trees. What I did have were some sets, and I was doing random.choice(list(set)) to pick values from them. Sometimes my results (the items picked) were diverging even with the same seed each time and I was close to pulling my hair out. After seeing eryksun's answer here I tried random.choice(sorted(set)) instead, and the problem appears to have disappeared. I don't know enough about the inner workings of Python to explain it. | I am trying to get reproducible results with the genetic programming code in chapter 11 of "Programming Collective Intelligence" by Toby Segaran. However, simply setting seed "random.seed(55)" does not appear to work, changing the original code "from random import ...." to "import random" doesn't help, nor does changing Random(). These all seem to do approximately the same thing, the trees start out building the same, then diverge.
In reading various entries about the behavior of random, I can find no reason, given his GP code, why this divergence should happen. There doesn't appear to be anything in the code, except calls to random, that has any variability that would account for this behavior. My understanding is that calling random.seed() should set all the calls correctly, and since the code isn't threaded at all, I'm not sure how or why the divergence is happening.
Has anyone modified this code to behave reproducibly? Is there some form of calling random.seed() that may work better?
I apologize for not posting an example, but the code is obviously not mine (I'm adding only the call to seed and changing how random is called in the code) and this doesn't appear to be a simple issue with random (I've read all the entries on Python random here and many on the web in general).
Thanks.
Mark L. | 0 | 1 | 2,656 |
0 | 19,444,825 | 0 | 0 | 0 | 0 | 2 | false | 41 | 2011-07-07T18:58:00.000 | 0 | 7 | 0 | Kmeans without knowing the number of clusters? | 6,615,665 | 0 | python,machine-learning,data-mining,k-means | If the cluster number is unknown, why not use Hierarchical Clustering instead?
At the beginning, every isolated point is its own cluster; then any two clusters are merged if their distance is lower than a threshold, and the algorithm ends when no more mergers happen.
The hierarchical clustering algorithm can thereby produce a suitable "K" for your data (a sketch follows this row). | I am attempting to apply k-means on a set of high-dimensional data points (about 50 dimensions) and was wondering if there are any implementations that find the optimal number of clusters.
I remember reading somewhere that the way an algorithm generally does this is such that the inter-cluster distance is maximized and intra-cluster distance is minimized but I don't remember where I saw that. It would be great if someone can point me to any resources that discuss this. I am using SciPy for k-means currently but any related library would be fine as well.
If there are alternate ways of achieving the same or a better algorithm, please let me know. | 0 | 1 | 26,563 |
0 | 33,374,054 | 0 | 0 | 0 | 0 | 2 | false | 41 | 2011-07-07T18:58:00.000 | 0 | 7 | 0 | Kmeans without knowing the number of clusters? | 6,615,665 | 0 | python,machine-learning,data-mining,k-means | One way to do it is to run k-means with a large k (much larger than what you think is the correct number), say 1000. Then run the mean-shift algorithm on these 1000 points (mean shift uses the whole data, but you will only "move" these 1000 points). Mean shift will then find the number of clusters.
Running mean shift without k-means first is a possibility, but it is just too slow, usually O(N^2 * #steps); running k-means first speeds things up to O(N * K * #steps). | I am attempting to apply k-means on a set of high-dimensional data points (about 50 dimensions) and was wondering if there are any implementations that find the optimal number of clusters.
I remember reading somewhere that the way an algorithm generally does this is such that the inter-cluster distance is maximized and intra-cluster distance is minimized but I don't remember where I saw that. It would be great if someone can point me to any resources that discuss this. I am using SciPy for k-means currently but any related library would be fine as well.
If there are alternate ways of achieving the same or a better algorithm, please let me know. | 0 | 1 | 26,563 |
0 | 6,631,872 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2011-07-07T22:59:00.000 | 2 | 1 | 0 | Any way to get a figure from Python's matplotlib into Matlab? | 6,618,132 | 0.379949 | python,matlab,matplotlib | Without access to (or experience with) Matlab, this is going to be a bit tricky. As Amro stated, .fig files store the underlying data, and not just an image, and you're going to have a hard time saving .fig files from Python. There are however a couple of things which might work in your favour; these are:
numpy/scipy can read and write matlab .mat files
the matplotlib plotting commands are very similar to/ based on the matlab ones, so the code to generate plots from the data is going to be nearly identical (modulo round/square brackets and 0/1 based indexing).
My approach would be to write your data out as .mat files, and then just put your plotting commands in a script and give that to your supervisor - with any luck it shouldn't be too hard for him to recreate the plots based on that information.
If you had access to Matlab to test/debug, I'm sure it would be possible to create some code which automagically created .mat files and a matlab .m file which would recreate the figures.
There's a neat list of matlab/scipy equivalent commands on the scipy web site.
good luck! | I'm processing some data for a research project, and I'm writing all my scripts in python. I've been using matplotlib to create graphs to present to my supervisor. However, he is a die-hard MATLAB user and he wants me to send him MATLAB .fig files rather than SVG images.
I've looked all over but can't find anything to do the job. Is there any way to either export .fig files from matplotlib, convert .svg files to .fig, or import .svg files into MATLAB? | 0 | 1 | 2,637 |
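The .mat route from the answer above, as a sketch: write out the arrays behind each figure with scipy.io.savemat, and they can be loaded and re-plotted natively in MATLAB.

```python
import numpy as np
import scipy.io

x = np.linspace(0, 10, 200)
scipy.io.savemat("plotdata.mat", {"x": x, "y": np.sin(x)})
# in MATLAB: load('plotdata.mat'); plot(x, y)
```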
0 | 6,620,533 | 0 | 0 | 0 | 0 | 1 | false | 181 | 2011-07-08T06:00:00.000 | 3 | 13 | 0 | Fitting empirical distribution to theoretical ones with Scipy (Python)? | 6,620,471 | 0.046121 | python,numpy,statistics,scipy,distribution | What about storing your data in a dictionary where keys would be the numbers between 0 and 47 and values the number of occurrences of their related keys in your original list?
Thus your likelihood p(x) will be the sum of all the values for keys greater than x divided by 30000. | INTRODUCTION: I have a list of more than 30,000 integer values ranging from 0 to 47, inclusive, e.g.[0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47,47,47,...] sampled from some continuous distribution. The values in the list are not necessarily in order, but order doesn't matter for this problem.
PROBLEM: Based on my distribution I would like to calculate p-value (the probability of seeing greater values) for any given value. For example, as you can see p-value for 0 would be approaching 1 and p-value for higher numbers would be tending to 0.
I don't know if I am right, but to determine probabilities I think I need to fit my data to a theoretical distribution that is the most suitable to describe my data. I assume that some kind of goodness of fit test is needed to determine the best model.
Is there a way to implement such an analysis in Python (Scipy or Numpy)?
Could you present any examples? | 0 | 1 | 181,959 |
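A minimal sketch of fitting one candidate distribution with scipy.stats and reading the "probability of seeing greater values" off its survival function; the gamma choice here is an arbitrary example, not a claim about the best fit for any particular data:

    import numpy as np
    from scipy import stats

    data = np.random.gamma(2.0, 3.0, 30000)                   # stand-in for the 30,000 samples
    params = stats.gamma.fit(data)                            # maximum-likelihood shape/loc/scale
    ks_stat, ks_p = stats.kstest(data, 'gamma', args=params)  # one goodness-of-fit check
    p_value = stats.gamma.sf(10, *params)                     # P(X > 10) under the fitted model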
0 | 6,626,730 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-07-08T15:20:00.000 | 2 | 1 | 0 | replace/modify tail of a gz file with gzip.open | 6,626,629 | 0.379949 | python,gzip,tail | Not possible - you can not replace parts of a compressed file without decompressing it first. At least not with the common compression algorithms. | I have a gz file that with a huge size, is it possible to replace the tail without touching the rest of the file? I tried gzip.open( filePath, mode = 'r+' ) but the write method was blocked .... saying it is a read-only object ... any idea?
what I am doing now is opening with gzip.open in mode 'r', and once I get the offset of the start of the tail, I close it and re-open it with gzip.open in mode 'a' and seek(offset)... which is likely not the best idea
thanks
John | 0 | 1 | 350 |
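Since an in-place edit of a gzip file is impossible, a minimal sketch of the rewrite approach: stream-copy the head into a new archive and append the new tail; offset and new_tail are hypothetical stand-ins:

    import gzip
    import shutil

    # toy setup: a compressed file whose tail we want to replace
    with gzip.open('data.gz', 'wb') as f:
        f.write(b'HEAD' * 10 + b'OLDTAIL')

    offset = 40                  # hypothetical: where the old tail starts (decompressed bytes)
    new_tail = b'NEWTAIL'        # hypothetical replacement bytes

    with gzip.open('data.gz', 'rb') as src:
        head = src.read(offset)  # decompress everything before the tail
    with gzip.open('data_new.gz', 'wb') as dst:
        dst.write(head)
        dst.write(new_tail)
    shutil.move('data_new.gz', 'data.gz')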
0 | 6,696,330 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2011-07-14T16:02:00.000 | 3 | 3 | 0 | Two dimensional associative array in Python | 6,696,279 | 1.2 | python,associative-array | Is there any reason not to use a dict of dicts? It does what you want (though note that there's no such thing as ++ in Python), after all.
There's nothing stylistically poor or non-Pythonic about using a dict of dicts. | I have a set() with terms like 'A' 'B' 'C'. I want a 2-d associative array so that I can perform an operation like d['A']['B'] += 1. What is the pythonic way of doing this? I was thinking a dict of dicts. Is there a better way? | 0 | 1 | 10,309
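A minimal sketch of the dict-of-dicts idea using collections.defaultdict, so neither level needs pre-creating:

    from collections import defaultdict

    d = defaultdict(lambda: defaultdict(int))
    d['A']['B'] += 1    # inner dict and counter spring into existence on first use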
0 | 11,928,786 | 0 | 0 | 0 | 0 | 1 | false | 70 | 2011-07-14T17:10:00.000 | 2 | 6 | 0 | Interactive matplotlib plot with two sliders | 6,697,259 | 0.066568 | python,keyboard,matplotlib,interactive | Use waitforbuttonpress(timeout=0.001); the plot will then see your mouse clicks. | I used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created a script that calls the matplotlib one and generates different snapshots that I later convert into a movie; it is not bad, but a bit clumsy.
I wonder if somehow I could interact with the plot regeneration using keyboard keys to increase / decrease values of some of the variables and see instantly how the plot changes.
What is the best approach for this?
Also, could you point me to interesting links, or a link to a plot example with just two sliders? | 0 | 1 | 98,213
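A minimal sketch using matplotlib's built-in Slider widget instead of snapshot movies; the sine curve and 'freq' parameter are hypothetical stand-ins for the real 8-variable plot:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    x = np.linspace(0, 10, 500)
    fig, ax = plt.subplots()
    plt.subplots_adjust(bottom=0.25)          # leave room for the slider
    line, = ax.plot(x, np.sin(x))

    s_ax = plt.axes([0.2, 0.1, 0.6, 0.03])    # the slider's own axes
    freq = Slider(s_ax, 'freq', 0.1, 5.0, valinit=1.0)

    def update(val):
        line.set_ydata(np.sin(freq.val * x))  # redraw with the new parameter
        fig.canvas.draw_idle()

    freq.on_changed(update)
    plt.show()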
0 | 6,831,300 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2011-07-14T21:29:00.000 | -2 | 3 | 0 | Python zeromq -- Multiple Publishers To a Single Subscriber? | 6,700,149 | -0.132549 | python,zeromq | In ZeroMQ there can only be one publisher per port. The only (ugly) workaround is to start each child PUB socket on a different port and have the parent listen on all those ports.
but the pipeline pattern described in the 0MQ user guide is a much better way to do this. | I'd like to write a python script (call it parent) that does the following:
(1) defines a multi-dimensional numpy array
(2) forks 10 different python scripts (call them children). Each of them must be able to read the contents of the numpy array from (1) at any single point in time (as long as they are alive).
(3) each of the child scripts will do its own work (children DO NOT share any info with each other)
(4) at any point in time, the parent script must be able to accept messages from all of its children. These messages will be parsed by the parent and cause the numpy array from (1) to change.
How do I go about this, when working in python in a Linux environment? I thought of using zeroMQ and have the parent be a single subscriber while the children will all be publishers; does it make sense or is there a better way for this?
Also, how do I allow all the children to continuously read the contents of the numpy array that was defined by the parent? | 0 | 1 | 14,920
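A minimal sketch of the PUSH/PULL pipeline the answer recommends; the socket address and message layout are hypothetical:

    import zmq

    # parent: one PULL socket fans in messages from all children
    ctx = zmq.Context()
    sink = ctx.socket(zmq.PULL)
    sink.bind('tcp://127.0.0.1:5557')

    # each forked child would connect a PUSH socket instead:
    #     push = ctx.socket(zmq.PUSH)
    #     push.connect('tcp://127.0.0.1:5557')
    #     push.send_json({'index': [2, 3], 'value': 1.5})

    msg = sink.recv_json()   # parent parses this and updates the numpy array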
0 | 7,152,292 | 0 | 0 | 0 | 0 | 1 | false | 18 | 2011-07-17T08:40:00.000 | 1 | 5 | 0 | OpenCV Python and SIFT features | 6,722,736 | 0.039979 | python,opencv,sift | Are you sure OpenCV is allowed to support SIFT? SIFT is a proprietary feature type, patented within the U.S. by the University of British Columbia and by David Lowe, the inventor of the algorithm. In my own research, I have had to re-write this algorithm many times. In fact, some vision researchers try to avoid SIFT and use other scale-invariant models because SIFT is proprietary. | I know there is a lot of questions about Python and OpenCV but I didn't find help on this special topic.
I want to extract SIFT keypoints from an image in python OpenCV.
I have recently installed OpenCV 2.3 and can access SURF and MSER but not SIFT.
I can't see anything related to SIFT in python modules (cv and cv2) (well I'm lying a bit: there are 2 constants: cv2.SIFT_COMMON_PARAMS_AVERAGE_ANGLE and cv2.SIFT_COMMON_PARAMS_FIRST_ANGLE).
This has puzzled me for a while.
Is that related to the fact that some parts of OpenCV are in C and others in C++?
Any idea?
P.S.: I have also tried pyopencv (another python binding for OpenCV <= 2.1) without success. | 0 | 1 | 12,701 |
0 | 46,067,557 | 0 | 0 | 0 | 0 | 2 | false | 146 | 2011-07-18T17:10:00.000 | 5 | 8 | 0 | Fast check for NaN in NumPy | 6,736,590 | 0.124353 | python,performance,numpy,nan | use .any()
if numpy.isnan(myarray).any()
numpy.isfinite may be better than isnan for checking
if not np.isfinite(prop).all() | I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic.
I tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-efficient way to do this at all?
(To those who would ask "how gigantic": I can't tell. This is input validation for library code.) | 0 | 1 | 183,568 |
0 | 6,736,673 | 0 | 0 | 0 | 0 | 2 | false | 146 | 2011-07-18T17:10:00.000 | 34 | 8 | 0 | Fast check for NaN in NumPy | 6,736,590 | 1 | python,performance,numpy,nan | I think np.isnan(np.min(X)) should do what you want. | I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic.
I tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-efficient way to do this at all?
(To those who would ask "how gigantic": I can't tell. This is input validation for library code.) | 0 | 1 | 183,568 |
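A minimal sketch contrasting the two checks from the answers above; the min-based form avoids the big temporary boolean array:

    import numpy as np

    X = np.random.rand(1000, 1000)
    X[3, 7] = np.nan
    check_any = np.isnan(X).any()     # allocates a boolean array of X.shape
    check_min = np.isnan(np.min(X))   # NaN propagates through min; no large temporary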
0 | 6,743,440 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2011-07-19T07:05:00.000 | 5 | 4 | 0 | How to sort files in a directory before reading? | 6,743,407 | 0.244919 | python,sorting,file-io | Sort your list of files in the program. Don't rely on operating system calls to give the files in the right order; it depends on the actual file system being used. | I am working with a program that writes output to a csv file based on the order that files are read in from a directory. However, with a large number of files with the endings 1,2,3,4,5,6,7,8,9,10,11,12, my program actually reads the files in what I guess is alphabetical ordering: 1,10,11,12....,2,20,21.....99. The problem is that another program assumes numerical ordering, and this skews the graph results.
The actual file looks like: String.ext.ext2.1.txt, String.ext.ext2.2.txt, and so on...
How can I do this with a python script? | 0 | 1 | 34,380 |
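A minimal sketch of a numeric sort key for the 'String.ext.ext2.<n>.txt' names; the regular expression assumes that exact naming pattern:

    import re

    files = ['String.ext.ext2.10.txt', 'String.ext.ext2.2.txt', 'String.ext.ext2.1.txt']

    def numeric_key(name):
        # pull out the integer between the last two dots
        return int(re.search(r'\.(\d+)\.txt$', name).group(1))

    files.sort(key=numeric_key)    # yields 1, 2, 10 instead of 1, 10, 2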
0 | 6,760,471 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2011-07-20T10:22:00.000 | 7 | 2 | 0 | N-Dimensional Matrix Array in Python (with different sizes) | 6,760,380 | 1.2 | python,matrix | Just use a tuple or list.
A tuple matrices = (matrix1, matrix2, matrix3) will be slightly more efficient;
A list matrices = [matrix1, matrix2, matrix3] is more flexible, as you can matrices.append(matrix4).
Either way you can access them as matrices[0] or for matrix in matrices: pass # do stuff.
I'm basically looking for a function that allows me to index over dynamic matrices that have different sizes.
Example: (with 3 matrices)
Matrix 1: 3x2
Matrix 2: 2x2
Matrix 3: 2x1
Basically I want to store the 3 matrices in the same variable, to call them by their index number afterward (i.e. Matrix[1], Matrix[2]). Conventional python arrays do not allow arrays with different dimensions to be stacked.
I was looking into creating classes, but maybe someone here has a better alternative to this.
Thanks | 0 | 1 | 4,483 |
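A minimal sketch of the list-of-arrays idea with NumPy, using the 3x2 / 2x2 / 2x1 shapes from the question:

    import numpy as np

    matrices = [np.zeros((3, 2)), np.zeros((2, 2)), np.zeros((2, 1))]
    matrices.append(np.ones((4, 4)))   # lists grow; a tuple would not
    first = matrices[0]                # 0-based indexing in Python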
0 | 6,760,481 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2011-07-20T10:22:00.000 | 0 | 2 | 0 | N-Dimensional Matrix Array in Python (with different sizes) | 6,760,380 | 0 | python,matrix | Put those arrays into a list. | In Matlab, there is something called struct, which allow the user to have a dynamic set of matrices.
I'm basically looking for a function that allows me to index over dynamic matrices that have different sizes.
Example: (with 3 matrices)
Matrix 1: 3x2
Matrix 2: 2x2
Matrix 3: 2x1
Basically I want to store the 3 matrices in the same variable, to call them by their index number afterward (i.e. Matrix[1], Matrix[2]). Conventional python arrays do not allow arrays with different dimensions to be stacked.
I was looking into creating classes, but maybe someone here has a better alternative to this.
Thanks | 0 | 1 | 4,483 |
0 | 6,761,407 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2011-07-20T11:33:00.000 | 0 | 1 | 0 | Ported python3 csv module to C# what license should I use for my module? | 6,761,201 | 1.2 | python,module,licensing | You need to pay a copyright lawyer to tell you that. But my guess is that you need to use the PSF license. Note that PSF does not have the copyright to Python source code. The coders do. How that copyright translates into you making a C# port is something only a copyright expert can say. Also note that it is likely to vary from country to country.
Copyright sucks. | I have ported python3 csv module to C# what license could I use for my module?
Should I distribute my module?
Should I put PSF copyright in every header of my module?
thanks | 0 | 1 | 122 |
0 | 6,767,866 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2011-07-20T20:01:00.000 | 9 | 1 | 0 | NLTK - when to normalize the text? | 6,767,770 | 1 | python,nlp,nltk | By "normalize" do you just mean making everything lowercase?
The decision about whether to lowercase everything is really dependent on what you plan to do. For some purposes, lowercasing everything is better because it lowers the sparsity of the data (uppercase words are rarer and might confuse the system unless you have a massive corpus such that the statistics on capitalized words are decent). In other tasks, case information might be valuable.
Additionally, there are other considerations you'll have to make that are similar. For example, should "can't" be treated as ["can't"], ["can", "'t"], or ["ca", "n't"] (I've seen all three in different corpora). What about 7-year-old? Is it one long word? Or three words that should be separated?
That said, there's no reason to reformat the corpus. You can just have your code make these changes on the fly. That way the original information is still around later if you ever need it. | I've finished gathering my data I plan to use for my corpus, but I'm a bit confused about whether I should normalize the text. I plan to tag & chunk the corpus in the future. Some of NLTK's corpora are all lower case and others aren't.
Can anyone shed some light on this subject, please? | 0 | 1 | 2,668 |
0 | 6,787,446 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-07-22T08:20:00.000 | 4 | 1 | 0 | Quiz Generator using NLTK/Python | 6,787,345 | 0.664037 | python,nlp,nltk | In the general case, this is a very hard open research question. However, you might be able to get away with a simple solution a long as your "facts" follow a pretty simple grammar.
You could write a fairly simple solution by creating a set of transformation rules that act on parse trees. So if you saw a structure that matched the grammar for "X was Y in Z" you could transform it to "Was X Y in Z ?", etc. Then all you would have to do is parse the fact, transform, and read off the question that is produced. | The goal of this application is produce a system that can generate quizzes automatically. The user should be able to supply any word or phrase they like (e.g. "Sachin Tendulkar"); the system will then look for suitable topics online, identify a range of interesting facts, and rephrase them as quiz questions.
If I have the sentence "Sachin was born in year 1973", how can I rephrase it to "Which year was Sachin born?" | 0 | 1 | 1,334
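A toy pattern-based rephrasing in the spirit of the transformation-rule idea above; a real system would act on parse trees, and the regular expression here only covers the single "X was born in year Y" template:

    import re

    fact = 'Sachin was born in year 1973'
    m = re.match(r'(?P<subj>\w+) was born in year (?P<year>\d{4})', fact)
    if m:
        question = 'In which year was %s born?' % m.group('subj')
        answer = m.group('year')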
0 | 6,795,732 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2011-07-22T20:20:00.000 | 0 | 5 | 0 | Numpy: arr[...,0,:] works. But how do I store the data contained in the slice command (..., 0, :)? | 6,795,657 | 0 | python,indexing,numpy,slice | I think you want to just do myslice = slice(1,2) to for example define a slice that will return the 2nd element (i.e. myarray[myslice] == myarray[1:2]) | In Numpy (and Python in general, I suppose), how does one store a slice-index, such as (...,0,:), in order to pass it around and apply it to various arrays? It would be nice to, say, be able to pass a slice-index to and from functions. | 0 | 1 | 389 |
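A minimal sketch of storing a multi-axis index as a plain tuple of Ellipsis and slice objects:

    import numpy as np

    arr = np.zeros((4, 5, 6))
    idx = (Ellipsis, 0, slice(None))   # the reusable form of arr[..., 0, :]
    view = arr[idx]                    # identical to arr[..., 0, :]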
0 | 6,801,439 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-07-23T13:03:00.000 | 2 | 3 | 0 | numpy array access | 6,800,534 | 0.132549 | python,numpy | Use A[n-offset]. This maps the index range offset to offset+len(A) onto 0 to len(A). | I need to create a numpy array of N elements, but I want to access the
array with an offset Noff, i.e. the first element should be at Noff and
not at 0. In C this is simple to do with some simple pointer arithmetic, i.e.
I malloc the array and then define a pointer and shift it appropriately.
Furthermore, I do not want to allocate N+Noff elements, but only N elements.
Now for numpy there are many methods that come to my mind:
(1) define a wrapper function to access the array
(2) overwrite the [] operator
(3) etc
But what is the fastest method to realize this?
Thanks a lot!
Mark | 0 | 1 | 2,276 |
0 | 6,812,332 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-07-23T13:03:00.000 | 2 | 3 | 0 | numpy array access | 6,800,534 | 0.132549 | python,numpy | I would be very cautious about over-riding the [] operator through the __getitem__() method. Although it will be fine with your own code, I can easily imagine that when the array gets passed to an arbitrary library function, you could get problems.
For example, if the function explicitly tried to get all values in the array as A[0:-1], it would map to A[offset:offset-1], which will be an empty array for any positive or negative value of offset. This may be a little contrived, but it illustrates the general problem.
Therefore, I would suggest that you create a wrapper function for your own use (as a member function may be most convenient), but don't muck around with __getitem__(). | I need to create a numpy array of N elements, but I want to access the
array with an offset Noff, i.e. the first element should be at Noff and
not at 0. In C this is simple to do with some simple pointer arithmetic, i.e.
I malloc the array and then define a pointer and shift it appropriately.
Furthermore, I do not want to allocate N+Noff elements, but only N elements.
Now for numpy there are many methods that come to my mind:
(1) define a wrapper function to access the array
(2) overwrite the [] operator
(3) etc
But what is the fastest method to realize this?
Thanks a lot!
Mark | 0 | 1 | 2,276 |
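A minimal sketch of the explicit-wrapper approach, deliberately avoiding __getitem__ per the caution in the answer above; all names are hypothetical:

    import numpy as np

    class OffsetArray(object):
        """Allocate N elements, index them logically from noff to noff+N-1."""
        def __init__(self, n, noff, dtype=float):
            self.data = np.zeros(n, dtype=dtype)
            self.noff = noff

        def get(self, i):
            return self.data[i - self.noff]

        def set(self, i, value):
            self.data[i - self.noff] = value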
0 | 6,819,725 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2011-07-25T16:55:00.000 | 1 | 8 | 0 | Plotting points in python | 6,819,653 | 0.024995 | python,plot | You could always write a plotting function that uses the turtle module from the standard library. | I want to plot some (x,y) points on the same graph and I don't need any special features at all short of support for polar coordinates which would be nice but not necessary. It's mostly for visualizing my data. Is there a simple way to do this? Matplotlib seems like way more than I need right now. Are there any more basic modules available? What do You recommend? | 0 | 1 | 69,333 |
0 | 11,173,545 | 0 | 0 | 1 | 0 | 1 | false | 7 | 2011-07-27T17:42:00.000 | 1 | 4 | 0 | Embed a function from a Matlab MEX file directly in Python | 6,848,790 | 0.049958 | python,matlab,mex | A mex function is an api that allows Matlab (i.e. a matlab program) to call a function written in c/c++. This function, in turn, can call Matlab's own internal functions. As such, the mex function will be linked against Matlab libraries. Thus, to call a mex function directly from a Python program w/o Matlab libraries doesn't look possible (and doesn't make sense for that matter).
Worth considering is why the mex function was created in the first place. Was it to make some non-matlab c libraries (or c code) available to matlab users, or was it to hide some proprietary matlab code while still making it available to matlab users? If it's the first case, then you could request the owners of the mex function to provide it in a non-mex dynamic lib form that you can include in another c or python program. This should be easy if the mex function doesn't depend on Matlab internal functions.
Others above have mentioned the matlab compiler... yes, you can include a mex function in a stand-alone binary callable from unix (thus from python, but as a unix call) if you use the Matlab Compiler to produce such a binary. This would require the binary to be deployed along with Matlab's runtime environment. This is not quite the same as calling a function directly from python -- there are no return values, for example. | I am using a proprietary Matlab MEX file to import some simulation results in Matlab (no source code available of course!). The interface with Matlab is actually really simple, as there is a single function, returning a Matlab struct. I would like to know if there is any way to call this function in the MEX file directly from Python, without having to use Matlab?
What I have in mind is for example using something like SWIG to import the C function into Python by providing a custom Matlab-wrapper around it...
By the way, I know that with scipy.io.loadmat it is already possible to read Matlab binary *.mat data files, but I don't know if the data representation in a mat file is the same as the internal representation in Matlab (in which case it might be useful for the MEX wrapper).
The idea would be of course to be able to use the function provided in the MEX with no Matlab installation present on the system.
Thanks. | 0 | 1 | 7,858 |
0 | 6,854,030 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2011-07-28T03:53:00.000 | 1 | 6 | 0 | Python: handling a large set of data. Scipy or Rpy? And how? | 6,853,923 | 0.033321 | python,r,numpy,scipy,memory-mapped-files | I don't know anything about Rpy. I do know that SciPy is used to do serious number-crunching with truly large data sets, so it should work for your problem.
As zephyr noted, you may not need either one; if you just need to keep some running sums, you can probably do it in Python. If it is a CSV file or other common file format, check and see if there is a Python module that will parse it for you, and then write a loop that sums the appropriate values.
I'm not sure how to get the top ten rows. Can you gather them on the fly as you go, or do you need to compute the sums and then choose the rows? To gather them you might want to use a dictionary to keep track of the current 10 best rows, and use the keys to store the metric you used to rank them (to make it easy to find and toss out a row if another row supersedes it). If you need to find the rows after the computation is done, slurp all the data into a numpy.array, or else just take a second pass through the file to pull out the ten rows. | In my python environment, the Rpy and Scipy packages are already installed.
The problem I want to tackle is such:
1) A huge set of financial data are stored in a text file. Loading into Excel is not possible
2) I need to sum a certain fields and get the totals.
3) I need to show the top 10 rows based on the totals.
Which package (Scipy or Rpy) is best suited for this task?
If so, could you provide me some pointers (e.g. documentation or online example) that can help me to implement a solution?
Speed is a concern. Ideally scipy and Rpy can handle the large files even when the files are so large that they cannot be fitted into memory | 0 | 1 | 3,050
0 | 6,853,981 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2011-07-28T03:53:00.000 | 5 | 6 | 0 | Python: handling a large set of data. Scipy or Rpy? And how? | 6,853,923 | 0.16514 | python,r,numpy,scipy,memory-mapped-files | Neither Rpy nor Scipy is necessary, although numpy may make it a bit easier.
This problem seems ideally suited to a line-by-line parser.
Simply open the file, read a row into a string, scan the row into an array (see numpy.fromstring), update your running sums and move to the next line. | In my python environment, the Rpy and Scipy packages are already installed.
The problem I want to tackle is such:
1) A huge set of financial data are stored in a text file. Loading into Excel is not possible
2) I need to sum a certain fields and get the totals.
3) I need to show the top 10 rows based on the totals.
Which package (Scipy or Rpy) is best suited for this task?
If so, could you provide me some pointers (e.g. documentation or online example) that can help me to implement a solution?
Speed is a concern. Ideally scipy and Rpy can handle the large files even when the files are so large that they cannot be fitted into memory | 0 | 1 | 3,050
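A minimal sketch of the line-by-line pass recommended above, with running sums plus a heap for the top-10 rows; the file name and summed column index are hypothetical (Python 2 style file modes):

    import csv
    import heapq

    total = 0.0
    top10 = []                              # min-heap of (value, row) pairs
    with open('finance.txt', 'rb') as f:    # 'rb' for the Python 2 csv module
        for row in csv.reader(f):
            value = float(row[3])           # hypothetical: the field being summed
            total += value
            if len(top10) < 10:
                heapq.heappush(top10, (value, row))
            else:
                heapq.heappushpop(top10, (value, row))
    best_rows = sorted(top10, reverse=True)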
0 | 7,559,475 | 0 | 0 | 0 | 0 | 3 | true | 7 | 2011-07-28T03:53:00.000 | 2 | 6 | 0 | Python: handling a large set of data. Scipy or Rpy? And how? | 6,853,923 | 1.2 | python,r,numpy,scipy,memory-mapped-files | As @gsk3 noted, bigmemory is a great package for this, along with the packages biganalytics and bigtabulate (there are more, but these are worth checking out). There's also ff, though that isn't as easy to use.
Common to both R and Python is support for HDF5 (see the ncdf4 or NetCDF4 packages in R), which makes it very speedy and easy to access massive data sets on disk. Personally, I primarily use bigmemory, though that's R specific. As HDF5 is available in Python and is very, very fast, it's probably going to be your best bet in Python. | In my python environment, the Rpy and Scipy packages are already installed.
The problem I want to tackle is such:
1) A huge set of financial data are stored in a text file. Loading into Excel is not possible
2) I need to sum a certain fields and get the totals.
3) I need to show the top 10 rows based on the totals.
Which package (Scipy or Rpy) is best suited for this task?
If so, could you provide me some pointers (e.g. documentation or online example) that can help me to implement a solution?
Speed is a concern. Ideally scipy and Rpy can handle the large files even when the files are so large that they cannot be fitted into memory | 0 | 1 | 3,050
0 | 6,863,816 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2011-07-28T18:21:00.000 | 2 | 2 | 0 | Efficiently Removing Duplicates from a CSV in Python | 6,863,756 | 1.2 | python,csv,performance | In order to remove duplicates you will have to have some sort of memory that tells you if you have seen a line before, either by remembering the lines or perhaps a checksum of them (which is almost safe...)
Any solution like that will probably have a "brute force" feel to it.
If you could have the lines sorted before processing them, then the task is fairly easy as duplicates would be next to each other. | I am trying to efficiently remove duplicate rows from relatively large (several hundred MB) CSV files that are not ordered in any meaningful way. Although I have a technique to do this, it is very brute force and I am certain there is a more elegant and more efficient way. | 0 | 1 | 1,090
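A minimal sketch of the seen-before memory using per-line digests (the "almost safe" checksum variant, since collisions are theoretically possible):

    import hashlib

    seen = set()
    with open('big.csv', 'rb') as src:
        with open('deduped.csv', 'wb') as dst:
            for line in src:
                digest = hashlib.md5(line).digest()   # 16 bytes per distinct line
                if digest not in seen:
                    seen.add(digest)
                    dst.write(line)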
0 | 15,331,547 | 0 | 0 | 0 | 0 | 1 | false | 42 | 2011-08-03T18:16:00.000 | 16 | 3 | 0 | Difference between scipy.spatial.KDTree and scipy.spatial.cKDTree | 6,931,209 | 1 | python,scipy,kdtree | In a use case (5D nearest neighbor look ups in a KDTree with approximately 100K points) cKDTree is around 12x faster than KDTree. | What is the difference between these two algorithms? | 0 | 1 | 11,234 |
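A minimal sketch showing that the two classes share an interface, so swapping in the C version is a one-line change:

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.rand(100000, 5)
    tree = cKDTree(points)                      # KDTree(points) behaves the same, just slower
    dists, idxs = tree.query(points[:10], k=1)  # nearest-neighbour lookups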
0 | 6,938,587 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-08-04T08:32:00.000 | 1 | 2 | 0 | How to rotate a numpy array? | 6,938,377 | 0.099668 | python,arrays,numpy,scipy,rotation | Take a look at the command numpy.shape
I used it once to transpose an array, but I don't know if it might fit your needs.
Cheers! | I have some numpy/scipy issue. I have a 3D array that represents an ellipsoid in a binary way [0 outside the ellipsoid].
The thing is, I would like to rotate my shape by a certain angle. Do you think it's possible?
Or is there an efficient way to write the ellipsoid equation directly with the rotation? | 0 | 1 | 2,379
0 | 6,941,191 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-08-04T08:32:00.000 | 2 | 2 | 0 | How to rotate a numpy array? | 6,938,377 | 0.197375 | python,arrays,numpy,scipy,rotation | Just a short answer. If you need more information or you don't know how to do it, then I will edit this post and add a small example.
The right way to rotate your matrix of data points is to do a matrix multiplication. Your rotation matrix would probably be an n*n matrix, and you have to multiply it with every point. If you have your 3d matrix you have something like i*j*k points for plotting. This means for your case you have to do it i*j*k times to find the new points. Maybe you should consider another matrix for plotting, which is just a 2D matrix and stores only the plotting points and no zero values.
There are some algorithms that calculate the result faster for low-valued matrices, but just google for this.
Did you understand me, or do you still have some questions? Sorry for this rough overview.
Best regards | I have some numpy/scipy issue. I have a 3D array that represents an ellipsoid in a binary way [0 outside the ellipsoid].
The thing is, I would like to rotate my shape by a certain angle. Do you think it's possible?
Or is there an efficient way to write the ellipsoid equation directly with the rotation? | 0 | 1 | 2,379
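A minimal sketch using scipy.ndimage.rotate, an alternative neither answer names explicitly; order=0 (nearest neighbour) keeps a binary mask binary:

    import numpy as np
    from scipy import ndimage

    vol = np.zeros((50, 50, 50), dtype=np.uint8)   # stand-in for the binary ellipsoid
    rotated = ndimage.rotate(vol, 30, axes=(0, 1), order=0, reshape=False)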
0 | 6,948,576 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2011-08-04T21:03:00.000 | 6 | 4 | 0 | Best language for Molecular Dynamics Simulator, to be run in production. (Python+Numpy?) | 6,948,483 | 1 | python,scala,numpy,simulation,scientific-computing | I believe that most highly performant MD codes are written in native languages like Fortran, C or C++. Modern GPU programming techniques are also finding favour more recently.
A language like Python would allow for much more rapid development than native code. The flip side of that is that the performance is typically worse than for compiled native code.
A question for you. Why are you writing your own MD code? There are many, many libraries out there. Can't you find one to suit your needs? | I need to build a heavy-duty molecular dynamics simulator. I am wondering if python+numpy is a good choice. This will be used in production, so I wanted to start with a good language. I am wondering if I should rather start with a functional language like e.g. Scala. Do we have enough library support for scientific computation in Scala? Or any other language/paradigm combination you think is good - and why. If you have actually built something in the past and are talking from experience, please mention it, as it will help me with collecting data points.
thanks much! | 0 | 1 | 2,217 |
0 | 6,987,109 | 0 | 0 | 0 | 0 | 1 | false | 185 | 2011-08-08T18:46:00.000 | 5 | 9 | 0 | Bin size in Matplotlib (Histogram) | 6,986,986 | 0.110656 | python,matplotlib,histogram | I guess the easy way would be to calculate the minimum and maximum of the data you have, then calculate L = max - min. Then you divide L by the desired bin width (I'm assuming this is what you mean by bin size) and use the ceiling of this value as the number of bins. | I'm using matplotlib to make a histogram.
Is there any way to manually set the size of the bins as opposed to the number of bins? | 0 | 1 | 355,218 |
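A minimal sketch of the answer's recipe: turn a desired bin width into explicit bin edges and hand those to hist:

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.randn(1000)    # hypothetical sample
    width = 0.25                    # the bin size you actually want
    bins = np.arange(data.min(), data.max() + width, width)
    plt.hist(data, bins=bins)       # explicit edges instead of a bin count
    plt.show()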
0 | 7,000,381 | 0 | 0 | 0 | 0 | 1 | true | 37 | 2011-08-09T16:34:00.000 | 51 | 2 | 0 | how to use 'extent' in matplotlib.pyplot.imshow | 6,999,621 | 1.2 | python,plot,matplotlib | Specify, in the coordinates of your current axis, the corners of the rectangle that you want the image to be pasted over
Extent defines the left and right limits, and the bottom and top limits. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max].
Assuming you have longitude along the horizontal axis, then use extent=[longitude_top_left,longitude_top_right,latitude_bottom_left,latitude_top_left]. longitude_top_left and longitude_bottom_left should be the same, latitude_top_left and latitude_top_right should be the same, and the values within these pairs are interchangeable.
If your first element of your image should be plotted in the lower left, then use the origin='lower' imshow option as well, otherwise the 'upper' default is what you want. | I managed to plot my data and would like to add a background image (map) to it.
Data is plotted by the long/lat values and I have the long/lat values for the image's three corners (top left, top right and bottom left) too.
I am trying to figure out how to use the 'extent' option with imshow. However, the examples I found don't explain how to assign x and y for each corner (in my case I have the information for three corners).
How can I assign the location of three corners for the image when adding it to the plot?
Thanks | 0 | 1 | 80,144 |
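A minimal sketch of pasting the map under the data with extent; every coordinate value below is a hypothetical placeholder:

    import numpy as np
    import matplotlib.pyplot as plt

    map_image = np.random.rand(200, 300)        # stand-in for the real map bitmap
    lon_left, lon_right = -10.0, 30.0           # hypothetical corner longitudes
    lat_bottom, lat_top = 35.0, 60.0            # hypothetical corner latitudes
    longitudes = np.array([0.0, 5.0, 12.0])     # hypothetical data points
    latitudes = np.array([40.0, 48.0, 52.0])

    plt.imshow(map_image, extent=[lon_left, lon_right, lat_bottom, lat_top],
               origin='upper', aspect='auto')
    plt.plot(longitudes, latitudes, 'r.')       # data now shares the map's coordinates
    plt.show()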
0 | 7,011,948 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2011-08-10T13:20:00.000 | 0 | 4 | 0 | imshow for 3D? (Python / Matplotlib) | 7,011,428 | 0 | python,numpy,matplotlib | What you want is a kind of 3D image (a block). Maybe you could plot it by slices (using imshow() or whatever tool you want).
Maybe you could tell us what kind of plot you want? | Does there exist an equivalent to matplotlib's imshow() function for 3D drawing of data stored in a 3D numpy array? | 0 | 1 | 11,267
0 | 7,046,562 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2011-08-10T13:30:00.000 | 0 | 2 | 0 | Recode missing data Numpy | 7,011,591 | 0 | python,arrays,numpy,missing-data | You can use a masked array when you do the calculation, and when you pass the array to ATpy, you can call the filled(9999) method of the masked array to convert it to a normal array with the invalid values replaced by 9999. | I am reading in census data using the matplotlib csv2rec function - works fine, gives me a nice ndarray.
But there are several columns where all the values are '"none"' with dtype |O4. This is causing problems when I load into ATpy: "TypeError: object of NoneType has no len()". Something like '9999' or another missing-value code would work for me. Mask is not going to work in this case because I am passing the real array to ATpy and it will not convert MASK. The put function in numpy (which is the best way to change values, I think) will not work with None values. I think some sort of boolean array is the way to go but I can't get it to work.
So what is a good/fast way to change None values and/or an uninitialized numpy array to something like '9999' or another recode? No masking.
Thanks,
Matthew | 0 | 1 | 2,140 |
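A minimal sketch of the masked-array round trip the answer describes, assuming the bad entries can first be coerced to NaN:

    import numpy as np

    col = np.array([1.2, np.nan, 3.4, np.nan])   # stand-in for the problem column
    masked = np.ma.masked_invalid(col)           # mask the NaN slots
    recoded = masked.filled(9999.0)              # plain ndarray with 9999 recodes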
0 | 7,030,943 | 0 | 0 | 0 | 0 | 1 | false | 117 | 2011-08-11T17:05:00.000 | 3 | 4 | 0 | Differences between numpy.random and random.random in Python | 7,029,993 | 0.148885 | python,random,random-seed | The source of the seed and the distribution profile used are going to affect the outputs - if you are looking for cryptographic randomness, seeding from os.urandom() will get nearly real random bytes from device chatter (i.e. ethernet or disk; /dev/random on BSD).
This will avoid you giving a seed and so generating deterministic random numbers. However, the random calls then allow you to fit the numbers to a distribution (what I call scientific randomness - eventually all you want is a bell curve distribution of random numbers; numpy is best at delivering this).
So yes, stick with one generator, but decide what kind of random you want - random, but definitely from a distribution curve, or as random as you can get without a quantum device.
Can someone please tell me the major differences between the two?
Looking at the doc webpage for each of the two it seems to me that numpy.random just has more methods, but I am unclear about how the generation of the random numbers is different.
The reason why I am asking is because I need to seed my main program for debugging purposes. But it doesn't work unless I use the same random number generator in all the modules that I am importing, is this correct?
Also, I read here, in another post, a discussion about NOT using numpy.random.seed(), but I didn't really understand why this was such a bad idea. I would really appreciate it if someone could explain to me why this is the case. | 0 | 1 | 53,810
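A minimal sketch of seeding both generators, since they are independent streams:

    import random
    import numpy as np

    random.seed(42)      # seeds Python's stdlib Mersenne Twister
    np.random.seed(42)   # seeds NumPy's separate generator
    # the two streams remain unrelated even with identical seed values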
0 | 7,067,801 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2011-08-15T16:30:00.000 | 7 | 2 | 0 | linked list in python | 7,067,726 | 1 | python,linked-list | This sounds like a perfect use for a dictionary. | There is a huge amount of data, and there are various groups. I want to check whether new data fits in any group, and if it does I want to put that data into that group. If a datum doesn't fit any of the groups, I want to create a new group. So, I want to use a linked list for the purpose -- or is there any other way of doing so?
P.S. I have a way to check the similarity between data and a group representative (let's not go into that in detail for now), but I don't know how to add the data to a group (each group may be a list) or create a new one if required. I guess what I need is a linked list implementation in Python, isn't it?
0 | 7,078,262 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-08-15T02:07:00.000 | 0 | 3 | 0 | Generating dynamic graphs | 7,078,010 | 0 | graphics,python | If performance is such an issue and you don't need fancy graphs, you may be able to get by with not creating images at all. Render explicitly sized and colored divs for a simple bar chart in html. Apply box-shadow and/or a gradient background for eye candy.
I did this in some report web pages, displaying a small 5-bar (quintiles) chart in each row of a large table, with huge speed and almost no server load. The users love it for the early and succinct feedback.
Using canvas and javascript you could improve on this scheme for other chart types. I don't know if you could use Google's charting code for this without going through their API, but a few lines or circle segments should be easy to paint yourselves if not. | I'm building a web application in Django and I'm looking to generate dynamic graphs based on the data.
Previously I was using the Google Image Charts, but I ran into significant limitations with the api, including the URL length constraint.
I've switched to using matplotlib to create my charts. I'm wondering if this will be a bad decision for future scaling - do any production sites use matplotlib? It takes about 100ms to generate a single graph (much longer than querying the database for the data), is this simply out of the question in terms of scaling/handling multiple requests? | 1 | 1 | 228 |
0 | 7,129,002 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2011-08-20T00:39:00.000 | 2 | 4 | 1 | How to properly install Python on OSX for use with OpenCV? | 7,128,761 | 0.099668 | python,macos,opencv,homebrew | You need to install the module using your python2.7 installation. Pointing your PYTHONPATH at stuff installed under 2.6 to run under 2.7 is a Bad Idea.
Depending on how you want to install it, do something like python2.7 setup.py install or easy_install-2.7 opencv.
fwiw, on OS X the modules are usually installed under /System/Library/Frameworks/Python.framework/ but you should almost never need to know where anything installed in your site packages is physically located; if Python can't find them without help you've installed them wrong. | I spent the past couple of days trying to get opencv to work with my Python 2.7 install. I kept getting an error saying that opencv module was not found whenever I try "import cv".
I then decided to try installing opencv using Macports, but that didn't work.
Next, I tried Homebrew, but that didn't work either.
Eventually, I discovered I should modify the PYTHONPATH as such:
export PYTHONPATH="/usr/local/lib/python2.6/site-packages/:$PYTHONPATH"
My problem is that I didn't find /usr/local/lib/python2.*...etc
The folder simply doesn't exist
So my question is this:
How do I properly install Python on OS X Snow Leopard for it to work with opencv?
Thanks a lot, | 0 | 1 | 7,935 |
0 | 7,162,436 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-08-23T14:09:00.000 | 0 | 1 | 0 | temporal interpolation of artery angiogram images sequences | 7,162,351 | 0 | python,interpolation | If you imagine each of the images as a still photo, the frame number would be a sequence number that shows what order the images should be displayed in to produce a movie from the stills. If the images are stored in an array, it would be the array index of the individual frame in question. | I have many sets of medical image sequences of the artery on the heart. Each set of sequenced medical images shows the position of the artery as the heart pumps. Each set is taken from different views and has a different number of images.
I want to do a temporal interpolation based on time (I was told that the time could be represented by the frame number; however I have no idea what the frame number is or what it refers to. Could you enlighten me please?) I have two options: interpolating the whole image frame or interpolating artery junction positions (coordinates). How do I go about doing both options? | 0 | 1 | 163
0 | 8,822,734 | 0 | 1 | 0 | 0 | 2 | false | 50 | 2011-09-03T00:27:00.000 | 1 | 5 | 0 | Store and reload matplotlib.pyplot object | 7,290,370 | 0.039979 | python,matplotlib | I produced figures for a number of papers using matplotlib. Rather than thinking of saving the figure (as in MATLAB), I would write a script that plotted the data then formatted and saved the figure. In cases where I wanted to keep a local copy of the data (especially if I wanted to be able to play with it again) I found numpy.savez() and numpy.load() to be very useful.
At first I missed the shrink-wrapped feel of saving a figure in MATLAB, but after a while I have come to prefer this approach because it includes the data in a format that is available for further analysis. | I work in a pseudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc.
Is there something built into matplotlib that would let me store and reload my matplotlib.pyplot object for later use? It would need to maintain access to all associated objects including figures, lines, legends, etc. Maybe pickle is what I'm looking for, but I doubt it. | 0 | 1 | 43,605 |
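A minimal sketch of the save-the-data-not-the-figure workflow from the answer above; x and y are hypothetical plot arrays:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 1, 100)                # hypothetical data behind the figure
    y = x ** 2
    np.savez('figure_state.npz', x=x, y=y)    # store the data, not the rendering

    state = np.load('figure_state.npz')       # later: rebuild and extend the figure
    plt.plot(state['x'], state['y'])
    plt.title('Rebuilt from saved data')      # titles/legends live in the script
    plt.savefig('figure.png')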
0 | 7,843,630 | 0 | 1 | 0 | 0 | 2 | false | 50 | 2011-09-03T00:27:00.000 | 0 | 5 | 0 | Store and reload matplotlib.pyplot object | 7,290,370 | 0 | python,matplotlib | Did you try the pickle module? It serialises an object, dumps it to a file, and can reload it from the file later. | I work in a pseudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc.
Is there something built into matplotlib that would let me store and reload my matplotlib.pyplot object for later use? It would need to maintain access to all associated objects including figures, lines, legends, etc. Maybe pickle is what I'm looking for, but I doubt it. | 0 | 1 | 43,605 |
0 | 7,301,095 | 0 | 0 | 1 | 0 | 1 | false | 18 | 2011-09-04T17:25:00.000 | 3 | 4 | 0 | Preserve code readability while optimising | 7,300,903 | 0.148885 | python,performance,algorithm,optimization,code-readability | Yours is a very good question that arises in almost every piece of code, however simple or complex, that's written by any programmer who wants to call himself a pro.
I try to remember and keep in mind that a reader newly come to my code has pretty much the same crude view of the problem and the same straightforward (maybe brute force) approach that I originally had. Then, as I get a deeper understanding of the problem and paths to the solution become clearer, I try to write comments that reflect that better understanding. I sometimes succeed and those comments help readers and, especially, they help me when I come back to the code six weeks later. My style is to write plenty of comments anyway and, when I don't (because: a sudden insight gets me excited; I want to see it run; my brain is fried), I almost always greatly regret it later.
It would be great if I could maintain two parallel code streams: the naïve way and the more sophisticated optimized way. But I have never succeeded in that.
To me, the bottom line is that if I can write clear, complete, succinct, accurate and up-to-date comments, that's about the best I can do.
Just one more thing that you know already: optimization usually doesn't mean shoehorning a ton of code onto one source line, perhaps by calling a function whose argument is another function whose argument is another function whose argument is yet another function. I know that some do this to avoid storing a function's value temporarily. But it does very little (usually nothing) to speed up the code and it's a bitch to follow. No news to you, I know. | I am writing a scientific program in Python and C with some complex physical simulation algorithms. After implementing algorithm, I found that there are a lot of possible optimizations to improve performance. Common ones are precalculating values, getting calculations out of cycle, replacing simple matrix algorithms with more complex and other. But there arises a problem. Unoptimized algorithm is much slower, but its logic and connection with theory look much clearer and readable. Also, it's harder to extend and modify optimized algorithm.
So, the question is - what techniques should I use to keep readability while improving performance? Now I am trying to keep both fast and clear branches and develop them in parallel, but maybe there are better methods? | 0 | 1 | 434 |
0 | 7,337,353 | 0 | 0 | 0 | 0 | 1 | true | 10 | 2011-09-07T14:10:00.000 | 8 | 6 | 0 | Can I load a multi-frame TIFF through OpenCV? | 7,335,308 | 1.2 | python,image,opencv | Unfortunately OpenCV does not support TIFF directories and is able to read only the first frame from multi-frame TIFF files. | Anyone know if OpenCV is capable of loading a multi-frame TIFF stack?
I'm using OpenCV 2.2.0 with python 2.6. | 0 | 1 | 12,017 |
0 | 7,351,024 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2011-09-08T15:46:00.000 | 0 | 2 | 0 | Need to do a math operation on every line in several CSV files in Python | 7,350,851 | 0 | python,csv,datestamp | The basic outline of the program is going to be like this:
Use the os module to get the filenames out of the directory/directories of interest
Read in each file one at a time
For each line in the file, split it into columns with columns = line.split(",")
Use datetime.date to convert strings like "2011-05-03" to datetime.dates.
Subtract the third date from the second, which yields a datetime.timedelta.
Put all your information in the format you want (hint: str(foo) yields a string representation of foo, for just about any type) and remember it for later
Close your file, reopen it for writing, and write your new stuff in | I have about 100 CSV files I have to operate on once a month and I was trying to wrap my head around this but I'm running into a wall. I'm starting to understand some things about Python, but combining several things is still giving me issues, so I can't figure this out.
Here's my problem:
I have many CSV files, and here's what I need done:
add a "column" to the front of each row (or the back, doesn't matter really, but front is ideal). In addition, each line has 5 fields (not counting the filename that will be added), and here's the format:
6-digit ID number,YYYY-MM-DD(1),YYYY-MM-DD(2),YYYY-MM-DD(3),1-2-digit number
I need to subtract YYYY-MM-DD(3) from YYYY-MM-DD(2) for every line in the file (there is no header row), for every CSV in a given directory.
I need the filename inside the row because I will combine the files (which, if included in the script, would be awesome, but I think I can figure that part out), and I need to know what file the records came from. The format of the filename is always '4-5-digit-number.csv'
I hope this makes sense, if it does not, please let me know. I'm kind of stumped as to where to even begin, so I don't have any sample code that even really began to work for me. Really frustrated, so I appreciate any help you guys may provide, this site rocks!
Mylan | 0 | 1 | 2,947 |
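A minimal sketch of the outlined steps, assuming the five comma-separated fields described in the question (Python 2 style file modes):

    import csv
    import datetime
    import glob

    def parse(s):
        return datetime.datetime.strptime(s, '%Y-%m-%d').date()

    with open('combined.csv', 'wb') as out:
        writer = csv.writer(out)
        for path in glob.glob('*.csv'):
            for row in csv.reader(open(path, 'rb')):
                delta = (parse(row[2]) - parse(row[3])).days   # date(2) minus date(3)
                writer.writerow([path] + row + [str(delta)])   # filename column up front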
0 | 7,362,256 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2011-09-09T12:12:00.000 | 4 | 6 | 0 | How to find indices of non zero elements in large sparse matrix? | 7,361,447 | 1.2 | python,algorithm,r,sparse-matrix,indices | Since you have two dense matrices then the double for loop is the only option you have. You don't need a sparse matrix class at all since you only want to know the list of indices (i,j) for which a[i,j] != b[i,j].
In languages like R and Python the double for loop will perform poorly. I'd probably write this in native code for a double for loop and add the indices to a list object. But no doubt the wizards of interpreted code (i.e. R, Python etc.) know efficient ways to do it without resorting to native coding. | I have two square matrices (a, b) of size on the order of 100000 X 100000. I have to take the difference of these two matrices (c = a-b). The resultant matrix 'c' is a sparse matrix. I want to find the indices of all non-zero elements. I have to do this operation many times (>100).
The simplest way is to use two for loops, but that's computationally intensive. Can you tell me any algorithm or package/library, preferably in R/python/c, to do this as quickly as possible? | 0 | 1 | 6,600
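Contrary to the double-for-loop claim above, NumPy can vectorise this; a minimal sketch (the named technique here is element-wise comparison plus argwhere, not the answer's own method):

    import numpy as np

    a = np.random.randint(0, 3, (1000, 1000))   # small stand-ins for the real matrices
    b = np.random.randint(0, 3, (1000, 1000))
    nonzero_idx = np.argwhere(a != b)           # (i, j) pairs where c = a - b is non-zero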
0 | 7,364,602 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2011-09-09T15:28:00.000 | 1 | 1 | 0 | Graphing the number of elements down based on timestamps start/end | 7,363,997 | 1.2 | python,graph | I'd start by parsing your indata to a map indexed by dates with counts as values. Just increase the count for each row with the same date you encounter.
After that, use some plotting module, for instance matplotlib to plot the keys of the map versus the values. That should cover it!
Do you need any more detailed ideas? | I am trying to graph alarm counts in Python to produce a display that gives an idea of the peak number of network elements down between two timestamps. The way that our alarms report handles it is in CSV like this:
Name,Alarm Start,Alarm Clear
NE1,15:42 08/09/11,15:56 08/09/11
NE2,15:42 08/09/11,15:57 08/09/11
NE3,15:42 08/09/11,16:31 08/09/11
NE4,15:42 08/09/11,15:59 08/09/11
I am trying to graph the start and end between those two points and how many NE's were down during that time, including the maximum number and when it went under or over a certain count. An example is below:
15:42 08/09/11 - 4 Down
15:56 08/09/11 - 3 Down
etc.
Any advice where to start on this would be great. Thanks in advance, you guys and gals have been a big help in the past. | 0 | 1 | 52 |
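A minimal sketch of turning each alarm row into +1/-1 events and scanning a running count; the file name is hypothetical (Python 2 style file mode):

    import csv
    import datetime

    FMT = '%H:%M %d/%m/%y'
    events = []
    reader = csv.reader(open('alarms.csv', 'rb'))
    next(reader)                                   # skip the Name,Alarm Start,Alarm Clear header
    for name, start, clear in reader:
        events.append((datetime.datetime.strptime(start, FMT), 1))
        events.append((datetime.datetime.strptime(clear, FMT), -1))
    events.sort()

    down = 0
    timeline = []
    for stamp, change in events:
        down += change
        timeline.append('%s - %d Down' % (stamp.strftime(FMT), down))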
0 | 7,381,424 | 0 | 0 | 0 | 0 | 4 | false | 17 | 2011-09-11T21:08:00.000 | 2 | 11 | 0 | Minimising reading from and writing to disk in Python for a memory-heavy operation | 7,381,258 | 0.036348 | python,memory,io | Two ideas:
Use numpy arrays to represent vectors. They are much more memory-efficient, at the cost that they will force elements of the vector to be of the same type (all ints or all doubles...).
Do multiple passes, each with a different set of vectors. That is, choose first 1M vectors and do only the calculations involving them (you said they are independent, so I assume this is viable). Then another pass over all the data with second 1M vectors.
It seems you're on the edge of what you can do with your hardware. It would help if you could describe what hardware (mostly, RAM) is available to you for this task. If there are 100k vectors, each of them with 1M ints, this gives ~370GB. If the multiple-passes method is viable and you've got a machine with 16GB RAM, then it is about ~25 passes -- should be easy to parallelize if you've got a cluster.
I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well.
Requirements
The key aspect of this particular program I must write is that it must:
Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line)
Process the data on each line.
From this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors.
These vectors must all be saved to disk in some format or other.
Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4)
Parenthesis: Technical Details
In case the actual procedure for building vectors affects the solution:
For each line in the corpus, one or more vectors must have its basis weights updated.
If you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index).
Vectors do not depend on each other, nor does it matter which order the corpus lines are read in.
Attempted Solutions
There are three extrema when it comes to how to do this:
I could build all the vectors in memory. Then write them to disk.
I could build all the vectors directly on the disk, using shelf of pickle or some such library.
I could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector.
All these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons.
Goals
A good solution would involve:
Building as much as possible in memory.
Once memory is full, dump everything to disk.
If bits are needed from disk again, recover them back into memory to add stuff to those vectors.
Go back to 1 until all vectors are built.
The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.
Question
Does anyone know how to go about solving this sort of problem? Is python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?
Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.
Additional Details
The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.
Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available? | 0 | 1 | 3,167 |
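A minimal sketch of the multiple-passes idea from the answer above: build a dense chunk of vectors per pass over the corpus and flush it; all sizes are hypothetical and must be tuned to the available RAM:

    import numpy as np

    N_VECTORS = 100000        # hypothetical total number of vectors
    DIM = 4000000             # dimensionality from the question
    CHUNK = 250               # vectors per pass; 250 * 4M * 4 bytes ~= 4 GB

    for start in range(0, N_VECTORS, CHUNK):
        weights = np.zeros((CHUNK, DIM), dtype=np.int32)
        # ... one full pass over the corpus here, updating only rows
        #     for vectors start .. start+CHUNK-1 ...
        np.save('vectors_%06d.npy' % start, weights)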
0 | 7,381,462 | 0 | 0 | 0 | 0 | 4 | false | 17 | 2011-09-11T21:08:00.000 | 1 | 11 | 0 | Minimising reading from and writing to disk in Python for a memory-heavy operation | 7,381,258 | 0.01818 | python,memory,io | Use a database. That problem seems large enough that language choice (Python, Perl, Java, etc) won't make a difference. If each dimension of the vector is a column in the table, adding some indexes is probably a good idea. In any case this is a lot of data and won't process terribly quickly. | Background
I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well.
Requirements
The key aspect of this particular program I must write is that it must:
Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line)
Process the data on each line.
From this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors.
These vectors must all be saved to disk in some format or other.
Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4)
Parenthesis: Technical Details
In case the actual procedure for building vectors affects the solution:
For each line in the corpus, one or more vectors must have its basis weights updated.
If you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index).
Vectors do not depend on each other, nor does it matter which order the corpus lines are read in.
Attempted Solutions
There are three extrema when it comes to how to do this:
I could build all the vectors in memory. Then write them to disk.
I could build all the vectors directly on the disk, using shelf of pickle or some such library.
I could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector.
All these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons.
Goals
A good solution would involve:
Building as much as possible in memory.
Once memory is full, dump everything to disk.
If bits are needed from disk again, recover them back into memory to add stuff to those vectors.
Go back to 1 until all vectors are built.
The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.
Question
Does anyone know how to go about solving this sort of problem? Is python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?
Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.
Additional Details
The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.
Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available? | 0 | 1 | 3,167 |
0 | 7,433,853 | 0 | 0 | 0 | 0 | 4 | false | 17 | 2011-09-11T21:08:00.000 | 0 | 11 | 0 | Minimising reading from and writing to disk in Python for a memory-heavy operation | 7,381,258 | 0 | python,memory,io | Split the corpus evenly in size between parallel jobs (one per core) - process in parallel, ignoring any incomplete line (or if you cannot tell if it is incomplete, ignore the first and last line that each job processes).
That's the map part.
Use one job to merge the 20+ sets of vectors from each of the earlier jobs - That's the reduce step.
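A minimal sketch of that map/reduce shape with the standard multiprocessing module, assuming the corpus has already been split into per-job files and that the partial vectors are Counter-like, so merging is just addition; build_vectors is a hypothetical stand-in for the real per-chunk work:

from collections import Counter
from multiprocessing import Pool

def build_vectors(chunk_path):
    # Map step (hypothetical): build partial sparse vectors from one chunk.
    vectors = {}
    with open(chunk_path) as chunk:
        for line in chunk:
            for token in line.split():
                vec = vectors.setdefault(token, Counter())
                vec[hash(token) % 4_000_000] += 1
    return vectors

def merge(partials):
    # Reduce step: add the partial vectors together, id by id.
    total = {}
    for partial in partials:
        for vec_id, vec in partial.items():
            total.setdefault(vec_id, Counter()).update(vec)  # update() adds counts
    return total

if __name__ == "__main__":
    chunk_paths = ["corpus.part0", "corpus.part1"]  # assumed pre-split files
    with Pool() as pool:
        result = merge(pool.map(build_vectors, chunk_paths))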
You stand to lose information from 2*N lines, where N is the number of parallel processes, but you gain by not adding complicated logic to try to capture these lines for processing. | Background
I am working on a fairly computationally intensive project in computational linguistics, but the problem I have is quite general, and hence I expect that a solution would be interesting to others as well.
Requirements
The key aspect of this particular program I must write is that it must:
Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line)
Process the data on each line.
From this processed data, construct a large number of vectors (the dimensionality of some of these vectors is > 4,000,000). Typically the program builds hundreds of thousands of such vectors.
These vectors must all be saved to disk in some format or other.
Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and, by extension, 4).
Parenthesis: Technical Details
In case the actual procedure for building vectors affects the solution:
For each line in the corpus, one or more vectors must have their basis weights updated.
If you think of them in terms of Python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of those lists at one or more indices, by an amount which may differ based on the index.
Vectors do not depend on each other, nor does it matter which order the corpus lines are read in.
Attempted Solutions
There are three extrema when it comes to how to do this:
I could build all the vectors in memory. Then write them to disk.
I could build all the vectors directly on the disk, using shelve or pickle or some such library.
I could build the vectors in memory one at a time, writing each to disk and passing through the corpus once per vector.
All these options are fairly intractable. Option 1 just uses up all the system memory, and the machine panics and slows to a crawl. Option 2 is way too slow, as IO operations aren't fast. Option 3 is possibly even slower than 2, for the same reasons.
Goals
A good solution would involve:
Building as much as possible in memory.
Once memory is full, dump everything to disk.
If bits are needed from disk again, load them back into memory to add to those vectors.
Go back to 1 until all vectors are built.
The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.
Question
Does anyone know how to go about solving this sort of problem? Is Python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?
Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.
Additional Details
The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.
Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available? | 0 | 1 | 3,167 |
0 | 7,381,527 | 0 | 0 | 0 | 0 | 4 | false | 17 | 2011-09-11T21:08:00.000 | 0 | 11 | 0 | Minimising reading from and writing to disk in Python for a memory-heavy operation | 7,381,258 | 0 | python,memory,io | From another comment I infer that your corpus fits into memory and you have some cores to throw at the problem, so I would try this:
Find a method to have your corpus in memory. This might be a sort of RAM disk with a file system, or a database. No idea which one is best for you.
Have a smallish shell script monitor RAM usage and spawn another process of the following every second, as long as there is x memory left (or, if you want to make things a bit more complex, y I/O bandwidth to disk):
iterate through the corpus and build and write some vectors
in the end you can collect and combine all vectors, if needed (this would be the reduce part; a sketch of the gated spawning loop follows below) | Background
I am working on a fairly computationally intensive project in computational linguistics, but the problem I have is quite general, and hence I expect that a solution would be interesting to others as well.
Requirements
The key aspect of this particular program I must write is that it must:
Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line)
Process the data on each line.
From this processed data, construct a large number of vectors (the dimensionality of some of these vectors is > 4,000,000). Typically the program builds hundreds of thousands of such vectors.
These vectors must all be saved to disk in some format or other.
Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and, by extension, 4).
Parenthesis: Technical Details
In case the actual procedure for building vectors affects the solution:
For each line in the corpus, one or more vectors must have their basis weights updated.
If you think of them in terms of Python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of those lists at one or more indices, by an amount which may differ based on the index.
Vectors do not depend on each other, nor does it matter which order the corpus lines are read in.
Attempted Solutions
There are three extrema when it comes to how to do this:
I could build all the vectors in memory. Then write them to disk.
I could build all the vectors directly on the disk, using shelve or pickle or some such library.
I could build the vectors in memory one at a time, writing each to disk and passing through the corpus once per vector.
All these options are fairly intractable. Option 1 just uses up all the system memory, and the machine panics and slows to a crawl. Option 2 is way too slow, as IO operations aren't fast. Option 3 is possibly even slower than 2, for the same reasons.
Goals
A good solution would involve:
Building as much as possible in memory.
Once memory is full, dump everything to disk.
If bits are needed from disk again, load them back into memory to add to those vectors.
Go back to 1 until all vectors are built.
The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.
Question
Does anyone know how to go about solving this sort of problem? I python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?
Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.
Additional Details
The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.
Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available? | 0 | 1 | 3,167 |
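For the spawning loop described in the previous answer, a rough Python sketch (rather than a shell script) of gating new workers on free memory; it assumes psutil is installed and that worker.py is a hypothetical script which processes one corpus segment and writes its vectors out:

import subprocess
import time

import psutil

MIN_FREE = 8 * 2**30                 # arbitrary threshold: keep 8 GiB free
segments = ["seg0", "seg1", "seg2"]  # hypothetical corpus segment names
running = []

while segments or running:
    running = [p for p in running if p.poll() is None]   # drop finished workers
    if segments and psutil.virtual_memory().available > MIN_FREE:
        segment = segments.pop()
        running.append(subprocess.Popen(["python", "worker.py", segment]))
    time.sleep(1)                    # re-check memory once per second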
0 | 20,999,120 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2011-09-12T17:10:00.000 | 4 | 2 | 0 | Does Scikit-learn release the python GIL? | 7,391,427 | 0.379949 | python,multithreading,parallel-processing,machine-learning,scikit-learn | Some sklearn Cython classes do release the GIL internally in performance-critical sections: for instance, the decision trees (used in random forests) as of 0.15 (to be released early 2014), and the libsvm wrappers.
This is not the general rule, though. If you identify performance-critical Cython code in sklearn that could be changed to release the GIL, please feel free to send a pull request. (A threaded-training sketch follows below.) | I would like to train multiple one-class SVMs in different threads.
Does anybody know if scikit's SVM releases the GIL?
I did not find any answers online.
Thanks | 0 | 1 | 1,679 |
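For what the asker wants, a sketch along these lines would exercise the libsvm wrapper from several threads; whether it actually runs in parallel depends on how much time is spent in the GIL-releasing sections, so treat it as an experiment, not a guarantee:

from concurrent.futures import ThreadPoolExecutor

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
datasets = [rng.randn(500, 10) for _ in range(4)]  # toy data, one set per model

def train(X):
    model = OneClassSVM(kernel="rbf", nu=0.1)
    model.fit(X)        # the libsvm call, which may release the GIL internally
    return model

with ThreadPoolExecutor(max_workers=4) as pool:
    models = list(pool.map(train, datasets))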
0 | 7,453,107 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2011-09-17T06:22:00.000 | 0 | 4 | 0 | I need a neat data structure suggestion to store a very large dataset (to train Naive Bayes in Python) | 7,452,917 | 0 | python,data-structures,machine-learning,spam-prevention | If you assume you don't care about multiple occurrences of each word in an email (that is, your features are boolean), then all you really need to know is:
For each feature, what is the count of positive associations and negative associations?
You can do this online very easily in one pass, keeping track of just those two numbers for each feature.
Non-boolean features mean you'll have to discretize the features somehow, but you aren't really asking about how to do that. (A counting sketch follows below.) | I am going to implement a Naive Bayes classifier in Python to classify e-mails as spam or not spam. I have a very sparse and long dataset with many entries. Each entry looks like the following:
1 9:3 94:1 109:1 163:1 405:1 406:1 415:2 416:1 435:3 436:3 437:4 ...
Where 1 is the label (spam vs. not spam), and each pair corresponds to a word and its frequency. E.g. 9:3 corresponds to the word 9, which occurs 3 times in this e-mail sample.
I need to read this dataset and store it in a structure. Since it's a very big and sparse dataset, I'm looking for a neat data structure to store the following variables:
the index of each e-mail
the label of each e-mail (1 or -1)
each word and its frequency, per e-mail
I also need to create a corpus of all words and their frequencies, together with the label information
Any suggestions for such a data structure? | 0 | 1 | 381 |
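Following the answer above, a one-pass counting sketch for the format in this question (features treated as boolean, so the stored frequency is only used to detect presence); train.txt is an assumed file of such lines:

from collections import defaultdict

counts = defaultdict(lambda: [0, 0])   # word index -> [spam count, ham count]
n_spam = n_ham = 0

with open("train.txt") as data:
    for line in data:
        fields = line.split()
        label = int(fields[0])          # 1 = spam, -1 = not spam
        if label == 1:
            n_spam += 1
        else:
            n_ham += 1
        for pair in fields[1:]:
            index, _freq = pair.split(":")   # frequency ignored: boolean features
            counts[index][0 if label == 1 else 1] += 1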
0 | 7,513,167 | 0 | 0 | 0 | 0 | 1 | false | 16 | 2011-09-22T10:08:00.000 | -5 | 5 | 0 | Weighted logistic regression in Python | 7,513,067 | -1 | python,regression | Do you know Numpy? If not, also take a look at Scipy and matplotlib. (A weighted-fit sketch follows below.) | I'm looking for a good implementation of logistic regression (not regularized) in Python, one that can also take weights for each vector. Can anyone suggest a good implementation / package?
Thanks! | 0 | 1 | 23,971 |
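Since the question asks for per-sample weights specifically, one option is to roll it yourself on top of scipy by minimizing the weighted negative log-likelihood; this is a from-scratch sketch (no intercept term, for brevity), not a pointer to a packaged implementation:

import numpy as np
from scipy.optimize import minimize

def fit_weighted_logreg(X, y, w):
    # X: (n, d) features, y: (n,) labels in {0, 1}, w: (n,) sample weights.
    def nll(beta):
        z = X @ beta
        # Per-sample negative log-likelihood: log(1 + exp(z)) - y*z.
        return np.sum(w * (np.logaddexp(0.0, z) - y * z))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

# Toy usage with uniform weights.
rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(float)
beta = fit_weighted_logreg(X, y, w=np.ones(200))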
0 | 7,539,484 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2011-09-24T13:02:00.000 | 2 | 5 | 0 | Divide set into subsets with equal number of elements | 7,539,186 | 0.07983 | python,algorithm,r | I would tackle this as follows:
Divide into 3 equal subsets.
Figure out the mean and variance of each subset. From them construct an "unevenness" measure.
Compare each pair of elements; if swapping them would reduce the "unevenness", swap them. Continue until there are either no more pairs to compare, or the total unevenness is below some arbitrary "good enough" threshold (a sketch of this swap loop follows below). | For the purpose of conducting a psychological experiment I have to divide a set of 240 pictures, each described by 4 features (real numbers), into 3 subsets with an equal number of elements in each subset (240/3 = 80), in such a way that all subsets are approximately balanced with respect to these features (in terms of mean and standard deviation).
Can anybody suggest an algorithm to automate that? Are there any packages/modules in Python or R that I could use to do that? Where should I start? | 0 | 1 | 9,742 |
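A minimal sketch of the swap heuristic from the answer above, for the 240-item, 4-feature case; the unevenness measure here (squared deviation of each subset's per-feature mean and standard deviation from the overall ones) is one arbitrary choice among many:

import numpy as np

def unevenness(X, groups):
    # How far each subset's means/stds stray from the overall ones.
    cost = 0.0
    for g in (0, 1, 2):
        sub = X[groups == g]
        cost += np.sum((sub.mean(axis=0) - X.mean(axis=0)) ** 2)
        cost += np.sum((sub.std(axis=0) - X.std(axis=0)) ** 2)
    return cost

rng = np.random.RandomState(0)
X = rng.randn(240, 4)              # stand-in for the real feature matrix
groups = np.repeat([0, 1, 2], 80)  # arbitrary equal three-way split
rng.shuffle(groups)

best = unevenness(X, groups)
improved = True
while improved:                    # greedy passes of pairwise swaps
    improved = False
    for i in range(240):
        for j in range(i + 1, 240):
            if groups[i] == groups[j]:
                continue
            groups[i], groups[j] = groups[j], groups[i]      # try the swap
            cost = unevenness(X, groups)
            if cost < best:
                best = cost
                improved = True
            else:
                groups[i], groups[j] = groups[j], groups[i]  # undo it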
0 | 7,548,438 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2011-09-24T13:02:00.000 | 1 | 5 | 0 | Divide set into subsets with equal number of elements | 7,539,186 | 0.039979 | python,algorithm,r | In case you are still interested in the exhaustive-search question: you have 240 choose 80 possibilities for the first set, and then another 160 choose 80 for the second set, at which point the third set is fixed. In total, this gives you:
120554865392512357302183080835497490140793598233424724482217950647 * 92045125813734238026462263037378063990076729140
Clearly, this is not an option :) (a quick check of this count follows below). | For the purpose of conducting a psychological experiment I have to divide a set of 240 pictures, each described by 4 features (real numbers), into 3 subsets with an equal number of elements in each subset (240/3 = 80), in such a way that all subsets are approximately balanced with respect to these features (in terms of mean and standard deviation).
Can anybody suggest an algorithm to automate that? Are there any packages/modules in Python or R that I could use to do that? Where should I start? | 0 | 1 | 9,742 |
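That count is easy to verify with the standard library (math.comb needs Python 3.8+):

import math

# C(240, 80) choices for the first subset, times C(160, 80) for the second;
# the third subset is whatever remains.
total = math.comb(240, 80) * math.comb(160, 80)
print(total)   # roughly 1.1e113 arrangements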
0 | 7,673,206 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2011-10-04T23:56:00.000 | 1 | 1 | 0 | Pyplot/Matplotlib: How to access figures opened by another interpreter? | 7,655,323 | 1.2 | python,matplotlib | There is no simple way to reuse plot windows if you must use Eclipse to run it. When I am working interactively with matplotlib, I use either Spyder or IPython: edit the class, reload it, and run the code again. If you just want to get rid of all the open plot windows, hit the stacked stop icons to kill all your running Python instances.
The problem is that if I don't close those windows manually they accumulate. I would like to use pyplot to find those windows (opened by another process of python.exe) and re-use them. In other words, I do not want to have multiple windows for the same figure, even across interpreter processes.
Is there a simple way to do it? | 0 | 1 | 302 |
0 | 7,725,290 | 0 | 0 | 0 | 0 | 1 | false | 28 | 2011-10-10T20:05:00.000 | 4 | 4 | 0 | Maximum Likelihood Estimate pseudocode | 7,718,034 | 0.197375 | python,statistics,machine-learning,pseudocode | You need a numerical optimisation procedure. Not sure if anything is implemented in Python, but if it is then it'll be in numpy or scipy and friends.
Look for things like 'the Nelder-Mead algorithm', or 'BFGS'. If all else fails, use Rpy and call the R function 'optim()'.
These functions work by searching the function space and trying to work out where the maximum is. Imagine trying to find the top of a hill in fog. You might just try always heading up the steepest way. Or you could send some friends off with radios and GPS units and do a bit of surveying. Either method could lead you to a false summit, so you often need to do this a few times, starting from different points; otherwise you may think the south summit is the highest when there's a massive north summit overshadowing it. (A minimal sketch follows below.) | I need to code a Maximum Likelihood Estimator to estimate the mean and variance of some toy data. I have a vector with 100 samples, created with numpy.random.randn(100). The data should have a zero-mean, unit-variance Gaussian distribution.
I checked Wikipedia and some extra sources, but I am a little bit confused since I don't have a statistics background.
Is there any pseudocode for a maximum likelihood estimator? I get the intuition of MLE, but I cannot figure out where to start coding.
Wikipedia says to take the argmax of the log-likelihood. What I understand is: I need to calculate the log-likelihood using different parameters, and then take the parameters which gave the maximum probability. What I don't get is: where will I find the parameters in the first place? If I randomly try different means & variances to get a high probability, when should I stop trying? | 0 | 1 | 47,657
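A minimal sketch of the numerical route from the answer, for the Gaussian toy data in this question: write down the negative log-likelihood and hand it to scipy's Nelder-Mead. (For a Gaussian the MLE also has a closed form, the sample mean and standard deviation, which makes a handy sanity check.)

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

data = np.random.randn(100)          # toy data, as in the question

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:                   # keep the search in the valid region
        return np.inf
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x
print(mu_hat, sigma_hat)             # close to data.mean() and data.std()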
0 | 7,734,072 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2011-10-12T00:15:00.000 | 2 | 4 | 0 | Uniformly distributed data in d dimensions | 7,733,969 | 0.099668 | python,numpy,machine-learning,scipy | You can import the random module and call random.random to get a random sample from [0, 1). You can double that and subtract 1 to get a sample from [-1, 1).
Draw d values this way and the resulting tuple will be a uniform draw from the cube [-1, 1)^d. (A short sketch follows below.) | How can I generate uniformly distributed [-1,1]^d data in Python? E.g. d is a dimension like 10.
I know how to generate uniformly distributed data like np.random.randn(N), but the dimension aspect confuses me a lot. | 0 | 1 | 13,886
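To close the loop on the answer above, the same idea in plain Python and in numpy, where np.random.uniform draws a whole (N, d) array in one call:

import random

import numpy as np

d, N = 10, 1000

# Plain Python: one point as a tuple of d draws from [-1, 1).
point = tuple(2.0 * random.random() - 1.0 for _ in range(d))

# numpy: N points at once, each uniform on the cube [-1, 1)^d.
points = np.random.uniform(-1.0, 1.0, size=(N, d))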